dpkirchner
Unexpected CSR replacement on v4.0.1

I've committed a fix to a local copy of the provider. It seems to work, but I'm not sure it's the right approach.

The problem: the state file stores a SHA1 hash of some of the PEMs, so when the plan is compared against the state a spurious difference is detected. I think this only occurs when some other value in the resource changes (e.g. the street_address).

The "fix": If the SHA1 of the PEM in the plan matches the value in state, ignore the change. Or, rather, overwrite the plan value with the state value.

https://github.com/dpkirchner/terraform-provider-tls/commit/6610b48339251661ab112aa23215d128d7736e0f
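The idea in that commit can be sketched outside the provider. The provider itself is Go; this is just an illustrative Python sketch, and effective_plan_value is a made-up helper name, not the provider's API:

```python
import hashlib

def effective_plan_value(state_value: str, planned_pem: str) -> str:
    """Hypothetical helper: if the value already in state is the SHA1
    of the PEM in the plan, keep the state value so no spurious diff
    is reported; otherwise take the new PEM."""
    if state_value == hashlib.sha1(planned_pem.encode()).hexdigest():
        return state_value
    return planned_pem
```

In other words, the comparison is done hash-to-hash rather than hash-to-PEM, which is where the false difference comes from.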

Created 1 month ago

Fix spurious certificate updates due to SHA1ing keys in state

Created 1 month ago
dpkirchner created branch hash-before-compare
Created 1 month ago
Unexpected CSR replacement on v4.0.1

I wonder if the private_key_pem update is a red herring. I'm seeing changes to cert_request_pem that are reported as changes from a checksum to a CSR:

      ~ ca_cert_pem           = (sensitive)
      ~ ca_private_key_pem    = (sensitive value)
      ~ cert_request_pem      = <<-EOT
          - 56683b36b91b66c5b78cfdcfac4817f8a304b04d
          + -----BEGIN CERTIFICATE REQUEST-----
          + MIICdjCCAV4CAQAwMTESMBAGA1...

I've verified that the checksum (566..) is indeed the shasum for the CSR, so the update won't actually change anything. I think it's the same thing for private_key_pem, based on the output of terraform show -json.

FWIW, I've tested this with v4.0.2.

Created 1 month ago
Unexpected CSR replacement on v4.0.1

A partial workaround for this is to pull down your state file (terraform state pull), find the tls_cert_request.example resource, and copy the values in the subject object over to your .tf files. For example:

"subject": [
  {
    "common_name": "clientname",
    "country": "",
    "locality": "",
    "organization": "my org name",
    "organizational_unit": "",
    "postal_code": "",
    "province": "",
    "serial_number": "",
    "street_address": []
  }
],

becomes

  subject {
    common_name         = "clientname"
    organization        = "my org name"
    country             = ""
    locality            = ""
    organizational_unit = ""
    postal_code         = ""
    province            = ""
    serial_number       = ""
    street_address      = []
  }

You may still see ~ private_key_pem = (sensitive value), but at least it changes from a replacement to an update. (It's still not something I'm comfortable applying, but maybe this'll help others who are OK with the private key changing.)
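For anyone scripting this, here's a minimal sketch of pulling the subject values out of the state JSON instead of copying them by hand. It assumes the usual terraform state pull layout (resources[].instances[].attributes), and subject_from_state is a made-up helper name:

```python
import json

def subject_from_state(state_json: str, resource_type: str = "tls_cert_request"):
    """Return the subject block of the first matching resource in a
    pulled state file (terraform state pull > state.json), or None."""
    state = json.loads(state_json)
    for resource in state.get("resources", []):
        if resource.get("type") == resource_type:
            return resource["instances"][0]["attributes"]["subject"][0]
    return None
```

The returned dict maps straight onto the subject {} block shown above.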

Created 1 month ago
Slack Notifier failed to unmarshal templating JSON

Seems like this is the commit: https://github.com/GoogleCloudPlatform/cloud-build-notifiers/commit/b0668daa20c1f653e240545f8a289f12b027fab1

In which case, I'm wondering if one of the variables being passed to the template has embedded quotes that need to be escaped. I'm looking at my Cloud Build pub/sub messages, however, and I don't see an obvious problem. My best (crazy) guess is that it has something to do with a Unicode-encoded equal sign in the "logUrl" variable.

The template looks for "projectId", "id", "status", and "logUrl". The first three are just plain ol' ASCII, no special characters, no quotes, etc. The fourth, logUrl, is https://console.cloud.google.com/cloud-build/builds/19f99583-3263-48f5-968f-a7e101823d42?project\u003d12345678. I'm thinking maybe it's that \u003d.
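One way to narrow that down: \u003d is just the JSON escape for "=", and a compliant JSON parser decodes it transparently. A quick check in Python (using only the logUrl value from above):

```python
import json

# \u003d is the JSON unicode escape for "=". If a strict parser
# decodes it fine, the failure might be in the templating step
# rather than in the JSON payload itself.
raw = '"https://console.cloud.google.com/cloud-build/builds/19f99583-3263-48f5-968f-a7e101823d42?project\\u003d12345678"'
log_url = json.loads(raw)
print(log_url.endswith("project=12345678"))  # True
```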

Created 1 month ago
[pub/sub]cant find resource when create subscription different project

@j0hnsmith FWIW this documentation suggests using ".name": https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/pubsub_subscription#example-usage---pubsub-subscription-different-project

Created 1 month ago
issue comment
first_value() without group by every column

Sweet. Btw, I figured out an alternative solution that only requires two queries.

These are rough, but it's basically:

SELECT foo.*, other_tbl.column_a
FROM (
	SELECT DISTINCT
		tbl.id,
		tbl.name,
		first_value(other_tbl.id) OVER w AS first_other
	FROM
		tbl JOIN other_tbl ON tbl.id = other_tbl.tbl_id
	WINDOW w AS (PARTITION BY tbl.id)
) AS foo JOIN other_tbl ON other_tbl.id = foo.first_other

followed by a bog standard count aggregation query.

Created 2 months ago
issue comment
first_value() without group by every column

Whoops! Accidentally submitted this before I typed it all up, just a minute...

Created 2 months ago
opened issue
first_value() without group by every column

Is your feature request related to a problem? Please describe.
It would be useful to have a way to get the first value used in a sorted aggregate function without having to group by every column.

Describe the solution you'd like

SELECT
	tbl.id,
	tbl.name,
	count(other_tbl.id),
	first_value(other_tbl.column_a) OVER (
		ORDER BY other_tbl.created
	)
FROM
	tbl JOIN other_tbl ON tbl.id = other_tbl.tbl_id
GROUP BY
	tbl.id

should return:

'tbl-uuid-1', 'tbl row 1', 2, 'other tbl a'

and using ORDER BY other_tbl.created DESC should return the same thing, but with the last column being 'other tbl b'.

when using schemas:

CREATE TABLE tbl (
	id UUID NOT NULL DEFAULT gen_random_uuid(),
	name STRING NOT NULL,
	CONSTRAINT tbl_pkey PRIMARY KEY (id ASC)
);
CREATE TABLE other_tbl (
	id
		UUID NOT NULL DEFAULT gen_random_uuid(),
	tbl_id
		UUID NOT NULL,
	created
		TIMESTAMP DEFAULT now():::TIMESTAMP NOT NULL,
	column_a STRING NOT NULL,
	CONSTRAINT other_tbl_pkey PRIMARY KEY (id ASC),
	CONSTRAINT other_tbl_tbl_id_fkey
		FOREIGN KEY (tbl_id)
		REFERENCES tbl (id)
		ON DELETE CASCADE
);

and data:

insert into tbl (name) values ('tbl row 1'), ('tbl row 2');

insert into other_tbl (tbl_id, created, column_a) values ((select id from tbl limit 1), '2021-01-01 00:00:00', 'other tbl a');
insert into other_tbl (tbl_id, created, column_a) values ((select id from tbl limit 1), '2021-02-01 00:00:00', 'other tbl b');
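The requested semantics over the sample data can be sketched in plain Python (assuming the second row's column_a is 'other tbl b', per the DESC example):

```python
# Group child rows by tbl_id, count them, and take column_a of the
# earliest (or latest) row by created -- what first_value(...) OVER
# (ORDER BY created) combined with GROUP BY tbl.id is being asked to do.
rows = [
    ("tbl-uuid-1", "2021-01-01 00:00:00", "other tbl a"),
    ("tbl-uuid-1", "2021-02-01 00:00:00", "other tbl b"),
]
groups = {}
for tbl_id, created, column_a in rows:
    groups.setdefault(tbl_id, []).append((created, column_a))
for tbl_id, grp in groups.items():
    count = len(grp)
    first = min(grp)[1]  # earliest by created (ASC case)
    last = max(grp)[1]   # latest by created (DESC case)
    print(tbl_id, count, first, last)  # tbl-uuid-1 2 other tbl a other tbl b
```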

Describe alternatives you've considered
N+1 queries: one query against tbl to get its ids and names, plus N queries, one per tbl row, each using first_value.


Created 2 months ago
issue comment
Add missing CDN config options

Huh, ok. I'm surprised that the schema version wasn't bumped when the fields were added. Thanks for spotting that, I totally missed it.

Created 2 months ago
issue comment
Add missing CDN config options

I believe this is specific to GKE in that it involves cloud.google.com/v1's BackendConfig. The new attributes were added to a file in this repo:

https://github.com/kubernetes/ingress-gce/blob/8bd4a6a42c60ad184258995c385523b4b2aaccc1/pkg/apis/backendconfig/v1/types.go#L126

They're just not showing up in the kubectl explain output (the schema) and are unavailable for use. Here's the output from kubectl edit:

# backendconfigs.cloud.google.com "cdn" was not valid:
# * <nil>: Invalid value: "The edited file failed validation": ValidationError(BackendConfig.spec.cdn): unknown field "cacheMode" in com.google.cloud.v1.BackendConfig.spec.cdn
#
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  generation: 2
  name: cdn
  namespace: default
spec:
  cdn:
    cacheMode: USE_ORIGIN_HEADERS
    cachePolicy:
      includeHost: true
      includeProtocol: true
      includeQueryString: true
    enabled: true
  timeoutSec: 90

I guess it's possible that all of my clusters (v1.21.11) are running older versions of ingress-gce, although I don't know how to confirm that. Given that the clusters are using cloud.google.com/v1's BackendConfig and that's where the fields are in the code (best I can tell), I believe I should be on the correct version.

Created 2 months ago
issue comment
Add missing CDN config options

It looks like this issue was merged (and closed) prematurely, given the new fields aren't in the schema (and thus can't be used without dangerous workarounds). I know this won't work, but:

/reopen

Created 2 months ago