juliusv
Repos
76
Followers
506
Following
4

Events

issue comment
Switch from 'sanity' to more inclusive language

:+1: Thanks!

Created at 17 hours ago
issue comment
PromQL parser mistakenly interpreting UTF8 escape character

@exitflynn Yay! Go ahead :)

Created at 2 days ago
issue comment
PromQL parser mistakenly interpreting UTF8 escape character

Yep, both the autocompletion and the table output are broken, but the latter is more blatantly so :P

If anyone wants to pick this up before I or Augustin get to it: basically, you'd need to escape label values according to Go string escaping rules in the table output, both in the formatted variant (https://github.com/prometheus/prometheus/blob/dfa5cd55db548b8f1355b5aabb7cf869491272a2/web/ui/react-app/src/pages/graph/SeriesName.tsx#L37) and in the unformatted variant used when there are many series (https://github.com/prometheus/prometheus/blob/dfa5cd55db548b8f1355b5aabb7cf869491272a2/web/ui/react-app/src/utils/index.ts#L37). The formatted variant is also used for the legend in the graph view.
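A minimal sketch of the kind of escaping needed, with a hypothetical helper name (no such function exists in either file yet):

    // Hypothetical helper: escape a label value following Go string
    // escaping rules before rendering it in the table output.
    const escapeLabelValue = (value: string): string =>
      value
        .replace(/\\/g, '\\\\') // backslashes first, to avoid double-escaping
        .replace(/"/g, '\\"')
        .replace(/\n/g, '\\n');

    // A value containing a literal backslash then renders unambiguously:
    // escapeLabelValue('1\\.2') === '1\\\\.2', displayed as a="1\\.2"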

Created at 2 days ago
issue comment
PromQL parser mistakenly interpreting UTF8 escape character

@roidelapluie Btw., regarding our string interpretation: https://prometheus.io/docs/prometheus/latest/querying/basics/#string-literals

Created at 2 days ago
issue comment
PromQL parser mistakenly interpreting UTF8 escape character

@brancz Hm yeah. Ideally the autocompletion would be able to understand the string context it's in (double-quoted vs. backtick-quoted string) and escape any inserted label values accordingly, if necessary.
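As a rough sketch of that idea (hypothetical code, not the actual autocompletion implementation):

    // Hypothetical: escape an inserted label value depending on the
    // quoting context the completion happens in.
    type QuoteContext = 'double' | 'backtick';

    function escapeForContext(value: string, ctx: QuoteContext): string {
      if (ctx === 'backtick') {
        // Backtick strings interpret no escape sequences, so the value is
        // inserted verbatim (a raw backtick can't be represented at all).
        return value;
      }
      // Double-quoted strings follow Go string escaping rules.
      return value.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
    }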

Created at 2 days ago
issue comment
PromQL parser mistakenly interpreting UTF8 escape character

@roidelapluie That's a different issue. Double-quoted strings in PromQL interpret escape sequences starting with \, and \. is not a valid Go string escape sequence. Try backticks instead, which don't interpret any escape sequences within the string:

{a=~`1\.2`}
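For comparison, the double-quoted form needs the backslash itself escaped so that the regex still sees \.:

    {a=~"1\\.2"}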
                            
Created at 2 days ago
issue comment
PromQL parser mistakenly interpreting UTF8 escape character

The issue is just that the table output is really dumb and doesn't do any escaping (besides the automatic HTML safety stuff): https://github.com/prometheus/prometheus/blob/dfa5cd55db548b8f1355b5aabb7cf869491272a2/web/ui/react-app/src/pages/graph/SeriesName.tsx#L37

The rest seems to work fine.

Created at 2 days ago
issue comment
UI: Fix starting screen flash before showing page

> But if this doesn't sound good to you I think rendering a possibly broken page at first is fine. That's what happened before the starting screen existed and is arguably the more correct behavior (as opposed to what happens now which is rendering the starting screen first even if the instance isn't actually starting).

I think it'd be a little worse than what we did before, since we flicker in something broken, but then remove it again once we know it isn't supposed to work yet. Seems more confusing/unclean to me.

So I'd prefer showing either a blank screen or better yet, a notice saying what is actually happening, like "Loading server state..." (rather than the TSDB startup message). If we really want to get rid of any startup flicker, we could even inject the current readiness state into the page to begin with, like we do for other values at https://github.com/prometheus/prometheus/blob/main/web/web.go#L400-L402. But I don't expect that :)
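To illustrate the injected-flag idea (GLOBAL_READY is an invented name here, mirroring how web.go injects those other values):

    // Hypothetical sketch: prefer a readiness flag injected into the page
    // by the server, falling back to the readiness endpoint if it's absent.
    declare global {
      interface Window { GLOBAL_READY?: boolean; }
    }

    export async function isServerReady(): Promise<boolean> {
      if (typeof window.GLOBAL_READY === 'boolean') {
        return window.GLOBAL_READY; // known synchronously, no flicker
      }
      const resp = await fetch('/-/ready'); // Prometheus readiness endpoint
      return resp.ok;
    }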

Created at 3 days ago
issue comment
UI: Fix starting screen flash before showing page

Hmmm, the only issue I see with this is that it initially renders various pages with an error (if the TSDB is still loading).

For example, the graph page will initially render this until the status is fetched:

[screenshot]

And the TSDB page will show:

[screenshot]

Or just the status page:

[screenshot]

I provoked this by making both the TSDB startup and the status fetch really slow in a local checkout. Normally the status fetch should at least be fast, but it still seems a bit unclean to first attempt a broken render...

Created at 4 days ago
issue comment
Cut v2.40.3

:+1:

Created at 4 days ago

Modify Grafana Auth Method (#60)

  • Update backend.go

Signed-off-by: Hassan Warsi hassanw2400@gmail.com

  • Update backend.go

Signed-off-by: Hassan Warsi hassanw2400@gmail.com

Signed-off-by: Hassan Warsi hassanw2400@gmail.com

Created at 1 week ago
pull request closed
Modify Grafana Auth Method

Hello. I was wondering if the Set method could be used instead of Add, so that if another Authorization header already exists (e.g. in the case of proxying), the Grafana API call can still go through successfully rather than gaining an additional Authorization Bearer header on top of the existing one.

In something that I'm currently working on, I need this Add to become a Set so that promlens can properly connect to the UI.

Created at 1 week ago
issue comment
Modify Grafana Auth Method

Thanks :+1: Yeah, we can always make it configurable in the future if really needed.

Created at 1 week ago
issue comment
Modify Grafana Auth Method

Hmm, good question. I wonder if there are ever situations where you want both headers to be passed through, but I don't know enough about Auth header use cases in HTTP. I guess I'm fine with changing it this way until anyone complains :) Note that there's a second instance in the other function in this file that would also need to be changed.
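For reference, a minimal illustration of the Add-vs-Set difference, here using the browser's Headers API as a stand-in for Go's http.Header in backend.go:

    // Headers.append()/set() mirror Go's http.Header Add/Set semantics.
    const h = new Headers();
    h.append('Authorization', 'Basic original'); // e.g. set by a proxy
    h.append('Authorization', 'Bearer grafana'); // Add: both values present
    h.set('Authorization', 'Bearer grafana');    // Set: replaces all values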

Created at 1 week ago

Add postgres as a db (#59)

  • add postgres as a db

Signed-off-by: Evgeny Kuzin evgeny@hudson-trading.com

  • DCO

Signed-off-by: Evgeny Kuzin evgeny@hudson-trading.com

  • case instead of if/elseif + newline for gitignore

Signed-off-by: Evgeny Kuzin evgeny@hudson-trading.com

  • rollback gitignore changes

Signed-off-by: Evgeny Kuzin evgeny@hudson-trading.com

Signed-off-by: Evgeny Kuzin evgeny@hudson-trading.com

Co-authored-by: Evgeny Kuzin evgeny@hudson-trading.com

Created at 1 week ago
pull request closed
Add postgres as a db

github.com/lib/pq is in maintenance mode, but it still seems like the best candidate because we don't need all the fancy features that pgx provides, and it fits the database/sql interface. Basically, I chose it based on this

Created at 1 week ago
issue comment
Add postgres as a db

:+1: Wonderful, thank you :)

Created at 1 week ago
issue comment
Merge back release-2.40 branch

:+1:

Created at 1 week ago
issue comment
Makefile: Fix targets order

:+1: Looks good to me in any case, but I'm just curious whether something was broken because of this?

Created at 1 week ago

README: Remove mentions of circleci (#11580)

Signed-off-by: Julien Pivotto roidelapluie@o11y.eu

Signed-off-by: Julien Pivotto roidelapluie@o11y.eu

Created at 1 week ago
pull request closed
README: Remove mentions of circleci

Signed-off-by: Julien Pivotto roidelapluie@o11y.eu

Created at 1 week ago
issue comment
README: Remove mentions of circleci

:+1:

Created at 1 week ago
issue comment
Cut v2.40.2

:+1:

Created at 1 week ago

Set offset for the query end time to -12 minutes.

Signed-off-by: Nikola Milikic nikola.milikic@sysdig.com

Merge pull request #95 from nikola-milikic/Offset-query-end-time

Set offset for the query end time to -12 minutes

Created at 1 week ago
pull request closed
Set offset for the query end time to -12 minutes

When running the PromQL compliance test, we can sometimes observe queries with negative offsets failing. The reason is that by default the query end time is now() - 2m, and we have queries with negative offsets of -5m and -10m. That means we are effectively querying 3m into the future for the first query (with offset -5m) and 8m into the future for the second (with offset -10m). When running the compliance test, a race condition with data ingestion can sometimes be observed, where one storage system still hasn't ingested a sample that the other system already has. As a result, queries with negative offsets into the future can fail, since one resulting time series will have additional samples at the end (the ones that the other storage system hasn't ingested yet at that point in time). This issue is raised in #94.

This PR offsets the query end time to -12m (previously it was -2m). This ensures that even the query with offset -10m doesn't reach into the future, so we avoid the ingestion race condition.
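Concretely, the effective query time is the end time minus the offset:

    old: (now - 2m)  - (-5m)  = now + 3m   (in the future)
    old: (now - 2m)  - (-10m) = now + 8m   (in the future)
    new: (now - 12m) - (-10m) = now - 2m   (safely in the past)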

Created at 1 week ago
issue comment
Set offset for the query end time to -12 minutes

:+1: Thanks! :)

Created at 1 week ago