This PR adds a warning when one or more dependencies that aren't otherwise disambiguated (by specifying a `registry` key, or by being of the path or git types) are available in more than one registry source.
The underlying issue here is defined as a dependency confusion attack in #9162: a user with multiple registries defined may believe that they are using a package from one registry, but are in fact getting it from a different registry.
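For illustration, a minimal sketch of the two forms in `Cargo.toml` (the `internal` registry name is hypothetical):

```toml
[dependencies]
# Ambiguous: if `rand` is also available in another configured
# registry, this form now triggers the warning.
rand = "0.8.5"

# Disambiguated forms that would not warn:
# rand = { version = "0.8.5", registry = "internal" }    # explicit registry key
# rand = { git = "https://github.com/rust-random/rand" } # git dependency
```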
Given a package that uses rand via `rand = "0.8.5"`, but which also sees a different rand package in a different registry, the output looks like this:
There is, at present, no way to disable this warning. I would be happy to explore adding a flag or environment variable check to do so if that would be useful.
Implementation-wise, this takes the form of a new function alongside `PackageSet::warn_no_lib_packages_and_artifact_libs_overlapping_deps`, invoked when resolving packages. A small sub-module has been added with a minimal state machine to drive this check; originally, this was all inline in the new `PackageSet` function, but the inlined version just made me want spaghetti. :spaghetti:
The `Source` trait has also gained a new `contains_package_name` function. Some implementations are simpler than others.
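As a rough sketch of the idea (the trait and types below are simplified stand-ins, not Cargo's actual definitions), the new method only needs to answer whether a source knows of a package by name:

```rust
use std::collections::HashSet;

/// Simplified stand-in for Cargo's `Source` trait; the real trait has
/// many more methods and uses interned names rather than `&str`.
trait Source {
    /// Returns true if this source knows of any package with `name`,
    /// regardless of version.
    fn contains_package_name(&self, name: &str) -> bool;
}

/// A toy registry backed by an in-memory set of package names.
struct InMemoryRegistry {
    names: HashSet<String>,
}

impl Source for InMemoryRegistry {
    fn contains_package_name(&self, name: &str) -> bool {
        self.names.contains(name)
    }
}

fn main() {
    let registry = InMemoryRegistry {
        names: ["rand", "serde"].iter().map(|s| s.to_string()).collect(),
    };
    // A dependency on `rand` is "seen" by this source; if another
    // configured source also contains `rand`, the new check would warn.
    assert!(registry.contains_package_name("rand"));
    assert!(!registry.contains_package_name("left-pad"));
    println!("ok");
}
```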
To be clear, I'm not very attached to the exact mechanics here; since I'm essentially learning Cargo as I go, it's likely that I've missed at least one nuance in terms of how to walk the dependency graph and its associated sources.
This PR also updates the couple of affected tests that already existed, and adds a set of new test cases to exercise the various possibilities.
Fixes #9162.
Another wording tweak.
Moar test.
lowercase things, add docs
Add new, specific tests.
Improve output.
Improve fidelity of source checks.
Split implementation up.
Make package name checks consistent.
Initial round of test updates.
Initial round of test updates.
Add short circuit.
Don't warn if an explicit registry is provided.
docs(contrib): Pull impl info out of architecture
Personally, I found mixing this stuff with architecture less than ideal as it buried the more practical information among details that might not have been as important. With us moving architecture information into doc comments, this provides us an opportunity to rectify this.
Not a fan of the name of this chapter, but it's a start.
I've left in the old architecture chapter as there is still content to find a home for (resolver).
docs(contrib): Move higher level resolver docs into doc comments
I chose `ops::resolve` as the place for this as the docs cover the higher level details, including the `Cargo.lock` file, while `core::resolver` is more of the algorithm.
docs(contrib): Move some lower resolver details from ops to core
docs(contrib): Remove unused file
This was added by accident in #11869
docs(contrib): Replace architecture with redirects
I also went back to pages moved in #11869 and added redirects for those as well.
Auto merge of #11869 - epage:impl, r=weihanglo
docs(contrib): Pull impl info out of architecture
This is a follow up to #11809.
Auto merge of #11870 - epage:registry, r=Eh2406
docs(contrib): Move higher level resolver docs into doc comments
This is a follow up to #11809.
I chose `ops::resolve` for most of the old documentation because the docs cover the higher level details, like the `Cargo.lock` file, while `core::resolver` is more of the algorithm.
Auto merge of #11873 - epage:contrib, r=weihanglo
docs(contrib): Remove unused file
docs: fix typos in cargo_compile/mod.rs
Auto merge of #11874 - TomAFrench:patch-1, r=ehuss
docs: fix typos in cargo_compile/mod.rs
We've got a couple of typos in the docs for cargo_compile/mod.rs, so I've addressed them.
Just needs a quick proofread
Auto merge of #11876 - epage:arch, r=ehuss
docs(contrib): Replace architecture with redirects
doc: Fix registries.name.index for sparse
Auto merge of #11880 - ehuss:registries-doc-git, r=weihanglo
doc: Fix registries.name.index for sparse
A very minor documentation update. `registries.name.index` doesn't have to be a git URL; it can also be a sparse one.
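For reference, the two index forms side by side (registry names and URLs here are illustrative):

```toml
[registries]
# git index
my-registry = { index = "https://my-intranet:8080/git/index" }
# sparse index, marked by the `sparse+` prefix
my-sparse-registry = { index = "sparse+https://my-intranet:8080/index/" }
```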
Added new GitHub RSA Host Key
GitHub rotated their RSA host key, which means that Cargo needs to update it. Thankfully, the other keys were not rotated, so the impact depends on how Cargo connected to GitHub.
Refs https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/
Auto merge of #11883 - mitsuhiko:feature/new-github-rsa-host-key, r=arlosi
Added new GitHub RSA Host Key
Initial WIP.
TODO:
> do you know if it's possible to write a regression test for this?
I played around with this a bit, but I don't see an obvious way to programmatically reproduce the bug inside the Ember test runner. The problem here is that the JS runtime has to be running in a time zone that (a) has DST transitions (so not UTC), and (b) has a backward transition at a known date. While we could make a reasonable effort to find a backward transition in the current time zone and then set up a test from there, the fact that it wouldn't work in all time zones feels like it might make it too likely to be skipped to be useful.
I also don't think there's a reliable way to specify which time zone is in use. I guess we could change `package.json` to run the test command with `TZ=America/Vancouver` or something, but that still feels pretty brittle.
I can put some more time into this if you want, but I don't see an easy win in terms of automated testing.
Most importantly, I drew a Ferris.
cc: @graciegregory
When falling back, a bug in date-fns results in the 23-hour day being skipped when building the `dates` array in `toChartData` if the current time is between 00:00 and 01:00 UTC. That day's data essentially goes missing, resulting in the subtly broken charts noted in #5477.
This commit adds a workaround adapted from a comment by @bertho-zero on date-fns/date-fns#571, which correctly adjusts the new date based on the time zone offsets, and means that `dates` is built as expected.
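To make the mechanism concrete, here is a toy model of the bug and of the offset adjustment (all constants are assumptions chosen to mimic America/Vancouver's 2022 fall-back; this is a sketch, not the actual JS code):

```rust
// Toy model: local time = UTC + offset. We mimic America/Vancouver
// around the 2022-11-06 "fall back": -7h (PDT) before the transition,
// -8h (PST) after it.

const HOUR: i64 = 3600;
const DAY: i64 = 86_400;

// UTC instant of the fall-back transition: 2022-11-06 09:00 UTC.
const TRANSITION: i64 = 1_667_725_200;

fn offset(utc: i64) -> i64 {
    if utc < TRANSITION { -7 * HOUR } else { -8 * HOUR }
}

/// Mimics date-fns's `subDays`: decrement the *local* calendar date,
/// keeping the local wall-clock time the same. Crossing the DST
/// transition this way is what skips a UTC day.
fn sub_days_local(utc: i64, days: i64) -> i64 {
    let naive = utc - days * DAY;
    naive + (offset(utc) - offset(naive))
}

/// The UTC day number, standing in for `toISOString().slice(0, 10)`.
fn utc_day(utc: i64) -> i64 {
    utc.div_euclid(DAY)
}

fn main() {
    // "now" = 2022-12-01 00:30 UTC, inside the 00:00-01:00 UTC window
    // in which the bug manifests.
    let now: i64 = 1_669_854_600;
    let nov6 = utc_day(1_667_692_800); // 2022-11-06 00:00 UTC

    // Buggy keys: naive local-date stepping skips the Nov 6 UTC day.
    let buggy: Vec<i64> = (0..90i64).map(|i| utc_day(sub_days_local(now, i))).collect();
    assert!(!buggy.contains(&nov6), "bug: November 6 is skipped");

    // Workaround: apply the offset difference so the *UTC* day
    // decrements by exactly one each step.
    let fixed: Vec<i64> = (0..90i64)
        .map(|i| {
            let d = sub_days_local(now, i);
            utc_day(d - (offset(now) - offset(d)))
        })
        .collect();
    assert!(fixed.contains(&nov6), "fix: November 6 is present");
}
```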
Fixes #5477.
Note that November 6 is missing: there's only one "down" day that weekend:
Now note that there are two days:
I tested this by hacking the crate download backend endpoint to shift the current data to be between 2022-09-03 and 2022-12-01, no matter what the current date is, and the frontend JS to use 2022-12-01 as "now". (These dates were chosen to capture last year's "fall back" in America/Vancouver, which is my local time zone.)
The bug then became evident at 17:00 PST, which corresponds to 00:00 UTC.
Should you wish to reproduce this, here is the (genuinely awful) diff on top of this branch to do so:
diff --git a/app/components/download-graph.js b/app/components/download-graph.js
index 4c951f14a..893dcbe43 100644
--- a/app/components/download-graph.js
+++ b/app/components/download-graph.js
@@ -97,7 +97,8 @@ export function toChartData(data) {
let versions = new Map();
let crate = null;
- let now = new Date();
+ // let now = new Date();
+ let now = new Date('2022-12-01');
for (let i = 0; i < 90; i++) {
let date = subDays(now, i);
dates[date.toISOString().slice(0, 10)] = { date, cnt: {} };
diff --git a/src/controllers/krate/downloads.rs b/src/controllers/krate/downloads.rs
index 9b22ed0ae..0f32ee2fc 100644
--- a/src/controllers/krate/downloads.rs
+++ b/src/controllers/krate/downloads.rs
@@ -3,8 +3,11 @@
//! The endpoint for downloading a crate and exposing version specific
//! download counts are located in `version::downloads`.
+use std::cell::RefCell;
use std::cmp;
+use chrono::{Duration, NaiveDateTime};
+
use crate::controllers::frontend_prelude::*;
use crate::models::{Crate, CrateVersions, Version, VersionDownload};
@@ -26,14 +29,20 @@ pub async fn downloads(state: AppState, Path(crate_name): Path<String>) -> AppRe
.sort_by_cached_key(|version| cmp::Reverse(semver::Version::parse(&version.num).ok()));
let (latest_five, rest) = versions.split_at(cmp::min(5, versions.len()));
+ let start = NaiveDateTime::from_timestamp_opt(1669881600, 0).unwrap() - Duration::days(90);
+ let dater = FakeDater::new(&start);
let downloads = VersionDownload::belonging_to(latest_five)
.filter(version_downloads::date.gt(date(now - 90.days())))
.order(version_downloads::date.asc())
.load(conn)?
.into_iter()
- .map(VersionDownload::into)
+ .map(|mut vd: VersionDownload| -> EncodableVersionDownload {
+ vd.date = dater.next().date();
+ vd.into()
+ })
.collect::<Vec<EncodableVersionDownload>>();
+ let dater = FakeDater::new(&start);
let sum_downloads = sql::<BigInt>("SUM(version_downloads.downloads)");
let extra: Vec<ExtraDownload> = VersionDownload::belonging_to(rest)
.select((
@@ -43,7 +52,13 @@ pub async fn downloads(state: AppState, Path(crate_name): Path<String>) -> AppRe
.filter(version_downloads::date.gt(date(now - 90.days())))
.group_by(version_downloads::date)
.order(version_downloads::date.asc())
- .load(conn)?;
+ .load(conn)?
+ .into_iter()
+ .map(|mut xd: ExtraDownload| {
+ xd.date = dater.next().date().to_string();
+ xd
+ })
+ .collect();
#[derive(Serialize, Queryable)]
struct ExtraDownload {
@@ -60,3 +75,23 @@ pub async fn downloads(state: AppState, Path(crate_name): Path<String>) -> AppRe
})
.await
}
+
+struct FakeDater {
+ start: NaiveDateTime,
+ days: RefCell<i64>,
+}
+
+impl FakeDater {
+ fn new(start: &NaiveDateTime) -> Self {
+ Self {
+ start: start.clone(),
+ days: RefCell::new(0),
+ }
+ }
+
+ fn next(&self) -> NaiveDateTime {
+ let mut days = self.days.borrow_mut();
+ *days += 1;
+ self.start.clone() + Duration::days(*days)
+ }
+}
Just noticed this looking at the live post:
More detail is in the individual commits, but this addresses two issues I ran into while setting up a local crates.io environment: the import script hard-codes the database name, and `recent_crate_downloads` isn't populated.

Update import script to get the database name from the environment
Specifically, we'll try to use `$PGDATABASE`, then `$DATABASE_URL` (as defined in `.env`), then we'll fall back to the hard-coded `cargo_registry` if nothing is set.
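The script itself is shell, but the fallback chain can be sketched as pure logic; in Rust, with hypothetical names (the real script reads these values from the environment):

```rust
/// Resolve the database name the way the import script does:
/// $PGDATABASE, then the database name from $DATABASE_URL (as set in
/// .env), then the hard-coded default `cargo_registry`.
fn database_name(pgdatabase: Option<&str>, database_url: Option<&str>) -> String {
    if let Some(name) = pgdatabase.filter(|n| !n.is_empty()) {
        return name.to_string();
    }
    if let Some(url) = database_url {
        // e.g. postgres://user:pass@localhost/cargo_registry
        if let Some(name) = url.rsplit('/').next().filter(|n| !n.is_empty()) {
            return name.to_string();
        }
    }
    "cargo_registry".to_string()
}

fn main() {
    // $PGDATABASE wins when set.
    assert_eq!(database_name(Some("override_db"), None), "override_db");
    // Otherwise the name comes from the URL's last path segment.
    assert_eq!(
        database_name(None, Some("postgres://localhost/my_db")),
        "my_db"
    );
    // With nothing set, fall back to the hard-coded default.
    assert_eq!(database_name(None, None), "cargo_registry");
    println!("ok");
}
```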
As PostgreSQL requires that you not be connected to a database when dropping it, this uses `postgres` instead of `$DATABASE_NAME` for those commands, which is usually available.
Refresh the materialised view after importing
The home page errors immediately after importing right now because the summary endpoint expects `recent_crate_downloads` to be populated, but importing the database dump doesn't refresh the materialised view.
Update import script to get the database name from the environment
This also requires that the PostgreSQL connection be to a different database when dropping/creating the database, so this uses `postgres` instead of `$DATABASE_NAME` for those commands, which is usually available.
Update Rust crate toml to v0.7.3 (#6190)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Lock file maintenance (Rust) (#6187)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Update dependency pnpm to v7.29.2 (#6193)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Update Rust crate serde to v1.0.156 (#6194)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Fix typo in downloads.rs (#6195)
Update CONTRIBUTING.md to install diesel_cli 2.x (#6198)
Diesel was upgraded to version 2 in #4892, but the contributing instructions still have the reader install diesel_cli 1.x, which will result in some fun errors when it attempts to regenerate `src/schema.rs` when running `diesel migration run`, as recommended later in the contributing guide.
Update dependency @babel/eslint-parser to v7.21.3 (#6197)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Update tj-actions/changed-files action to v35.7.1 (#6196)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Corrects a copy/paste error from the earlier `poll` field.