ipfs
Repos

Technical specifications for the IPFS protocol stack
Peer-to-peer hypermedia protocol
An unobtrusive and user-friendly desktop application for IPFS on Windows, Mac and Linux.
Browser extension that simplifies access to IPFS resources on the web
IPFS implementation in JavaScript
An IPFS implementation in Go
Events

issue comment
don't append "/" to prefix in queries

@noot I don't think this would be an effective way to make the DHT more privacy-focused: it would be quite hard to estimate the size of the network and how long your prefix should be. bafkreidlasregfrefuxtigrw74w56y is so long it would almost certainly match only one record and provide no privacy. You would need prefixes short enough to match a huge number of records; otherwise it would be easy to mount correlation attacks using the block links.

I think you should join the #ipfs-content-routing-wg channel on https://filecoin.io/slack; for DHT privacy we are more interested in double hashing.

Anyway, about this change:

As for actually doing this: the query package is generally not fast, with O(N) behaviour in many places, but it could be made faster. Ensuring queries work by namespace lets us write bucketing code (where keys can be bucketed by namespace).

I think it can work right now if you use a query with a KeyPrefix filter. AFAICT it would look like this:

q := query.Query{
  Prefix:  "/florb/keys/thing/",
  Filters: []query.Filter{query.FilterKeyPrefix{Prefix: "bafkreidlasregfrefuxtigrw74w56y"}},
}
Created at 6 minutes ago
issue comment
fix: Only limit resources per peer

Hi @ajnavarro ,

In general, bounding at the peer scope by itself is not enough to protect against DoS attacks, since it's trivial to create lots of peers (as discussed in the comments).

That said, we are still bounding the amount of memory that go-libp2p will use, and even with your proposed change we won't OOM because we cap System.Memory. This is certainly better than nothing.

I agree it's hard to pick defaults, but I think we can likely do better here...

Why am I even trying to pick better defaults?

Yes we need to protect against OOM. That said, if we don't bound incoming connections and streams, we are still vulnerable to CPU exhaustion processing these connections and streams. I don't know how much CPU is spent managing a connection or stream, but I know it's not free.

Proposal

With Kubo 0.17 we set the default memory that is used to calculate go-libp2p resource limits to TOTAL_SYSTEM_MEMORY/8 per https://github.com/ipfs/kubo/blob/master/core/node/libp2p/rcmgr_defaults.go#L50 . On a 16GB machine, this means 2GB for the go-libp2p resource manager, which translates to 124 System.ConnsInbound per https://github.com/libp2p/go-libp2p/blob/master/p2p/host/resource-manager/limit_defaults.go#L344 (64 for the initial GB and 64 for each additional GB). This checks out with what is being reported in https://github.com/ipfs/kubo/issues/9432#issuecomment-1331177613 .

I think we just set the default memory for go-libp2p too low. What if, instead of TOTAL_SYSTEM_MEMORY/8, we did TOTAL_SYSTEM_MEMORY/2 (half)? I agree there's no magic number here that will solve all use cases, but this seems more reasonable to me in retrospect. go-libp2p will not use more than half the system memory (which should still leave plenty of headroom for the operating system to function even if we hit that value), and it effectively increases our default limit for `System.ConnsInbound` by 4x.
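
To illustrate the arithmetic, here is a sketch using the "64 for the initial GB plus 64 per additional GB" rule of thumb described above as an approximation; it is not go-libp2p's exact scaling function (which evidently yields 124 rather than 128 for a 2GB budget):

```go
package main

import "fmt"

// approxConnsInbound approximates the System.ConnsInbound limit derived
// from a memory budget, using the "64 for the initial GB plus 64 per
// additional GB" rule of thumb. The real go-libp2p scaling differs
// slightly (reported 124 vs. the 128 computed here).
func approxConnsInbound(memoryBudgetGB int) int {
	return 64 + 64*(memoryBudgetGB-1)
}

func main() {
	const totalGB = 16 // hypothetical 16GB machine

	fmt.Println("MaxMemory = total/8:", approxConnsInbound(totalGB/8)) // 2GB budget -> 128
	fmt.Println("MaxMemory = total/2:", approxConnsInbound(totalGB/2)) // 8GB budget -> 512
}
```

Going from total/8 to total/2 quadruples the budget, and with this (roughly) linear scaling the inbound-connection limit grows by about 4x as well.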

The other option here is to not use rcmgr.DefaultLimits.SystemBaseLimit.ConnsInbound in https://github.com/ipfs/kubo/blob/master/core/node/libp2p/rcmgr_defaults.go#L66. Instead define our own higher value. (I don't know a better value, but that's a knob to turn.)

Summary

Before we fall back to disabling the following by default:

  1. System.ConnsInbound
  2. System.StreamsInbound
  3. Transient.ConnsInbound
  4. Transient.StreamsInbound

I'd like to see if we can thread the needle here. One option is to suggest that the folks in https://github.com/ipfs/kubo/issues/9432 who are having issues set Swarm.ResourceMgr.MaxMemory to TOTAL_SYSTEM_MEMORY/2 (they'll have to figure this value out) and see how that performs.

(Also, even if we were to go with this PR, it would need to include doc updates to https://github.com/ipfs/kubo/blob/master/docs/config.md#swarmresourcemgr )

Created at 10 minutes ago
issue comment
cannot reserve inbound connection: resource limit exceeded

Ok, there's lots to unpack here...

General note to the community

  1. Thanks for reporting the issues, and apologies here for the snags this is causing.
  2. While we get this figured out and improved, please know that you can certainly use Kubo 0.16 (which doesn't enable the libp2p resource manager by default), or you should be able to explicitly disable the resource manager with https://github.com/ipfs/kubo/blob/master/docs/config.md#swarmresourcemgrenabled (although, per user reports, it looks like that flag may not be working)
  3. This is a top priority for Kubo maintainers to address before the Kubo 0.18 release (RC targeting 2022-12-08).

Problems

Below are the problems I'm seeing...

Reports of a disabled go-libp2p resource manager still managing resources

https://github.com/ipfs/kubo/issues/9432#issuecomment-1327647257 and other comments report disabling the resource manager but still seeing these messages in the logs. In that comment, we can see it's disabled in the config:

  "Swarm": {
    "ResourceMgr": {
      "Enabled": false
    },

Confusion around "magic values"

4611686018427388000 is actually not a magic value. It is effectively "infinity" and is defined here: https://github.com/ipfs/kubo/blob/master/core/node/libp2p/rcmgr_defaults.go#L15

Confusion on when Swarm.ResourceMgr is set to {}

I believe in this case we are setting default values as described in https://github.com/ipfs/kubo/blob/master/docs/config.md#swarmresourcemgr.

Clarity around the "error message" meaning

There is confusion about what messages like "system: cannot reserve inbound connection: resource limit exceeded" mean. For this example, it means Swarm.ResourceMgr.Limits.System.ConnsInbound is exceeded. It would be nice if the value from ipfs swarm limit system was included.

Actionable advice when resource limits are hit

When a resource limit is hit, we point users to https://github.com/ipfs/kubo/blob/master/docs/config.md#swarmresourcemgr. It's clear from the feedback here that the docs there aren't actionable enough.

Idea of limiting to one connection per IP

I don't think we should discuss this more or pursue it. As discussed in https://github.com/ipfs/kubo/issues/9432#issuecomment-1334482153, it is ineffective and impacts NATs (especially large organizations/enterprises which have all their traffic coming from behind a NAT).

When is ResourceMgr a feature (protecting users as expected) vs. a bug

There is good commentary on this in https://github.com/ipfs/kubo/issues/9432#issuecomment-1334160936.

I agree with this sentiment in general. go-libp2p bounding the resources it uses is generally a feature, and the presence of a message doesn't necessarily mean there's a bug.

That said, if by default our limits are crazy low, then I would call it a bug. For example, if Swarm.ResourceMgr.Limits.System.ConnsInbound was set to "1" by default, I would consider it a bug because this would mean we'd only allow 1 inbound connection.

Using https://github.com/ipfs/kubo/issues/9432#issuecomment-1331177613 as an example, Swarm.ResourceMgr.Limits.System.ConnsInbound is set to 123. This is derived from Swarm.ResourceMgr.MaxMemory. I assume @kallisti5 didn't set a MaxMemory value and the default of TOTAL_SYSTEM_MEMORY/8 was used per https://github.com/ipfs/kubo/blob/master/docs/config.md#swarmresourcemgrmaxmemory. (In this case TOTAL_SYSTEM_MEMORY looks to be around ~16GB, as 1999292928*8/(1024*1024) = ~15,253 MB.)
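
For reference, the arithmetic above checks out (a quick sanity check, not new information):

```go
package main

import "fmt"

func main() {
	// MaxMemory defaulted to TOTAL_SYSTEM_MEMORY/8 and was reported as
	// 1999292928 bytes, so the implied total system memory is 8x that,
	// converted to MB here.
	const maxMemory = 1999292928
	totalMB := maxMemory * 8 / (1024 * 1024)
	fmt.Println(totalMB, "MB") // 15253 MB, i.e. roughly 16GB
}
```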

General notes for maintainers

  1. All of the problems/issues above need one or more actions. We can't rely on typed responses in this issue; we ultimately need to make fixes and/or respond with URLs to documentation.
  2. I have thoughts/ideas on potential actions, but I didn't want to slow down getting this message out with my limited window this evening, and I would love others to take the reins here so I don't become the blocker. I'm happy to engage/help if it's useful.
  3. Let's make sure we have a place where we're tracking the problems to solve and the actions we're going to take. I created https://github.com/ipfs/kubo/issues/9442 where we can do this, but I'm fine if it happens somewhere else.
Created at 28 minutes ago
issue comment
[Tracking issue] go-libp2p resource manager critical post release fixes

Created at 38 minutes ago
issue comment
Nightly build failed for ipget

ipget failed to build from the latest commit: https://github.com/ipfs/distributions/actions/runs/3599032009

Created at 1 hour ago
issue comment
Proposed refactor of kubo install section in Install

OpenBSD instructions have been confirmed to work on my computer; the FreeBSD instructions should work as well. Feel free to close [issue 1326](https://github.com/ipfs/ipfs-docs/issues/1326) when/if this is merged.

Created at 3 hours ago
pull request opened
Ci cd test/markdown link check cron
Created at 4 hours ago
issue comment
don't append "/" to prefix in queries

@Jorropo yes, exactly. For the DHT prefix lookup implementation I'm working on, only a prefix of the CID is queried for added privacy. Appending the slash breaks the implementation since the database key is (essentially) the CID. So if my CID is bafkreidlasregfrefuxtigrw74w56ybfsuyxnrsp3hafnxiicg2mjntcuu and I try to find keys starting with bafkreidlasregfrefuxtigrw74w56y in the database, I won't be able to find anything.

Created at 4 hours ago