Ehhh, not sure it's OK to assume camel can be interpreted as raw bytes, and I'm not convinced that some of those test cases necessarily tokenize the way the assertions expect. I appreciate the effort! But in my judgment this is too fragile.
The built-in profiler that Tendermint uses is misleading, as CPU time spent waiting on I/O is not accounted for.
Do you have fgprof profiles which demonstrate this is the case?
I agree with the "non rate limited public RPCs get abused" comments posted by others. We run HAProxy in front of our RPC clusters and allow 10 requests every ten seconds, per IP. We primarily run RPCs for DEXs and bridges. Without that rate limit, it is only a matter of time before our RPC endpoints get shredded by arbitrage bots. In that unrestricted, frenzied state, the average RPC user isn't able to access our endpoints.
If you offer an endpoint to the public internet without authentication or validation of inbound requests in any way, you're obviously gonna get rekt. It's not the responsibility of the program servicing those requests to manage flow control or throttling or rate limiting or any of that stuff directly. It can't be. It doesn't scale. It's your responsibility as an operator to manage those concerns at a layer well above the individual processes that actually serve requests. Rate limiting and all that stuff should occur well before requests hit the full nodes themselves. HAProxy is a good start to that!
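For reference, a per-IP limit like the one described above (10 requests per 10 seconds) can be sketched in HAProxy with a stick table. This is a minimal illustration, not our production config; the frontend/backend names, bind port, and backend address are placeholders:

```
frontend rpc_in
    bind *:26657
    # Track per-source-IP request rate over a 10-second window
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    # Deny clients exceeding 10 requests per 10 seconds
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 10 }
    default_backend rpc_nodes

backend rpc_nodes
    server node1 10.0.0.1:26657 check
```

The stick table keeps the counters in the proxy, so the full nodes behind it never see the excess traffic.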
We typically average about 1.2 million requests per day, per chain.
1.2M RPD = 50k RPH ≈ 833 RPM ≈ 13.9 RPS, which is basically trivial scale. You could support that with a bash script in a for loop. Any reasonable server should be able to yield 10k RPS as a baseline.
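A quick sanity check of that arithmetic (a throwaway snippet, not anything from the thread itself):

```python
rpd = 1_200_000          # requests per day
rph = rpd / 24           # requests per hour
rpm = rph / 60           # requests per minute
rps = rpd / 86_400       # requests per second (86,400 s in a day)

print(round(rph), round(rpm), round(rps, 1))  # 50000 833 13.9
```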