DarkClaw

Parallel JSON-RPC batch requests: memory not released


If I run getblock on all blocks in batches of, say, 10k-100k, from multiple CPUs in parallel, I start seeing massive RAM usage by xayad. It is still holding that RAM even after the querying is complete; basically, I need to restart xayad to get that RAM back. Is this known/expected behaviour? I do get back the desired results, although I get a (different) error for batches of more than ~100k blocks. So, for example, I am trying to run 3 parallel batch requests to xayad for 100k blocks each.

I am using Linux Mint 19 and am only running xayad (no Qt). My xaya.conf looks like this (no idea if this is crazy; I saw it on Stack Overflow somewhere):

rpcworkqueue=1600
rpcthreads=64

Here is an example of what is being sent as one batch (only the first 2 blocks shown):

[
  {
    "jsonrpc": "1.0",
    "id": 0,
    "method": "getblock",
    "params": {
      "blockhash": "e5062d76e5f50c42f493826ac9920b63a8def2626fd70a5cec707ec47a4c4651",
      "verbosity": 2
    }
  },
  {
    "jsonrpc": "1.0",
    "id": 1,
    "method": "getblock",
    "params": {
      "blockhash": "fb430a812903e3421bae1c872b8e03904b7e24c324333a9e495039785bab837f",
      "verbosity": 2
    }
  }
]
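For reference, a minimal Python sketch of how such a batch can be built and split into chunks before being sent in parallel. This is an illustration, not my exact client code; the RPC URL, port, and credentials in the comment are placeholders for whatever your xaya.conf configures:

```python
import json

def make_batch(block_hashes, verbosity=2):
    """Build a JSON-RPC batch: one getblock call, using named
    parameters, per block hash."""
    return [
        {
            "jsonrpc": "1.0",
            "id": i,
            "method": "getblock",
            "params": {"blockhash": h, "verbosity": verbosity},
        }
        for i, h in enumerate(block_hashes)
    ]

def chunks(seq, size):
    """Split a list into consecutive chunks of at most `size` items,
    e.g. to cap each batch at 10k requests."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# The two hashes from the example batch above.
hashes = [
    "e5062d76e5f50c42f493826ac9920b63a8def2626fd70a5cec707ec47a4c4651",
    "fb430a812903e3421bae1c872b8e03904b7e24c324333a9e495039785bab837f",
]
batch = make_batch(hashes)
print(json.dumps(batch, indent=2))

# Each chunk would then be POSTed to the node's RPC endpoint, e.g.
#   requests.post("http://127.0.0.1:<rpcport>",
#                 auth=(rpcuser, rpcpassword), json=one_chunk)
# with one such POST per worker process to get the parallel batches.
```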



Interesting - I'm not aware of a bug like that.  But in any case, that must then be a bug in upstream Bitcoin.  I'll take a quick look today to see if I can easily fix it - otherwise I'll let you know so you can open a bug against upstream Bitcoin.


I have now spent some time experimenting with batch requests. I set up a regtest node with 10k blocks and then sent multiple batch requests (although not in parallel), each covering all of those 10k blocks. With this, I do see memory usage increasing during each request (in my setup it roughly doubles compared to the level before the request), but it also drops afterwards. I think I saw a slight increase overall, but definitely the majority of the memory gets released when the RPC call is finished. I guess the increase I do see might be due to increased buffer or cache sizes.

My situation is of course still simpler than yours: I'm on regtest with empty blocks, only requested 10k blocks instead of 100k, and did not run the requests in parallel. So perhaps that makes the difference. What kind of memory usage are we talking about in your situation, by the way? How much does the daemon use before, during and after the requests? Also, do you see the same happening if you just send one big batch request rather than multiple in parallel?
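For measuring that, here is a quick Linux-only sketch that reads a process's resident set size from /proc; you would pass it xayad's pid (e.g. from pidof xayad) and sample it before, during and after the requests. The example at the bottom just measures the script's own process:

```python
import os

def rss_kib(pid):
    """Return the resident set size (VmRSS) of a process in KiB,
    read from /proc/<pid>/status (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # Line looks like "VmRSS:    12345 kB"; take the number.
                return int(line.split()[1])
    raise RuntimeError(f"VmRSS not found for pid {pid}")

# Example: sample our own process; for the daemon, substitute its pid.
print(rss_kib(os.getpid()), "KiB")
```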

If you have a GitHub account or are willing to create one, perhaps file an issue against Xaya first so we can debug and discuss this a bit more - even though the issue is likely also present in upstream Bitcoin.

2 hours ago, domob said:

If you have a Github account or are willing to create one, perhaps file an issue against Xaya first so we can debug and discuss this a bit more - even though the issue is likely also present in upstream Bitcoin.

Thanks, I made an account for this project and opened this issue, where I put some test results etc.: https://github.com/xaya/xaya/issues/72

2 hours ago, domob said:

I guess the increase I do see might be due to increased buffer or cache sizes, perhaps.

Yes, I would guess it is something like this.


3 hours ago, snailbrain said:

Hi Darkclaw.

Out of curiosity, are you working on something? 

I was just making the stats/richlist/etc. more efficient by adding the batch requests. It looks like the website is still down after that security issue, though: https://networkinfo.xaya-gaming.net/

