Handle IOTA-aware HTTP request with threadpool and epoll #569
Comments
could we use
Do you mean to use Fetch-Pair-Tips?
The response time of Fetch-Pair-Tips against IRI in devorg fluctuates; some requests even time out. Here are some test results, though they are hard to reproduce. The average time is currently up to 300000 ms. The test result before modification:
The test result with PR #567 and the thread pool size set to 10:
It is probably an IRI problem; maybe test it with HORNET?
Requests from TA to HORNET only work when TA is in proxy_passthrough mode.
The test results of Fetch-Pair-Tips from a private tangle. The test result before modification:
The test result with PR #567 and the thread pool size set to 8:
The test result with PR #567 and the thread pool size set to 16:
The test result with PR #567 and the thread pool size set to 32:
Run MHD using an internal thread (or thread pool) doing `epoll()` and eliminate locks for IOTA client services.

- Restrict the number of HTTP threads
- Set the size of the thread pool with the CLI option `--http_threads <value>`
- Create a new `iota_client_service_t` for each HTTP request to avoid race conditions and lock contention
- Rename the CLI option `ta_thread` to `http_threads` and make its argument required
- To preserve at least one physical processor, limit the thread count to the number of logical processors minus 1, or minus 2 if the CPU supports HyperThreading

close DLTcollab#592 DLTcollab#569
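A minimal sketch of the thread-count cap described in the last bullet, assuming POSIX `sysconf()` is available; the `smt_enabled` flag is a hypothetical stand-in for the HyperThreading check, whose actual detection mechanism is not shown here:

```c
#include <unistd.h>

/* Cap the HTTP thread count so at least one physical processor stays free.
 * With SMT/HyperThreading one physical core appears as two logical
 * processors, so two logical processors are reserved in that case. */
static long max_http_threads(int smt_enabled) {
  long logical = sysconf(_SC_NPROCESSORS_ONLN); /* online logical processors */
  long limit = logical - (smt_enabled ? 2 : 1);
  return limit > 0 ? limit : 1; /* always allow at least one worker */
}
```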
Currently, TA creates a new thread for each HTTP connection and uses a single `iota_client_service`, which leads to lock contention. PR #567 modifies the MHD daemon to support `epoll()` and a thread pool with a configurable pool size. It also creates a new `iota_client_service` for each HTTP request to prevent lock contention. To test the performance, I use JMeter to simulate 100 clients sending requests to TA simultaneously, each looping 10 times, for 100 * 10 = 1000 requests in total.
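For reference, a minimal sketch (not the PR's actual code) of running MHD with its internal `epoll()` event loop and a fixed-size thread pool; the port, pool size, and handler body are placeholder assumptions:

```c
#include <microhttpd.h>
#include <stdio.h>
#include <string.h>

#define TA_PORT 8000   /* placeholder port */
#define HTTP_THREADS 8 /* placeholder for the --http_threads value */

/* MHD invokes this from one of the pool threads. Initializing a fresh
 * iota_client_service_t per request here (instead of sharing a global,
 * locked one) is what avoids the lock contention described above.
 * Newer MHD releases use `enum MHD_Result` as the return type. */
static int request_handler(void *cls, struct MHD_Connection *connection,
                           const char *url, const char *method,
                           const char *version, const char *upload_data,
                           size_t *upload_data_size, void **con_cls) {
  const char *body = "{}"; /* placeholder response */
  struct MHD_Response *res = MHD_create_response_from_buffer(
      strlen(body), (void *)body, MHD_RESPMEM_PERSISTENT);
  int ret = MHD_queue_response(connection, MHD_HTTP_OK, res);
  MHD_destroy_response(res);
  return ret;
}

int main(void) {
  /* MHD_USE_EPOLL_INTERNAL_THREAD = internal polling driven by epoll();
   * MHD_OPTION_THREAD_POOL_SIZE sets the number of worker threads. */
  struct MHD_Daemon *d = MHD_start_daemon(
      MHD_USE_EPOLL_INTERNAL_THREAD, TA_PORT, NULL, NULL,
      &request_handler, NULL,
      MHD_OPTION_THREAD_POOL_SIZE, (unsigned int)HTTP_THREADS,
      MHD_OPTION_END);
  if (d == NULL) return 1;
  getchar(); /* serve until a key is pressed */
  MHD_stop_daemon(d);
  return 0;
}
```

The pool size is the knob being swept in the benchmarks below (8 to 64 threads); with `epoll()` each worker can multiplex many connections, so the pool no longer needs to grow with the connection count.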
The following tests send Generate Address to the TA in devorg.
The test result before modification:
The test result with the thread pool size set to 8:
The test result with the thread pool size set to 16:
The test result with the thread pool size set to 32:
The test result with the thread pool size set to 64: