
Handle IOTA-aware HTTP request with threadpool and epoll #569

Closed · YingHan-Chen opened this issue Apr 20, 2020 · 6 comments · Fixed by #567
YingHan-Chen (Contributor) commented Apr 20, 2020

Currently, TA creates a new thread for each HTTP connection and uses a single `iota_client_service`, which leads to lock contention.

PR #567 modifies the MHD daemon to use `epoll()` and a thread pool with a configurable pool size. It also creates a new `iota_client_service` for each HTTP request to prevent lock contention.
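
For reference, a minimal sketch of such an MHD setup (assuming libmicrohttpd ≥ 0.9.71, where handlers return `enum MHD_Result`; the port, pool size, and handler body are placeholders rather than the actual tangle-accelerator code):

```c
#include <microhttpd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Placeholder handler; the real one dispatches TA API routes. */
static enum MHD_Result handle_request(void *cls, struct MHD_Connection *conn,
                                      const char *url, const char *method,
                                      const char *version, const char *upload_data,
                                      size_t *upload_data_size, void **con_cls) {
  static const char page[] = "{\"status\":\"ok\"}";
  struct MHD_Response *res = MHD_create_response_from_buffer(
      strlen(page), (void *)page, MHD_RESPMEM_PERSISTENT);
  enum MHD_Result ret = MHD_queue_response(conn, MHD_HTTP_OK, res);
  MHD_destroy_response(res);
  return ret;
}

int main(void) {
  unsigned int pool_size = 16; /* configurable via a CLI option in the PR */
  struct MHD_Daemon *d = MHD_start_daemon(
      MHD_USE_EPOLL | MHD_USE_INTERNAL_POLLING_THREAD, /* epoll-backed pool */
      8000, NULL, NULL, &handle_request, NULL,
      MHD_OPTION_THREAD_POOL_SIZE, pool_size,
      MHD_OPTION_END);
  if (d == NULL) return EXIT_FAILURE;
  (void)getchar(); /* keep serving until a key is pressed */
  MHD_stop_daemon(d);
  return EXIT_SUCCESS;
}
```

With `MHD_USE_EPOLL | MHD_USE_INTERNAL_POLLING_THREAD` plus `MHD_OPTION_THREAD_POOL_SIZE`, MHD multiplexes all connections over `epoll()` across a fixed pool of threads instead of spawning one thread per connection.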

To test the performance, I used JMeter to simulate 100 clients sending requests to TA simultaneously, looping 10 times, for 100 × 10 = 1,000 requests in total.

The following tests send Generate Address requests to the TA in devorg.

The test result before modification:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 43083 | 609 | 95534 | 9393.76 | 0.000% | 2.17164 | 0.53 | 0.28 | 249 |
| TOTAL | 1000 | 43083 | 609 | 95534 | 9393.76 | 0.000% | 2.17164 | 0.53 | 0.28 | 249 |

The test result with the thread pool size set to 8:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 8604 | 645 | 29238 | 5121.11 | 0.000% | 8.97577 | 2.93 | 1.17 | 334 |
| TOTAL | 1000 | 8604 | 645 | 29238 | 5121.11 | 0.000% | 8.97577 | 2.93 | 1.17 | 334 |

The test result with the thread pool size set to 16:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 4287 | 583 | 19852 | 3725.49 | 0.000% | 18.17719 | 5.93 | 2.38 | 334 |
| TOTAL | 1000 | 4287 | 583 | 19852 | 3725.49 | 0.000% | 18.17719 | 5.93 | 2.38 | 334 |

The test result with the thread pool size set to 32:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 1893 | 584 | 16221 | 1862.31 | 0.000% | 28.4495 | 9.28 | 3.72 | 334 |
| TOTAL | 1000 | 1893 | 584 | 16221 | 1862.31 | 0.000% | 28.4495 | 9.28 | 3.72 | 334 |

The test result with the thread pool size set to 64:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 1838 | 601 | 14062 | 1730.77 | 0.000% | 36.24239 | 11.82 | 4.74 | 334 |
| TOTAL | 1000 | 1838 | 601 | 14062 | 1730.77 | 0.000% | 36.24239 | 11.82 | 4.74 | 334 |
howjmay (Contributor) commented Apr 20, 2020

Could we use getTransactionToApprove to replace generateAddress here?
The behaviour of generateAddress is a bit different from the other APIs.

YingHan-Chen (Contributor, Author)

> Could we use getTransactionToApprove to replace generateAddress here?
> The behaviour of generateAddress is a bit different from the other APIs.

Do you mean to use Fetch-Pair-Tips?

YingHan-Chen (Contributor, Author)

The speed of Fetch-Pair-Tips requests from TA to IRI in devorg fluctuates; some requests even time out.

Here are some test results, but they are hard to reproduce. The average time currently goes up to 300,000 ms.

The test result before modification:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 79175 | 926 | 291970 | 24948.96 | 0.000% | 1.15944 | 0.42 | 0.15 | 373 |
| TOTAL | 1000 | 79175 | 926 | 291970 | 24948.96 | 0.000% | 1.15944 | 0.42 | 0.15 | 373 |

The test result with PR #567 and the thread pool size set to 10:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 25285 | 1260 | 84012 | 16606.53 | 0.000% | 3.46638 | 1.26 | 0.46 | 373 |
| TOTAL | 1000 | 25285 | 1260 | 84012 | 16606.53 | 0.000% | 3.46638 | 1.26 | 0.46 | 373 |
jkrvivian (Member)

> The speed of Fetch-Pair-Tips requests from TA to IRI in devorg fluctuates; some requests even time out.

It might be an IRI problem; maybe test it with HORNET?
Btw, did you add some checks for the responses, such as checking the length of the received hashes and addresses, just in case? 😄
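
For reference, a minimal sketch of such a response check in C (the function name is hypothetical; 81 is the standard tryte length of an IOTA transaction hash or an address without checksum):

```c
#include <stddef.h>
#include <string.h>

#define NUM_TRYTES_HASH 81 /* tryte length of a hash or checksum-less address */

/* Hypothetical sanity check: a returned hash/address should have the
 * expected length and contain only characters from the tryte alphabet. */
static int is_valid_trytes(const char *trytes, size_t expected_len) {
  if (trytes == NULL || strlen(trytes) != expected_len) return 0;
  for (size_t i = 0; i < expected_len; ++i) {
    char c = trytes[i];
    if (c != '9' && (c < 'A' || c > 'Z')) return 0; /* alphabet: 9A-Z */
  }
  return 1;
}
```

A response whose hash fails `is_valid_trytes(hash, NUM_TRYTES_HASH)` would then be rejected before any further processing.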

YingHan-Chen added a commit to YingHan-Chen/tangle-accelerator that referenced this issue Apr 22, 2020

Run MHD with an internal thread (or thread pool) doing `epoll()`.
Set the size of the thread pool with the CLI option
--ta_thread <value>

Create a new `iota_client_service_t` for each HTTP request to avoid race conditions and lock contention.

Correct the CLI option ta_thread to a required argument.

close DLTcollab#569
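
A minimal sketch of the per-request client service described in this commit (the field and function names follow the cclient library that tangle-accelerator builds on, but the node host, port, and helper name are placeholders, not the actual TA code):

```c
#include "cclient/api/core/core_api.h"

/* Hypothetical helper: give each incoming HTTP request its own client
 * service, so no lock needs to be shared between worker threads. */
static void handle_one_request(void) {
  iota_client_service_t service = {0};
  service.http.path = "/";
  service.http.host = "localhost"; /* placeholder IRI/HORNET node */
  service.http.port = 14265;       /* default IRI API port */
  service.http.api_version = 1;
  service.serializer_type = SR_JSON;

  if (iota_client_core_init(&service) != RC_OK) {
    return; /* init failed; real code would report an error */
  }

  /* ... perform the cclient call that backs this request ... */

  iota_client_core_destroy(&service);
}
```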
YingHan-Chen self-assigned this Apr 23, 2020
YingHan-Chen (Contributor, Author)

> It might be an IRI problem; maybe test it with HORNET?
> Btw, did you add some checks for the responses, such as checking the length of the received hashes and addresses, just in case?

The requests from TA to HORNET only work correctly when TA is in proxy_passthrough mode.
There are more details in #582.

YingHan-Chen (Contributor, Author)

The test results of Fetch-Pair-Tips from a private tangle:

The test result before modification:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 2190 | 40 | 4835 | 562.79 | 0.000% | 41.36162 | 15.07 | 5.49 | 373 |
| TOTAL | 1000 | 2190 | 40 | 4835 | 562.79 | 0.000% | 41.36162 | 15.07 | 5.49 | 373 |

The test result with PR #567 and the thread pool size set to 8:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 262 | 29 | 563 | 123.8 | 0.000% | 235.96036 | 85.95 | 31.34 | 373 |
| TOTAL | 1000 | 262 | 29 | 563 | 123.8 | 0.000% | 235.96036 | 85.95 | 31.34 | 373 |

The test result with PR #567 and the thread pool size set to 16:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 93 | 28 | 322 | 50.41 | 0.000% | 471.47572 | 171.74 | 62.62 | 373 |
| TOTAL | 1000 | 93 | 28 | 322 | 50.41 | 0.000% | 471.47572 | 171.74 | 62.62 | 373 |

The test result with PR #567 and the thread pool size set to 32:

| Label | # Samples | Average (ms) | Min (ms) | Max (ms) | Std. Dev. | Error % | Throughput (req/s) | Received KB/sec | Sent KB/sec | Avg. Bytes |
|---|---|---|---|---|---|---|---|---|---|---|
| HTTP Request | 1000 | 50 | 28 | 288 | 27.58 | 0.000% | 641.43682 | 233.65 | 85.19 | 373 |
| TOTAL | 1000 | 50 | 28 | 288 | 27.58 | 0.000% | 641.43682 | 233.65 | 85.19 | 373 |
YingHan-Chen added a commit to YingHan-Chen/tangle-accelerator that referenced this issue May 2, 2020

Run MHD with an internal thread (or thread pool) doing `epoll()` and eliminate locks for IOTA client services.

Set the size of the thread pool with the CLI option
--tpool_size <value>

Create a new `iota_client_service_t` for each HTTP request to avoid race conditions and lock contention.

Correct the CLI option ta_thread to a required argument.

close DLTcollab#569
YingHan-Chen added a commit to YingHan-Chen/tangle-accelerator that referenced this issue May 12, 2020

Run MHD with an internal thread (or thread pool) doing `epoll()` and eliminate locks for IOTA client services.

Restrict the number of HTTP threads.

Set the size of the thread pool with the CLI option
--http_threads <value>

Create a new `iota_client_service_t` for each HTTP request to avoid race conditions and lock contention.

Rename the CLI option `ta_thread` to `http_threads` and make it require an argument.

To preserve at least one physical processor, limit the thread count to the number of logical processors minus 1 (or minus 2 when the CPU supports Hyper-Threading).

close DLTcollab#592 DLTcollab#569
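
A hedged sketch of that thread-cap rule (the Hyper-Threading flag would have to be detected elsewhere, e.g. by comparing physical and logical core counts; the helper name is hypothetical):

```c
#include <unistd.h>

/* Cap the HTTP thread count so at least one physical core stays free:
 * reserve one logical processor, or two when Hyper-Threading doubles them. */
static long max_http_threads(int hyperthreading_enabled) {
  long logical = sysconf(_SC_NPROCESSORS_ONLN); /* online logical processors */
  long reserved = hyperthreading_enabled ? 2 : 1;
  long limit = logical - reserved;
  return limit > 0 ? limit : 1; /* always allow at least one thread */
}
```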