
The inference speed is slower than yolov3 #20

Open
blackwool opened this issue Jul 20, 2020 · 3 comments

Comments

@blackwool

I tested this repo with a 416×416 input size on Windows 10 with an NVIDIA GTX 1060 GPU.
Average inference time is about 112 ms per image.
But when I use the standard yolov3.cfg to detect ordinary objects, each image takes about 38 ms on average.
Why is this network so much slower than the yolov3 network?

@blackwool blackwool changed the title from "The inference speed is slow than yolov3" to "The inference speed is slower than yolov3" Jul 20, 2020
@Bardielz

Hi, in the issue "the test speed", the author says that on a 2080 Ti with input size 608x608, the image ./test_imgs/input/selfie.jpg is predicted in 6.259 milliseconds. Maybe you did not enable options such as "GPU=1" in the Makefile, so your test actually ran on the CPU? That is only my guess and may not be true, but you could give it a try.
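A quick way to rule out a CPU-only build, as suggested above, is to check the build flags before compiling. A minimal sketch of that check, assuming the standard darknet-style Makefile layout where flags appear as `GPU=1` / `CUDNN=1` lines (the sample file here is hypothetical, created only so the snippet is self-contained):

```python
import re
import tempfile
import os

def build_flags(makefile_text):
    """Return a dict of NAME=VALUE flags found at line starts of a Makefile."""
    return dict(re.findall(r"^(\w+)=(\d+)$", makefile_text, flags=re.M))

# Hypothetical Makefile contents standing in for the repo's real one.
sample = "GPU=1\nCUDNN=1\nOPENCV=0\n"
path = os.path.join(tempfile.mkdtemp(), "Makefile")
with open(path, "w") as f:
    f.write(sample)

with open(path) as f:
    flags = build_flags(f.read())

# If either flag is 0 or missing, inference silently falls back to the CPU.
print("GPU build:", flags.get("GPU") == "1" and flags.get("CUDNN") == "1")
```

Running the same check against the repo's actual Makefile before `make` would confirm whether the 112 ms timing came from a CPU build.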

@blackwool
Author

I did build with GPU=1 in the Makefile, and it calls the cuDNN API when it runs.
I printed the time spent in each layer and found that the convolutional layers with a large number of groups account for most of the time.
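The per-layer timing above is consistent with a known effect: grouped convolutions cut the theoretical multiply-accumulate count by the group factor, but each group becomes a tiny GEMM, and cuDNN kernels for many-group convolutions are often memory- and launch-overhead-bound, so they can run slower than a dense layer with far more FLOPs. A minimal sketch of the theoretical cost (the layer shapes here are hypothetical, not taken from this repo's cfg):

```python
def conv2d_macs(h, w, c_in, c_out, k, groups=1):
    """Multiply-accumulate count for one conv layer.

    Each of the h*w output positions of each of the c_out output
    channels sums over a k*k window of c_in/groups input channels.
    """
    return h * w * c_out * k * k * (c_in // groups)

# Hypothetical shapes for a 416x416 network stage (not from the repo's cfg).
dense = conv2d_macs(52, 52, 256, 256, 3, groups=1)
grouped = conv2d_macs(52, 52, 256, 256, 3, groups=256)  # depthwise-style

print(f"dense:   {dense:,} MACs")
print(f"grouped: {grouped:,} MACs")
print(f"theoretical speedup: {dense // grouped}x")
```

Despite the large theoretical reduction, the measured wall-clock time of the grouped layers can exceed the dense ones, which would explain 112 ms here versus 38 ms for standard YOLOv3 even on the same GPU.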

@w840401

w840401 commented Aug 13, 2021

How did you train successfully?
