
Is it possible to support real-time transcription with websockets? #25

Open
joaogabrieljunq opened this issue Feb 21, 2024 · 3 comments

@joaogabrieljunq

I would like to know whether it is possible to use a CTranslate2-hosted model pipeline connected to a websocket service like Twilio to receive audio streams, similar to https://github.com/ufal/whisper_streaming or https://github.com/collabora/WhisperLive, which use faster-whisper. Is this possible now, or how could it be implemented if I need to dive into the repository code?

I want to code and test this scenario to build a multi-client server that transcribes multiple audio streams at the same time on GPU.

@shashikg
Owner

Hi @joaogabrieljunq

This project is more inclined toward offline ASR, though I have some plans to work on streaming ASR with Whisper in the future. Streaming ASR with Whisper is tricky, with lots of things to consider: latency, concurrency, and so on. I am very skeptical that Whisper will be beneficial for streaming ASR at all. But again, unless I try, I can't say.

But you might want to have a look at NVIDIA's RIVA models; they provide a very solid streaming ASR pipeline (comparable to many commercial streaming ASR services). https://docs.nvidia.com/deeplearning/riva/user-guide/docs/asr/asr-overview.html#streaming-recognition

@joaogabrieljunq
Author

joaogabrieljunq commented Feb 21, 2024

Thank you for your response @shashikg. I am trying to test something like https://github.com/ufal/whisper_streaming, but most streaming implementations don't have batch inference, and most batch-inference projects don't support audio streams. By the way, thanks for the link to NVIDIA's RIVA models; I'll definitely look into that as an alternative.

@amdrozdov

Hello @joaogabrieljunq, I did a small PR that allows in-memory audio chunks. With this mode you can do basic real-time ASR, but you will need to implement a custom VAD and hypothesis buffer on your side.
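For the "hypothesis buffer" part, a common approach is a LocalAgreement-style policy like the one used in ufal/whisper_streaming: only words that two consecutive hypotheses agree on are committed as final output. A minimal sketch, with names and word-level (rather than token-level) matching chosen for illustration:

```python
# Hypothesis buffer sketch (LocalAgreement-2 policy, word-level).
# Each call passes the model's latest hypothesis for the growing audio
# window; only the stable prefix shared with the previous hypothesis
# is committed and emitted.


class HypothesisBuffer:
    def __init__(self):
        self.previous = []     # last hypothesis, as a list of words
        self.committed = []    # words already considered final

    def update(self, hypothesis):
        """Commit the longest common prefix of this and the previous hypothesis.

        Returns only the newly finalized words.
        """
        words = hypothesis.split()
        agreed = []
        for prev_word, new_word in zip(self.previous, words):
            if prev_word != new_word:
                break
            agreed.append(new_word)
        # Words beyond what was already committed are newly final.
        new_final = agreed[len(self.committed):]
        self.committed.extend(new_final)
        self.previous = words
        return new_final
```

Because Whisper may revise the tail of its transcript as more audio arrives, this policy trades a little latency (a word must survive two decodes) for output that never needs to be retracted.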
