March 2026
Self-Host Your Own Whisper Server with Dictify
Dictify's cloud-whisper backend lets you offload transcription to a remote server. This is useful when the machine you dictate on doesn't have the resources to run larger Whisper models locally and you'd rather do the heavy lifting on a box you control.
Quick setup
```bash
pip install openai-whisper flask gunicorn numpy
python -c "from glace.cloud_whisper import create_whisper_server_app; create_whisper_server_app().run(host='0.0.0.0', port=8080)"
```
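Flask's built-in server is fine for testing; for anything longer-lived you can serve the same app factory with the gunicorn that's already in the install, e.g. `gunicorn -w 1 -b 0.0.0.0:8080 'glace.cloud_whisper:create_whisper_server_app()'` (gunicorn supports calling an app factory like this). A single worker keeps one shared in-memory model cache instead of one copy per process.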
Or with Docker:
```bash
docker build -t glace-whisper-server -f server/Dockerfile .
docker run -p 8080:8080 glace-whisper-server
```
Then in your Dictify config:
```json
{
  "backend": "cloud-whisper",
  "cloud_whisper_url": "http://your-server:8080/transcribe"
}
```
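Before pointing Dictify at the server, it's worth sending a quick test request. The sketch below is only an illustration: the multipart field name (`audio`) and the response key (`text`) are assumptions, so adjust them to match your server's actual request format.

```python
# Quick manual test of the /transcribe endpoint.
# NOTE: the multipart field name ("audio") and the response key ("text")
# are assumptions -- adjust them to match your server's actual API.
import requests

SERVER_URL = "http://your-server:8080/transcribe"

with open("sample.wav", "rb") as f:
    resp = requests.post(SERVER_URL, files={"audio": f})

resp.raise_for_status()
print(resp.json().get("text", resp.text))
```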
The server loads models on demand and caches them in memory, so the first request that uses a new model is slow while the weights are downloaded and loaded; subsequent requests reuse the cached model and are fast.
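If you want to replicate or tune that behaviour, the pattern is simple: keep a dictionary of loaded models keyed by name and call `whisper.load_model` only on a cache miss. The sketch below is a minimal illustration of that pattern, not Dictify's actual server code; the request shape (an `audio` file upload plus an optional `model` field) is assumed.

```python
# Minimal sketch of on-demand model loading with an in-memory cache.
# Illustrative only -- not Dictify's actual server; the request format
# (file upload named "audio", optional "model" field) is assumed.
import tempfile

import whisper
from flask import Flask, jsonify, request

app = Flask(__name__)
_models = {}  # model name -> loaded whisper model


def get_model(name: str):
    # Load each model once; later requests reuse the cached instance.
    if name not in _models:
        _models[name] = whisper.load_model(name)  # slow on first use
    return _models[name]


@app.route("/transcribe", methods=["POST"])
def transcribe():
    model = get_model(request.form.get("model", "base"))
    # Whisper's transcribe() takes a file path, so spool the upload to disk.
    with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
        request.files["audio"].save(tmp.name)
        result = model.transcribe(tmp.name)
    return jsonify({"text": result["text"]})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```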