The title is dense and the paper is short, but the demo is outstanding: (https://huggingface.co/spaces/aiola/whisper-ner-v1). The sample audio is submitted with "entity labels" set to "football-club, football-player, referee", and WhisperNER returns Arsenal and Juventus tagged as football-club. They also suggest "personal information" as a label to try on your own audio.
Impressive, very impressive. I wonder if it could listen for credit cards or passwords.
Is there any reason this would work better, or be needed, compared to taking audio and 1. doing ASR (with Whisper, for instance), then 2. applying an NER model to the transcribed text?
There are open source NER models that can identify any specified entity type (https://universal-ner.github.io/, https://github.com/urchade/GLiNER). I don't see why this WhisperNER approach would be any better than doing ASR with whisper and then applying one of these NER models.
This works better because it gives the decoder (which generates the text) a second set of conditions to condition its generation on. Suppose instead of their demo you are doing speech-to-text for oncologists. Out of the box, Whisper is terrible here because the words are new and rare, especially in its YouTube-heavy training data. If you just run ASR and then NER, Whisper will generate common words in place of cancer drug names, and the downstream NER model has nothing left to recover. If you instead condition generation on topical entities, the generation space is constrained and performance improves, especially when you can tell the model what all the drug names are because you have a list (https://www.cancerresearchuk.org/about-cancer/treatment/drug...)
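A toy illustration of why a known entity list helps. This is not how WhisperNER conditions its decoder; it's just a fuzzy-matching stand-in (with hypothetical drug names) to show that a constrained vocabulary lets rare terms beat similar-sounding common words:

```python
import difflib

# Hypothetical list of oncology drug names we want to bias output toward.
DRUG_LEXICON = ["nivolumab", "pembrolizumab", "trastuzumab", "olaparib"]

def snap_to_lexicon(word, lexicon, cutoff=0.7):
    """Map a (possibly garbled) ASR token to the closest known entity, if any."""
    matches = difflib.get_close_matches(word.lower(), lexicon, n=1, cutoff=cutoff)
    return matches[0] if matches else word

# A garbled transcription snaps to the rare drug name...
print(snap_to_lexicon("pembrolizumad", DRUG_LEXICON))  # pembrolizumab
# ...while ordinary words pass through untouched.
print(snap_to_lexicon("scan", DRUG_LEXICON))           # scan
```

An end-to-end model can apply this kind of bias inside decoding rather than as a post-hoc correction, which is the whole point.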
I think one of the biggest advantages is the security/privacy benefit — you can see in the demo that the model can mask entities instead of tagging. This means that instead of transcribing and then scrubbing sensitive info, you can prevent the sensitive info from ever being transcribed.
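For comparison, the cascaded alternative has to scrub after the fact. A minimal sketch of post-hoc masking, assuming a hypothetical NER step that returns character spans:

```python
def mask_entities(text, spans, token="[MASKED]"):
    """Replace (start, end, label) character spans with a mask token.

    Spans are applied right-to-left so earlier offsets stay valid.
    Note: the raw transcript, sensitive text included, exists in memory
    before masking, which is exactly the exposure an end-to-end model avoids.
    """
    for start, end, _label in sorted(spans, reverse=True):
        text = text[:start] + token + text[end:]
    return text

transcript = "My card number is 4111 1111 1111 1111, thanks."
spans = [(18, 37, "credit-card")]  # offsets as a hypothetical NER model might return
print(mask_entities(transcript, spans))
# My card number is [MASKED], thanks.
```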
Another potential benefit is lower latency. The paper doesn't specifically mention latency, but it seems to be on par with normal Whisper, so you save all the time it would normally take to do entity tagging as a separate step. That's a big deal for real-time applications.
Almost definitely. You can think of there being a kind of triangle inequality for cascading systems: manually combined pipelines almost always perform worse than an end-to-end model, given comparable data and model capacity. Put another way, you have tied the model's hands by forcing it to bottleneck through an intermediate representation you chose.
"The model processes audio files and simultaneously applies NER to tag or mask specific types of sensitive information directly within the transcription pipeline. Unlike traditional multi-step systems, which leave data exposed during intermediary processing stages, Whisper-NER eliminates the need for separate ASR and NER tools, reducing vulnerability to breaches."
On a similar note, I have a request for the HN community: can anyone recommend a low-latency NER model or service?
I'm building an assistant that gives information on local medical providers matching your criteria. I'm struggling with query expansion and entity recognition. For any incoming query, I want to run NER for medical terms (which are limited in scope and pre-determined), and then do query rewriting and expansion.
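Since the terms are limited and pre-determined, a plain gazetteer match may be fast enough before reaching for a model at all. A sketch with a hypothetical vocabulary, using a single compiled regex for sub-millisecond lookups:

```python
import re

# Hypothetical pre-determined medical vocabulary.
MEDICAL_TERMS = {"cardiologist", "dermatologist", "mri", "physical therapy"}

# One compiled alternation; longest terms first so multi-word terms win.
_pattern = re.compile(
    r"\b("
    + "|".join(re.escape(t) for t in sorted(MEDICAL_TERMS, key=len, reverse=True))
    + r")\b",
    re.IGNORECASE,
)

def tag_medical_terms(query):
    """Return (term, start, end) tuples for every dictionary hit in the query."""
    return [(m.group(1).lower(), m.start(), m.end()) for m in _pattern.finditer(query)]

print(tag_medical_terms("Find a dermatologist near me who does MRI referrals"))
# [('dermatologist', 7, 20), ('mri', 38, 41)]
```

The matched terms can then seed the query rewriting and expansion step; a learned NER model only needs to handle whatever the dictionary misses.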