3 min read · from Machine Learning

easyaligner: Forced alignment with GPU acceleration and flexible text normalization (compatible with all w2v2 models on HF Hub) [P]



I have built easyaligner, a forced alignment library designed to be performant and easy to use.

Having preprocessed hundreds of thousands of hours of audio and text for training speech-to-text models, I found that the available open-source forced alignment libraries often lacked convenience features. In particular, it was important for our tooling to be able to:

  • Handle cases where the transcript does not cover all of the spoken content in the audio (by automatically detecting the relevant audio region).
  • Handle some irrelevant speech at the start/end of audio segments to be aligned.
  • Ideally handle long segments of audio and text without the need for chunking.
  • Normalize ground-truth texts for better alignment quality, while maintaining a mapping between the normalized text and the original text, so that the original text's formatting can be recovered after alignment.
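The last point, normalizing text while keeping a mapping back to the original, is the key trick for recovering the original formatting after alignment. As a rough sketch of the general idea (this is illustrative Python, not easyaligner's actual API, whose normalization is far more flexible):

```python
def normalize_with_mapping(text):
    """Lowercase the text and drop punctuation, recording for each
    kept (normalized) character the index of the character it came
    from in the original text. Hypothetical sketch of the technique."""
    normalized_chars = []
    mapping = []  # index in normalized string -> index in original string
    for i, ch in enumerate(text):
        if ch.isalnum() or ch.isspace():
            normalized_chars.append(ch.lower())
            mapping.append(i)
    return "".join(normalized_chars), mapping


original = 'He said: "Hello, World!"'
norm, mapping = normalize_with_mapping(original)
# norm == "he said hello world"
# After aligning against `norm`, a match on norm[8:13] ("hello") maps
# back to the original span with its punctuation and casing intact:
start, end = 8, 13
recovered = original[mapping[start]:mapping[end - 1] + 1]  # "Hello"
```

The aligner only ever sees the normalized string; the character-index mapping is what lets you project word timestamps back onto the original, fully formatted text.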

easyaligner is an attempt to package all of these workflow improvements into a forced alignment library.

The documentation has tutorials for different alignment scenarios, and for custom text processing. The aligned outputs can be segmented at any level of granularity (sentence, paragraph, etc.), while preserving the original text’s formatting.
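Once you have word-level timestamps, segmenting at coarser granularity is a matter of grouping consecutive words and taking the first start time and last end time of each group. A minimal sketch, assuming word alignments as `(word, start, end)` tuples and using sentence-final punctuation as the boundary (easyaligner's own segmentation API is more general than this):

```python
def group_into_sentences(word_alignments):
    """Group word-level alignments into sentence-level segments,
    splitting on sentence-final punctuation in the original words.
    Illustrative only; not easyaligner's actual interface."""
    segments, current = [], []
    for word, start, end in word_alignments:
        current.append((word, start, end))
        # Strip a trailing quote so '...end."' still counts as sentence-final.
        if word.rstrip('"').endswith((".", "!", "?")):
            text = " ".join(w for w, _, _ in current)
            segments.append((text, current[0][1], current[-1][2]))
            current = []
    if current:  # trailing words with no closing punctuation
        text = " ".join(w for w, _, _ in current)
        segments.append((text, current[0][1], current[-1][2]))
    return segments


words = [("Hello,", 0.0, 0.4), ("world!", 0.5, 0.9), ("Bye.", 1.0, 1.3)]
# -> [("Hello, world!", 0.0, 0.9), ("Bye.", 1.0, 1.3)]
```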

The forced alignment backend uses PyTorch's forced alignment API with a GPU-based implementation of the Viterbi algorithm. It's both fast and memory-efficient, handling hours of audio and text in one pass without the need to chunk the audio. I've adapted the API to support emission extraction from all wav2vec2 models on the Hugging Face Hub. You can force align audio and text in any language, as long as there's a w2v2 model on the HF Hub that can transcribe the language.
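To give a feel for what the Viterbi step is doing, here is a toy pure-Python version that monotonically aligns T frames of emission log-probabilities to a token sequence (omitting the CTC blank handling that the real `torchaudio` implementation includes, and running on CPU rather than GPU):

```python
def viterbi_forced_align(log_probs, tokens):
    """Align each of T frames to one token index such that indices are
    non-decreasing and advance by at most one per frame. log_probs is a
    T x V list of per-frame log-probabilities over the vocabulary.
    Toy sketch without CTC blanks; not easyaligner's actual backend."""
    T, N = len(log_probs), len(tokens)
    NEG = float("-inf")
    dp = [[NEG] * N for _ in range(T)]    # dp[t][j]: best score, frame t on token j
    back = [[0] * N for _ in range(T)]    # backpointer to previous token index
    dp[0][0] = log_probs[0][tokens[0]]
    for t in range(1, T):
        for j in range(N):
            stay = dp[t - 1][j]                       # repeat the same token
            move = dp[t - 1][j - 1] if j > 0 else NEG  # advance to next token
            if move > stay:
                dp[t][j], back[t][j] = move + log_probs[t][tokens[j]], j - 1
            else:
                dp[t][j], back[t][j] = stay + log_probs[t][tokens[j]], j
    # Backtrace from the final token at the last frame.
    path, j = [], N - 1
    for t in range(T - 1, -1, -1):
        path.append(j)
        j = back[t][j]
    return path[::-1]  # one token index per frame


# Two tokens over four frames: the emissions clearly switch halfway.
emissions = [[0.0, -10.0], [0.0, -10.0], [-10.0, 0.0], [-10.0, 0.0]]
viterbi_forced_align(emissions, [0, 1])  # -> [0, 0, 1, 1]
```

The frame-to-token path is what gets converted into per-word start/end timestamps; the dynamic program is embarrassingly parallel over tokens within each frame, which is what makes a GPU implementation pay off on hours-long inputs.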

easyaligner supports aligning from ground-truth transcripts as well as from ASR model outputs. Check out its companion library easytranscriber for an example where easyaligner is used as a backend to align ASR outputs. It works the same way as WhisperX, but transcribes 35% to 102% faster, depending on the hardware.

The documentation: https://kb-labb.github.io/easyaligner/
Source code on Github (MIT licensed): https://github.com/kb-labb/easyaligner

submitted by /u/mLalush


Tagged with

#easyaligner
#forced alignment
#GPU acceleration
#text normalization
#w2v2 models
#PyTorch
#Viterbi algorithm
#ASR model outputs