Keywords: AI and Linguistics, Automation, ASR model, ASR system, OpenAI Whisper, Whisper models, Sociolinguistic datasets
Borders Scots is by no means an easy language to understand for someone who has only been taught Standard English in school.
Only a couple of years ago, one had to be born into a family whose native language was Borders Scots, or had to have spent a good while in the region, to immerse fully in the pronunciation and rhythm of this accent. Nowadays, a relatively simple program manages to transcribe spoken words into understandable text, provided that the audio is of high quality, the speech is structured, and the number of speakers (preferably female) is limited.
This blog post delves into the matter of automatic speech recognition and tries to uncover its functionality, prospects, and consequences. Since I am neither a linguist nor a sociolinguist, I rely heavily on the presentation given by Junior Professor Doctor Andreas Weilinghoff from the University of Koblenz.
What do we understand by ASR?
Automatic Speech Recognition, also known as ASR, is the use of Machine Learning or Artificial Intelligence (AI) technology to process human speech into readable text. The field has grown exponentially over the past decade, with ASR systems popping up in popular applications we use every day such as TikTok and Instagram for real-time captions, Spotify for podcast transcriptions, Zoom for meeting transcriptions, and more.
Foster, Kelsey: What is Automatic Speech Recognition?
Generally, we differentiate between two approaches. On the one hand, there is the Hidden Markov Model (HMM). On the other hand, we now have End-to-End approaches. The HMM system is generally considered outdated and outmatched by the newer End-to-End approaches.
HMM/GMM system
The Hidden Markov Model (HMM) and Gaussian Mixture Models (GMM) work by taking an audio segment and determining when particular words occur in it. To make these predictions about when a word appears, the system uses a combination of a lexicon model, an acoustic model and a language model. These models and their tasks are required because the HMM needs the incoming data to be force-aligned.
- The lexicon model knows how words are pronounced phonetically.
- The acoustic model predicts which phoneme is spoken in an audio segment.
- The language model calculates the likelihood of a sequence of words.
In combination, these three models take apart each phoneme in an audio segment, calculate the probability of the following phoneme, and produce a text.
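To make the interplay of these models a bit more tangible, here is a deliberately tiny Python sketch of the underlying scoring idea: an acoustic score and a language-model score are combined, and the word with the best combined score wins. All probabilities are invented for illustration and have nothing to do with a real HMM/GMM system.

# Toy illustration of the scoring idea behind HMM/GMM decoding:
# an acoustic score ("how well does this word match the audio?") is
# combined with a language-model score ("how likely is this word here?").
# All probabilities below are made up for the example.
import math

# Hypothetical acoustic scores for one audio segment (log probabilities).
acoustic_score = {"town": math.log(0.6), "tune": math.log(0.3), "tuna": math.log(0.1)}

# Hypothetical language-model scores for the word before "of Selkirk".
language_score = {"town": math.log(0.7), "tune": math.log(0.2), "tuna": math.log(0.1)}

# The decoder picks the word that maximises the combined score.
best_word = max(acoustic_score, key=lambda w: acoustic_score[w] + language_score[w])
print(best_word)  # -> "town"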
If you want to know more about the statistical nature of HMM and GMM systems, you can follow this link to an ASR lecture at the University of Edinburgh.
The biggest flaw of the HMM/GMM system is that each ASR model must be trained independently and that a complete custom phonetic set must be created for each language added.
End-to-End Deep Learning Approach
Contrary to the HMM model, End-to-End Deep Learning does not require the incoming data to be force-aligned. As the name already reveals, this approach relies on a single neural network that is trained on audio signals and their corresponding transcriptions. This training enables an End-to-End ASR system to output text directly from an incoming audio signal. These models are much easier to train and implement than HMM models, and due to the enormous amount of data currently used in training, End-to-End approaches are also more accurate than traditional models.
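For readers who want to see what "a single neural network trained on audio signals and their transcriptions" can look like in code, here is a minimal, hedged sketch of one common End-to-End recipe: a small recurrent encoder trained with CTC loss in PyTorch. This is explicitly not Whisper's architecture; the model, the toy vocabulary and the random data are assumptions made purely for illustration.

# Minimal sketch of an End-to-End ASR training step (CTC-based, not Whisper):
# one network maps audio features to character probabilities and is trained
# directly on (audio, transcript) pairs, without any forced alignment.
import torch
import torch.nn as nn

VOCAB = ["<blank>", "a", "b", "c", " "]  # toy character vocabulary

class TinySpeechModel(nn.Module):
    def __init__(self, n_mels=80, hidden=128, n_chars=len(VOCAB)):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_chars)

    def forward(self, mel):                      # mel: (batch, time, n_mels)
        encoded, _ = self.encoder(mel)
        return self.classifier(encoded)          # (batch, time, n_chars)

model = TinySpeechModel()
ctc_loss = nn.CTCLoss(blank=0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One fake training pair: 200 frames of mel features and the transcript "ab c".
mel = torch.randn(1, 200, 80)
transcript = torch.tensor([[1, 2, 4, 3]])        # indices into VOCAB
log_probs = model(mel).log_softmax(-1).transpose(0, 1)  # CTC expects (time, batch, chars)
loss = ctc_loss(log_probs, transcript,
                input_lengths=torch.tensor([200]),
                target_lengths=torch.tensor([4]))
loss.backward()
optimizer.step()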
The two most well-known providers of ASR with End-to-End approaches are probably IBM Watson and OpenAI Whisper. However, whereas IBM Watson still partially relies on separate acoustic and language models, OpenAI Whisper works purely End-to-End.
Since the End-to-End approach currently outperforms all other approaches, this blog post focuses from here on solely on Whisper by OpenAI.
What is Whisper?
Whisper is an ASR system developed by OpenAI and released in 2022. In contrast to HMM systems, it uses encoder and decoder blocks within a single neural network to learn and predict which text matches a given audio input. It is a multilingual system that was trained on 680,000 hours of speech via large-scale weak supervision. Besides turning audio into text, the system also applies capitalization and punctuation and filters out background noise. As we will see, some of these features can be problematic under certain circumstances.
Another feature of Whisper is that it can be executed locally. This allows audio data to be analysed without an internet connection and thus offers a degree of data protection.
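As a small illustration of such a local run, the openai-whisper Python package exposes the same models as the command line tool described at the end of this post. The file name below is just a placeholder, and the medium model can be swapped for a smaller one on weaker hardware.

# Minimal local transcription with the openai-whisper package
# (installed via "pip install -U openai-whisper"; FFmpeg must also be installed).
import whisper

model = whisper.load_model("medium")        # "tiny", "base", "small", "medium", ...
result = model.transcribe("interview.wav")  # placeholder file name
print(result["text"])                       # the plain transcription
print(result["language"])                   # the language Whisper detected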
Remember the audio file from the beginning of this blog post? Here are the results from four different Whisper models:
——————————————————————————
Tiny: Dde ni’ri o’r pdaedd ro’n pdaedd y shusa. Ondau dod y fν dryerd yn unitsio. Irdym vaeth nerdyn Informacoft mae’n hun yn ni’r Saelet cynraen.
Base: gir i’w enwedd gollen yn seiberalg eu rozwbar a y bydgistrych a ffbrwy sy mae wrth Targ Eiffel
Small: The history of the Tuna Selker, where I come from, is that it had a lot of shoemakers in it, and say the word suiter where capital S has come to mean a native of Selker, click myself.
Medium: Mae’r ystodau o’r tŷn o Selkirk, lle rydw i’n mynd amdano, yw bod hi’n cael llawer o sylfaenwyr yn ymlaen, a dweud y ffordd sy’n sylfaenwyr lle mae capital S wedi dod i mi yn ymlaen o Selkirk, fel fi.
——————————————————————————
Interestingly, none of the models was initially able to detect the language correctly. For some reason, they all assumed the audio to be in Welsh. This is odd, since I had already tested the small model on the same audio file before, and it had no problem identifying the language as English.
Here is what Whisper manages to achieve if the right language is provided in advance:
——————————————————————————
Tiny: The history of the Tune of Selkirk where I come from is that there is a lot of shoemakers in it and say the word Sutur we are capitalists has come to mean a native of Selkirk like myself.
Base: The history of the Tunis Selkirk, where I come from, is that it had a lot of shoomikers in it and say the word sutra where capital S has come to mean a native of a Selkirk like myself.
Small: The history of the tune of Selkirk where I come from is that it had a lot of shoemakers in it and say the word Souterware Capital S has come to mean a native of Selkirk like myself.
Medium: The history of the town of Selkirk where I come from is that it had a lot of shoemakers in it and say the word suitor where a capital S has come to mean a native of Selkirk like myself.
——————————————————————————
If provided with the right language, the medium model manages to transcribe the whole audio file without any mistakes. Not only was Whisper very quick in transcribing the original audio, it also suffers from no fatigue, unlike humans. This consistency and speed are probably its main advantages compared to humans.
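If you know the language of a recording in advance, it is therefore worth telling Whisper about it instead of relying on automatic detection. In the openai-whisper Python package this is a single parameter; on the command line it is the --language flag. The file name below is again a placeholder.

import whisper

model = whisper.load_model("medium")
# Skip automatic language detection and force English decoding.
result = model.transcribe("selkirk_recording.wav", language="en")
print(result["text"])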
Influences on the Word-Error-Rate (WER) and inherent bias
A key concept in evaluating the performance of an ASR system is the so-called Word Error Rate (WER). It is a simple equation that gives a broad indication of the accuracy of an ASR system.
Word Error Rate (WER) = (Substitutions + Insertions + Deletions) / Number of words in the reference transcription
Substitutions mean that the ASR output contains a wrong word in place of a word from the reference, so the word has to be replaced.
Insertions mean that the ASR output contains an extra word that does not occur in the reference and has to be removed.
Deletions mean that the ASR output misses a word from the reference, so the word has to be added.
In all three cases, the ASR output is compared against a “perfect” (ground-truth) reference transcription.
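For anyone who wants to compute the WER themselves, here is a small self-contained Python sketch based on the standard word-level edit distance. In practice one would normalise casing and punctuation first, and dedicated libraries (for example jiwer) do the same job; the example sentence pair is made up.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / words in the reference."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit distance between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("tune" instead of "town") in a four-word reference: WER = 0.25
print(word_error_rate("the town of Selkirk", "the tune of Selkirk"))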
The most significant factors that influence the performance of an ASR system are the model and corpus used, the quality of the sound, the number of speakers, the manner in which they speak, and the gender of the speaker(s).
As we have already seen, one can choose between different models like HMM or End-to-End, and even though these models try to achieve the same output, their means of doing so differ, which has implications for performance. The corpus consists of everything the model has been trained on: the more a model has learned, the better it can handle unknown audio. Not much has to be said about the quality of the original audio data, but ASR systems prefer clean and “structured” spoken language. That is also why models like Whisper give better results if only very few people (or a single person) participate in a structured manner of speaking, such as an interview or a monologue.
The most interesting factor that might influence the output is that ASR programs, especially Whisper, prefer female voices over male ones. This preference is often explained by women having a higher pitch, which makes it easier for the model to distinguish the voice from other noises. A second explanation could be that female, or higher-pitched, voices are simply more prevalent in the corpora used.
This last example brings us to another critical aspect of ASR programs like Whisper. Currently, the largest part of the training data consists of native speakers who speak a standard variety of their language. This means there is less training data available that covers non-native speakers or native speakers with a rare accent. Even though Whisper covers languages from Afrikaans via Norwegian to Zulu, it performs best on L1 North American English and struggles with niche dialects, especially those of minorities.
Hallucinating or idealising?
Another “feature” that seems noteworthy is that Whisper uses brute force: an input requires an output, even if the program cannot identify the spoken words. If an audio file has lousy quality, long periods of silence, or overlapping speakers, the model can get confused and start to make up, or hallucinate, words. This then has a negative impact on the WER, as we have seen above.
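There is no switch that turns hallucinations off completely, but the openai-whisper package exposes a few decoding parameters that are often tightened to make made-up text less likely. The values below are a cautious starting point rather than a recipe, and the file name is a placeholder.

import whisper

model = whisper.load_model("medium")
result = model.transcribe(
    "noisy_recording.wav",              # placeholder file name
    language="en",                      # skip language detection
    temperature=0.0,                    # greedy decoding instead of sampling
    condition_on_previous_text=False,   # earlier (possibly wrong) text cannot steer later segments
    no_speech_threshold=0.6,            # segments judged to be silence are skipped (default value)
)
print(result["text"])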
In contrast to hallucinating, models like Whisper also tend to idealise the input, meaning the transcription of an audio file gets automatically corrected: stuttering, false starts, and repetitions in speech often get removed.
In some cases, this idealising of speech can be a problem, because there is research for which the flow of natural language and the background noises are important.
AI versus Human: Possibilities and Consequences
Even though models like Whisper still have some technical shortcomings as well as an inherent bias inherited from the corpora they learned from, Whisper surpasses humans in transcribing audio to text. And since the technology is not going to disappear, we should ask how researchers can utilise these tools.
Since my field of research is history, I came up with some examples of utilization that could be interesting for historians.
The two main tasks ASR could take on for historical research are transcribing different kinds of oral history (audio-recorded information) and transcribing audio data from archives to make the original data more durable. On the one hand, ASR could simplify a researcher's work, especially if the researcher is dealing with a language he or she is not familiar with; on the other hand, data stored on fragile audio media (e.g. tapes, discs, or phonograph cylinders) can be digitized and turned into simple text files, which last longer. A major problem I see here is that, besides hallucinating and idealising, models like Whisper cannot identify the speakers. Generally speaking, a lot of information gets lost in the process of ASR: in a text it is not clear who the speakers are, which important background noises are missing, or in what manner a person speaks. Ultimately, an academic workflow that includes models like Whisper still requires human oversight.
If you have other examples from your own academic research, please let me know your ideas in the comment section below.
Step-by-step guide on how to install and use Whisper on Windows
- Install Python
- Whisper requires Python version 3.7 or higher
- In the setup, check the box named “Add python.exe to PATH”
- Install PyTorch
- You can use the following command in your command prompt:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
- Install Chocolatey
- Run the installation command the Chocolatey website gives you in Windows PowerShell. Make sure to run PowerShell as an administrator
- Install FFmpeg
- Using Windows PowerShell, type in the following command:
choco install ffmpeg
- Install Whisper
- Open the command prompt as an administrator and use the following command:
pip install -U openai-whisper
- Transcribing audio to text
- Place your audio files in a folder, click into the address bar of that folder, type cmd and press Enter to open a command prompt there
- Use the following command to start transcribing, filling in your file name, the language and the model you want to use:
whisper filename.datatype --language <language> --model <model>
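Putting it together, a complete call could look like the following; the file name is a placeholder, and on a machine without a suitable GPU the larger models will simply run more slowly on the CPU:
whisper interview.mp3 --language English --model medium --output_format txt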
Bibliography:
Unpublished:
Presentation given by Andreas Weilinghoff on 7.10.2024.
Internet literature:
Foster, Kelsey: What is Automatic Speech Recognition? A Comprehensive Overview of ASR Technology. Assembly AI. 12.09.2024. Last accessed: 20.10.2024. https://www.assemblyai.com/blog/what-is-asr/.
Shimodaira, Hiroshi, Renals, Steve: Hidden Markov Models and Gaussian Mixture Models. Presentation from ASR Lectures 4&5. 24./28.01.2019. https://www.inf.ed.ac.uk/teaching/courses/asr/2018-19/asr03-hmmgmm-handout.pdf
6 replies to “AI vs. Human? Cutting-Edge Tech Transforming Linguistic Research”
Personally, I find that your article offers an insightful exploration of how cutting-edge ASR technologies (like Whisper) are transforming linguistic research. I particularly appreciated how you connected their potential benefits with practical tasks, such as the transcription of oral history. I believe that this approach gives concrete substance to your article, bridging technical advancements with real-world applications.
Having followed both Professor Doctor Andreas Weilinghoff’s presentation and read your article, I’d like to focus on two concerns related to the use of ASR which really struck me: biases in the training corpora and the idealization of speech. Firstly, the issue of struggling with niche dialects, especially those of minorities, as you correctly pointed out, raises worries about inclusivity in linguistic research. As Whisper performs best on widely spoken accents, it risks marginalizing voices from smaller communities, especially if we think about dialects too. Thus, I think that ASR systems could also reinforce linguistic hierarchies, prioritizing dominant languages and accents. Secondly, the idealization of speech worries me a little, because Whisper’s tendency to produce perfect transcriptions (by eliminating mistakes) can also lead to misinterpretations in research if conversational dynamics are not taken into consideration.
Finally, if I could make a suggestion about how to use Whisper in practice (as you mentioned in the article): students and educators could use this program in schools to create more accessible learning materials, such as transcriptions of lectures or discussions, which could be integrated into in-person lessons. Maybe this could help to create more dynamic and engaging lessons 😊
Good article!
Thank you for the valuable insight into the issue that you provided. If I were to offer an example from my field of study, which is communication, I would think of focus group transcription and analysis. Discussions during focus groups are recorded for further analysis conducted by researchers, who aim to understand the attitudes, perceptions and decision-making processes of a group of people on a specific topic. It is true that AI transcription tools, like Whisper, can accurately transcribe hours of discussion, which is really helpful, especially since researchers often deal with large amounts of audio. However, during focus group discussions, there are nuances that AI tools fail to capture, such as humor, irony or specific references, as well as sighs, pauses and overlapping speech, making it difficult for the AI to transcribe everything accurately. It is for these reasons that human oversight is needed, or I would say indispensable, to ensure that the nuances of the conversation are preserved. AI certainly speeds up the process, but human researchers remain essential.
Thanks for your comment.
The topic of irony or sarcasm you brought up is a very interesting one. Since both irony and sarcasm rely heavily on the tone of one’s voice, they currently do not get registered as such by Whisper. I wonder if, sometime in the near future, programs like Whisper might be able to analyse the tone of the voice and basically give some sort of notice in case of irony.
Very interesting. I think if we can reduce or, in the best case, eliminate the flaws of Whisper, it could prove very useful for preserving threatened minority languages and their history.
Thank you very much for the post.
I wonder what other applications there are for these systems, which learn so quickly. Would it be possible to use such an AI to make research more accessible for deaf people? Could the AI be used as a modern dictation device, or as a translator? In any case, we are curious to see where we will develop next.
Thank you very much for your comment.
The idea that an ASR could be used like a dictation device for deaf people had not occurred to me at all. The biggest hurdle I see is that the AI becomes overwhelmed as soon as several speakers are involved. If such problems are solved and the transcription runs in real time, programs like Whisper could indeed massively reduce barriers for deaf people.