Live Captions uses machine learning to automatically generate captions for video and audio clips that lack them, making the web more accessible to anyone who is deaf or hard of hearing.
When enabled, Live Captions automatically appears in a small, movable box at the bottom of your browser whenever you watch or listen to content in which people are speaking.
The words appear after a slight delay, and you may notice errors with fast or choppy speech, but the feature is generally as impressive as it was when it first appeared on Pixel phones in 2019.
Captions appear even when the audio is muted or turned down low, making it a way to follow videos or podcasts without disturbing the people around you.
Google says Live Captions also works offline and, at present, supports only English.
The feature works within the Chrome browser on YouTube videos, Twitch broadcasts, podcast players, and even music-streaming services such as Spotify.
Google also says Live Captions works with audio and video files stored on your device, as long as they are opened in Chrome.
The feature can be activated in the latest version of the Chrome browser by going to Settings, then the Advanced section, and then Accessibility.
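If you prefer a shortcut, typing chrome://settings/accessibility into the address bar should take you straight to the same page.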
When turned on, Chrome downloads some speech-recognition files, and captions then appear the next time the browser plays audio in which people are speaking.
The feature was first introduced in the beta version of Android Q, but until now it was exclusive to certain Pixel and Samsung phones.
With its arrival in the Chrome browser, Live Captions is available to a much wider audience.
The feature can now be used in Chrome for Windows, Mac, and Linux, and Google says it is also coming to Chrome OS soon.