Narration Tool - Other Info


Syncing text to audio has been around since closed captions and the early web. Most videos on the internet use the .srt file format to display text at the bottom of the screen for dialogue and for describing other important sounds and events.

Example:

1
00:02:16,612 --> 00:02:19,376
Senator, we're making
our final approach into Coruscant.

The first line is the cue number. The second line gives when the subtitle should start, followed by when it should end. The remaining lines are the actual subtitle text that should be displayed.
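To make the format concrete, here's a minimal parser sketch in TypeScript; the Cue shape and both function names are mine for illustration, not from an existing library:

```ts
// Minimal .srt parser sketch. The Cue shape and function names here
// are my own for illustration, not from an existing library.
interface Cue {
  index: number;   // cue number (the "1" line)
  startMs: number; // when the cue starts, in milliseconds
  endMs: number;   // when the cue ends, in milliseconds
  text: string;    // the subtitle text, possibly multiple lines
}

// "00:02:16,612" -> 136612 milliseconds
function timestampToMs(ts: string): number {
  const [hms, millis] = ts.split(",");
  const [h, m, s] = hms.split(":").map(Number);
  return ((h * 60 + m) * 60 + s) * 1000 + Number(millis);
}

function parseSrt(srt: string): Cue[] {
  // Cues are separated from each other by blank lines.
  return srt.trim().split(/\r?\n\r?\n/).map((block) => {
    const [index, timing, ...textLines] = block.split(/\r?\n/);
    const [start, end] = timing.split(" --> ");
    return {
      index: Number(index),
      startMs: timestampToMs(start),
      endMs: timestampToMs(end),
      text: textLines.join("\n"),
    };
  });
}
```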

There are a few differences between how subtitles are displayed on videos and how they should work in the book.

There were a few things to consider when using the .srt approach. An .srt file tells the player what text should be displayed on the video and when. I already know what should be on the screen at any given time; what I need to know is when each phrase should be highlighted.

This is fine! I can instead use .srt timings to mark when each phrase should be highlighted in a different color, rather than when a whole piece of text is displayed on a video feed.
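A rough sketch of how that highlighting could work, reusing the Cue shape from the parser above; the phrase-N element IDs and the highlight class are assumptions for illustration:

```ts
// Toggle a "highlight" class on whichever phrase's cue covers the
// current playback time. Cue comes from the parseSrt sketch above;
// the phrase-<index> span IDs and the CSS class are assumptions.
function wireHighlighting(audio: HTMLAudioElement, cues: Cue[]): void {
  audio.addEventListener("timeupdate", () => {
    const nowMs = audio.currentTime * 1000;
    for (const cue of cues) {
      const span = document.getElementById(`phrase-${cue.index}`);
      span?.classList.toggle(
        "highlight",
        nowMs >= cue.startMs && nowMs < cue.endMs
      );
    }
  });
}
```

One caveat: browsers only fire timeupdate a few times a second, so a requestAnimationFrame loop would give smoother highlighting, but this shows the idea.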

Next issue: for this approach to work for my tool, I would already need to know when each phrase starts and ends. There are paid and free services/software that can do this audio segmentation, such as Google Speech-to-Text. Online services had some issues with the voiceover files from Nozzlehead and couldn't consistently parse when a phrase started or ended. Otherwise I would have used the timing from this software and used the text from the PDF to show what should be displayed.
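For reference, this is roughly how per-word timings come back from Google Speech-to-Text using its official Node.js client (@google-cloud/speech); the file path and config values here are placeholders, not what I ran against the Nozzlehead files:

```ts
// Sketch: per-word timings from Google Cloud Speech-to-Text via the
// official Node.js client. The path and config values are placeholders;
// this isn't tuned for the Nozzlehead voiceover files.
import { readFileSync } from "fs";
import speech from "@google-cloud/speech";

async function logWordTimings(path: string): Promise<void> {
  const client = new speech.SpeechClient();
  const [response] = await client.recognize({
    audio: { content: readFileSync(path).toString("base64") },
    config: {
      encoding: "LINEAR16",        // assumes uncompressed WAV input
      sampleRateHertz: 16000,      // placeholder sample rate
      languageCode: "en-US",
      enableWordTimeOffsets: true, // request start/end times per word
    },
  });
  for (const result of response.results ?? []) {
    for (const w of result.alternatives?.[0]?.words ?? []) {
      // startTime/endTime are { seconds, nanos } duration objects.
      console.log(w.word, w.startTime?.seconds, w.endTime?.seconds);
    }
  }
}
```

Phrase boundaries would then come from grouping consecutive words back into the phrases taken from the PDF text.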


Peaks: In Progress