Pulling hair out over audio sync with Premiere Pro and Audacity

Colin Smith

Freshman
Sep 7, 2017
This has to be one of the most basic audio tasks, but I just can't seem to get it right. I've tried everything I can think of to resolve it, and, well, here I am.

I have an Audacity project with a recording of a scene, and a Premiere Pro project with video of the same scene. The video was recorded from a Sony a7's HDMI out through an AverMedia Extremecap U3, the sound on a Tascam US-2x2. There are really no issues with the sound or video files by themselves; it's when I combine them and try to sync the sound that I get problems.

I've made sure that the sample rate of the audio exported from Audacity matches the PP project's sample rate. Even so, when I line up the audio track correctly with the video (both are about 20 minutes long), by the end of the clip the sound is off by maybe a second or so: totally enough to be noticeable and super annoying.
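For a sense of scale, a 1 s offset over a ~20-minute clip is roughly a 0.08% rate mismatch, which is in the same ballpark as the classic 0.1% NTSC frame-rate difference. A quick sketch of the arithmetic, using the figures above:

```python
# Rough scale of the drift described above: ~1 s of offset over a ~20 min clip.
clip_seconds = 20 * 60       # ~20-minute scene
drift_seconds = 1.0          # offset by the end of the clip

drift_ratio = drift_seconds / clip_seconds
print(f"drift: {drift_ratio * 100:.3f}% of program length")   # ~0.083%

# For comparison, the classic 30 vs 29.97 fps mismatch:
ntsc_mismatch = 30 / 29.97 - 1
print(f"30 vs 29.97: {ntsc_mismatch * 100:.3f}%")             # ~0.100%
```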

I've looked around online for solutions, and some people who've run into this claim it's just a PP issue. I'm willing to accept that, but as I'm essentially an ambitious newbie, I have to imagine that the real issue is me.

Any thoughts much appreciated... I'm at my wit's end.

Thanks,
Colin
 
I have run into similar issues where the audio slowly goes out of sync over time. It's not an answer to your question, but the quickest solution was to just blade the audio periodically and shift it back into sync with the audio recorded on the video track. It's not that big of a task, but I can see why you would want to avoid it in the future.
 
Ben, when you say blade the audio, do you mean... well, what do you mean? Would I be removing frames from the video? Or having multiple copies of the audio track that sync up? Or something else?

And is there a way to synchronize the clocks on this equipment going forward?
 
I wish I could give you a solid solution on syncing for future projects. That part I'm not sure of.
What I was suggesting as a quick fix for this project: export your audio from the Audacity project and put it on a track in Premiere Pro. Make it so you can see the waveforms of both the imported audio and the audio that was recorded with the video file. Get those lined up, then zoom and scroll the timeline to see where they start to drift. Before it becomes apparent on the video, just cut the audio track and line it back up. Depending on where you make the cut, you will often have to toss a little crossfade on there.
It's less than ideal, but it can be done pretty quickly once you've had to do it a few times.
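To put a rough number on how often the blading has to happen, here is a sketch assuming the ~1 s per 20 min drift from the original post and a 45 ms audibility threshold (both figures are assumptions, not measurements from this thread):

```python
# Seconds of sync drift accumulated per second of program material.
drift_per_second = 1.0 / (20 * 60)

# Threshold at which lip-sync error starts to be noticeable (assumed).
max_error = 0.045  # 45 ms

cut_interval = max_error / drift_per_second
print(f"cut roughly every {cut_interval:.0f} s")  # ~54 s
```

At that drift rate you'd be cutting about once a minute, which is why a single global stretch tends to be less work on long clips.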
 
I get this all the time, and I've never found a combination of gear or workflow that solves it properly.

I often record musical theater performances as a multi-track through my X32 board to Reaper on a Mac laptop. The video is done by somebody else, with similarly modern gear, but when he sends me a WAV of the audio captured from his camera (most often a recent Canon AVCHD model) it always drifts. For a while I thought it was something like 29.97 vs 30 fps, which would account for about 3.6 seconds in an hour (a 0.1% difference), but sometimes I get more than that, and it's not consistent across gear. Most recently, I had the same with video shot on a Panasonic GX85 Micro Four Thirds camera (4Kp30, which is really 29.97).

I don't have any pro editing software (FCPX, Premiere, PluralEyes) so my solution is just to line up the waveforms and measure the offset as accurately as possible and then calculate the speed change required to stretch or squash my final mixed audio to match the camera audio. This is usually some fraction of a percent. Reaper can do this accurately as a clip option. So far, when I've given audio processed like that back to the video guy, it has dropped right in.
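The ratio itself is simple. Here's a sketch with hypothetical durations; in practice you'd measure both by lining up matching transients in the waveforms at the start and end:

```python
# Hypothetical measured durations between two matching transients.
camera_audio_len = 3600.0   # seconds, audio from the camera file
mix_len = 3596.4            # seconds, the separately recorded mix

# Playrate factor to apply to the mix so it lands on the camera audio.
# Values below 1.0 slow the clip down, stretching it to match.
rate = mix_len / camera_audio_len
print(f"playrate {rate:.6f} ({(rate - 1) * 100:+.3f}%)")  # 0.999000 (-0.100%)
```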

PS. The speed change is never noticeable to the listener, as it's so small.
 
These sorts of timing issues are why most pro video and audio-for-video gear has ports to accept an external clock (timecode). No clock drift if everything is using the same clock.

Timecode and "clock" are two different things. Timecode merely labels the frames of video with an address; it has nothing to do with playback or record speed. I can shoot 24p footage and label it with 60 fps timecode, and so on. This can be as incorrect or as correct as you make it, with no impact on playback or record speed. In post-production workflows, footage and audio will often carry many different pieces of timecode metadata (source timecode, edit timecode, burned-in timecode, etc.), and they can all be different and reference different things.

Clock (word clock in audio land; genlock, black burst, bi-level or tri-level sync in video land) ensures that fields, frames, or samples are captured or reproduced at the same time.

Colin, it sounds to me like you need to do a 30 fps to 29.97 fps pull-down on your audio to match the video. If the audio is slow, then instead do a 29.97 to 30 pull-up. DAWs like Pro Tools have simple tools to accomplish this.

Alternatively, you can do a custom sample rate conversion with some simple arithmetic: figure out the percentage difference in length between the video and the audio, then apply that same percentage as a pull-up/pull-down sample rate conversion. That will bring them perfectly in sync.
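A sketch of that calculation (the durations and rates below are made-up, illustrative numbers):

```python
sample_rate = 48000
video_len = 1201.2   # seconds (hypothetical)
audio_len = 1200.0   # seconds (hypothetical)

# Reinterpreting the same samples at a lower rate stretches the audio
# out to match the longer video.
new_rate = sample_rate * audio_len / video_len
print(f"reinterpret at {new_rate:.2f} Hz")  # ~47952.05 Hz

# The standard 30 -> 29.97 pull-down is the same idea, a 0.1% change:
print(f"48 kHz pulled down: {48000 * 29.97 / 30:.2f} Hz")  # 47952.00 Hz
```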
 