Hi everyone,
I’ve been experimenting with a pipeline where I use Ray to preprocess video data (batch trimming, basic transformations, and some AI-based tagging), and then bring the processed clips into the CapCut app for Android for final editing.
The pipeline works fine from a processing standpoint, but I’m noticing a mismatch at the final stage:
- Clips processed via Ray look consistent when viewed standalone
- After importing into CapCut, some clips feel slightly different in timing and pacing
- In a few cases, transitions between clips don’t feel as seamless as expected
- The overall flow of the video feels less “continuous” than the preprocessed sequence
I’m not doing heavy transformations (mostly batching and light preprocessing), so I wasn’t expecting this kind of difference.
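To rule out the obvious first, here is the kind of consistency check I’ve been running over the processed clips. The metadata dicts below are hardcoded examples for illustration; in a real pipeline they would be filled from ffprobe output per clip:

```python
# Flag clips whose key parameters differ from the most common values in the
# batch. Mismatched frame rates or pixel formats between clips are a common
# cause of uneven pacing after import into an editor.
from collections import Counter

def find_outliers(clips: dict[str, dict]) -> list[str]:
    """Return names of clips whose (fps, pix_fmt) differ from the majority."""
    keys = ("fps", "pix_fmt")
    signatures = {name: tuple(meta[k] for k in keys) for name, meta in clips.items()}
    majority = Counter(signatures.values()).most_common(1)[0][0]
    return [name for name, sig in signatures.items() if sig != majority]

# Example metadata (hardcoded here; normally read via ffprobe):
clips = {
    "clip_01.mp4": {"fps": "30/1", "pix_fmt": "yuv420p"},
    "clip_02.mp4": {"fps": "30000/1001", "pix_fmt": "yuv420p"},  # NTSC-style fps
    "clip_03.mp4": {"fps": "30/1", "pix_fmt": "yuv420p"},
}
print(find_outliers(clips))  # -> ['clip_02.mp4']
```

In this example clip_02 would be flagged because its frame rate differs from the rest of the batch, even though all three clips look fine when played standalone.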
Questions:
- Is this kind of mismatch normal when combining distributed preprocessing (Ray) with consumer editing tools like CapCut?
- Could this be related to encoding differences during batch processing?
- Any best practices for preparing video outputs in Ray so they behave more predictably in editors?
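For context on the encoding question: one thing I’ve been considering is a final normalization pass at the end of the Ray stage, re-encoding every clip to uniform parameters before import. Here is a sketch of the command I’d build per clip; the specific flag values (30 fps, yuv420p, one-second keyframe interval) are guesses on my part, not a recommendation I’ve verified:

```python
# Build an ffmpeg command that re-encodes a clip to uniform parameters, so
# every clip handed to the editor shares the same frame rate, pixel format,
# and keyframe spacing. The flag values here are assumptions to tune.

def normalize_cmd(src: str, dst: str, fps: int = 30) -> list[str]:
    return [
        "ffmpeg", "-y", "-i", src,
        "-r", str(fps),          # force a constant frame rate
        "-vsync", "cfr",         # duplicate/drop frames to keep CFR
        "-pix_fmt", "yuv420p",   # widely compatible pixel format
        "-c:v", "libx264",       # common, well-supported codec
        "-g", str(fps),          # keyframe every second for clean cut points
        "-c:a", "aac", "-ar", "48000",  # uniform audio codec and sample rate
        dst,
    ]

print(normalize_cmd("clip_01.mp4", "clip_01_norm.mp4"))
```

Each command could then be run with subprocess inside a Ray task, one per clip. I don’t know whether this fully explains the pacing drift, which is part of what I’m asking.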
Trying to keep the pipeline efficient without losing consistency in the final edit.
Thanks!