This pipeline example uses OpenCV to convert videos and images into edge-detected frames. The final output is a /collage folder containing a static html page that you can download and open to view the original and traced content side-by-side.
- If the videos are not in `.mp4` format (e.g., `.mov`), they are converted by the `video_mp4_converter` pipeline before being passed to the `image_flattener` pipeline. Otherwise, they are passed directly to the `image_flattener` pipeline.
- Images from the `image_flattener` output repo and the `raw_videos_and_images` input repo are processed by the `image_tracer` pipeline.
- Frames from the `image_flattener` and `image_tracer` pipelines are combined by the `movie_gifer` pipeline to create gifs.
- All content is re-shuffled into two folders (`edges` and `originals`) by the `content_shuffler` pipeline.
- The shuffled content is then used by the `content_collager` pipeline to create a collage of the original and traced content on a static html page that you can download and open.
gh repo clone lbliii/opencv-video-to-frametrace
cd opencv-video-to-frametrace
pachctl create project video-to-frame-traces
pachctl config update context --project video-to-frame-traces
pachctl create repo raw_videos_and_images
pachctl create pipeline -f 1_convert_videos/video_mp4_converter.yaml
pachctl create pipeline -f 2_flatten_images/image_flattener.yaml
pachctl create pipeline -f 3_trace_images/image_tracer.yaml
pachctl create pipeline -f 4_gif_images/movie_gifer.yaml
pachctl create pipeline -f 5_shuffle_content/content_shuffler.yaml
pachctl create pipeline -f 6_collage_content/content_collager.yaml
pachctl put file raw_videos_and_images@master:liberty.png -f https://raw.githubusercontent.com/pachyderm/docs-content/main/images/opencv/liberty.jpg
pachctl put file raw_videos_and_images@master:robot.png -f https://raw.githubusercontent.com/pachyderm/docs-content/main/images/opencv/robot.jpg
By default, when you first start up an instance, the default project is attached to your active context. Create a new project and set it on your active pachctl context so that you don't have to specify the project name (e.g., `--project video-to-frame-traces`) in every command.
pachctl create project video-to-frame-traces
pachctl config update context --project video-to-frame-traces
At the top of our DAG, we'll need an input repo that will store our raw videos and images.
pachctl create repo raw_videos_and_images
We want to make sure that our DAG can handle videos in multiple formats, so first we'll create a pipeline that will:
- skip images
- skip videos already in the correct format (`.mp4`)
- convert videos to `.mp4` format
The converted videos are made available to the next pipeline in the DAG via the `video_mp4_converter` output repo because the user code writes them to `/pfs/out/`.
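To make this step concrete, here is a minimal sketch of what the conversion user code could look like, assuming OpenCV-based re-encoding and the default `/pfs/<input repo>` mount path; the actual code in `1_convert_videos/` may differ (for example, it could shell out to ffmpeg instead):

```python
import os
import cv2

INPUT_DIR = "/pfs/raw_videos_and_images"  # assumed mount path of the input repo
OUTPUT_DIR = "/pfs/out"

def convert_to_mp4(in_path, out_path):
    """Re-encode a video file to .mp4, frame by frame, with OpenCV."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
    cap.release()
    writer.release()

for name in os.listdir(INPUT_DIR):
    base, ext = os.path.splitext(name)
    if ext.lower() in (".png", ".jpg", ".jpeg"):
        continue  # skip images
    if ext.lower() == ".mp4":
        continue  # skip videos already in the correct format
    convert_to_mp4(os.path.join(INPUT_DIR, name),
                   os.path.join(OUTPUT_DIR, base + ".mp4"))
```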
pachctl create pipeline -f 1_convert_videos/video_mp4_converter.yaml
Next, we'll create a pipeline that will flatten the frames of the videos into individual `.png` images. Like the previous pipeline, the user code writes the frames to `/pfs/out` so that the next pipeline in the DAG can access them in the `image_flattener` repo.
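A minimal sketch of the flattening step, assuming the converter's output is mounted at `/pfs/video_mp4_converter` and frames are written to one folder per video (the real `2_flatten_images/` code may organize its output differently):

```python
import os
import cv2

INPUT_DIR = "/pfs/video_mp4_converter"  # assumed mount path of the converter's output repo
OUTPUT_DIR = "/pfs/out"

for name in os.listdir(INPUT_DIR):
    if not name.lower().endswith(".mp4"):
        continue
    base = os.path.splitext(name)[0]
    frame_dir = os.path.join(OUTPUT_DIR, base)
    os.makedirs(frame_dir, exist_ok=True)
    cap = cv2.VideoCapture(os.path.join(INPUT_DIR, name))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Zero-padded names keep the frames in order for the gif step later.
        cv2.imwrite(os.path.join(frame_dir, f"{base}_{index:05d}.png"), frame)
        index += 1
    cap.release()
```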
pachctl create pipeline -f 2_flatten_images/image_flattener.yaml
Next, we'll create a pipeline that will trace the edges of the images (a sketch of the tracing step follows the list of inputs). This pipeline takes a union of two inputs:
- the `image_flattener` repo, which contains the flattened images from the previous pipeline
- the `raw_videos_and_images` repo, which contains the original images that didn't need to be processed
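The tracing itself is standard OpenCV edge detection. A rough sketch, assuming Canny edge detection, the default `/pfs/<repo>` mount paths, and arbitrary thresholds (the exact logic in `3_trace_images/` may differ):

```python
import os
import cv2

# With a union input, each datum sees only one of these repos mounted at a time.
INPUT_DIRS = ["/pfs/image_flattener", "/pfs/raw_videos_and_images"]
OUTPUT_DIR = "/pfs/out"

for input_dir in INPUT_DIRS:
    if not os.path.isdir(input_dir):
        continue
    for root, _, files in os.walk(input_dir):
        # Mirror the input's folder layout so per-video frames stay grouped.
        out_dir = os.path.join(OUTPUT_DIR, os.path.relpath(root, input_dir))
        os.makedirs(out_dir, exist_ok=True)
        for name in files:
            if not name.lower().endswith((".png", ".jpg", ".jpeg")):
                continue  # ignore videos and other files
            img = cv2.imread(os.path.join(root, name))
            if img is None:
                continue
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 100, 200)  # thresholds are an assumption
            cv2.imwrite(os.path.join(out_dir, name), edges)
```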
pachctl create pipeline -f 3_trace_images/image_tracer.yaml
Next, we'll create a pipeline that will make two gifs for each video (a sketch of the gif step follows the list):
- a gif of the original video's flattened frames (from the `image_flattener` output repo)
- a gif of the video's traced frames (from the `image_tracer` output repo)
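A sketch of the gif step, assuming the `imageio` library is available in the pipeline image and that frames are grouped in per-video folders as in the sketches above (the real `4_gif_images/` code may use a different tool and naming scheme):

```python
import os
import imageio.v2 as imageio

# With a union input, each datum sees only one of these repos mounted at a time.
INPUT_DIRS = ["/pfs/image_flattener", "/pfs/image_tracer"]
OUTPUT_DIR = "/pfs/out"

for input_dir in INPUT_DIRS:
    if not os.path.isdir(input_dir):
        continue
    for video_name in sorted(os.listdir(input_dir)):
        frame_dir = os.path.join(input_dir, video_name)
        if not os.path.isdir(frame_dir):
            continue  # standalone images don't get a gif
        frames = [imageio.imread(os.path.join(frame_dir, f))
                  for f in sorted(os.listdir(frame_dir))
                  if f.lower().endswith(".png")]
        if frames:
            suffix = "edges" if input_dir.endswith("image_tracer") else "original"
            imageio.mimsave(os.path.join(OUTPUT_DIR, f"{video_name}_{suffix}.gif"), frames)
```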
pachctl create pipeline -f 4_gif_images/movie_gifer.yaml
Next, we'll create a pipeline that will re-shuffle the content from the previous pipelines into two folders:
- `edges`: contains the traced images and gifs
- `originals`: contains the original images and gifs
This helps us keep the content organized for easy access and manipulation in the next pipeline.
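A rough sketch of the shuffle, assuming a simple copy from each upstream repo into an `edges/` or `originals/` folder; the mapping below is an assumption, and the real `5_shuffle_content/` code also has to split the original gifs from the traced gifs:

```python
import os
import shutil

# Assumed mapping from upstream output repo to destination folder.
SOURCES = {
    "/pfs/raw_videos_and_images": "originals",
    "/pfs/image_tracer": "edges",
}
OUTPUT_DIR = "/pfs/out"

for src, bucket in SOURCES.items():
    if not os.path.isdir(src):
        continue
    dest = os.path.join(OUTPUT_DIR, bucket)
    os.makedirs(dest, exist_ok=True)
    for root, _, files in os.walk(src):
        for name in files:
            shutil.copy(os.path.join(root, name), os.path.join(dest, name))
```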
pachctl create pipeline -f 5_shuffle_content/content_shuffler.yaml
Finally, we'll create a pipeline that generates a static html page that you can download and open to view the original and traced content side-by-side.
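A minimal sketch of the collage step, assuming the page is a plain HTML table written to a `collage/` folder alongside copies of the shuffled content (the page shipped in `6_collage_content/` is more elaborate):

```python
import os
import shutil

SHUFFLED_DIR = "/pfs/content_shuffler"  # assumed mount path of the shuffler's output repo
OUTPUT_DIR = "/pfs/out/collage"
os.makedirs(OUTPUT_DIR, exist_ok=True)

# Copy the shuffled content next to the page so the relative <img> links
# still resolve after the collage folder is downloaded.
for bucket in ("originals", "edges"):
    src = os.path.join(SHUFFLED_DIR, bucket)
    if os.path.isdir(src):
        shutil.copytree(src, os.path.join(OUTPUT_DIR, bucket), dirs_exist_ok=True)

originals_dir = os.path.join(OUTPUT_DIR, "originals")
names = sorted(os.listdir(originals_dir)) if os.path.isdir(originals_dir) else []
rows = [f"<tr><td><img src='originals/{n}' width='400'></td>"
        f"<td><img src='edges/{n}' width='400'></td></tr>" for n in names]

with open(os.path.join(OUTPUT_DIR, "index.html"), "w") as f:
    f.write("<html><body><table>" + "".join(rows) + "</table></body></html>")
```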
pachctl create pipeline -f 6_collage_content/content_collager.yaml
Now that we have our DAG set up, we can add some videos and images to the `raw_videos_and_images` repo to see the pipeline in action.
pachctl put file raw_videos_and_images@master: -f
pachctl put file raw_videos_and_images@master:liberty.png -f https://raw.githubusercontent.com/pachyderm/docs-content/main/images/opencv/liberty.jpg
pachctl put file raw_videos_and_images@master:robot.png -f https://raw.githubusercontent.com/pachyderm/docs-content/main/images/opencv/robot.jpg
This example is based on Reid's Pachd_Pipelines repo, which extends the basic OpenCV example to support the conversion of videos to jpg frames.
