Image rotation with ffmpeg filter

1. How to find the rotation angle:
2. How to rotate
3. Rotate using an ffmpeg filter

I ran into a problem at work: mobile devices shoot video in portrait orientation, and the thumbnail of the stored file is portrait as well.

But the stored resolution is in landscape format.

So when building a player, playback without any special handling comes out sideways, in landscape.

The reason is that the frames are stored in landscape, but the MP4 container carries a metadata field, rotate, that tells the player how far to rotate the picture during playback.

1. How to find the rotation angle:

The AVStream inside the format descriptor AVFormatContext carries a metadata dictionary describing the stream. Printing it shows the relevant entries, including the rotate parameter.

AVStream *stream = m_pVideoAVSt;
AVDictionaryEntry *m = NULL;
while ((m = av_dict_get(stream->metadata, "", m, AV_DICT_IGNORE_SUFFIX)) != NULL) {
    printf("Metadata: Key:%s , value:%s\n", m->key, m->value);
}


The printout shows that the rotation angle is 90 degrees. Now that the information is available, the next step is to actually rotate the picture.
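A side note: depending on the FFmpeg version, the angle may not appear as a rotate metadata tag at all, because newer versions export it as display-matrix side data instead. A rough sketch of reading it that way (FFmpeg 4.x/5.x API, shown only as an alternative):

#include <libavutil/display.h>

// Read the rotation from the stream's display matrix side data, if present
int32_t *dm = (int32_t *) av_stream_get_side_data(stream, AV_PKT_DATA_DISPLAYMATRIX, NULL);
if (dm) {
    // Rotation in degrees, counterclockwise convention
    double angle = av_display_rotation_get(dm);
    printf("Display matrix rotation: %f\n", angle);
}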

2. How to rotate

The ffmpeg command line already has an operation for rotating a picture:

ffmpeg -i fan.jpg -vf transpose=2 -y transpose2.png

OK, since the ffmpeg command line can do this, the libraries must provide a matching API.

A quick search shows that -vf adds a video filter; transpose=2 (cclock) rotates 90 degrees counterclockwise, while transpose=1 (clock) rotates 90 degrees clockwise. The corresponding component in the FFmpeg libraries is libavfilter, which is exactly this filter mechanism.

3. Rotate using an ffmpeg filter


Video or audio can be run through a chain of filters to produce one or more output video or audio streams.
Filters are very powerful: they can add subtitles, trim length, scale, crop, add watermarks, concatenate video or audio, rotate or mirror the picture, add black bars, adjust the volume, and more.

For the rotation here, the picture becomes portrait after rotating, so black bars have to be added on both sides; two filters, transpose and pad, are enough, as the example below shows.
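For example, for a clip stored as 1920x1080 with rotate=90 (my own numbers, just to illustrate), the filter description assembled in the code below comes out as:

transpose=clock,pad=iw+840:ih:420

After the transpose the picture is 1080x1920, so the pad step makes it 1920 wide again, centring the picture between two 420-pixel black bars.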

(1) Before playback, check whether rotation is needed and by how many degrees, then create the filter graph:
Header files:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
AVStream *stream = m_pVideoAVSt;
AVDictionaryEntry *m = NULL;
while ((m = av_dict_get(stream->metadata, "", m, AV_DICT_IGNORE_SUFFIX)) != NULL) {
    printf("Metadata: Key:%s , value:%s\n", m->key, m->value);
    if (strcmp(m->key, "rotate") == 0) {
        m_bRotate = true;
        //strcpy(m_pRotateAngle, m->value);
        if (strcmp(m->value, "90") == 0) {
            // Rotate 90 degrees clockwise and pad the left and right sides of the video with black bars of the corresponding width
            int difference = abs(m_pVideoCodecCtx->height - m_pVideoCodecCtx->width);
            char args[512];
            _snprintf(args, sizeof(args), "transpose=clock,pad=iw+%d:ih:%d", difference, difference / 2);
            FilterInit(args);
        } else if (strcmp(m->value, "-90") == 0) {
            // Rotate 90 degrees counterclockwise and pad the left and right sides of the video with black bars of the corresponding width
            int difference = abs(m_pVideoCodecCtx->height - m_pVideoCodecCtx->width);
            char args[512];
            _snprintf(args, sizeof(args), "transpose=cclock,pad=iw+%d:ih:%d", difference, difference / 2);
            FilterInit(args);
        }
    }
}

Here the black bars are as wide as the difference between width and height, split evenly between the two sides (hence difference/2 as the x offset in pad).

int32_t FilterInit(const char *filters_descr)
{
    /* Register all avfilters */
    avfilter_register_all();

    char args[512];
    int ret = 0;
    const AVFilter *buffersrc = avfilter_get_by_name("buffer");
    const AVFilter *buffersink = avfilter_get_by_name("buffersink");
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs = avfilter_inout_alloc();
    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };

    // Allocate the FilterGraph
    filter_graph = avfilter_graph_alloc();
    if (!outputs || !inputs || !filter_graph) {
        ret = AVERROR(ENOMEM);
        goto freefilter;
    }

    /* Fill in the source parameters: size, pixel format, time base, aspect ratio */
    _snprintf(args, sizeof(args),
              "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
              m_pVideoCodecCtx->width, m_pVideoCodecCtx->height, m_pVideoCodecCtx->pix_fmt,
              m_pVideoCodecCtx->time_base.num, m_pVideoCodecCtx->time_base.den,
              m_pVideoCodecCtx->sample_aspect_ratio.num, m_pVideoCodecCtx->sample_aspect_ratio.den);

    // Create the buffer source filter and add it to the FilterGraph
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
    if (ret < 0) {
        printf("Cannot create buffer source\n");
        goto freefilter;
    }

    // Create the buffer sink filter and add it to the FilterGraph
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph);
    if (ret < 0) {
        printf("Cannot create buffer sink\n");
        goto freefilter;
    }

    ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
        printf("Cannot set output pixel format\n");
        goto freefilter;
    }

    outputs->name = av_strdup("in");
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx = 0;
    outputs->next = NULL;

    inputs->name = av_strdup("out");
    inputs->filter_ctx = buffersink_ctx;
    inputs->pad_idx = 0;
    inputs->next = NULL;

    // Parse the filter description string and add the described filters to the FilterGraph
    if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr, &inputs, &outputs, NULL)) < 0) {
        printf("parse ptr error\n");
        goto freefilter;
    }

    // Check the configuration of the FilterGraph
    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
        printf("parse config error\n");
        goto freefilter;
    }

    // Cache frame used to hold the output of the filter graph
    FilterFrame = av_frame_alloc();
    //uint8_t *out_buffer = (uint8_t *) av_malloc(av_image_get_buffer_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, 1));
    //av_image_fill_arrays(new_frame->data, new_frame->linesize, out_buffer, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, 1);

freefilter:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    return ret;
}

The goto is only used here so that the whole flow, including cleanup, is visible in a single function; in real code, put the release logic in a separate function and avoid goto.

(2) Once the filter graph is created, process every decoded AVFrame as it is obtained:

if (m_bRotate == true) {
    // Push the decoded AVFrame into the FilterGraph
    ret = av_buffersrc_add_frame(buffersrc_ctx, m_pVideoFrame);
    if (ret >= 0) {
        // Pull a filtered AVFrame out of the FilterGraph
        ret = av_buffersink_get_frame(buffersink_ctx, FilterFrame);
        if (ret >= 0) {
            printf("get AVFrame success");
        } else {
            printf("Error while getting the filtergraph\n");
        }
    } else {
        printf("Error while feeding the filtergraph\n");
    }
}
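For robustness it also helps to distinguish "the graph has no output frame ready yet" from a real error; a small sketch of that check, using the same variables (not part of the original code):

ret = av_buffersrc_add_frame(buffersrc_ctx, m_pVideoFrame);
if (ret < 0) {
    printf("Error while feeding the filtergraph\n");
} else {
    ret = av_buffersink_get_frame(buffersink_ctx, FilterFrame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
        // Not a fatal error: the graph simply needs more input or has been flushed
    } else if (ret < 0) {
        printf("Error while getting a frame from the filtergraph\n");
    }
}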

Remember to unref both FilterFrame and the decoded video frame once the current frame has been displayed:

av_frame_unref(m_pVideoFrame);
if (m_bRotate == true)
    av_frame_unref(FilterFrame);

With that, the rotated picture comes out already padded with black bars.
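When playback stops completely, the filter graph and the cached frame should be released as well. A minimal sketch of the release function mentioned earlier (FilterRelease is just an illustrative name):

void FilterRelease()
{
    // Freeing the graph also frees buffersrc_ctx / buffersink_ctx, which it owns
    avfilter_graph_free(&filter_graph);
    av_frame_free(&FilterFrame);
    buffersrc_ctx = NULL;
    buffersink_ctx = NULL;
}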
