ijkplayer code reading: detailed explanation of the av_read_frame() data-stream reading process in the read_thread() thread

Review

ijkplayer startup process:

  1. In the Android program, the user calls the IjkLibLoader wrapper interface to load the three library files ijkffmpeg, ijksdl and ijkplayer into the Android system.
  2. Initialize the player: call the JNI interface function native_setup(), which creates the player message queue and its playback-related parameters;
  3. In the Android program, the user calls the createPlayer() and prepareAsync() wrapper interface functions to create the player and put it into the playback state.
  4. Start the player.

The prepareAsync() function was analyzed earlier; its most important call is VideoState *is = stream_open(ffp, file_name, NULL);
Inside that function:

  1. Create three packet queues (video, audio and subtitle), and store the three streams in is->audio_st, is->subtitle_st and is->video_st (each an AVStream *).
  2. Create two threads, read_thread() and the video_refresh() thread (see the sketch after this list);
  3. Initialize the relevant decoder parameters, then the function returns.
    At this point the player is able to play. This process covers most of the ijkplayer source code; once it completes, the program's logic architecture is in place
    and handles user-interaction functions at run time.
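
To make the architecture of step 2 concrete, here is a minimal, self-contained sketch of the queue-plus-two-threads pattern using plain pthreads instead of ijkplayer's SDL thread wrappers; the names (pkt_queue, reader, refresher) are illustrative, not ijkplayer's.

#include <pthread.h>
#include <stdio.h>

#define QUEUE_CAP 16

typedef struct {
    int items[QUEUE_CAP];                    /* stand-in for AVPacket entries */
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t cond;
} pkt_queue;

static void queue_init(pkt_queue *q) {
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->cond, NULL);
}

static void queue_put(pkt_queue *q, int pkt) {
    pthread_mutex_lock(&q->lock);
    if (q->count < QUEUE_CAP) {              /* drop when full; the real queue blocks */
        q->items[q->tail] = pkt;
        q->tail = (q->tail + 1) % QUEUE_CAP;
        q->count++;
        pthread_cond_signal(&q->cond);
    }
    pthread_mutex_unlock(&q->lock);
}

static int queue_get(pkt_queue *q) {
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->cond, &q->lock);
    int pkt = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return pkt;
}

static pkt_queue videoq;

/* producer: stands in for read_thread() */
static void *reader(void *arg) {
    for (int i = 0; i < 8; i++)
        queue_put(&videoq, i);
    return NULL;
}

/* consumer: stands in for the video_refresh() side */
static void *refresher(void *arg) {
    for (int i = 0; i < 8; i++)
        printf("got packet %d\n", queue_get(&videoq));
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    queue_init(&videoq);
    pthread_create(&t1, NULL, reader, NULL);
    pthread_create(&t2, NULL, refresher, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}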

Next, review the read_thread() function; it is summarized as follows:

  1. Call the avformat_open_input() function, which selects the network protocol and demuxer according to the data source, distinguished by keywords in the user's URL.
    For example, given "tcpext://192.168.1.31:1717/v-out.h264", ijkplayer parses the protocol as tcpext and the demuxer as h264.

  2. Call the avformat_find_stream_info(ic, opts) function, which identifies the encoding format of the stream data and determines which decoder each stream needs.
    By the time this function is entered, the number and types of streams in the AVFormatContext are already determined. Exactly when they are confirmed remains an open question.

  3. Call the stream_component_open(ffp, st_index[AVMEDIA_TYPE_VIDEO]) function, which constructs a decoder for a data stream according to the stream information;
    decoders are configured for the video, audio and subtitle stream types respectively.

  4. Enter the thread's loop body: av_read_frame(ic, pkt) -> packet_queue_put(&is->videoq, &copy); the read -> enqueue cycle repeats.

That is roughly the logic of the program body; a sketch of steps 1 and 2 follows.
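
The sketch below exercises steps 1 and 2 through the public libavformat API. The tcpext URL is the article's example; a stock FFmpeg build would need the private protocol and demuxer compiled in before this would succeed, and the function name open_and_probe is illustrative.

#include <libavformat/avformat.h>

static int open_and_probe(const char *url)
{
    AVFormatContext *ic = NULL;
    int ret;

    /* step 1: protocol and demuxer are selected from the URL keyword */
    if ((ret = avformat_open_input(&ic, url, NULL, NULL)) < 0)
        return ret;

    /* step 2: probe packets to identify the codec of each stream */
    if ((ret = avformat_find_stream_info(ic, NULL)) < 0) {
        avformat_close_input(&ic);
        return ret;
    }

    av_dump_format(ic, 0, url, 0);   /* print the detected streams */
    avformat_close_input(&ic);
    return 0;
}

/* usage: open_and_probe("tcpext://192.168.1.31:1717/v-out.h264"); */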

This article mainly analyzes the data-stream reading process. With that target defined, let's switch into code-reading mode.

read_thread() thread

Let's first look at the relevant simplified logic code in the read_thread() thread:

void read_thread(void *arg)
{
    FFPlayer *ffp = arg;                                            ///> Parameter passed in from the Android user-space layer
    VideoState *is = ffp->is;
    AVFormatContext *ic = NULL;
    int err, i, ret __unused;
    int st_index[AVMEDIA_TYPE_NB];
    AVPacket pkt1, *pkt = &pkt1;

    ///> Stream encoding-format identification, part 1
    if (ffp->find_stream_info) {
        AVDictionary **opts = setup_find_stream_info_opts(ic, ffp->codec_opts);  ///>Get decoder parameter dictionary pointer
        int orig_nb_streams = ic->nb_streams;

        do {
            if (av_stristart(is->filename, "data:", NULL) && orig_nb_streams > 0) {
                for (i = 0; i < orig_nb_streams; i++) {
                    if (!ic->streams[i] || !ic->streams[i]->codecpar || ic->streams[i]->codecpar->profile == FF_PROFILE_UNKNOWN) {
                        break;
                    }
                }

                if (i == orig_nb_streams) {
                    break;
                }
            }
            err = avformat_find_stream_info(ic, opts);                          ///>Enter the matching search process and flag the type of stream in the decoder options dictionary
        } while(0); 
        ffp_notify_msg1(ffp, FFP_MSG_FIND_STREAM_INFO);
    }

    is->realtime = is_realtime(ic);
    av_dump_format(ic, 0, is->filename, 0);

    ///> Stream encoding-format identification, part 2
    int video_stream_count = 0;
    int h264_stream_count = 0;
    int first_h264_stream = -1;
    for (i = 0; i < ic->nb_streams; i++) {
        AVStream *st = ic->streams[i];
        enum AVMediaType type = st->codecpar->codec_type;
        st->discard = AVDISCARD_ALL;
        if (type >= 0 && ffp->wanted_stream_spec[type] && st_index[type] == -1)
            if (avformat_match_stream_specifier(ic, st, ffp->wanted_stream_spec[type]) > 0)
                st_index[type] = i;

        // choose first h264

        if (type == AVMEDIA_TYPE_VIDEO) {
            enum AVCodecID codec_id = st->codecpar->codec_id;
            video_stream_count++;
            if (codec_id == AV_CODEC_ID_H264) {
                h264_stream_count++;
                if (first_h264_stream < 0)
                    first_h264_stream = i;
            }
        }
        av_log(NULL, AV_LOG_INFO, "DEBUG %s, LINE:%d ,CODEC_ID:%d\n",__FILE__, __LINE__, (uint32_t)st->codecpar->codec_id);
    }

    ///> Handle the multi-stream case
    if (video_stream_count > 1 && st_index[AVMEDIA_TYPE_VIDEO] < 0) {
        st_index[AVMEDIA_TYPE_VIDEO] = first_h264_stream;
        av_log(NULL, AV_LOG_WARNING, "multiple video stream found, prefer first h264 stream: %d\n", first_h264_stream);
    }

    ///> Select the best stream for each media type
    if (!ffp->video_disable)
        st_index[AVMEDIA_TYPE_VIDEO] =
            av_find_best_stream(ic, AVMEDIA_TYPE_VIDEO,
                                st_index[AVMEDIA_TYPE_VIDEO], -1, NULL, 0);
    if (!ffp->audio_disable)
        st_index[AVMEDIA_TYPE_AUDIO] =
            av_find_best_stream(ic, AVMEDIA_TYPE_AUDIO,
                                st_index[AVMEDIA_TYPE_AUDIO],
                                st_index[AVMEDIA_TYPE_VIDEO],
                                NULL, 0);
    if (!ffp->video_disable && !ffp->subtitle_disable)
        st_index[AVMEDIA_TYPE_SUBTITLE] =
            av_find_best_stream(ic, AVMEDIA_TYPE_SUBTITLE,
                                st_index[AVMEDIA_TYPE_SUBTITLE],
                                (st_index[AVMEDIA_TYPE_AUDIO] >= 0 ?
                                 st_index[AVMEDIA_TYPE_AUDIO] :
                                 st_index[AVMEDIA_TYPE_VIDEO]),
                                NULL, 0);

    is->show_mode = ffp->show_mode;
    /* open the streams */
    if (st_index[AVMEDIA_TYPE_AUDIO] >= 0) {
        stream_component_open(ffp, st_index[AVMEDIA_TYPE_AUDIO]);
    } else {
        ffp->av_sync_type = AV_SYNC_VIDEO_MASTER;
        is->av_sync_type  = ffp->av_sync_type;
    }

    ret = -1;
    if (st_index[AVMEDIA_TYPE_VIDEO] >= 0) {
        ret = stream_component_open(ffp, st_index[AVMEDIA_TYPE_VIDEO]);
    }
    if (is->show_mode == SHOW_MODE_NONE)
        is->show_mode = ret >= 0 ? SHOW_MODE_VIDEO : SHOW_MODE_RDFT;

    if (st_index[AVMEDIA_TYPE_SUBTITLE] >= 0) {
        stream_component_open(ffp, st_index[AVMEDIA_TYPE_SUBTITLE]);
    }
    ffp_notify_msg1(ffp, FFP_MSG_COMPONENT_OPEN);

    ///> Notify the Android-side program
    if (!ffp->ijkmeta_delay_init) {
        ijkmeta_set_avformat_context_l(ffp->meta, ic);
    }

    ///> Set the metadata dictionary items
    ffp->stat.bit_rate = ic->bit_rate;
    if (st_index[AVMEDIA_TYPE_VIDEO] >= 0)
        ijkmeta_set_int64_l(ffp->meta, IJKM_KEY_VIDEO_STREAM, st_index[AVMEDIA_TYPE_VIDEO]);
    if (st_index[AVMEDIA_TYPE_AUDIO] >= 0)
        ijkmeta_set_int64_l(ffp->meta, IJKM_KEY_AUDIO_STREAM, st_index[AVMEDIA_TYPE_AUDIO]);
    if (st_index[AVMEDIA_TYPE_SUBTITLE] >= 0)
        ijkmeta_set_int64_l(ffp->meta, IJKM_KEY_TIMEDTEXT_STREAM, st_index[AVMEDIA_TYPE_SUBTITLE]);

    ///>Player status adjustment
    ffp->prepared = true;
    ffp_notify_msg1(ffp, FFP_MSG_PREPARED);
    if (ffp->auto_resume) {
        ffp_notify_msg1(ffp, FFP_REQ_START);
        ffp->auto_resume = 0;
    }
    /* seek to the start offset if requested */
    if (ffp->seek_at_start > 0) {
        ffp_seek_to_l(ffp, (long)(ffp->seek_at_start));
    }

    ///> Enter the playback state: the thread's main loop body
    for (;;){

        ///>
        if (is->queue_attachments_req) {  ///> This flag is set to 1 when the stream is opened
            if (is->video_st && (is->video_st->disposition & AV_DISPOSITION_ATTACHED_PIC)) {
                AVPacket copy = { 0 };
                if ((ret = av_packet_ref(&copy, &is->video_st->attached_pic)) < 0)
                    goto fail;            ///> the fail label exists in the full source (omitted in this excerpt)
                packet_queue_put(&is->videoq, &copy);
                packet_queue_put_nullpacket(&is->videoq, is->video_stream);
            }
            is->queue_attachments_req = 0;
        }
        ///> 
        pkt->flags = 0;
        ret = av_read_frame(ic, pkt);
        ///>
        if (pkt->flags & AV_PKT_FLAG_DISCONTINUITY) {
            if (is->audio_stream >= 0) {
                packet_queue_put(&is->audioq, &flush_pkt);
            }
            if (is->subtitle_stream >= 0) {
                packet_queue_put(&is->subtitleq, &flush_pkt);
            }
            if (is->video_stream >= 0) {
                packet_queue_put(&is->videoq, &flush_pkt);
            }
        }
        ///> pkt_in_play_range is computed from the duration/seek options in the full source (omitted here)
        if (pkt->stream_index == is->audio_stream && pkt_in_play_range) {
            packet_queue_put(&is->audioq, pkt);
        } else if (pkt->stream_index == is->video_stream && pkt_in_play_range
                   && !(is->video_st && (is->video_st->disposition & AV_DISPOSITION_ATTACHED_PIC))) {
            packet_queue_put(&is->videoq, pkt);
        } else if (pkt->stream_index == is->subtitle_stream && pkt_in_play_range) {
            packet_queue_put(&is->subtitleq, pkt);
        } else {
            av_packet_unref(pkt);
        }
        ///>
        ffp_statistic_l(ffp);
        av_log(NULL, AV_LOG_INFO, " %s / %s , LINE:%d \n",__FILE__, __func__, __LINE__);
    }
}

The listing above is a simplified version of the function's structure, with annotations at each key point.

Reading the data stream

The read_thread() thread calls the av_read_frame(ic, pkt) function in a loop to read the data-stream content. Tracing and sorting out this function, the call relationship is as follows.

av_read_frame(ic, pkt);                             ///>Entry parameter: AVFormatContext *ic
    -> read_frame_internal(s, pkt);
        -> ff_read_packet(s, &cur_pkt);             ///>Entry parameter: AVPacket cur_pkt;
            -> av_init_packet(pkt);
            -> s->iformat->read_packet(s, pkt);     ///> Here read_packet resolves to ff_raw_read_partial_packet(AVFormatContext *s, AVPacket *pkt), in libavformat/rawdec.c
                -> av_new_packet(pkt, size)
                -> avio_read_partial(s->pb, pkt->data, size); ///> Entry parameter: AVIOContext *s->pb; the function is in libavformat/aviobuf.c
                    -> s->read_packet(s->opaque, buf, size);  ///> Here read_packet resolves to io_read_packet(), which ultimately calls tcp_read(); see the analysis below.
                    -> memcpy(buf, s->buf_ptr, len);
                    -> s->buf_ptr += len;
                    -> return len;
                -> av_shrink_packet(pkt, ret);
        -> av_parser_init(st->codecpar->codec_id)
        -> avcodec_get_name(st->codecpar->codec_id)
        -> compute_pkt_fields(s, st, NULL, pkt, AV_NOPTS_VALUE, AV_NOPTS_VALUE)
        -> read_from_packet_buffer(&s->internal->parse_queue, &s->internal->parse_queue_end, pkt)
        -> update_stream_avctx(s);
    -> add_to_pktbuf(&s->internal->packet_buffer, pkt,&s->internal->packet_buffer_end, 1);
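
To see how the s->read_packet(s->opaque, buf, size) call in the chain above gets wired up, here is a minimal sketch using FFmpeg's public custom-I/O API. avio_alloc_context() stores the opaque pointer and the read callback in the AVIOContext, which is the same mechanism that routes io_read_packet() to the private tcp_read() below. The my_source type and function names are illustrative assumptions, not ijkplayer code.

#include <stdint.h>
#include <string.h>
#include <libavformat/avio.h>
#include <libavutil/mem.h>
#include <libavutil/error.h>

struct my_source { const uint8_t *data; size_t size, pos; };

static int my_read(void *opaque, uint8_t *buf, int buf_size)
{
    struct my_source *src = opaque;  /* same cast pattern as io_read_packet() */
    size_t left = src->size - src->pos;
    if (left == 0)
        return AVERROR_EOF;
    if ((size_t)buf_size > left)
        buf_size = (int)left;
    memcpy(buf, src->data + src->pos, buf_size);
    src->pos += buf_size;
    return buf_size;
}

static AVIOContext *make_avio(struct my_source *src)
{
    unsigned char *iobuf = av_malloc(4096);
    if (!iobuf)
        return NULL;
    /* opaque = src; aviobuf.c later calls my_read(opaque, buf, size),
       just as it calls s->read_packet(s->opaque, buf, size) above */
    return avio_alloc_context(iobuf, 4096, 0, src, my_read, NULL, NULL);
}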

Next, analyze the entry parameters and the call relationship of the chain io_read_packet() -> ffurl_read() -> tcp_read().
First, sort out the function entry parameters as follows.
The first entry parameter traces through AVFormatContext -> AVIOContext -> opaque; it is declared as void *opaque in the AVIOContext structure definition.

///> The function's opaque entry parameter is the AVIOContext's s->opaque field
static int io_read_packet(void *opaque, uint8_t *buf, int buf_size)
{
    AVIOInternal *internal = opaque;                    ///> Cast the opaque pointer (set when the AVIOContext s->pb was created) to an AVIOInternal pointer
    return ffurl_read(internal->h, buf, buf_size);      ///> ffurl_read() dispatches to tcp_read(), the function added by the private protocol
}

//>The structure AVIOInternal is defined as follows
typedef struct AVIOInternal {
    URLContext *h;
} AVIOInternal;

//>The structure URLContext is defined as follows
typedef struct URLContext {
    const AVClass *av_class;    /**< information for av_log(). Set by url_open(). */
    const struct URLProtocol *prot;
    void *priv_data;
    char *filename;             /**< specified URL */
    int flags;
    int max_packet_size;        /**< if non zero, the stream is packetized with this max packet size */
    int is_streamed;            /**< true if streamed (no seek possible), default = false */
    int is_connected;
    AVIOInterruptCB interrupt_callback;
    int64_t rw_timeout;         /**< maximum time to wait for (network) read/write operation completion, in mcs */
    const char *protocol_whitelist;
    const char *protocol_blacklist;
    int min_packet_size;        /**< if non zero, the stream is packetized with this min packet size */
    int64_t pts;                ///< pts variable added for the private protocol
} URLContext;
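
The prot member above points at a URLProtocol table. Below is a hedged sketch of how the private tcpext protocol might be declared, modeled on FFmpeg's libavformat/tcp.c; the callback names other than tcp_read are assumptions, since the article only shows tcp_read.

const URLProtocol ff_tcpext_protocol = {
    .name           = "tcpext",            /* matched against the URL scheme keyword */
    .url_open       = tcp_open,
    .url_read       = tcp_read,            /* the function analyzed below */
    .url_write      = tcp_write,
    .url_close      = tcp_close,
    .priv_data_size = sizeof(TCPEXTContext),
};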

///> The type and layout of this function's URLContext *h entry parameter are shown in the structure definition above.
static int tcp_read(URLContext *h, uint8_t *buf, int size)
{
    uint8_t header[HEADER_SIZE];
    TCPEXTContext *s = h->priv_data;
    int ret;

    if (!(h->flags & AVIO_FLAG_NONBLOCK)) {
        ret = ff_network_wait_fd_timeout(s->fd, 0, h->rw_timeout, &h->interrupt_callback);
        if (ret)
            return ret;
    }

    ret = recv(s->fd, header, HEADER_SIZE, MSG_WAITALL);
    if(ret < HEADER_SIZE){
        av_log(NULL, AV_LOG_INFO, "%s/%s(), LINE:%d ,READ_HEADER_FAIL length:%d \n",__FILE__, __func__, __LINE__, ret);
        return 0;
    }
    uint32_t msb = header[0] << 24 | header[1] << 16 | header[2] << 8 | header[3];
    uint32_t lsb = header[4] << 24 | header[5] << 16 | header[6] << 8 | header[7];
    uint32_t len = header[8] << 24 | header[9] << 16 | header[10] << 8 | header[11];
    uint64_t pts = ((uint64_t)msb << 32) | lsb;   ///> widen msb to 64 bits before shifting; a 32-bit value shifted by 32 is undefined
    av_log(NULL, AV_LOG_INFO, "READ HEADER msb:%08x, lsb:%08x, len:%08x \n", msb, lsb, len);    
    assert( pts == NO_PTS || (pts & 0x8000000000000000) == 0);
    assert(len);

    ret = recv(s->fd, buf, len, MSG_WAITALL);
    if (ret > 0){
        av_application_did_io_tcp_read(s->app_ctx, (void*)h, ret);
        uint32_t hsb = buf[0] << 24 | buf[1] << 16 | buf[2] << 8 | buf[3];
        msb = buf[4] << 24 | buf[5] << 16 | buf[6] << 8 | buf[7];
        lsb = buf[8] << 24 | buf[9] << 16 | buf[10] << 8 | buf[11];
        av_log(NULL, AV_LOG_INFO, "H264 HEADER hsb:%08x, msb:%08x, lsb:%08x \n", hsb, msb, lsb);
    }
    av_log(NULL, AV_LOG_INFO, "%s/%s(), LINE:%d ,recv length:%d \n",__FILE__, __func__, __LINE__, ret);
    return ret < 0 ? ff_neterrno() : ret;
}
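
For clarity, the 12-byte frame header that tcp_read() parses above consists of 8 bytes of big-endian pts followed by a 4-byte big-endian payload length. A standalone sketch of the same parse (the frame_header type is illustrative):

#include <stdint.h>

typedef struct { uint64_t pts; uint32_t len; } frame_header;

static frame_header parse_header(const uint8_t h[12])
{
    frame_header fh;
    uint32_t msb = (uint32_t)h[0] << 24 | h[1] << 16 | h[2]  << 8 | h[3];
    uint32_t lsb = (uint32_t)h[4] << 24 | h[5] << 16 | h[6]  << 8 | h[7];
    fh.pts = ((uint64_t)msb << 32) | lsb;          /* widen before shifting */
    fh.len = (uint32_t)h[8] << 24 | h[9] << 16 | h[10] << 8 | h[11];
    return fh;
}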

Summary:

  • 1> The read_thread() thread defines AVFormatContext *ic and AVPacket pkt1, and the entry parameters of av_read_frame(ic, pkt) all trace back to them;
    in tcp_read(), the entry parameter h is the URLContext reached via ic->pb->opaque (an AVIOInternal whose h member it is), and buf maps to pkt->data.
  • 2> The pts data can only be stored in URLContext *h; in the added private demuxer's ff_raw_read_partial_packet() function, the pts value is transcribed
    into pkt->pts.
  • 3> In the ijkplayer SDK, adding a private communication protocol is handled much like adding a private demuxer; this module can serve as a reference.

The packet_buffer object obtained here thus carries a pts value; that is, the current pts can be filled in at the moment the data is read. A hedged sketch of that transcription follows.
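
The article does not show the author's actual patch to ff_raw_read_partial_packet(), so the sketch below is an assumption: it follows the shape of libavformat/rawdec.c (the 1024-byte RAW_PACKET_SIZE read) with the pts copy from point 2 added, using the AVIOInternal and URLContext definitions shown earlier.

static int raw_read_partial_packet(AVFormatContext *s, AVPacket *pkt)
{
    int size = 1024;                          /* RAW_PACKET_SIZE in rawdec.c */
    int ret;

    if (av_new_packet(pkt, size) < 0)
        return AVERROR(ENOMEM);

    pkt->stream_index = 0;
    ret = avio_read_partial(s->pb, pkt->data, size);
    if (ret < 0) {
        av_packet_unref(pkt);
        return ret;
    }
    av_shrink_packet(pkt, ret);

    /* added step: transcribe the pts saved by tcp_read() in the URLContext */
    URLContext *h = ((AVIOInternal *)s->pb->opaque)->h;
    pkt->pts = h->pts;
    return ret;
}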
