H264 video streaming over RTMP on iOS

2023-12-29

After some digging, I found a library that can extract NAL units from a .mp4 file while that file is still being written. I am trying to packetize this information into FLV and send it over RTMP using libavformat and libavcodec. I set up the video stream with the following method:

-(void)setupVideoStream {
    int ret = 0;
    videoCodec = avcodec_find_decoder(STREAM_VIDEO_CODEC);

    if (videoCodec == nil) {
        NSLog(@"Could not find encoder %i", STREAM_VIDEO_CODEC);
        return;
    }

    videoStream                                 = avformat_new_stream(oc, videoCodec);

    videoCodecContext                           = videoStream->codec;

    videoCodecContext->codec_type               = AVMEDIA_TYPE_VIDEO;
    videoCodecContext->codec_id                 = STREAM_VIDEO_CODEC;
    videoCodecContext->pix_fmt                  = AV_PIX_FMT_YUV420P;
    videoCodecContext->profile                  = FF_PROFILE_H264_BASELINE;

    videoCodecContext->bit_rate                 = 512000;
    videoCodecContext->bit_rate_tolerance       = 0;

    videoCodecContext->width                    = STREAM_WIDTH;
    videoCodecContext->height                   = STREAM_HEIGHT;

    videoCodecContext->time_base.den            = STREAM_TIME_BASE;
    videoCodecContext->time_base.num            = 1;
    videoCodecContext->gop_size                 = STREAM_GOP;

    videoCodecContext->has_b_frames             = 0;
    videoCodecContext->ticks_per_frame          = 2;

    videoCodecContext->qcompress                = 0.6;
    videoCodecContext->qmax                     = 51;
    videoCodecContext->qmin                     = 10;
    videoCodecContext->max_qdiff                = 4;
    videoCodecContext->i_quant_factor           = 0.71;

    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        videoCodecContext->flags                |= CODEC_FLAG_GLOBAL_HEADER;

    videoCodecContext->extradata                = avcCHeader;
    videoCodecContext->extradata_size           = avcCHeaderSize;

    ret = avcodec_open2(videoStream->codec, videoCodec, NULL);
    if (ret < 0)
        NSLog(@"Could not open codec!");
}

Then I connect, and every time the library extracts a NALU it hands an array containing one or two NALUs to my RTMPClient. The method that handles the actual streaming looks like this:

-(void)writeNALUToStream:(NSArray*)data time:(double)pts {
    int ret = 0;
    uint8_t *buffer = NULL;
    int bufferSize = 0;

    // Number of NALUs within the data array
    int numNALUs = [data count];

    // First NALU
    NSData *fNALU = [data objectAtIndex:0];
    int fLen = [fNALU length];

    // If there is more than one NALU...
    if (numNALUs > 1) {
        // Second NALU
        NSData *sNALU = [data objectAtIndex:1];
        int sLen = [sNALU length];

        // Allocate a buffer the size of first data and second data
        buffer = av_malloc(fLen + sLen);

        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);

        // Copy the second data bytes of sLen into the buffer + fLen + 1
        memcpy(buffer + fLen + 1, [sNALU bytes], sLen);

        // Update the size of the buffer
        bufferSize = fLen + sLen;
    }else {
        // Allocate a buffer the size of first data
        buffer = av_malloc(fLen);

        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);

        // Update the size of the buffer
        bufferSize = fLen;
    }

    // Initialize the packet
    av_init_packet(&pkt);

    //av_packet_from_data(&pkt, buffer, bufferSize);

    // Set the packet data to the buffer
    pkt.data            = buffer;
    pkt.size            = bufferSize;
    pkt.pts             = pts;

    // Stream index 0 is the video stream
    pkt.stream_index    = 0;

    // Add a key frame flag every 15 frames
    if ((processedFrames % 15) == 0)
        pkt.flags       |= AV_PKT_FLAG_KEY;

    // Write the frame to the stream
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret < 0) 
        NSLog(@"Error writing frame %i to stream", processedFrames);
    else {
        // Update the number of frames successfully streamed
        frameCount++;
        // Update the number of bytes successfully sent
        bytesSent += pkt.size;
    }

    // Update the number of frames processed
    processedFrames++;
    // Update the number of bytes processed
    processedBytes += pkt.size;

    free((uint8_t*)buffer);
    // Free the packet
    av_free_packet(&pkt);
}

After roughly 100 frames I get the following error:

malloc: *** error for object 0xe5bfa0: incorrect checksum for freed object - object was probably modified after being freed. *** set a breakpoint in malloc_error_break to debug

I cannot seem to stop this from happening. I have tried commenting out the av_free_packet() and free() calls, and I have also tried using av_packet_from_data() instead of initializing the packet and setting its data and size fields myself.
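For what it is worth, av_packet_from_data() transfers ownership of the buffer to the packet, so on that path the buffer must come from av_malloc() (including the padding bytes) and must not also be released with free(); freeing the packet releases it. A minimal sketch, assuming a libavcodec version that provides av_packet_from_data():

av_init_packet(&pkt);

// The packet takes ownership of 'buffer', which must have been allocated
// with av_malloc(bufferSize + FF_INPUT_BUFFER_PADDING_SIZE)
if (av_packet_from_data(&pkt, buffer, bufferSize) < 0) {
    av_free(buffer);          // on failure the caller still owns the buffer
    return;
}

pkt.pts          = pts;
pkt.stream_index = 0;

ret = av_interleaved_write_frame(oc, &pkt);

// Releasing the packet also frees 'buffer' -- do not free() it again
av_free_packet(&pkt);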

My question is: how can I stop this error from happening? Also, according to Wireshark these look like valid RTMP H264 packets, yet they play back nothing but a black screen. Is there some obvious mistake I am overlooking?


It looks to me like your buffer is overflowing and corrupting your stream:

memcpy(buffer + fLen + 1, [sNALU bytes], sLen);

You are allocating fLen + sLen bytes but then writing fLen + sLen + 1 bytes. Just get rid of the + 1.
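In other words, a minimal sketch of the corrected concatenation, reusing the question's buffer, fLen and sLen variables: the second NALU starts exactly at offset fLen, so nothing is written past the end of the allocation.

buffer = av_malloc(fLen + sLen);

// First NALU occupies bytes [0, fLen)
memcpy(buffer, [fNALU bytes], fLen);

// Second NALU follows immediately at offset fLen -- no "+ 1"
memcpy(buffer + fLen, [sNALU bytes], sLen);

bufferSize = fLen + sLen;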

Since your AVPacket is allocated on the stack, av_free_packet() is not needed. Finally, it is considered good practice to allocate some extra padding bytes for libav: av_malloc(size + FF_INPUT_BUFFER_PADDING_SIZE).
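A sketch of what that padded allocation could look like in the two-NALU branch, assuming an FFmpeg version that still defines FF_INPUT_BUFFER_PADDING_SIZE (newer releases rename it AV_INPUT_BUFFER_PADDING_SIZE); the FFmpeg headers also recommend zeroing the padding:

buffer = av_malloc(fLen + sLen + FF_INPUT_BUFFER_PADDING_SIZE);

memcpy(buffer, [fNALU bytes], fLen);
memcpy(buffer + fLen, [sNALU bytes], sLen);

// Zero the padding so libav's optimized bitstream readers never
// touch uninitialized bytes past the payload
memset(buffer + fLen + sLen, 0, FF_INPUT_BUFFER_PADDING_SIZE);

bufferSize = fLen + sLen;   // the padding is not counted in the packet size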
