Hongmeng open-source full-scenario application development -- communication protocol

Preface

As mentioned in earlier issues, the family group photo beauty camera application we developed runs on both Hongmeng and Android devices. We explain its four functional modules: video codec, video rendering, communication protocol, and beauty filter.
In previous issues, we analyzed the implementation principles of the video codec and video rendering modules. This issue explains the communication protocol and briefly outlines the implementation of the Android beauty filter. The relevant code has been open-sourced on Gitee ( https://gitee.com/isrc_ohos/cameraharmony/tree/RTP/ ); you are welcome to download it, try it out, and offer your valuable feedback!

Background

RTP is a basic protocol for streaming media transmission on the Internet. It supports one-to-one and one-to-many transmission, and its purpose is to carry timing information so that streams can be synchronized. It can be built on top of either connection-oriented or connectionless transport protocols, and in practice it is usually carried over UDP. The sequence of RTP packets sent from one synchronization source is called a stream, and an RTP session may contain multiple RTP streams.
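To make the timing information mentioned above concrete, the following is a minimal sketch of the fixed 12-byte RTP header defined in RFC 3550. In the application these fields are filled in by the RTP sender library; the method below is purely illustrative and is not taken from the project code.

import java.nio.ByteBuffer;

//Illustrative sketch of the fixed 12-byte RTP header (RFC 3550), not the project's actual code
static byte[] buildRtpHeader(int sequenceNumber, int timestamp, int ssrc) {
    ByteBuffer header = ByteBuffer.allocate(12);
    header.put((byte) 0x80);                 //V=2, P=0, X=0, CC=0
    header.put((byte) 96);                   //M=0, payload type 96 (dynamic)
    header.putShort((short) sequenceNumber); //Sequence number, incremented for each packet
    header.putInt(timestamp);                //Media timestamp, used for stream synchronization
    header.putInt(ssrc);                     //Synchronization source (SSRC) identifier
    return header.array();
}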

Application effect display

1. Review of the family group photo beauty camera application

First, let's review the family group photo beauty camera application explained in the last issue.
This application transmits the video captured by the Hongmeng large screen to an Android mobile phone in real time. The Android side adds a filter and then transmits the processed video data back to the Hongmeng large screen for rendering and display, realizing beauty photography on the Hongmeng large screen. Refer to Figure 1 for the effect after the application runs.
In the figure, the vertical screen at the bottom is the Android mobile phone, and the horizontal screen at the top is a Hongmeng mobile phone (since our experimental environment lacks large-screen equipment running the Hongmeng system, a Hongmeng mobile phone stands in for the large screen), which shows the rendered result after video decoding.


Figure 1 Operation effect of the family group photo beauty camera application

2. RTP transmission Demo effect

To explain the communication protocol more clearly, we split the data transmission part out of the family group photo beauty camera application to form an RTP transmission Demo, reorganized and optimized its functions, and changed the original video transmission to image transmission. A video is simply a sequence of image frames, so changing the type of transmitted data does not affect the principle or steps of RTP transmission. The operation effect of the RTP transmission Demo is shown in Figure 2: the upper screenshot shows the sender and the lower one shows the receiver.
After installing and opening the application, click the blue button on the sending end to send the picture data of the specific area selected by the developer; click the pink button on the receiving end to receive the picture data just sent and display it below the button.


Figure 2 RTP transmission Demo operation effect (top: sender, bottom: receiver)
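For reference, the button behavior described above could be wired up as in the following sketch. The component ID and the sendSelectedRegion() helper are illustrative assumptions, not names taken from the project source.

//Hypothetical wiring of the sender's blue button (illustrative names, HarmonyOS Java UI)
Button sendButton = (Button) findComponentById(ResourceTable.Id_send_button);
sendButton.setClickedListener(component -> {
    //Read the selected image region, convert it, and send it over RTP
    sendSelectedRegion();
});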

Analysis of RTP transmission principle and steps

Next, we will focus on the implementation principle and steps of RTP transmission.
Refer to Figure 3 for the flow of the RTP transmission Demo. The Hongmeng sender (server) prepares the image data to be transmitted and sends it over the wireless network to the Hongmeng receiver (client), using the RTP protocol on top of Socket point-to-point communication.
The Hongmeng receiver (client) draws the image after receiving the image data from the sender. The implementation steps of the RTP transmission Demo are analyzed below.


Figure 3 RTP transmission principle flow chart

Server data sending

On the server side, place the image to be sent in the resources > base > media folder, as shown in Figure 4. The image data to be transmitted is then format-converted and sent to the Hongmeng receiver over the wireless network, using the RTP protocol and Socket point-to-point communication.


Figure 4 Position of the picture in the project structure

The data sending process on the server side consists of the following three steps:
Step 1. Obtain the bitmap object through the resource ID;
Step 2. Format-convert the pixels in the specified area of the bitmap;
Step 3. Send the data.
(1) Get the bitmap object by resource ID
Call the getResource() method with the resource ID drawableId as the input parameter to obtain the resource input stream drawableInputStream; instantiate an ImageSource.SourceOptions object and set the image source format to PNG; create the image source, with the resource input stream and the SourceOptions object as parameters; instantiate an ImageSource.DecodingOptions object and initialize the desired image size, region and bitmap pixel format; create a PixelMap bitmap from the ImageSource according to the DecodingOptions object; finally, return the bitmap object.

//Get bitmap object by resource ID
private PixelMap getPixelMap(int drawableId) {
    InputStream drawableInputStream = null;
    try {
        //Take the resource ID as the input parameter to obtain the resource input stream
        drawableInputStream = this.getResourceManager().getResource(drawableId);
        //Instantiate an ImageSource.SourceOptions class object
        ImageSource.SourceOptions sourceOptions = new ImageSource.SourceOptions();
        sourceOptions.formatHint = "image/png";//Set the image source format to PNG
        //Create the image source; the parameters are the resource input stream and the SourceOptions object
        ImageSource imageSource = ImageSource.create(drawableInputStream, sourceOptions);
        //Instantiate the DecodingOptions class object for the decoding operation
        ImageSource.DecodingOptions decodingOptions = new ImageSource.DecodingOptions();
        decodingOptions.desiredSize = new Size(0, 0);//Set the desired size; (0, 0) keeps the original size
        decodingOptions.desiredRegion = new Rect(0, 0, 0, 0);//Set the desired region; (0, 0, 0, 0) decodes the whole image
        decodingOptions.desiredPixelFormat = PixelFormat.ARGB_8888;//Set the bitmap pixel format
        PixelMap pixelMap = imageSource.createPixelmap(decodingOptions);//Create the bitmap from the decoding options
        return pixelMap;//Return the bitmap
    }
    ...
}

(2) Convert the pixels of the specified bitmap area
After obtaining the bitmap object, instantiate a Rect rectangle object to select the specific image area chosen by the developer (the area should not exceed the size of the image under the resources > base > media path); call the readPixels() method on the bitmap object pixelMap to read the pixels of the specified area into int[] data; then call the intToBytes() method to convert the int[] data into byte[] data.

// Read the pixels of the specified area
Rect region = new Rect(0, 0, 30, 30);//Instantiate the rectangle object that specifies the area
pixelMap.readPixels(pixelArray,0,30,region);//Read the pixels of the specified area into int [] data
pic = intToBytes(pixelArray);//Convert the int [] data into byte [] data
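The intToBytes() helper itself is not shown in the project snippet above. The following is a minimal sketch of what such a conversion could look like, under the assumption that each ARGB_8888 pixel is turned into 4 bytes in big-endian order.

import java.nio.ByteBuffer;

//Minimal sketch of an int[]-to-byte[] conversion helper (assumption, not the project's actual code)
private static byte[] intToBytes(int[] pixels) {
    ByteBuffer buffer = ByteBuffer.allocate(pixels.length * 4);
    for (int pixel : pixels) {
        buffer.putInt(pixel);//Each ARGB pixel becomes 4 bytes
    }
    return buffer.array();
}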

(3) Send the data
Instantiate the RTP sending class object RtpSenderWrapper, setting the IP address to the receiving mobile phone's IP address and the port number to 5005; then call the sendAvcPacket() method to send the image data.
Because the transmitted data type has been simplified, RTP transmission of an image is relatively easy. For the video RTP transmission in the original application, the video data must be converted frame by frame, compressing the raw YUV video data captured by the camera into H.264 video data suitable for Socket transmission.

mRtpSenderWrapper = new RtpSenderWrapper("192.168.31.12", 5005, false);//Receiver IP address and port
mRtpSenderWrapper.sendAvcPacket(pic, 0, pic.length, 0);//Send the image data
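For the original video case mentioned above, the Hongmeng side's codec usage was covered in the earlier video codec issue; on the Android side (which also sends processed video back in the original application), one common way to obtain H.264 data is Android's MediaCodec. The sketch below shows a possible encoder configuration; the resolution, bitrate and other parameters are illustrative assumptions, not values taken from the project.

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.io.IOException;

//Hedged sketch of an H.264 (AVC) encoder setup for the original video transmission case
private MediaCodec createAvcEncoder() throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);//Illustrative bitrate
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    encoder.start();
    return encoder;//YUV frames are queued into the encoder and the H.264 output is handed to sendAvcPacket()
}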

Client receiving data

After the sender successfully sends data over the RTP protocol, the receiver can start receiving. The data receiving process on the receiver side is divided into the following five steps:
Step 1. Create a data receiving thread;
Step 2. Receive the data;
Step 3. Transfer the data between threads;
Step 4. Process the bitmap data to obtain a PixelMapHolder;
Step 5. Draw the image.
(1) Create the data receiving thread
Create a child thread to serve as the data receiving thread.

new Thread(() -> { /* data receiving logic */ }).start();//Start a new data receiving thread

(2) Receive the data
In the child receiving thread, instantiate the datagram packet; call the receive() method on the Socket class object to receive the data from the sender into the datagram packet; call the getData() method on the datagram packet to obtain the RTP data it carries.

datagramPacket = new DatagramPacket(data, data.length);//Instantiate the datagram packet
socket.receive(datagramPacket);//Receive data into the packet
rtpData = datagramPacket.getData();//Get the RTP data in the packet
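The snippets in steps (1) to (3) are fragments of a single receiving loop. A minimal sketch of how they could fit together is shown below; the buffer size is an illustrative assumption, the port 5005 matches the sender configuration above, `queue` is the SynchronousQueue introduced in step (3), and RTP header parsing is omitted for brevity.

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;

//Hedged sketch of the data receiving thread (not the project's exact code)
new Thread(() -> {
    try (DatagramSocket socket = new DatagramSocket(5005)) {//Listen on the port used by the sender
        byte[] data = new byte[64 * 1024];//Receive buffer (illustrative size)
        while (!Thread.currentThread().isInterrupted()) {
            DatagramPacket datagramPacket = new DatagramPacket(data, data.length);
            socket.receive(datagramPacket);//Block until a packet arrives
            byte[] rtpData = Arrays.copyOf(datagramPacket.getData(), datagramPacket.getLength());
            queue.put(rtpData);//Hand the received data to the main thread via the SynchronousQueue
        }
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
}).start();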

(3) Data transfer between threads
After the child thread obtains the RTP data, it needs to hand the data over to the main thread, which involves data transfer between threads. In this application, we use Java's SynchronousQueue concurrent queue to transfer data between the child thread and the main thread. First instantiate a SynchronousQueue object of byte[] type; the child thread puts the received data into the concurrent queue; the main thread then takes the data out of the queue.

SynchronousQueue<byte[]> queue = new SynchronousQueue<byte[]>();//Instantiate a concurrent queue of type byte []
queue.put(h264Data);//Put h264 type data into concurrent queue
rgbData = queue.take();//Get data from queue
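Before the byte[] payload taken from the queue can be used as pixel colors, it has to be converted back into int[] data, i.e. the reverse of the intToBytes() step on the sender. A minimal sketch of such a helper (a hypothetical name, not shown in the project source) could look like this:

import java.nio.ByteBuffer;
import java.nio.IntBuffer;

//Hedged sketch of a byte[]-to-int[] conversion helper, the counterpart of intToBytes()
private static int[] bytesToInts(byte[] data) {
    IntBuffer buffer = ByteBuffer.wrap(data).asIntBuffer();
    int[] pixels = new int[buffer.remaining()];
    buffer.get(pixels);//Every 4 bytes become one ARGB pixel value
    return pixels;
}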

(4) Process the received bitmap data to obtain a PixelMapHolder
After the main thread obtains the image RGB data from the queue, it can draw the image. PixelMap holds the received bitmap data, and PixelMapHolder wraps a PixelMap into the data required by the rendering back end, serving as the input parameter of the drawing methods on Canvas. Therefore, after the image data is transferred from the child thread to the main thread, the PixelMap must be converted into a PixelMapHolder object before the bitmap can be rendered; that is, the PixelMap bitmap data is passed in as the input parameter when the PixelMapHolder object is instantiated.

public void putPixelMap(PixelMap pixelMap){
    if (pixelMap != null) {//Judge whether the received bitmap data is empty
        rectSrc = new RectFloat(0, 0, pixelMap.getImageInfo().size.width, pixelMap.getImageInfo().size.height);
        pixelMapHolder = new PixelMapHolder(pixelMap);//Instantiate the PixelMapHolder class object
    } else {
        pixelMapHolder = null;//If the received bitmap is empty, set everything to null
        setPixelMap(null);
    }
}

(5) Draw the image
Instantiate a rectangle object and set the image information, such as the width and height of the target area; add a drawing task: first judge whether pixelMapHolder is empty, and return directly if it is; otherwise, inside a synchronized block, call the drawPixelMapHolderRoundRectShape() method to draw the PixelMapHolder object into the specified rectangle with a rounded-corner effect, its position given by rectDst; when drawing is complete, release pixelMapHolder by setting it to null.

private void onDraw(){
    this.addDrawTask((view, canvas) -> { //Add drawing task
        if (pixelMapHolder == null){//Determine whether the pixelMapHolder is empty
            return;
        }
        synchronized (pixelMapHolder) {//Draw images in synchronization tasks
            canvas.drawPixelMapHolderRoundRectShape(pixelMapHolder, rectSrc, rectDst, radius, radius);//Draw the image as a fillet effect
            pixelMapHolder = null;//Release the pixelMapHolder when drawing is complete
        }
    });
}

Implementation of the Android beauty filter effect

For the beauty filter, we referred to open-source projects on GitHub ( https://github.com/google/grafika , https://github.com/cats-oss/android-gpuimage , https://github.com/wuhaoyu1990/MagicCamera ) and used GPU shaders to add filters and switch between them. Since no Hongmeng capability is involved, this part is not the focus; we only briefly summarize its implementation process, which can be divided into the following steps:
(1) Set up different filters
Use the shader language (GLSL) to write the shader code required for each filter; a hedged example is sketched after Figure 5.


Figure 5 Filters used by the beauty camera
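As an illustration of step (1), the snippet below keeps a simple grayscale fragment shader as a Java string constant. This is a generic example under the assumption that filters are expressed as GLSL fragment shaders; it is not one of the project's actual filters.

//Hypothetical grayscale filter written in GLSL and stored as a Java string constant
private static final String GRAYSCALE_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "varying vec2 vTextureCoord;\n" +
        "uniform sampler2D sTexture;\n" +
        "void main() {\n" +
        "    vec4 color = texture2D(sTexture, vTextureCoord);\n" +
        "    float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));\n" +
        "    gl_FragColor = vec4(vec3(gray), color.a);\n" +
        "}\n";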

(2) OpenGL drawing;

import android.opengl.GLES20;
...
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader);   
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader); 
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram);
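The attach/link calls above assume the two shaders have already been compiled. A standard GLES20 compilation step, with illustrative variable names, could look like this:

//Compile a shader from GLSL source (type is GLES20.GL_VERTEX_SHADER or GLES20.GL_FRAGMENT_SHADER)
private static int loadShader(int type, String shaderCode) {
    int shader = GLES20.glCreateShader(type);//Create an empty shader object
    GLES20.glShaderSource(shader, shaderCode);//Upload the GLSL source code
    GLES20.glCompileShader(shader);//Compile it
    return shader;
}
...
int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode);
int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode);
mProgram = GLES20.glCreateProgram();//Create an empty OpenGL ES program before the attach/link calls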

(3) Add filters;

private List<FilterFactory.FilterType> filters = new ArrayList<>();
  ...
  filters.add(FilterFactory.FilterType.Original);
  filters.add(FilterFactory.FilterType.Sunrise);
  ...

(4) Turn the beauty filter on or off;

mCameraView.enableBeauty(true);

(5) Set the beauty level;

mCameraView.setBeautyLevel(0.5f);

(6) Switch the filter and the camera lens, then set up camera shooting and the callback after shooting.

mCameraView.updateFilter(filters.get(pos));//Switch the filter
mCameraView.switchCamera();//Switch between the front and rear cameras

Project contributors

Cai Zhijie, Li Ke, Zhu Wei, Zheng Senwen, Chen Meiru

