When I typed the title, I felt a little excited. After all, the virtual camera has been running for several weeks now, working around many restrictions to implement a feature the native Android system does not support. In the next few days I can finally get some good sleep.
OK, enough of that; let's analyze the virtual camera. When it comes to virtual cameras, a familiar scene probably comes to mind: some sleazy guy is flirting online, staring at the pretty face of the beauty in the video, heart fluttering, practically drooling. In fact, at the other end of the connection there is no beauty at all, just a big man picking his feet and smoking cigarettes. So here is the question: how did that man pass himself off as a woman enchanting enough to make the victim lose his mind? The principle is a virtual camera. With virtual camera technology, a pre-recorded video of a real woman is fed into the camera interface, so what the camera "reads" is not live data but the frames recorded in advance. The video the sleazy guy sees is, naturally, the recording.
That is one application scenario of the virtual camera. There is another: multiple apps opening the same camera at the same time and previewing the same camera data. To be clear, "multiple apps opening the same camera" does not mean creating multiple surfaces in one app and letting them all display the same camera data. Distributing camera data to different surfaces within one app is simple, and the native camera framework supports it; for details, see my earlier blog post.
Having multiple apps open the same camera and display the same data at the same time is more cumbersome. First of all, the camera framework does not allow multiple apps to use the camera at the same time, even if the apps open cameras with different IDs. This restriction, however, is easy to lift. We can modify the ClientManager constructor in services/camera/libcameraservice/utils/ClientManager.h as follows:
// Before (stock code): mMaxCost limits how many apps can open cameras at the same time
template<class KEY, class VALUE, class LISTENER>
ClientManager<KEY, VALUE, LISTENER>::ClientManager(int32_t totalCost) : mMaxCost(totalCost) {}

// After: *10 means up to 10 cameras can be open at the same time,
// which resolves the multi-process open-camera conflict
template<class KEY, class VALUE, class LISTENER>
ClientManager<KEY, VALUE, LISTENER>::ClientManager(int32_t totalCost) : mMaxCost(totalCost*10) {}
The reason this change works: when we open a camera, connectHelper in CameraService.cpp runs and calls handleEvictionsLocked to detect conflicts. Follow handleEvictionsLocked and you will find that it in turn calls mActiveClientManager.wouldEvict. When that function returns a conflicting client, the following code disconnects it:
for (auto& i : evictedClients) {
    // Disconnect is blocking, and should only have returned when HAL has cleaned up
    i->getValue()->disconnect();
    // Clients will remove themselves from the active client list
}
wouldEvict is defined in frameworks/av/services/camera/libcameraservice/utils/ClientManager.h and calls wouldEvictLocked, which checks the priority of the requesting app and the number of cameras currently open:
for (const auto& i : mClients) {
    const KEY& curKey = i->getKey();
    int32_t curCost = i->getCost();
    ClientPriority curPriority = i->getPriority();
    int32_t curOwner = i->getOwnerId();

    bool conflicting = (curKey == key || i->isConflicting(key) ||
            client->isConflicting(curKey));
    // Forcing this to false stops CameraService::handleEvictionsLocked from
    // treating an already-open camera as a conflict.
    conflicting = false;
    ALOGD("wouldEvictLocked totalCost=%" PRId64 ", mMaxCost=%d, conflicting=%d",
            totalCost, mMaxCost, conflicting);

    if (!returnIncompatibleClients) {
        // Find evicted clients
        if (conflicting && curPriority < priority) {
            // Pre-existing conflicting client with higher priority exists
            evictList.clear();
            evictList.push_back(client);
            return evictList;
        } else if (conflicting || ((totalCost > mMaxCost && curCost > 0) &&
                (curPriority >= priority) &&
                !(highestPriorityOwner == owner && owner == curOwner))) {
            // Add a pre-existing client to the eviction list if:
            // - We are adding a client with higher priority that conflicts with this one.
            // - The total cost including the incoming client's is more than the allowable
            //   maximum, and the client has a non-zero cost, lower priority, and a different
            //   owner than the incoming client when the incoming client has the
            //   highest priority.
            evictList.push_back(i);
            totalCost -= curCost;
        }
    } else {
        // Find clients preventing the incoming client from being added
        if (curPriority < priority && (conflicting || (totalCost > mMaxCost && curCost > 0))) {
            // Pre-existing conflicting client with higher priority exists
            evictList.push_back(i);
        }
    }
}
If the total cost of open clients exceeds mMaxCost, the incoming app is treated as conflicting, pushed onto the eviction list, and then disconnected by CameraService.cpp.
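To make the cost arithmetic concrete, here is a minimal standalone sketch (not the real framework code) of the budget check that wouldEvictLocked performs; Client, wouldExceedBudget, and the specific cost values are illustrative stand-ins:

```cpp
#include <cassert>
#include <vector>

// Illustrative stand-in for a camera client; in the framework each client
// carries a cost and the sum of costs is compared against mMaxCost.
struct Client { int cost; };

// Returns true when admitting a new client of cost newCost would push the
// summed cost of all open clients past maxCost.
bool wouldExceedBudget(const std::vector<Client> &open, int newCost, int maxCost) {
    int total = newCost;
    for (const auto &c : open) total += c.cost;
    return total > maxCost;
}
```

With a budget of 100 and clients of cost 100, a second client already busts the stock budget, while the *10 budget admits it easily, which is exactly the effect of the constructor patch above.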
That is how the limit on the number of cameras that multiple apps can open works. There is plenty of more detailed material about it online that you can refer to.
Now the first problem is solved: multiple apps can open different cameras at the same time. Opening the same camera is still a problem. If two apps request the same camera and the current app's priority is lower than that of the app that already opened it, the current app is shut out; if its priority is higher, the camera is taken away from the app that opened it first. This is also visible in the code above, where we simply force conflicting to false.
With conflicting forced to false, cameraService no longer checks whether the current camera id is already open. It continues down the path and calls makeClient, and finally, when the real camera is opened a second time, the hal layer reports an "already opened" error.
To solve this, when multiple applications open the same camera, the hal layer must not report this error. But following the existing flow and calling the open function on the camera again will not work: it has already been opened once, so how could it be opened a second time? How do we deal with that?
The answer is virtual cameras.
We can add a new camera hal module in the hal layer, which defines a camera_module_t:
camera_module_t HAL_MODULE_INFO_SYM = {
    .common = {
        .tag = HARDWARE_MODULE_TAG,
        .module_api_version = CAMERA_MODULE_API_VERSION_2_3,
        .hal_api_version = HARDWARE_HAL_API_VERSION,
        //.id = CAMERA_HARDWARE_MODULE_ID,
        .id = "virtual_camera",
        .name = "virtual_camera",
        .author = "Antmicro Ltd.",
        .methods = &android::HalModule::moduleMethods,
        .dso = NULL,
        .reserved = {0},
    },
    .get_number_of_cameras = android::HalModule::getNumberOfCameras,
    .get_camera_info = android::HalModule::getCameraInfo,
    .set_callbacks = android::HalModule::setCallbacks,
};
It implements the open function:
static struct hw_module_methods_t moduleMethods = {
    .open = openDevice
};
static int openDevice(const hw_module_t *module, const char *name, hw_device_t **device) {
    ALOGI("%s: lihb openDevice, name=%s", __FUNCTION__, name);
    if (module != &HAL_MODULE_INFO_SYM.common) {
        ALOGI("%s: invalid module (%p != %p)", __FUNCTION__, module, &HAL_MODULE_INFO_SYM.common);
        return -EINVAL;
    }
    if (name == NULL) {
        ALOGI("%s: NULL name", __FUNCTION__);
        return -EINVAL;
    }

    errno = 0;
    int cameraId = (int)strtol(name, NULL, 10);
    ALOGI("%s: cameraId: %d, getNumberOfCameras: %d", __FUNCTION__, cameraId, getNumberOfCameras());
    if (errno || cameraId < 0 || cameraId >= getNumberOfCameras()) {
        ALOGI("%s: invalid camera ID (%s)", __FUNCTION__, name);
        return -EINVAL;
    }

    if (!cams[cameraId]->isValid()) {
        ALOGI("%s: camera %d is not initialized", __FUNCTION__, cameraId);
        *device = NULL;
        return -ENODEV;
    }

    return cams[cameraId]->openDevice(device);
}
It also implements getCameraInfo, setCallbacks, and the other standard hal interfaces. Its Camera class inherits from camera3_device and implements all the standard camera hal3 interfaces:
class Camera : public camera3_device {
public:
    Camera();
    virtual ~Camera();

    bool isValid() { return mValid; }
    virtual status_t cameraInfo(struct camera_info *info);
    virtual int openDevice(hw_device_t **device);
    virtual int closeDevice();
    void YV12ToI420(uint8_t *YV12, char *I420, int w, int h);

protected:
    virtual camera_metadata_t *staticCharacteristics();
    virtual int initialize(const camera3_callback_ops_t *callbackOps);
    virtual int configureStreams(camera3_stream_configuration_t *streamList);
    virtual const camera_metadata_t *constructDefaultRequestSettings(int type);
    virtual int registerStreamBuffers(const camera3_stream_buffer_set_t *bufferSet);
    virtual int processCaptureRequest(camera3_capture_request_t *request);

    /* HELPERS/SUBPROCEDURES */
    void notifyShutter(uint32_t frameNumber, uint64_t timestamp);
    void processCaptureResult(uint32_t frameNumber, const camera_metadata_t *result,
            const Vector<camera3_stream_buffer> &buffers);

    camera_metadata_t *mStaticCharacteristics;
    camera_metadata_t *mDefaultRequestSettings[CAMERA3_TEMPLATE_COUNT];
    CameraMetadata mLastRequestSettings;

    bool mValid;
    const camera3_callback_ops_t *mCallbackOps;
    size_t mJpegBufferSize;

private:
    ImageConverter mConverter;
    Mutex mMutex;
    uint8_t *mFrameBuffer;
    uint8_t *rszbuffer;

    /* STATIC WRAPPERS */
    static int sClose(hw_device_t *device);
    static int sInitialize(const struct camera3_device *device,
            const camera3_callback_ops_t *callback_ops);
    static int sConfigureStreams(const struct camera3_device *device,
            camera3_stream_configuration_t *stream_list);
    static int sRegisterStreamBuffers(const struct camera3_device *device,
            const camera3_stream_buffer_set_t *buffer_set);
    static const camera_metadata_t *sConstructDefaultRequestSettings(
            const struct camera3_device *device, int type);
    static int sProcessCaptureRequest(const struct camera3_device *device,
            camera3_capture_request_t *request);
    static void sGetMetadataVendorTagOps(const struct camera3_device *device,
            vendor_tag_query_ops_t *ops);
    static void sDump(const struct camera3_device *device, int fd);
    static int sFlush(const struct camera3_device *device);

    static camera3_device_ops_t sOps;
};
} /* namespace android */
In short, you can think of it as a real camera, except that its open function does not open a physical camera but mmaps a piece of shared memory, and its processCaptureRequest does not fetch data from a real sensor in real time but reads it from that shared memory. Let's call it the virtual camera. That is essentially all the logic there is: all of its hal interfaces are standard, only the data is fed from shared memory. And who feeds data into the shared memory? The answer: a real camera that has been opened.
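As a rough illustration of this producer/consumer handoff (not the Android implementation; the article uses an ashmem/hidl shared-memory service, while this sketch uses an anonymous shared mapping, and SharedFrame, publishFrame, and fetchFrame are made-up names):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

// Made-up frame geometry; real code would size this from the sensor configuration.
constexpr int kWidth = 64, kHeight = 48;
constexpr size_t kFrameBytes = kWidth * kHeight * 3 / 2; // one YV12 frame: Y + V + U planes

struct SharedFrame {
    uint32_t seq;               // bumped by the producer after each complete write
    uint8_t  data[kFrameBytes]; // latest camera frame
};

// One region shared by producer (real camera hal) and consumer (virtual camera).
// MAP_ANONYMOUS plus fork stands in for the ashmem/hidl service used on Android.
SharedFrame *mapSharedFrame() {
    void *p = mmap(nullptr, sizeof(SharedFrame), PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? nullptr : static_cast<SharedFrame *>(p);
}

// Producer side: copy a frame from the hal callback into shared memory.
void publishFrame(SharedFrame *shm, const uint8_t *src) {
    memcpy(shm->data, src, kFrameBytes);
    __sync_synchronize();  // make the payload visible before the sequence bump
    shm->seq++;
}

// Consumer side: what the virtual camera's processCaptureRequest would do
// instead of talking to a sensor. Returns the frame's sequence number.
uint32_t fetchFrame(const SharedFrame *shm, uint8_t *dst) {
    memcpy(dst, shm->data, kFrameBytes);
    return shm->seq;
}
```

A real implementation would also need per-frame synchronization (the sequence counter here is only a hint) so the consumer never reads a half-written frame.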
I am currently debugging on the mtk 6762 Android 8.1 platform. In the mtk camera hal, frames from the camera are delivered to handleReturnBuffers in DisplayClient.BufOps.cpp. In that file we can use uint8_t *srcBuf = (uint8_t *)pStreamImgBuf->getVirAddr(); to obtain the current frame and then write it to the shared memory.
Of course, other platforms such as Qualcomm and Spreadtrum work on the same principle: find the place where the hal data comes out, write it to shared memory, and read it back in the virtual camera's processCaptureRequest. That way the virtual camera gets the same data as the real camera.
However, the real camera data coming out of the hal layer is in a yuv format; on my platform, for example, it is yv12. In the virtual camera's processCaptureRequest you cannot hand it directly to the process_capture_result callback, because the preview expects ABGR data: the buffer behind an android surface is actually a GraphicBuffer, and the system's default GraphicBuffer format is generally rgb. So we need to convert the data taken from shared memory.
yv12 cannot be converted to rgb directly; it has to be converted to i420 first. The conversion function is attached below:
/*
 * I420: YYYYYYYY UU VV => YUV420P
 * YV12: YYYYYYYY VV UU => YUV420P
 * To convert YV12 to I420, just swap the U and V planes.
 */
void Camera::YV12ToI420(uint8_t *YV12, char *I420, int w, int h) {
    memcpy(I420, YV12, w*h);                     // Y plane, unchanged
    memcpy(I420+w*h, YV12+w*h+w*h/4, w*h/4);     // U plane (stored after V in YV12)
    memcpy(I420+w*h+w*h/4, YV12+w*h, w*h/4);     // V plane
}
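The plane swap is easy to verify on a tiny frame. Here is a standalone free-function equivalent of the method above (yv12ToI420 is just a test-friendly copy, not the class method):

```cpp
#include <cstdint>
#include <cstring>

// Standalone copy of the YV12 -> I420 plane swap so it can be checked
// without the Camera class. Both formats share the same Y plane; only
// the order of the two quarter-size chroma planes differs.
void yv12ToI420(const uint8_t *yv12, uint8_t *i420, int w, int h) {
    memcpy(i420, yv12, w * h);                                  // Y plane, unchanged
    memcpy(i420 + w * h, yv12 + w * h + w * h / 4, w * h / 4);  // U <- YV12's U (after V)
    memcpy(i420 + w * h + w * h / 4, yv12 + w * h, w * h / 4);  // V <- YV12's V
}
```

For a 4x2 frame the Y plane is 8 bytes and each chroma plane is 2 bytes, so the swap is easy to eyeball.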
After converting to i420, you can call libyuv's I420ToABGR function. The resulting data can be sent straight to the surface for display. As for how to deal with shared memory after Android 8.0, you can also refer to my earlier blog post.
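If you want to see what the yuv-to-rgba step does per pixel without pulling in libyuv, here is a hedged sketch using the common BT.601 fixed-point coefficients (yuvToRgba and clamp8 are illustrative names; production code should call libyuv, which is optimized and operates on whole planes):

```cpp
#include <algorithm>
#include <cstdint>

// Clamp an intermediate value to a valid byte.
static uint8_t clamp8(int v) { return (uint8_t)std::min(255, std::max(0, v)); }

// Convert one YUV pixel to RGBA bytes. libyuv's "ABGR" naming refers to the
// little-endian word order; in memory the bytes land as R, G, B, A, which is
// what an RGBA_8888 GraphicBuffer expects. BT.601 limited-range coefficients.
void yuvToRgba(uint8_t y, uint8_t u, uint8_t v, uint8_t out[4]) {
    int c = y - 16, d = u - 128, e = v - 128;
    out[0] = clamp8((298 * c + 409 * e + 128) >> 8);            // R
    out[1] = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);  // G
    out[2] = clamp8((298 * c + 516 * d + 128) >> 8);            // B
    out[3] = 0xFF;                                              // A (opaque)
}
```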
In general, to let multiple applications open the same camera through a virtual camera, you need to deal with the following pieces:
1) In cameraService, remove the limit on multiple applications opening cameras (not restricted to the same camera id).
2) Add a shared memory hidl service module.
3) Add the hal of a new virtual camera (refer to my blog post on adding a new camera for details).
4) In the real camera hal, copy the data read from the driver into the shared memory.
5) In the virtual camera's processCaptureRequest, read the data back out and convert it to the appropriate rgba format.
After the above work, the main part of adding a virtual camera is done. However, there are still many details that need to be handled one by one.
For example, we have added a virtual camera, but apps know nothing about it. We simply tell app developers that multiple apps may now open the same camera id at the same time. How the virtual camera interacts with the hal layer and the framework is something app developers do not know and should not have to care about. All they do is the usual thing: app 1 calls Camera.open(0); to open camera 0; at the same time app 2 can call Camera.open(0); to open the same camera; and a third app can even call Camera.open(0); for the same camera 0 as well.
So here comes the question: when several apps open the same id, how does the framework layer route the later ones to our virtual camera? This is handled in connectHelper. We add the following code before the call to handleEvictionsLocked in that function:
auto current = mActiveClientManager.get(cameraId);
if (current != nullptr) {
    char c_camera_ref[PROPERTY_VALUE_MAX] = {'\0'};
    int i_camera_ref = 0;
    property_get(YOV_VIRTUAL_CAMERA_REF, c_camera_ref, "0");
    i_camera_ref = atoi(c_camera_ref);

    // The requested id is already open: retarget this connect to virtual
    // camera 2, or to 3 if 2 is also taken.
    cameraId = "2";
    auto tempClient = mActiveClientManager.get(cameraId);
    if (tempClient != nullptr) {
        cameraId = "3";
    }
}
On the platform I debugged there are two cameras, front and back, so my virtual camera ids start from 2. (Because the mtk camera hal is hal1 while my virtual camera hal is hal3, when counting the cameras on each side, neither hal knows the other's count, so I have to hard-code it here.) If you want more virtual cameras, their ids simply continue: 2, 3, 4, and so on.
The code above means: if the requested camera id is already open, we quietly change the id to virtual camera 2; if virtual camera 2 is also open, we open virtual camera 3 instead. If you have 10 or 20 virtual cameras, you just extend the same logic.
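The retargeting can be sketched as a small standalone function (remapCameraId is a made-up name; the real code lives inline in connectHelper and consults mActiveClientManager rather than a std::set):

```cpp
#include <cassert>
#include <initializer_list>
#include <set>
#include <string>

// Sketch of the id remapping in connectHelper: if the requested id is already
// active, silently retarget the connect to the first free virtual camera id.
// "2" and "3" match the two virtual cameras on the author's platform; extend
// the list if more virtual cameras are added.
std::string remapCameraId(const std::set<std::string> &active, const std::string &id) {
    if (active.count(id) == 0) return id;      // requested camera is free
    for (const char *vid : {"2", "3"}) {       // virtual ids, tried in order
        if (active.count(vid) == 0) return vid;
    }
    return id;                                 // everything busy; let eviction decide
}
```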
Each time connectHelper opens a camera and creates a client, it calls finishConnectLocked(client, partial). Inside finishConnectLocked, the line mActiveClientManager.addAndEvict(clientDescriptor); adds the client to mActiveClientManager's list, and that list is the key to judging whether the camera with a given id is already open.
At this point two apps can basically open one camera at the same time. However, if the main app exits (the app that opened the camera first, i.e. the real camera, is what I call the main app), the preview stops immediately in the slave apps (the apps that opened the already-open id second or third and were rerouted to a virtual camera). When the main app exits, it closes the real camera at the hal layer; once the real camera is closed, the virtual cameras get no new data from the shared memory, so their previews stop.
To solve this problem, we introduce the concept of "reference counting". First, increment the count where the real camera is opened. On the mtk6762 platform, camera hal1's open lives in vendor/mediatek/proprietary/hardware/mtkcam/main/hal/device/1.x/device/CameraDevice1Base.cpp. In the open function of this file we record the reference for the corresponding camera id as follows:
Return<Status> CameraDevice1Base::open(const sp<ICameraDeviceCallback>& callback) {
    ........
    // The real device can only be opened once, so its count is only ever 0 or 1.
    if (mInstanceId == 0) {
        property_set(CAMERA0_REF, "1");
    } else if (mInstanceId == 1) {
        property_set(CAMERA1_REF, "1");
    }
    ........
}
In close, the reference is cleared as follows:
Return<void> CameraDevice1Base::close() {
    ........
    if (mInstanceId == 0) {
        property_set(CAMERA0_REF, "0");
    } else if (mInstanceId == 1) {
        property_set(CAMERA1_REF, "0");
    }
    ........
}
In the virtual camera's open function, the reference count is incremented as follows:
int Camera::open(hw_device_t **device) {
    ........
    // Multiple virtual cameras can be open at once, so this count keeps increasing.
    char c_camera_ref[PROPERTY_VALUE_MAX] = {'\0'};
    int i_camera_ref = 0;
    property_get(VIRTUAL_CAMERA_REF, c_camera_ref, "0");
    i_camera_ref = atoi(c_camera_ref);
    ++i_camera_ref;
    memset(c_camera_ref, '\0', PROPERTY_VALUE_MAX);
    sprintf(c_camera_ref, "%d", i_camera_ref);
    property_set(VIRTUAL_CAMERA_REF, c_camera_ref);
    ........
}
Note that the decrement for the virtual camera cannot be put in the virtual camera's close function, because we also need it inside CameraService.cpp.
Then, when the app using the real camera exits and calls camera.stopPreview() and camera.release(), we need to use the reference counts above to block the close/release operations from reaching the underlying camera driver. On mtk 6762 Android 8.1 the camera hal is hal1, and whether the app uses camera api1 or api2, makeClient creates a CameraClient, so every camera operation from the app eventually passes through it. That makes it the right place to intercept.
For example, when the user calls stopPreview, we do the following in CameraClient.cpp:
// stop preview mode
void CameraClient::stopPreview() {
    LOG1("stopPreview (pid %d), mCameraId=%d", getCallingPid(), mCameraId);
    Mutex::Autolock lock(mLock);

    if (mCameraId != 2 && mCameraId != 3) { // add by xuhui
        char c_camera_ref[PROPERTY_VALUE_MAX] = {'\0'};
        int i_camera_ref = 0;
        property_get(VIRTUAL_CAMERA_REF, c_camera_ref, "0");
        i_camera_ref = atoi(c_camera_ref);
        // A virtual camera is still open, so the real camera cannot stop now;
        // wait until the virtual camera is closed before stopping the preview here.
        if (i_camera_ref > 0) {
            ALOGE("CameraClient::stopPreview, virtual camera exist, don't stopPreview");
            return;
        }
    }

    if (checkPidAndHardware() != NO_ERROR) return;

    disableMsgType(CAMERA_MSG_PREVIEW_FRAME);
    mHardware->stopPreview();
    sCameraService->updateProxyDeviceState(
        hardware::ICameraServiceProxy::CAMERA_STATE_IDLE,
        mCameraIdStr, mCameraFacing, mClientPackageName);
    mPreviewBuffer.clear();
}
Then when the user calls release, we do the following in disconnect:
binder::Status CameraClient::disconnect() {
    int callingPid = getCallingPid();
    Mutex::Autolock lock(mLock);
    binder::Status res = binder::Status::ok();

    if (mCameraId != 2 && mCameraId != 3) { // add by xuhui
        sp<CameraClient> client = NULL;
        char c_camera0_ref[PROPERTY_VALUE_MAX] = {'\0'};
        char c_camera1_ref[PROPERTY_VALUE_MAX] = {'\0'};
        char c_camera_virtual_ref[PROPERTY_VALUE_MAX] = {'\0'};
        int i_camera_virtual_ref = 0;
        int i_camera0_ref = 0;
        int i_camera1_ref = 0;
        property_get(CAMERA0_REF, c_camera0_ref, "0");
        i_camera0_ref = atoi(c_camera0_ref);
        property_get(CAMERA1_REF, c_camera1_ref, "0");
        i_camera1_ref = atoi(c_camera1_ref);
        property_get(VIRTUAL_CAMERA_REF, c_camera_virtual_ref, "0");
        i_camera_virtual_ref = atoi(c_camera_virtual_ref);
        if (i_camera_virtual_ref > 0) {
            ALOGE("CameraClient::disconnect, mCameraId=%d, virtual camera exist, don't disconnect", mCameraId);
            // -1 means the real camera wants to exit but cannot yet, because a
            // virtual camera is still open; the exit is replayed when the last
            // virtual camera is closed.
            if (mCameraId == 0) {
                property_set(CAMERA0_REF, "-1");
            } else if (mCameraId == 1) {
                property_set(CAMERA1_REF, "-1");
            }
            return res;
        }
    }

    // Allow both client and the cameraserver to disconnect at all times
    if (callingPid != mClientPid && callingPid != mServicePid) {
        ALOGE("different client - don't disconnect");
        // If the screen is not off, other processes must not be able to close
        // the previously opened camera.
        // return res;
    }

    // Make sure disconnect() is done once and once only, whether it is called
    // from the user directly, or called by the destructor.
    if (mHardware == 0) return res;

    LOG1("CameraClient::disconnect, hardware teardown");
    // Before destroying mHardware, we must make sure it's in the idle state.
    // Turn off all messages.
    disableMsgType(CAMERA_MSG_ALL_MSGS);
    //!++
    disableMsgType(MTK_CAMERA_MSG_ALL_MSGS);
    //!--
    mHardware->stopPreview();
    sCameraService->updateProxyDeviceState(
        hardware::ICameraServiceProxy::CAMERA_STATE_IDLE,
        mCameraIdStr, mCameraFacing, mClientPackageName);
    mHardware->cancelPicture();
    // Release the hardware resources.
    mHardware->release();

    // Release the held ANativeWindow resources.
    if (mPreviewWindow != 0) {
        disconnectWindow(mPreviewWindow);
        mPreviewWindow = 0;
        mHardware->setPreviewWindow(mPreviewWindow);
    }
    mHardware.clear();

    CameraService::Client::disconnect();

    LOG1("CameraClient::disconnect end, (pid %d)", callingPid);
    return res;
}
The code above intercepts the real camera's stopPreview and hal-layer release. So where should the release actually happen? The main app has already exited and has lost control of the Camera object it created, and other apps have no permission to operate it. So how do we finally make the real camera quit?
Recall what I said above about CameraService.cpp: every client produced by makeClient is saved in mActiveClientManager. The client made there is a BasicClient, which is an ancestor class of CameraClient, so whenever a camera is closed, BasicClient::disconnect is reached. That function is where we really close the real camera.
binder::Status CameraService::BasicClient::disconnect() {
    binder::Status res = Status::ok();
    if (mDisconnected) {
        return res;
    }
    mDisconnected = true;

    sCameraService->removeByClient(this);
    sCameraService->logDisconnected(mCameraIdStr, mClientPid, String8(mClientPackageName));

    sp<IBinder> remote = getRemote();
    if (remote != nullptr) {
        remote->unlinkToDeath(sCameraService);
    }

    finishCameraOps();
    // Notify flashlight that a camera device is closed.
    sCameraService->mFlashlight->deviceClosed(mCameraIdStr);
    ALOGI("%s: Disconnected client for camera %s for PID %d", __FUNCTION__,
            mCameraIdStr.string(), mClientPid);

    // client shouldn't be able to call into us anymore
    mClientPid = 0;

    int id = cameraIdToInt(mCameraIdStr);
    if (id == 2 || id == 3) { // add by xuhui
        sp<CameraService::BasicClient> client = NULL;
        char c_camera0_ref[PROPERTY_VALUE_MAX] = {'\0'};
        char c_camera1_ref[PROPERTY_VALUE_MAX] = {'\0'};
        char c_camera_virtual_ref[PROPERTY_VALUE_MAX] = {'\0'};
        int i_camera_virtual_ref = 0;
        int i_camera0_ref = 0;
        int i_camera1_ref = 0;
        property_get(CAMERA0_REF, c_camera0_ref, "0");
        i_camera0_ref = atoi(c_camera0_ref);
        property_get(CAMERA1_REF, c_camera1_ref, "0");
        i_camera1_ref = atoi(c_camera1_ref);
        property_get(VIRTUAL_CAMERA_REF, c_camera_virtual_ref, "0");
        i_camera_virtual_ref = atoi(c_camera_virtual_ref);
        --i_camera_virtual_ref;
        if (i_camera_virtual_ref < 0) {
            i_camera_virtual_ref = 0;
        }
        sprintf(c_camera_virtual_ref, "%d", i_camera_virtual_ref);
        property_set(VIRTUAL_CAMERA_REF, c_camera_virtual_ref);

        // A ref of -1 means the app using the real camera already asked to close it.
        // Once the virtual camera count drops to 0, nothing references the virtual
        // camera any more, so the real camera can finally be closed too.
        if (i_camera_virtual_ref == 0) {
            if (i_camera0_ref == -1) {
                String8 cameraId0("0");
                client = sCameraService->mActiveClientManager.getCameraClient(cameraId0);
            } else if (i_camera1_ref == -1) {
                String8 cameraId1("1");
                client = sCameraService->mActiveClientManager.getCameraClient(cameraId1);
            }
            if (client != NULL) {
                client->disconnect();
            }
        }
    }
    return res;
}
With the above function in place, the real camera can finally be closed properly.
At this stage the real camera and the virtual cameras can be open at the same time. When the app using the real camera closes it, the camera driver is not shut down; it is only shut down after the virtual cameras have all been closed.
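The deferred-close bookkeeping described above can be modeled as a tiny state machine (RefModel and its fields are illustrative; the real implementation stores these counters in system properties such as CAMERA0_REF and VIRTUAL_CAMERA_REF and spreads the logic across CameraClient::disconnect and BasicClient::disconnect):

```cpp
#include <cassert>

// Toy model of the property-based reference counting: the real camera's close
// is deferred while any virtual camera is open, and replayed when the last
// virtual client disconnects.
struct RefModel {
    int realRef = 0;      // 1 = open, 0 = closed, -1 = "wants to close, deferred"
    int virtualRef = 0;   // number of open virtual cameras
    bool driverOpen = false;

    void openReal()    { realRef = 1; driverOpen = true; }
    void openVirtual() { ++virtualRef; }

    // CameraClient::disconnect: block the hal teardown while virtualRef > 0.
    void closeReal() {
        if (virtualRef > 0) { realRef = -1; return; }  // defer the close
        realRef = 0;
        driverOpen = false;
    }

    // BasicClient::disconnect for a virtual id: drop the ref and, if we were
    // the last virtual client and a real close is pending, perform it now.
    void closeVirtual() {
        if (virtualRef > 0) --virtualRef;
        if (virtualRef == 0 && realRef == -1) {
            realRef = 0;
            driverOpen = false;
        }
    }
};
```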
But it is not perfect yet. For example, when the main app exits, it calls windowManager.removeView(surfaceView);. Once the surface corresponding to the camera in the main app is removed, the hal layer has no ANativeWindow (i.e. no surface), so the corresponding dequeue_buffer and enqueue_buffer calls fail and the camera hal data can no longer be retrieved. The preview in the slave app suddenly goes black, or freezes on the last frame. In the log this shows up as the following errors:
05-25 18:22:20.292  3542 15151 E BufferQueueProducer: [SurfaceTexture-1-3542-0](this:0xc35b5000,id:1,api:4,p:442,c:-1) queueBuffer: BufferQueue has been abandoned
05-25 18:22:20.293   442   663 E Surface : queueBuffer: error queuing buffer to SurfaceTexture, -19
05-25 18:22:20.293   461 16924 E MtkCam/DisplayClient: (16924)[enquePrvOps] mpStreamOps->enqueue_buffer failed: status[Function not implemented(38)], rpImgBuf(0xdd60c510,0xd5bbd000) (enquePrvOps){#589:vendor/mediatek/proprietary/hardware/mtkcam/middleware/v1/client/DisplayClient/DisplayClient.Stream.cpp}
By the way, a note on the relationship between the surface in the app and the mtk camera hal layer. The surface in the app corresponds to mpStreamOps, which is declared in DisplayClient.h:
preview_stream_ops* mpStreamOps;
The preview_stream_ops structure is defined as follows:
typedef struct preview_stream_ops {
    int (*dequeue_buffer)(struct preview_stream_ops* w, buffer_handle_t** buffer, int *stride);
    int (*enqueue_buffer)(struct preview_stream_ops* w, buffer_handle_t* buffer);
    int (*cancel_buffer)(struct preview_stream_ops* w, buffer_handle_t* buffer);
    int (*set_buffer_count)(struct preview_stream_ops* w, int count);
    int (*set_buffers_geometry)(struct preview_stream_ops* pw, int w, int h, int format);
    int (*set_crop)(struct preview_stream_ops *w, int left, int top, int right, int bottom);
    int (*set_usage)(struct preview_stream_ops* w, int usage);
    int (*set_swap_interval)(struct preview_stream_ops *w, int interval);
    int (*get_min_undequeued_buffer_count)(const struct preview_stream_ops *w, int *count);
    int (*lock_buffer)(struct preview_stream_ops* w, buffer_handle_t* buffer);
    // Timestamps are measured in nanoseconds, and must be comparable
    // and monotonically increasing between two frames in the same
    // preview stream. They do not need to be comparable between
    // consecutive or parallel preview streams, cameras, or app runs.
    int (*set_timestamp)(struct preview_stream_ops *w, int64_t timestamp);
} preview_stream_ops_t;
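To show the handshake the hal depends on, here is a toy consumer implementing just the dequeue/enqueue pair of this contract over an in-memory free list (FakePreviewStream and buffer_handle are made-up stand-ins; the real ops are backed by the Surface's BufferQueue):

```cpp
#include <cassert>
#include <cstdint>
#include <deque>

typedef uint8_t *buffer_handle; // stand-in for buffer_handle_t

// Toy consumer: dequeue hands the hal an empty buffer to draw into,
// enqueue returns it filled for display. When the backing Surface is gone,
// dequeue has nothing to hand out, which is exactly how the preview stalls.
struct FakePreviewStream {
    std::deque<buffer_handle> freeList;  // buffers the hal may draw into
    std::deque<buffer_handle> filled;    // buffers queued for display

    int dequeue_buffer(buffer_handle *out) {
        if (freeList.empty()) return -1; // no buffers: hal loop spins with no data
        *out = freeList.front();
        freeList.pop_front();
        return 0;
    }

    int enqueue_buffer(buffer_handle b) {
        filled.push_back(b);
        return 0;
    }
};
```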
The setWindow function in DisplayClient.cpp calls set_preview_stream_ops in DisplayClient.Stream.cpp to pass the surface handed down from the app and assign it to mpStreamOps. If we trace the chain upward instead, we arrive at the setPreviewTarget function in CameraClient.cpp:
// set the buffer consumer that the preview will use
status_t CameraClient::setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer) {
    sp<IBinder> binder;
    sp<ANativeWindow> window;
    if (bufferProducer != 0) {
        binder = IInterface::asBinder(bufferProducer);
        window = new Surface(bufferProducer, /*controlledByApp*/ true);
    }
    return setPreviewWindow(binder, window);
}

status_t CameraClient::setPreviewWindow(const sp<IBinder>& binder, const sp<ANativeWindow>& window) {
    ......
    result = mHardware->setPreviewWindow(window);
    ......
}
So if the surface is removed on the app side, then in the dequePrvOps function of DisplayClient.Stream.cpp, the line err = mpStreamOps->dequeue_buffer(mpStreamOps, &phBuffer, &stride); fails and no buffer can be obtained. That in turn makes DisplayClient::prepareOneTodoBuffer in DisplayClient.BufOps.cpp fail in its dequePrvOps(pStreamImgBuf) call, which makes DisplayClient::prepareAllTodoBuffers in the same file fail in prepareOneTodoBuffer(rpBufQueue), which finally breaks the loop of DisplayClient::onThreadLoop. The result is a loop that can never retrieve any data, hence the blank preview screen.
To solve this, we would need the surface in the app not to be released. But that is impossible: even if the app does not explicitly release it, the system will reclaim the resources of the exited process after a while. So we have to find another way.
The other way is actually very simple: new a Surface of our own and hand it to the hal layer.
void CameraClient::setNewPreviewWindow() {
    ALOGE("CameraClient::setNewPreviewWindow() start");
    BufferQueue::createBufferQueue(&mNewProducer, &mNewConsumer);
    ALOGE("CameraClient::setNewPreviewWindow() 1");

    GLuint texName;
    glGenTextures(1, &texName);
    mNewSurfaceTexture = new GLConsumer(mNewConsumer, texName, GL_TEXTURE_EXTERNAL_OES, true, true);
    if (mNewSurfaceTexture == 0) {
        ALOGE("CameraClient::setNewPreviewWindow() 2, Unable to create native SurfaceTexture");
        return;
    }
    ALOGE("CameraClient::setNewPreviewWindow() 3");
    mNewSurfaceTexture->setName(String8::format("SurfaceTexture-%d-%d-%d", texName,
            getpid(), createProcessUniqueId()));
    sp<SurfaceTextureListener> stListener(new SurfaceTextureListener());
    stListener->mtListener = stListener;
    mNewSurfaceTexture->setFrameAvailableListener(stListener);
    // TODO: is the hard-coded size on the next line necessary, and should it
    // match the real preview size?
    mNewSurfaceTexture->setDefaultBufferSize(1280, 720);

    bool useAsync = false;
    status_t res;
    int32_t consumerUsage;
    if ((res = mNewProducer->query(NATIVE_WINDOW_CONSUMER_USAGE_BITS, &consumerUsage)) != OK) {
        ALOGE("CameraClient::setNewPreviewWindow() 5: Camera : Failed to query mNewConsumer usage");
        return;
    }
    if (consumerUsage & GraphicBuffer::USAGE_HW_TEXTURE) {
        ALOGE("CameraClient::setNewPreviewWindow() 6: Camera : Forcing asynchronous mode for stream");
        useAsync = true;
    }
    ALOGE("CameraClient::setNewPreviewWindow() 7");
    mNewSurface = new Surface(mNewProducer, useAsync);
    //ANativeWindow *anw = surface.get();
    ALOGE("CameraClient::setNewPreviewWindow() 8");

    sp<IBinder> binder;
    if (mNewProducer != 0) {
        ALOGE("CameraClient::setNewPreviewWindow() 9");
        binder = IInterface::asBinder(mNewProducer);
    }
    setPreviewWindow(binder, mNewSurface);
    ALOGE("CameraClient::setNewPreviewWindow() end");
}
Where do we call this function? In CameraClient::disconnect(), when the app releases the camera, we can manually set this other surface:
binder::Status CameraClient::disconnect() {
    ......
    if (mCameraId != 2 && mCameraId != 3) { // add by xuhui
        sp<CameraClient> client = NULL;
        char c_camera0_ref[PROPERTY_VALUE_MAX] = {'\0'};
        char c_camera1_ref[PROPERTY_VALUE_MAX] = {'\0'};
        char c_camera_virtual_ref[PROPERTY_VALUE_MAX] = {'\0'};
        int i_camera_virtual_ref = 0;
        int i_camera0_ref = 0;
        int i_camera1_ref = 0;
        ALOGE("CameraClient::disconnect 3");
        property_get(CAMERA0_REF, c_camera0_ref, "0");
        i_camera0_ref = atoi(c_camera0_ref);
        property_get(CAMERA1_REF, c_camera1_ref, "0");
        i_camera1_ref = atoi(c_camera1_ref);
        ALOGE("CameraClient::disconnect 4");
        property_get(VIRTUAL_CAMERA_REF, c_camera_virtual_ref, "0");
        i_camera_virtual_ref = atoi(c_camera_virtual_ref);
        ALOGE("CameraClient::disconnect, i_camera0_ref=%d, i_camera1_ref=%d, i_camera_virtual_ref=%d",
                i_camera0_ref, i_camera1_ref, i_camera_virtual_ref);
        if (i_camera_virtual_ref > 0) {
            ALOGE("CameraClient::disconnect, mCameraId=%d, virtual camera exist, don't disconnect", mCameraId);
            // -1 means the real camera wants to exit but cannot yet, because a
            // virtual camera is still open; the exit is replayed when the last
            // virtual camera is closed.
            if (mCameraId == 0) {
                property_set(CAMERA0_REF, "-1");
            } else if (mCameraId == 1) {
                property_set(CAMERA1_REF, "-1");
            }
            ALOGE("CameraClient::disconnect setNewPreviewWindow");
            setNewPreviewWindow();
            return res;
        }
    }
    ......
}
OK, that wraps up the solution for opening the same camera from multiple apps. I can now have one app recording video in the background while another app previews the same camera in the foreground.
If you are interested in the camera stack, feel free to continue the discussion.