Android Camera Call Flow
The camera call flow in Android can be divided into the following layers:
Package -> Framework -> JNI -> Camera (C++) --(binder)--> CameraService -> Camera HAL -> Camera Driver
Taking the photo-capture flow as an example:
- Once all parameters are set and focusing has completed, Camera.java in the Package layer calls the takePicture function of Camera.java in the Framework, as follows:
public final void takePicture(ShutterCallback shutter, PictureCallback raw,
        PictureCallback postview, PictureCallback jpeg) {
    mShutterCallback = shutter;
    mRawImageCallback = raw;
    mPostviewCallback = postview;
    mJpegCallback = jpeg;
    native_takePicture();
}
This function saves the callbacks passed down from the Package layer and then calls native_takePicture in the JNI layer.
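The store-then-forward pattern above can be sketched as a minimal, self-contained C++ model (all names here, such as CameraModel and nativeTakePicture, are illustrative stand-ins, not the real Android API):

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Simplified model of the Framework layer's takePicture(): it does little
// more than store the callbacks the app passed in, then forward the call
// across JNI into the native layer. Callbacks fire only later, when data
// comes back up the stack.
struct CameraModel {
    std::function<void()> shutterCallback;
    std::function<void(const std::vector<uint8_t>&)> jpegCallback;
    std::vector<std::string> nativeCalls;   // records calls crossing into JNI

    // Stand-in for Camera.java's takePicture(): save callbacks, then
    // hand control to the native entry point.
    void takePicture(std::function<void()> shutter,
                     std::function<void(const std::vector<uint8_t>&)> jpeg) {
        shutterCallback = std::move(shutter);
        jpegCallback = std::move(jpeg);
        nativeTakePicture();
    }

    // Stand-in for the JNI native_takePicture entry point.
    void nativeTakePicture() { nativeCalls.push_back("native_takePicture"); }
};
```

Note that calling takePicture only records the native call; neither callback is invoked at this point, which mirrors the asynchronous design of the real API.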
- The JNI layer's native_takePicture does little itself; it simply calls the takePicture function of the C++ Camera class. An object in the JNI layer has previously been registered as the listener of Camera.cpp.
Camera.cpp, located in frameworks/base/libs/camera, is the client that requests service from CameraService, but it also inherits from a BnCameraClient class so that CameraService can call back into it:
class ICameraClient: public IInterface
{
public:
    DECLARE_META_INTERFACE(CameraClient);

    virtual void notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2) = 0;
    virtual void dataCallback(int32_t msgType, const sp<IMemory>& data) = 0;
    virtual void dataCallbackTimestamp(nsecs_t timestamp, int32_t msgType,
                                       const sp<IMemory>& data) = 0;
};
As the interface definition above shows, this class exists purely for callbacks.
Camera.cpp's takePicture function then uses the ICamera object obtained when the camera was opened to forward the takePicture call.
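This two-way relationship, where Camera.cpp is a client of CameraService yet also implements a callback interface the service can invoke, can be sketched as follows (a simplified model with illustrative names; real binder marshalling is omitted):

```cpp
#include <cstdint>
#include <vector>

// Plays the role of ICameraClient: the interface the service uses
// to call back into its client when data is ready.
struct ICameraClientModel {
    virtual ~ICameraClientModel() = default;
    virtual void dataCallback(int32_t msgType,
                              const std::vector<uint8_t>& data) = 0;
};

// Plays the role of CameraService: it holds a reference to the client's
// callback interface and invokes it when image data arrives.
struct CameraServiceModel {
    ICameraClientModel* client = nullptr;
    void connect(ICameraClientModel* c) { client = c; }
    void deliver(int32_t msgType, const std::vector<uint8_t>& data) {
        if (client) client->dataCallback(msgType, data);
    }
};

// Plays the role of Camera.cpp: a service client that also implements
// the callback interface (as BnCameraClient does in the real code).
struct CameraClientModel : ICameraClientModel {
    int32_t lastMsg = -1;
    std::vector<uint8_t> lastData;
    void dataCallback(int32_t msgType,
                      const std::vector<uint8_t>& data) override {
        lastMsg = msgType;      // in real code this forwards to the listener
        lastData = data;
    }
};
```

Connecting a CameraClientModel to the service and calling deliver shows the callback arriving back at the client, which is the shape of the real binder round trip.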
- Next, the call crosses binder into another process, where it is handled by CameraService. CameraService has already instantiated a HAL-layer CameraHardware object and passed its own data callback down to it; this work is done by CameraService's inner class Client, which inherits from BnCamera and is the class that actually implements the camera operation API.
Client then naturally calls the takePicture function of the HAL-layer CameraHardware. Below the HAL, the code is no longer part of standard Android; each vendor ships its own implementation, but the approach is the same: the camera follows the V4L2 architecture, and the driver uses an ioctl with the VIDIOC_DQBUF command to dequeue a buffer of valid image data. The HAL then invokes its data callback to notify CameraService, which in turn notifies Camera.cpp over binder, as follows:
void CameraService::Client::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, void* user) {
    LOG2("dataCallback(%d)", msgType);

    sp<Client> client = getClientFromCookie(user);
    if (client == 0) return;
    if (!client->lockIfMessageWanted(msgType)) return;

    if (dataPtr == 0) {
        LOGE("Null data returned in data callback");
        client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
        return;
    }

    switch (msgType) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(dataPtr);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr);
            break;
        default:
            client->handleGenericData(msgType, dataPtr);
            break;
    }
}

// picture callback - compressed picture ready
void CameraService::Client::handleCompressedPicture(const sp<IMemory>& mem) {
    int restPictures = mHardware->getPictureRestCount();
    if (!restPictures) {
        disableMsgType(CAMERA_MSG_COMPRESSED_IMAGE);
    }

    sp<ICameraClient> c = mCameraClient;
    mLock.unlock();
    if (c != 0) {
        c->dataCallback(CAMERA_MSG_COMPRESSED_IMAGE, mem);
    }
}
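The dispatch in dataCallback above is a plain switch on the message type. The routing can be reproduced as a small self-contained function (the constants are redeclared here so the sketch compiles on its own; their real definitions live in the camera headers, and the values shown are assumptions):

```cpp
#include <cstdint>
#include <string>

// Message-type constants, redeclared for self-containment (values assumed).
enum : int32_t {
    CAMERA_MSG_PREVIEW_FRAME    = 0x010,
    CAMERA_MSG_POSTVIEW_FRAME   = 0x040,
    CAMERA_MSG_RAW_IMAGE        = 0x080,
    CAMERA_MSG_COMPRESSED_IMAGE = 0x100,
};

// Mirrors the dispatch switch in CameraService::Client::dataCallback:
// each message type is routed to a dedicated handler; anything
// unrecognized falls through to the generic handler.
std::string routeDataCallback(int32_t msgType) {
    switch (msgType) {
    case CAMERA_MSG_PREVIEW_FRAME:    return "handlePreviewData";
    case CAMERA_MSG_POSTVIEW_FRAME:   return "handlePostview";
    case CAMERA_MSG_RAW_IMAGE:        return "handleRawPicture";
    case CAMERA_MSG_COMPRESSED_IMAGE: return "handleCompressedPicture";
    default:                          return "handleGenericData";
    }
}
```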
- Camera.cpp then notifies its listener:
// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr);
    }
}
This listener is our JNI-layer JNICameraContext object:
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        LOGW("callback on dead camera object");
        return;
    }

    // return data based on callback type
    switch(msgType) {
    case CAMERA_MSG_VIDEO_FRAME:
        // should never happen
        break;

    // don't return raw data to Java
    case CAMERA_MSG_RAW_IMAGE:
        LOGV("rawCallback");
        env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                mCameraJObjectWeak, msgType, 0, 0, NULL);
        break;

    default:
        // TODO: Change to LOGV
        LOGV("dataCallback(%d, %p)", msgType, dataPtr.get());
        copyAndPost(env, dataPtr, msgType);
        break;
    }
}
- As we can see, the JNI layer ultimately calls the Java-side function postEventFromNative, which posts a corresponding message to its event handler. When the handler receives the message, it invokes, according to the message type, the callback originally passed down from Camera.java in the Package layer. At this point, the image data has reached the top layer.
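Putting the whole path together, the round trip from driver to application callback can be modeled as one chain of forwarding calls (a simplified sketch; every name is illustrative, and no real Android, V4L2, or binder API is used):

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// End-to-end sketch of the callback path described above: each layer
// forwards the image data one level up, and every hop is recorded so
// the full route is visible.
struct CallbackChain {
    std::vector<std::string> hops;
    std::function<void(const std::vector<uint8_t>&)> appJpegCallback;

    // Driver: a buffer has been dequeued (VIDIOC_DQBUF in a real driver).
    void driverFrameReady(const std::vector<uint8_t>& buf) {
        hops.push_back("driver");
        halDataCallback(buf);
    }
    // HAL: notifies CameraService through the registered data callback.
    void halDataCallback(const std::vector<uint8_t>& buf) {
        hops.push_back("HAL");
        serviceDataCallback(buf);
    }
    // CameraService::Client: dispatches and calls back over binder.
    void serviceDataCallback(const std::vector<uint8_t>& buf) {
        hops.push_back("CameraService");
        clientDataCallback(buf);
    }
    // Camera.cpp: forwards to its registered listener.
    void clientDataCallback(const std::vector<uint8_t>& buf) {
        hops.push_back("Camera.cpp");
        jniPostData(buf);
    }
    // JNI (JNICameraContext::postData): posts the event up to Java,
    // which ultimately invokes the app's callback.
    void jniPostData(const std::vector<uint8_t>& buf) {
        hops.push_back("JNI");
        if (appJpegCallback) appJpegCallback(buf);
    }
};
```

Triggering driverFrameReady walks the data through every layer in order, ending in the application-level callback, which is exactly the route the article traces.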