How to call the Android camera in libGDX
Background
A quick introduction: I work in face recognition, doing Android development and testing. Our product is a face recognition SDK; the end products are face recognition devices on all kinds of terminals. Our director hoped for a cross-platform development framework, one primary language producing apps for every platform, and he found libGDX, a multi-platform game development framework written in Java. libGDX is a game development tool, or more precisely a Java framework: you write your code the way it prescribes.
I started by dabbling with games and went slightly astray; then he asked me to look into driving the camera, and I went astray again, finding some interesting things along the way, among them a ghost-catching game, ChaseWhisplyProject:
ChaseWhisplyProject
beyondar
Of course, both of the above are implemented directly in Android rather than in libGDX. Thanks to GitHub, the libGDX documentation links to an example someone shared of driving the Android camera from inside libGDX. His goal was essentially AR: through the app, the user sees the street behind the device, or interacts with the real world, PokemonGO-style (all of which also needs algorithm support). The example dates from 2013, but I copied it, made minor tweaks, and found it still works. Since there really isn't much online about calling the Android camera from libGDX (searching "libgdx camera" mostly turns up the in-game camera that follows actors, not the device camera), this post records, supplements, and minimally reimplements that example.
What follows is largely my translation of that article, though the order differs. I did not implement the splash-screen-like SplashActivity from the original code, and I have not yet looked into 3D programming in libGDX. The full source code is on GitHub; no write-up beats the source.
How is the camera actually called in libGDX?
It's simple: draw the libGDX stage and actors on a transparent canvas, and put the camera preview behind that canvas. The two problems the article solves are how to display the camera preview and how to drive the camera.
Displaying the camera preview
Recall that almost all libGDX code lives in the core project; each platform project is just a small launcher that starts up and hands control to core. First, libGDX must be initialized with the right settings for the camera preview to show through an app built on the framework. In the android project, modify AndroidLauncher as follows:
AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
cfg.r = 8;
cfg.g = 8;
cfg.b = 8;
cfg.a = 8;
DeviceCameraControl cameraControl = new AndroidDeviceCameraController(this);
initialize(new MyGdxGame0606(cameraControl), cfg);
You can jump into AndroidApplicationConfiguration to see the default r, g, b, a values:
/** number of bits per color channel **/
public int r = 5, g = 6, b = 5, a = 0;
The r, g, b, a values are the number of bits per color channel. Here each is set to 8 bits, i.e. a depth of 2^8 = 256 levels per channel, the same meaning as in Photoshop. This is needed so the Android camera preview behind the libGDX canvas renders correctly; otherwise the colors differ from a native Android camera preview and are not as rich.
Also in AndroidLauncher, set the OpenGL surface mode to TRANSLUCENT:
if (graphics.getView() instanceof SurfaceView) {
SurfaceView glView = (SurfaceView) graphics.getView();
// force alpha channel - I'm not sure we need this as the GL surface
// is already using alpha channel
glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
}
Then, still in AndroidLauncher, add a post method that the asynchronous helpers below use to run work on the UI thread:
public void post(Runnable r) {
handler.post(r);
}
That is everything AndroidLauncher has to do.
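Assembled into one file, a minimal launcher might look like the sketch below. The handler field is my addition (the article uses it in post() without showing its declaration), as is the permission reminder:

import android.graphics.PixelFormat;
import android.os.Bundle;
import android.os.Handler;
import android.view.SurfaceView;
import com.badlogic.gdx.backends.android.AndroidApplication;
import com.badlogic.gdx.backends.android.AndroidApplicationConfiguration;

public class AndroidLauncher extends AndroidApplication {
    // UI-thread handler backing post(); assumed, since the article never shows it
    private final Handler handler = new Handler();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // <uses-permission android:name="android.permission.CAMERA"/> must be
        // declared in AndroidManifest.xml, or Camera.open() will fail.
        AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
        cfg.r = 8;
        cfg.g = 8;
        cfg.b = 8;
        cfg.a = 8;
        DeviceCameraControl cameraControl = new AndroidDeviceCameraController(this);
        initialize(new MyGdxGame0606(cameraControl), cfg);
        // graphics only exists after initialize(); force the translucent format
        if (graphics.getView() instanceof SurfaceView) {
            SurfaceView glView = (SurfaceView) graphics.getView();
            glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
        }
    }

    public void post(Runnable r) {
        handler.post(r);
    }
}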
Making the screen transparent
Simple again: in the render() method of the main class MyGdxGame0606 in core, all four arguments to glClearColor() must be 0 when clearing the screen, so that the camera preview shows through behind the rendered frame:
Gdx.gl20.glClearColor(0f, 0.0f, 0.0f, 0.0f);
With clearing sorted out, it is time to actually drive the Android camera.
First, a warm-up question: how many steps does it take to put an elephant into a fridge? Driving the Android camera likewise breaks down into a fixed set of steps. The Android camera has a strict working order that must be respected; its state is managed through the application's callbacks, and here the state machine lives in AndroidDeviceCameraController (implemented for Android; the desktop side is left unimplemented).
The camera's working states cycle in this order:
Ready -> Preview -> autoFocusing -> ShutterCalled -> Raw PictureData -> Postview PictureData -> Jpeg PictureData -> Ready
The sequence this example actually implements is the shorter cycle:
Ready -> Preview -> autoFocusing -> Jpeg PictureData -> Ready
Creating AndroidDeviceCameraController
In the core project, create a DeviceCameraControl interface. To match it, create an AndroidDeviceCameraController class in the android project to control the device camera: it implements DeviceCameraControl plus Camera.PictureCallback (android.hardware.Camera.PictureCallback) and Camera.AutoFocusCallback (android.hardware.Camera.AutoFocusCallback), three interfaces in all, covering the camera's flow from preparation to capture.
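The article never reproduces the core interface itself. Judging from the calls made throughout this post, it presumably looks roughly like this (a sketch; the original's exact method set may differ):

import com.badlogic.gdx.files.FileHandle;
import com.badlogic.gdx.graphics.Pixmap;

// Core project: the platform-neutral camera contract, inferred from usage below.
public interface DeviceCameraControl {
    void prepareCamera();
    void prepareCameraAsync();
    boolean isReady();
    void startPreview();
    void startPreviewAsync();
    void stopPreview();
    void stopPreviewAsync();
    void takePicture();
    byte[] getPictureData();
    void saveAsJpeg(FileHandle jpgfile, Pixmap pixmap);
}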
public class AndroidDeviceCameraController implements DeviceCameraControl, Camera.PictureCallback, Camera.AutoFocusCallback {
    // ...
}
With AndroidDeviceCameraController created, we implement the camera's features step by step.
1. Prepare the CameraSurface that displays the preview
We create a CameraSurface class to own the camera and the frames it collects; this part is identical to ordinary Android camera code:
public class CameraSurface extends SurfaceView implements SurfaceHolder.Callback {
private Camera camera;
public CameraSurface( Context context ) {
super( context );
// We're implementing the Callback interface and want to get notified
// about certain surface events.
getHolder().addCallback( this );
// We're changing the surface to a PUSH surface, meaning we're receiving
// all buffer data from another component - the camera, in this case.
getHolder().setType( SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS );
}
public void surfaceCreated( SurfaceHolder holder ) {
// Once the surface is created, simply open a handle to the camera hardware.
camera = Camera.open();
}
public void surfaceChanged( SurfaceHolder holder, int format, int width, int height ) {
// This method is called when the surface changes, e.g. when its size is set.
// We use the opportunity to initialize the camera preview display dimensions.
Camera.Parameters p = camera.getParameters();
p.setPreviewSize( width, height );
camera.setParameters( p );
// We also assign the preview display to this surface...
try {
camera.setPreviewDisplay( holder );
} catch( IOException e ) {
e.printStackTrace();
}
}
public void surfaceDestroyed( SurfaceHolder holder ) {
// Once the surface gets destroyed, we stop the preview mode and release
// the whole camera since we no longer need it.
camera.stopPreview();
camera.release();
camera = null;
}
public Camera getCamera() {
return camera;
}
}
2. Add the camera preview view in the android project
In the android project's AndroidDeviceCameraController class, activity.addContentView places the cameraSurface directly on the device screen. The original version:
@Override
public void prepareCamera() {
if (cameraSurface == null) {
cameraSurface = new CameraSurface(activity);
}
activity.addContentView( cameraSurface, new LayoutParams( LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT ) );
}
My modified version is synchronized and uses fixed-size FrameLayout.LayoutParams, so the preview can be sized and positioned explicitly (keep only one of the two):
@Override
public synchronized void prepareCamera() {
if (cameraSurface == null) {
cameraSurface = new CameraSurface(activity);
}
FrameLayout.LayoutParams params = new FrameLayout.LayoutParams (680,680);
params.rightMargin = 150; // the margin values control where the preview actually sits
params.leftMargin = 200;
params.topMargin = 100;
activity.addContentView(cameraSurface, params);
}</code></pre>
prepareCamera should be invoked asynchronously from the libGDX render thread:
@Override
public void prepareCameraAsync() {
Runnable r = new Runnable() {
public void run() {
prepareCamera();
}
};
activity.post(r);
}
Once the CameraSurface and its Camera object are ready (detected via cameraSurface != null && cameraSurface.getCamera() != null), the camera can move from the Ready state into preview mode:
@Override
public boolean isReady() {
if (cameraSurface!=null && cameraSurface.getCamera() != null) {
return true;
}
return false;
}
Starting the preview is likewise posted asynchronously:
@Override
public synchronized void startPreviewAsync() {
Runnable r = new Runnable() {
public void run() {
startPreview();
}
};
activity.post(r);
}
@Override
public synchronized void startPreview() {
// ...and start previewing. From now on, the camera keeps pushing preview
// images to the surface.
if (cameraSurface != null && cameraSurface.getCamera() != null) {
cameraSurface.getCamera().startPreview();
}
}
3. Take the picture from preview mode
Before taking the picture, AndroidDeviceCameraController sets suitable camera parameters: the largest supported picture size and auto focus:
public void setCameraParametersForPicture(Camera camera) {
// Before we take the picture - we make sure all camera parameters are as we like them
// Use max resolution and auto focus
Camera.Parameters p = camera.getParameters();
List<Camera.Size> supportedSizes = p.getSupportedPictureSizes();
int maxSupportedWidth = -1;
int maxSupportedHeight = -1;
for (Camera.Size size : supportedSizes) {
if (size.width > maxSupportedWidth) {
maxSupportedWidth = size.width;
maxSupportedHeight = size.height;
}
}
p.setPictureSize(maxSupportedWidth, maxSupportedHeight);
p.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
camera.setParameters( p );
}
Next, taking a picture starts by applying those parameters and requesting focus:
@Override
public synchronized void takePicture() {
// the user request to take a picture - start the process by requesting focus
setCameraParametersForPicture(cameraSurface.getCamera());
cameraSurface.getCamera().autoFocus(this);
}
When focusing completes, we take the picture. The original article implements only the JPEG callback, passing the class itself:
@Override
public synchronized void onAutoFocus(boolean success, Camera camera) {
// Focus process finished, we now have focus (or not)
if (success) {
if (camera != null) {
camera.stopPreview();
// We now have focus take the actual picture
camera.takePicture(null, null, null, this);
}
}
}
On my device, the picture was only taken and saved successfully after supplying the three callbacks shutterCallback, rawPictureCallback and jpegPictureCallback explicitly, so my version differs slightly and also restarts the preview right away:
@Override
public synchronized void onAutoFocus(boolean success, Camera camera) {
// Focus process finished, we now have focus (or not)
if (success) {
if (camera != null) {
camera.stopPreview();
/* with the three callbacks shutterCallback, rawPictureCallback and jpegPictureCallback supplied, the picture is taken and saved successfully */
// We now have focus - take the actual picture
camera.takePicture(shutterCallback, rawPictureCallback, jpegPictureCallback); // 54wall
camera.startPreview();
}
}
}
Either way, the JPEG data arrives in onPictureTaken, where we simply keep it:
@Override
public synchronized void onPictureTaken(byte[] pictureData, Camera camera) {
// We got the picture data - keep it
this.pictureData = pictureData;
}
The three callbacks themselves; only jpegPictureCallback does real work:
ShutterCallback shutterCallback = new ShutterCallback() {
@Override
public void onShutter() {
}
};
PictureCallback rawPictureCallback = new PictureCallback() {
@Override
public void onPictureTaken(byte[] arg0, Camera arg1) {
}
};
PictureCallback jpegPictureCallback = new PictureCallback() {
@Override
public void onPictureTaken(byte[] arg0, Camera arg1) {
/* the JPEG could also be written to disk right here in the Android project: */
// String fileName = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DCIM)
// .toString()
// + File.separator
// + "PicTest" + System.currentTimeMillis() + ".jpg";
// File file = new File(fileName);
// if (!file.getParentFile().exists()) {
// file.getParentFile().mkdir();
// }
//
// try {
// BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(file));
// bos.write(arg0);
// bos.flush();
// bos.close();
//
// } catch (Exception e) {
//
// }
pictureData = arg0;
}
};
Those are the concrete steps of taking a picture; once the photo exists, its data still has to be written to storage.
With capture in place, the main class (the one implementing ApplicationListener) needs its render() wired up so the camera is visible inside libGDX. I added three buttons: one opens the camera, one takes the shot, and one moves a character (proof that all of this really runs inside libGDX).
The original code entered preview on touch-down and shot on release, which confused me at first. Because render() switches between camera functions, the libGDX main class defines these camera states:
public enum Mode {
    normal, prepare, preview, takePicture, waitForPictureReady
}
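The article does not show the click handlers. Here is a sketch of how the camera and shot buttons might drive these transitions, placed in create(); cameraButton and shotButton are assumed names (see the GitHub source for the real wiring), and ClickListener/InputEvent come from scene2d:

// Hypothetical wiring inside create(); mode and deviceCameraControl are the
// fields used by render() below.
cameraButton.addListener(new ClickListener() {
    @Override
    public void clicked(InputEvent event, float x, float y) {
        if (mode == Mode.normal) {
            deviceCameraControl.prepareCameraAsync(); // builds the CameraSurface on the UI thread
            mode = Mode.prepare; // render() promotes this to preview once isReady() is true
        }
    }
});
shotButton.addListener(new ClickListener() {
    @Override
    public void clicked(InputEvent event, float x, float y) {
        if (mode == Mode.preview) {
            mode = Mode.takePicture; // render() triggers takePicture() on the next frame
        }
    }
});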
render() is long, but all it does is step through the camera states:
@Override
public void render() {
// Gdx.gl20.glClearColor(0.0f, 0f, 0.0f, 0.0f); // black
// Gdx.gl.glClearColor(1, 1, 1, 1); // white background
Gdx.gl.glClearColor(0.57f, 0.40f, 0.55f, 1.0f); // purple
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // clear the screen
render_preview();
}
public void render_preview() {
/* I moved the preview -> takePicture transition into the button click handler, so the flow is preview first, then shoot; this makes the camera's steps easier to follow */
Gdx.gl20.glHint(GL20.GL_GENERATE_MIPMAP_HINT, GL20.GL_NICEST);
if (mode == Mode.takePicture) {
Gdx.gl20.glClearColor(0f, 0.0f, 0.0f, 0.0f);
if (deviceCameraControl != null) {
deviceCameraControl.takePicture();
}
mode = Mode.waitForPictureReady;
} else if (mode == Mode.waitForPictureReady) {
Gdx.gl20.glClearColor(0.0f, 0f, 0.0f, 0.0f);
} else if (mode == Mode.prepare) {
Gdx.gl20.glClearColor(0.0f, 0.0f, 0f, 0.6f);
if (deviceCameraControl != null) {
if (deviceCameraControl.isReady()) {
deviceCameraControl.startPreviewAsync();
mode = Mode.preview;
}
}
} else if (mode == Mode.preview) {
Gdx.gl20.glClearColor(0.0f, 0.0f, 0.0f, 0f);
} else {
/* mode = normal */
Gdx.gl20.glClearColor(0.0f, 0.0f, 0.6f, 1.0f);
}
Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
/* moving this block to after texture.bind() below gives the same result */
batch.begin();
stage.act(); // update the stage logic
batch.draw(texture, 0, 0, 3f * texture.getWidth(), 3f * texture.getHeight());
Gdx.app.log("", String.valueOf(texture.getWidth()));
// whatever is drawn first ends up underneath, so the actor is drawn after the texture and sits on top instead of being covered
// batch.draw(actorTexture, firstActor.getX(), firstActor.getY()); // original size
batch.draw(actorTexture, firstActor.getX(), firstActor.getY(), 4 * actorTexture.getWidth(), 4 * actorTexture.getHeight()); // draw at a controlled size
button_move.draw(batch, 1.0f);
stage.draw(); // draw the stage
batch.end();
Gdx.gl20.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl20.glEnable(GL20.GL_TEXTURE);
Gdx.gl20.glEnable(GL20.GL_TEXTURE_2D);
// Gdx.gl20.glEnable(GL20.GL_LINE_SMOOTH);//old
Gdx.gl20.glEnable(GL20.GL_LINE_LOOP);//54wall
Gdx.gl20.glDepthFunc(GL20.GL_LEQUAL);
Gdx.gl20.glClearDepthf(1.0F);
camera.update(true);
// camera.apply(Gdx.gl20);//old
texture.bind();
if (mode == Mode.waitForPictureReady) {
/* Note that deviceCameraControl.getPictureData() returns a byte[]. The overall idea:
 * take the byte[] from the Android camera, convert it to a Pixmap, then save the
 * Pixmap as a JPEG, without going through Android's own image-saving path:
 * byte[] -> Pixmap -> jpg
 */
if (deviceCameraControl.getPictureData() != null) {
// the camera picture was actually taken - take the Gdx screenshot
Pixmap screenshotPixmap = getScreenshot(0, 0,
Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
Pixmap cameraPixmap = new Pixmap(
deviceCameraControl.getPictureData(), 0,
deviceCameraControl.getPictureData().length);
merge2Pixmaps(cameraPixmap, screenshotPixmap);
// we could call PixmapIO.writePNG(pngfile, cameraPixmap);
// save just the screenshot, so images from the same moment can be compared
Pixmap screenshotPixmap_test = getScreenshot(0, 0,
Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
FileHandle jpgfile_screenshot = Gdx.files
.external("a_SDK_fail/libGdxSnapshot" + "_" + date
+ "_screenshot.jpg");
deviceCameraControl.saveAsJpeg(jpgfile_screenshot,
screenshotPixmap_test);
// save just the cameraPixmap, for the same-moment comparison
Pixmap cameraPixmap_test = new Pixmap(
deviceCameraControl.getPictureData(), 0,
deviceCameraControl.getPictureData().length);
FileHandle jpgfile_cameraPixmap = Gdx.files
.external("a_SDK_fail/libGdxSnapshot" + "_" + date
+ "_camera.jpg");
deviceCameraControl.saveAsJpeg(jpgfile_cameraPixmap,
cameraPixmap_test);
// save the merged photo
FileHandle jpgfile = Gdx.files
.external("a_SDK_fail/libGdxSnapshot" + "_" + date
+ ".jpg");
Gdx.app.log("FileHandle", date);
time_1 = System.currentTimeMillis();
deviceCameraControl.saveAsJpeg(jpgfile, cameraPixmap);
time_2 = System.currentTimeMillis();
// measured 35830 ms = ~35 s: the thread is so busy that mode crawls back to Mode.normal
Gdx.app.log("cost", String.valueOf(time_2 - time_1));
deviceCameraControl.stopPreviewAsync();
// after the file is saved, mode returns to normal and the render loop continues; the long pause (and the frozen logcat) is really the thread being blocked
mode = Mode.normal;
}
}
// this log keeps printing, so render() really does run continuously
// Gdx.app.log("mode", String.valueOf(i_render++));
}
Taking the screenshot on the libGDX side
Because AndroidDeviceCameraController implements Camera.PictureCallback, the JPEG bytes land in it directly, and the byte[] returned by deviceCameraControl.getPictureData() comes from there:
@Override
public synchronized byte[] getPictureData() {
// Give to picture data to whom ever requested it
return pictureData;
}
if (deviceCameraControl.getPictureData() != null) { // camera picture was actually taken
Pixmap cameraPixmap = new Pixmap(deviceCameraControl.getPictureData(), 0, deviceCameraControl.getPictureData().length);
}
The screenshot itself is captured into a Pixmap; libGDX works with Pixmaps throughout, not Android Bitmaps:
public Pixmap getScreenshot(int x, int y, int w, int h, boolean flipY) {
Gdx.gl.glPixelStorei(GL10.GL_PACK_ALIGNMENT, 1);
final Pixmap pixmap = new Pixmap(w, h, Format.RGBA8888);
ByteBuffer pixels = pixmap.getPixels();
Gdx.gl.glReadPixels(x, y, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixels);
final int numBytes = w * h * 4;
byte[] lines = new byte[numBytes];
if (flipY) {
final int numBytesPerLine = w * 4;
for (int i = 0; i < h; i++) {
pixels.position((h - i - 1) * numBytesPerLine);
pixels.get(lines, i * numBytesPerLine, numBytesPerLine);
}
pixels.clear();
pixels.put(lines);
} else {
pixels.clear();
pixels.get(lines);
}
return pixmap;
}
Everything from here on costs a lot of time and CPU. It should not run on the UI/render thread: spawn a new thread, ideally with a progress bar. The sample code does neither, so the screen freezes during this stage. I save three files: the libGDX screenshot, the Android camera photo, and the merge of the two:
/* save just the screenshot, so images from the same moment can be compared */
Pixmap screenshotPixmap_test = getScreenshot(0, 0,
Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
FileHandle jpgfile_screenshot = Gdx.files
.external("a_SDKfail/libGdxSnapshot" + "" + date
+ "_screenshot.jpg");
deviceCameraControl.saveAsJpeg(jpgfile_screenshot,
screenshotPixmap_test);
/* save just the cameraPixmap, for the same-moment comparison */
Pixmap cameraPixmap_test = new Pixmap(
deviceCameraControl.getPictureData(), 0,
deviceCameraControl.getPictureData().length);
FileHandle jpgfile_cameraPixmap = Gdx.files
.external("a_SDK_fail/libGdxSnapshot" + "_" + date
+ "_camera.jpg");
deviceCameraControl.saveAsJpeg(jpgfile_cameraPixmap,
cameraPixmap_test);
/* 保存混合之后的相片 */
FileHandle jpgfile = Gdx.files
.external("a_SDK_fail/libGdxSnapshot" + "_" + date
+ ".jpg");
Gdx.app.log("FileHandle", date);
time_1 = System.currentTimeMillis();
deviceCameraControl.saveAsJpeg(jpgfile, cameraPixmap);
time_2 = System.currentTimeMillis();
/* measured 35830 ms = ~35 s: very busy, so mode only slowly returns to Mode.normal */
Gdx.app.log("cost", String.valueOf(time_2 - time_1));
deviceCameraControl.stopPreviewAsync();
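As said, the sample leaves all of this on the render thread and simply freezes. A minimal sketch of the suggested fix (my own variant, not in the original code): create the Pixmaps on the render thread as above, then hand the expensive saves to a worker thread and post the mode change back.

final Pixmap merged = cameraPixmap;
final FileHandle target = jpgfile;
new Thread(new Runnable() {
    @Override
    public void run() {
        // saveAsJpeg is pure CPU work, safe to run off the render thread
        deviceCameraControl.saveAsJpeg(target, merged);
        deviceCameraControl.stopPreviewAsync();
        Gdx.app.postRunnable(new Runnable() {
            @Override
            public void run() {
                mode = Mode.normal; // resume the normal render loop
            }
        });
    }
}).start();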
Merging the two Pixmaps
Next we merge the two Pixmap objects. libGDX's Pixmap API can do this for us, but because the camera photo and the screenshot may have different aspect ratios, the two cases are handled separately:
private void merge2Pixmaps(Pixmap mainPixmap, Pixmap overlayedPixmap) {
// merge to data and Gdx screen shot - but fix Aspect Ratio issues between the screen and the camera
Pixmap.setFilter(Filter.BiLinear);
float mainPixmapAR = (float)mainPixmap.getWidth() / mainPixmap.getHeight();
float overlayedPixmapAR = (float)overlayedPixmap.getWidth() / overlayedPixmap.getHeight();
if (overlayedPixmapAR < mainPixmapAR) {
int overlayNewWidth = (int)(((float)mainPixmap.getHeight() / overlayedPixmap.getHeight()) * overlayedPixmap.getWidth());
int overlayStartX = (mainPixmap.getWidth() - overlayNewWidth)/2;
mainPixmap.drawPixmap(overlayedPixmap,
0,
0,
overlayedPixmap.getWidth(),
overlayedPixmap.getHeight(),
overlayStartX,
0,
overlayNewWidth,
mainPixmap.getHeight());
} else {
int overlayNewHeight = (int)(((float)mainPixmap.getWidth() / overlayedPixmap.getWidth()) * overlayedPixmap.getHeight());
int overlayStartY = (mainPixmap.getHeight() - overlayNewHeight)/2;
mainPixmap.drawPixmap(overlayedPixmap,
0,
0,
overlayedPixmap.getWidth(),
overlayedPixmap.getHeight(),
0,
overlayStartY,
mainPixmap.getWidth(),
overlayNewHeight);
}
}
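To make the math concrete, with the resolutions mentioned later in this post: if mainPixmap (the camera photo) is 2560x1920 (aspect ratio about 1.33) and overlayedPixmap (the screenshot) is 480x320 (aspect ratio 1.5), then 1.5 > 1.33 takes the else branch: overlayNewHeight = (2560 / 480) * 320 = 1706 and overlayStartY = (1920 - 1706) / 2 = 107, so the screenshot is stretched to 2560x1706 and centered vertically with 107-pixel bands above and below.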
Saving the image as a JPEG
We save in JPEG format. One way is to use Android's Bitmap class to compress the image to JPEG; since that is Android-specific functionality, it has to live in AndroidDeviceCameraController rather than in core, even though we try to keep as much code as possible in the libGDX core project. One complication: libGDX's pixel layout is RGBA while Bitmap's is ARGB, so every pixel's channels have to be shifted across during the copy:
@Override
public void saveAsJpeg(FileHandle jpgfile, Pixmap pixmap) {
FileOutputStream fos;
int x=0,y=0;
int xl=0,yl=0;
try {
Bitmap bmp = Bitmap.createBitmap(pixmap.getWidth(), pixmap.getHeight(), Bitmap.Config.ARGB_8888);
// we need to switch from LibGDX's RGBA format to Android's ARGB format
for (x=0,xl=pixmap.getWidth(); x<xl;x++) {
for (y=0,yl=pixmap.getHeight(); y<yl;y++) {
int color = pixmap.getPixel(x, y);
// RGBA => ARGB
int RGB = color >> 8;
int A = (color & 0x000000ff) << 24;
int ARGB = A | RGB;
bmp.setPixel(x, y, ARGB);
}
}
// Finished color format conversion
fos = new FileOutputStream(jpgfile.file());
bmp.compress(CompressFormat.JPEG, 90, fos);
// Finished compression to JPEG file
fos.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (IllegalArgumentException e) {
e.printStackTrace();
}
}
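If PNG output is acceptable, the per-pixel conversion can be skipped entirely: libGDX's own PixmapIO writes a Pixmap straight from core code with no Android dependency, as the commented hint in render() above already suggests. For example:

// Core-only alternative: no Bitmap and no RGBA -> ARGB conversion needed.
FileHandle pngfile = Gdx.files.external("a_SDK_fail/libGdxSnapshot.png"); // example path
PixmapIO.writePNG(pngfile, cameraPixmap);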
Stopping the preview
After the image is saved, we stop the preview and remove the CameraSurface from the activity's view hierarchy; this also stops the camera from pushing further preview frames to the surface. As before, the work is posted asynchronously:
@Override
public synchronized void stopPreviewAsync() {
Runnable r = new Runnable() {
public void run() {
stopPreview();
}
};
activity.post(r);
}
@Override
public synchronized void stopPreview() {
// stop previewing.
if (cameraSurface != null) {
if (cameraSurface.getCamera() != null) {
cameraSurface.getCamera().stopPreview();
}
ViewParent parentView = cameraSurface.getParent();
if (parentView instanceof ViewGroup) {
ViewGroup viewGroup = (ViewGroup) parentView;
viewGroup.removeView(cameraSurface);
}
}
}
Note that when merging the two Pixmaps, i.e. combining the camera shot with the libGDX screenshot, the resolutions can differ a lot: on my Samsung device, a 480x320 screenshot had to be stretched onto a 2560x1920 photo. One workaround is to enlarge the libGDX virtual view beyond the device's physical size using setFixedSize(); the real ceiling is GPU memory.
In practice, I only managed to set the virtual screen size to 960x640 (possibly because GPU memory had already been allocated at the original size):
public void setFixedSize(int width, int height) {
if (graphics.getView() instanceof SurfaceView) {
SurfaceView glView = (SurfaceView) graphics.getView();
glView.getHolder().setFixedSize(width, height);
glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
}
}
public void restoreFixedSize() {
if (graphics.getView() instanceof SurfaceView) {
SurfaceView glView = (SurfaceView) graphics.getView();
glView.getHolder().setFixedSize(origWidth, origHeight);
glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
}
}
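One loose end: restoreFixedSize() reads origWidth and origHeight, but the article never shows where they come from. My assumption is that they are launcher fields captured once at startup, roughly like this:

// My assumption: capture the native surface size once, e.g. at the end of
// AndroidLauncher.onCreate(), so restoreFixedSize() has values to restore.
origWidth = graphics.getWidth();
origHeight = graphics.getHeight();

// Typical use around the capture flow would then be:
// setFixedSize(960, 640);  // enlarge the virtual surface before previewing
// ... preview, focus, take the picture, save ...
// restoreFixedSize();      // back to the native resolution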
A few tips

- Tip 1: the code above is Android-only, not a cross-platform solution; at the very least, though, the desktop side should be able to provide code with the same interface (see the sketch after this list).
- Tip 2: merging the images and writing them to storage takes a long time, largely because libGDX's color layout is RGBA while Bitmap's is ARGB, forcing a per-pixel conversion.
- Tip 3: one last caveat: I only tested a handful of Android devices, and different GPUs can behave differently.
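As Tip 1 suggests, the desktop side only needs something satisfying the core interface to keep the project runnable. A minimal no-op sketch (entirely my own; a real version could wrap a webcam library, and note that takePicture() here never produces data, so waitForPictureReady would wait forever):

import com.badlogic.gdx.files.FileHandle;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.PixmapIO;

// No-op desktop implementation so the core project runs without a camera.
public class DesktopDeviceCameraController implements DeviceCameraControl {
    private byte[] pictureData;

    @Override public void prepareCamera() { }
    @Override public void prepareCameraAsync() { }
    @Override public boolean isReady() { return true; }
    @Override public void startPreview() { }
    @Override public void startPreviewAsync() { }
    @Override public void stopPreview() { }
    @Override public void stopPreviewAsync() { }
    @Override public void takePicture() { }
    @Override public byte[] getPictureData() { return pictureData; }

    @Override
    public void saveAsJpeg(FileHandle jpgfile, Pixmap pixmap) {
        PixmapIO.writePNG(jpgfile, pixmap); // PNG stands in for JPEG on desktop
    }
}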
In the final screenshot below: pressing camera opens the preview, pressing shot takes the picture, and move walks the cat to the right.
[Screenshot: 4.jpg]
Source: http://www.jianshu.com/p/081a63afe95d