With a little more code, you can create 3D stereoscopic VR worlds!
To implement your activity, your application extends GVRActivity and provides a device XML file and a GVRScript object.
For details, see step 1 in the basic GearVRf application section.
GearVRf will create the camera rig by default. Its parameters do not need to be adjusted; however, applications may move the camera rig.
The HMD sensor automatically adjusts camera orientation; your app does not need code for this. The application can set the background color and post-effect data for each camera. The background color can differ per eye, but the two are typically the same. Post-effects are applied to each camera.
```java
// set camera background color
mGVRContext.getMainScene().getMainCameraRig().getLeftCamera()
        .setBackgroundColor(0.0f, 0.0f, 0.0f, 1.0f);
mGVRContext.getMainScene().getMainCameraRig().getRightCamera()
        .setBackgroundColor(0.0f, 0.0f, 0.0f, 1.0f);

// set up camera rig position (default)
mGVRContext.getMainScene().getMainCameraRig().getOwnerObject()
        .getTransform().setPosition(0.0f, 0.0f, 0.0f);
```
The scene graph - the VR world - is a hierarchical tree of scene objects. Each scene object is a tree node with one parent and zero or more child scene objects. Applications must build a scene graph. Your app needs to set up the camera rig for the root scene object of the scene graph, but not for each lower-level scene object. To create a scene graph at initialization time, get the GearVRf main scene (the root scene object) from GVRContext.
To create the scene graph by getting its root scene object:
```java
GVRScene scene = mGVRContext.getMainScene();
```
Populate your VR world's scene graph scene object tree by adding scene objects to the root scene object and to other lower-level scene objects.
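For example, a minimal sketch of both kinds of additions (`sphereObject` and `childObject` are assumed to be already-constructed GVRSceneObjects):

```java
// add a top-level object directly to the main scene
GVRScene scene = mGVRContext.getMainScene();
scene.addSceneObject(sphereObject);

// nest another object under it; a child's transform is
// relative to its parent in the scene graph
sphereObject.addChildObject(childObject);
```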
The most common way is to load models using the GearVRf-wrapped Assimp library. Assimp can load many 3D file formats and construct GearVRf scene object hierarchies from them. The materials, textures, and shaders are automatically attached to the appropriate scene objects, and the model is added to the scene. The asset loader uses the GVRPhongShader class as the shader template for all imported objects.
To create a scene object from a file:
```java
// load model using assimp
// GVRContext gvrContext;
// GVRScene gvrScene;
GVRModelSceneObject model = gvrContext.getAssetLoader().loadModel(
        "sphere.obj", GVRResourceVolume.VolumeType.ANDROID_ASSETS, gvrScene);
```
You can also load only a mesh and construct the scene object, material and render data programmatically.
To create a scene object with shader-only material via render data:
```java
// load mesh object
GVRMesh sphereMesh = gvrContext.loadMesh("sphere.obj");

// get material
GVRMaterial sphereMaterial =
        new GVRMaterial(gvrContext, mScreenShader.getShaderId());

// create render data
GVRRenderData sphereRenderData = new GVRRenderData(gvrContext);

// set mesh and material for render data
sphereRenderData.setMesh(sphereMesh);
sphereRenderData.setMaterial(sphereMaterial);

// create scene object
sphereObject = new GVRSceneObject(gvrContext);
sphereObject.attachRenderData(sphereRenderData);
```
After scene objects are added to the scene graph, each scene object can be controlled by transforms.
```java
GVRSceneObject rotator = new GVRSceneObject(
        mGVRContext, 2.0f, 1.0f, rotatorTextures.get(i));
rotator.getTransform().setPosition(0.0f, 0.0f, -5.0f);
float degree = 360.0f * i / (rotatorTextures.size() + 1);
rotator.getTransform().rotateByAxisWithPivot(
        degree, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f);
```
Opaque scene objects are drawn front-to-back, in render order; transparent objects are drawn back-to-front so that alpha blending is correct. The renderer sorts scene object data automatically. However, the application may set the render order (lower render values are drawn first). Standard constants are shown below; however, any integer value is valid.
| Render Type | Render Value | Render Order |
|-------------|--------------|--------------|
| Background  | 1000         | First        |
| Geometry    | 2000         | Second       |
| Transparent | 3000         | Third        |
| Overlay     | 4000         | Last         |
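For example, to force an object into the transparent pass, you can set the rendering order on its render data. This is a sketch assuming a scene object with render data already attached; the `GVRRenderingOrder` constants are assumed to match the table above:

```java
// render this object in the transparent pass (value 3000)
GVRRenderData renderData = sceneObject.getRenderData();
renderData.setRenderingOrder(GVRRenderData.GVRRenderingOrder.TRANSPARENT);
```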
After your startup code has built a scene graph, GearVRf enters its event loop. On each frame, GearVRf starts its render pipeline, which consists of four main steps. The first three steps run your Java callbacks on the GL thread. The final step is managed by GearVRf.
- GearVRf executes any Runnable you added to the run-once queue.
Queue operations are thread-safe. You can use the GVRContext.runOnGlThread() method from the GUI or background threads in the same way you use Activity.runOnUiThread() from non-GUI threads. The analogy is not exact: runOnGlThread() always enqueues its Runnable, even when called from the GL thread.
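A minimal sketch of enqueuing a one-shot task from any thread (`sphereObject` is an assumed, already-built scene object):

```java
// safe to call from GUI or background threads; the Runnable
// always runs on the GL thread at the start of a frame
mGVRContext.runOnGlThread(new Runnable() {
    @Override
    public void run() {
        mGVRContext.getMainScene().addSceneObject(sphereObject);
    }
});
```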
- GearVRf executes each frame listener you added to the on-frame list.
GearVRf includes animation and periodic engines that use frame listeners to run time-based code on the GL thread, but you may add frame listeners directly. A frame listener is like a Runnable that gets a parameter telling you how long it has been since the last frame. An animation runs every frame until it stops, and morphs a scene object from one state to another. A periodic callback runs a standard Runnable at a specified time (or times) as a runOnGlThread() callback. You can run a sequence of animations either by starting each new animation in the previous animation's optional on-finish callback or by starting each new animation at set times from a periodic callback.
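A direct frame listener can be sketched like this, reusing the `rotator` object from the transform example above (the 30-degrees-per-second rate is illustrative):

```java
// onDrawFrame receives the elapsed time since the last frame,
// so motion stays smooth regardless of frame rate
mGVRContext.registerDrawFrameListener(new GVRDrawFrameListener() {
    @Override
    public void onDrawFrame(float frameTime) {
        // spin around the y axis at 30 degrees per second
        rotator.getTransform().rotateByAxis(
                30.0f * frameTime, 0.0f, 1.0f, 0.0f);
    }
});
```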
- GearVRf runs your onStep() callback, which is the place to process Android or cloud events and make changes to your scene graph. As a rule of thumb, an animation changes a single scene object's properties; onStep() changes the scene graph itself. Of course, you can use onStep() to start an animation that will make a smooth change.
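One common pattern is to drain a thread-safe queue in onStep() and apply the resulting scene-graph changes on the GL thread. This is a sketch; `pendingObjects` is a hypothetical `ConcurrentLinkedQueue<GVRSceneObject>` filled by Android or network callbacks:

```java
@Override
public void onStep() {
    // pendingObjects is a hypothetical queue populated by other threads
    GVRSceneObject newObject;
    while ((newObject = pendingObjects.poll()) != null) {
        // scene graph edits are safe here: onStep() runs on the GL thread
        mGVRContext.getMainScene().addSceneObject(newObject);
    }
}
```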
- GearVRf renders the scene graph twice, once for each eye.
- Rendering determines what a camera can see and draws each visible triangle to a buffer in GPU memory.
- Any post-effects you have registered for an eye are applied to that eye's render buffer from the previous step, in registration order. (Typically, you will use per-eye registration to run the same effect with different per-eye parameters, but you can run different effects on each eye, perhaps adding different debugging info to each eye.)
A post-effect is a shader that is a lot like the shaders that draw scene objects' skins; the big difference is that while a material shader's vertex shader may be quite complex (adding in lighting and reflections), a post-effect is a 2D effect, and all the action is in the fragment shader, which draws each pixel. GearVRf includes pre-defined post-effect shaders, and it is easy to add your own.
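Per-eye registration with per-eye parameters might look like the sketch below. The class name `GVRPostEffect`, the `setFloat()` setter, the `u_tint` uniform, and `shaderId` are all assumptions for illustration; the exact post-effect API varies between GearVRf versions, so check the version you build against:

```java
// attach the same post-effect shader to each eye, with a
// different parameter value per eye (u_tint is hypothetical)
GVRPostEffect leftEffect = new GVRPostEffect(gvrContext, shaderId);
leftEffect.setFloat("u_tint", 0.0f);
gvrContext.getMainScene().getMainCameraRig()
        .getLeftCamera().addPostEffect(leftEffect);

GVRPostEffect rightEffect = new GVRPostEffect(gvrContext, shaderId);
rightEffect.setFloat("u_tint", 1.0f);
gvrContext.getMainScene().getMainCameraRig()
        .getRightCamera().addPostEffect(rightEffect);
```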
- One last shader applies barrel distortion to the render buffer, and draws the distorted image to the screen. When the user views the screen through a fish-eye lens, the undistorted image will (nearly) fill the field of view. This step is not programmable, except insofar as you provide an XML file with screen size information at start-up time.