Tag Archives: Android

Privacy in the Metaverse

Or: How to install any app on the Quest 3 without giving Meta your phone number.

With the long-anticipated Apple Vision Pro becoming available on February 2nd, 2024 (unfortunately only in the US), we’ll finally see Apple’s take on a consumer-ready headset for mixed reality – er … I meant to say spatial computing. Seamless video see-through and hand tracking – what a technological marvel.

As of now, the closest alternative to the Vision Pro for those unwilling to spend $3,500 or located outside the US seems to be the Meta Quest 3 – and at a fraction of the price, namely $500. But unlike Apple, Meta is not exactly known for privacy-aware products. After all, it is their core business model not to be.

This post explains how to increase your privacy on the Quest 3, in four easy steps.

Continue reading Privacy in the Metaverse

Combining ARCore tracking and Cardboard Spatial Audio

This week Google released ARCore, their answer to Apple’s recently published Augmented Reality framework ARKit. This is an exciting opportunity for mobile developers to enter the world of Augmented Reality, Mixed Reality, Holographic Games, … whichever buzzword you prefer.

To get to know the AR framework, I wanted to test how easy it would be to combine it with another awesome Android framework: Google VR, used for the Daydream and Cardboard platforms – specifically, its Spatial Audio API. And despite never having used either of the two libraries before, combining them turned out to be astonishingly simple.

Cf. https://developers.google.com/vr/concepts/spatial-audio

The results:

The goal is to add correctly rendered three-dimensional sound to an augmented reality application. As a demonstrator, we pin an audio source to each of the little Androids placed in the scene.
Admittedly, screenshots don’t make much sense for demonstrating audio, but without them this post would look so lifeless 🙂 Unfortunately, I did not manage to make a screen recording that includes the audio feed.

The how-to:

  1. Set up ARCore as explained in the documentation. Currently, only the Google Pixel and the Samsung Galaxy S8 are supported, so you need one of those to test it out. Device coverage will increase in the future.
  2. The following step-by-step tutorial starts from the sample project located in /samples/java_arcore_hello_ar; it is based on the current GitHub repository’s HEAD.
  3. Open the application’s Gradle build file at /samples/java_arcore_hello_ar/app/build.gradle and add the VR library to the dependencies:
    dependencies {
        ...
        compile 'com.google.vr:sdk-audio:1.10.0'
    }
    
  4. Place a sound file in the assets folder. I had some trouble getting it to work until I found out that it has to be a 32-bit float mono WAV file. I used Audacity for the conversion:
    1. Open your Audio file in Audacity
    2. Click Tracks -> Stereo Track to Mono
    3. Click File -> Export. Select “Other uncompressed files” as type, click Options, and select “WAV” as Header and “Signed 32 bit PCM” as encoding.

    I used “Sam’s Song” from the Ubuntu Touch Sound Package and you can download the correctly converted file here.

  5. We have to apply three modifications to the sample’s HelloArActivity.java: (1) bind the GvrAudioEngine to the Activity’s lifecycle, (2) add a sound object for every object placed into the scene, and (3) continuously update the sound object positions and the listener position. You can find the relevant sections below.

    public class HelloArActivity extends AppCompatActivity implements GLSurfaceView.Renderer {
        /*
        ...
        */
        private GvrAudioEngine mGvrAudioEngine;
        private ArrayList<Integer> mSounds = new ArrayList<>();
        final String SOUND_FILE = "sams_song.wav";
    
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            /*
            ...
             */
            mGvrAudioEngine = new GvrAudioEngine(this, GvrAudioEngine.RenderingMode.BINAURAL_HIGH_QUALITY);
            new Thread(
                new Runnable() {
                    @Override
                    public void run() {
                        // Prepare the audio file and set the room configuration to an office-like setting
                        // Cf. https://developers.google.com/vr/android/reference/com/google/vr/sdk/audio/GvrAudioEngine
                        mGvrAudioEngine.preloadSoundFile(SOUND_FILE);
                        mGvrAudioEngine.setRoomProperties(15, 15, 15, PLASTER_SMOOTH, PLASTER_SMOOTH, CURTAIN_HEAVY);
                    }
                })
            .start();
        }
    
        @Override
        protected void onResume() {
            /*
            ...
             */
            mGvrAudioEngine.resume();
        }
    
        @Override
        public void onPause() {
            /*
            ...
             */
            mGvrAudioEngine.pause();
        }
    
        @Override
        public void onDrawFrame(GL10 gl) {
            // Clear screen to notify driver it should not load any pixels from previous frame.
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    
            try {
                // Obtain the current frame from ARSession. When the configuration is set to
                // UpdateMode.BLOCKING (it is by default), this will throttle the rendering to the
                // camera framerate.
                Frame frame = mSession.update();
    
                // Handle taps. Handling only one tap per frame, as taps are usually low frequency
                // compared to frame rate.
                MotionEvent tap = mQueuedSingleTaps.poll();
                if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
                    for (HitResult hit : frame.hitTest(tap)) {
                        // Check if any plane was hit, and if it was hit inside the plane polygon.
                        if (hit instanceof PlaneHitResult && ((PlaneHitResult) hit).isHitInPolygon()) {
                            /*
                            ...
                             */
                            int soundId = mGvrAudioEngine.createSoundObject(SOUND_FILE);
                            float[] translation = new float[3];
                            hit.getHitPose().getTranslation(translation, 0);
                            mGvrAudioEngine.setSoundObjectPosition(soundId, translation[0], translation[1], translation[2]);
                            mGvrAudioEngine.playSound(soundId, true /* looped playback */);
                            // Set a logarithmic rolloff model and mute after four meters to limit audio chaos
                            mGvrAudioEngine.setSoundObjectDistanceRolloffModel(soundId, GvrAudioEngine.DistanceRolloffModel.LOGARITHMIC, 0, 4);
                            mSounds.add(soundId);
    
                            // Hits are sorted by depth. Consider only closest hit on a plane.
                            break;
                        }
                    }
                }
                /*
                 ...
                 */
                // Visualize planes.
                mPlaneRenderer.drawPlanes(mSession.getAllPlanes(), frame.getPose(), projmtx);
    
                // Visualize anchors created by touch.
                float scaleFactor = 1.0f;
                for (int i=0; i < mTouches.size(); i++) {
                    PlaneAttachment planeAttachment = mTouches.get(i);
                    if (!planeAttachment.isTracking()) {
                        continue;
                    }
                    // Get the current combined pose of an Anchor and Plane in world space. The Anchor
                    // and Plane poses are updated during calls to session.update() as ARCore refines
                    // its estimate of the world.
                    planeAttachment.getPose().toMatrix(mAnchorMatrix, 0);
    
                    // Update and draw the model and its shadow.
                    mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
                    mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
                    mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
                    mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
    
                    // Update the audio source position since the anchor might have been refined
                    float[] translation = new float[3];
                    planeAttachment.getPose().getTranslation(translation, 0);
                    mGvrAudioEngine.setSoundObjectPosition(mSounds.get(i), translation[0], translation[1], translation[2]);
                }
    
                /*
                 * Update the listener's position in the audio world
                 */
                // Extract positional data
                float[] translation = new float[3];
                frame.getPose().getTranslation(translation, 0);
                float[] rotation = new float[4];
                frame.getPose().getRotationQuaternion(rotation, 0);
    
                // Update audio engine
                mGvrAudioEngine.setHeadPosition(translation[0], translation[1], translation[2]);
                mGvrAudioEngine.setHeadRotation(rotation[0], rotation[1], rotation[2], rotation[3]);
                mGvrAudioEngine.update();
            } catch (Throwable t) {
                // Avoid crashing the application due to unhandled exceptions.
                Log.e(TAG, "Exception on the OpenGL thread", t);
            }
        }
        /*
         ...
         */
    }
    
    
  6. That’s it! Now, every Android placed into the scene also plays back audio.

Some findings:

  1. Setting up ADB via WiFi is really helpful, as you will walk around a lot and don’t want to reconnect the USB cable every time (a short command sketch follows this list).
  2. Placing the Androids too close to each other will produce a really annoying sound chaos. You can modify the rolloff model to reduce this (cf. the setSoundObjectDistanceRolloffModel call in the code excerpt above).
  3. It matters how you hold your phone (portrait with the current code), because ARCore measures the physical orientation of the device, but the audio coordinate system is (not yet) rotated accordingly. If you want to use landscape mode, it is sufficient to set the Activity in the manifest to android:screenOrientation="landscape".
  4. Ask questions tagged with the official arcore tag on Stack Overflow, the Google developers are reading them!
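
As mentioned in the first finding, here is a minimal sketch of setting up ADB over WiFi (assuming the phone is initially connected via USB and both devices are on the same network; the IP address is a placeholder for your phone’s address):

    adb tcpip 5555                 # restart adbd on the device in TCP/IP mode
    adb connect 192.168.0.42:5555  # connect over WiFi using the phone's IP address

Afterwards you can unplug the USB cable and keep deploying and debugging wirelessly.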

Indicate the first run after new push from Android Studio

When developing an app with a big file size, the time between pressing Ctrl+F5 and having the new app instance running on a connected device can be rather long.

A timespan of around 10 seconds for a 25 MB app is long enough for me to turn to other things, e.g. staging my changes in the VCS, and then to look back at the device wondering whether the new version is already running or whether I am still looking at the old state. Usually I then start clicking around to test the new implementation, only for the app to close and reopen because the upload took longer than expected.

The solution: add an external tool to Android Studio’s run configuration which closes the app immediately after pressing Ctrl+F5. This way, the next time you see your app’s screen, you can be sure that you are looking at the new version.

  1. Open Android Studio → Run → Edit Configurations
  2. Select your application and scroll down to the “Before launch” section. Click Plus → Run External Tool → click Plus again and set the following values (the full command is shown after this list):
    Name: Force-stop app
    Program: adb.exe
    Parameters: shell am force-stop <your.package.name>
  3. Make sure that the external tool runs before the Gradle-aware Make in the “Before launch” section.
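
For reference, the command executed by this external tool is equivalent to running the following in a terminal (the package name is a placeholder; use your app’s applicationId):

    adb shell am force-stop <your.package.name>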

P.S.: An alternative would be to auto-increment the version code with every change but that is not feasible for me since I increment my version code automatically based on my git history (more on that later).

Remote Android emulator

Update: According to a comment, this no longer works due to cryptographic authentication!

I often use the Android emulator to check my apps with different display configurations and to stress-test them. But the problem is that it is really slow on my development laptop. So I installed the Android emulator on my desktop PC running Windows and connect to it over my LAN. The major advantage is that you can continue using your development machine while a “server” deals with emulating – one could even emulate several devices at once and still continue programming.

The approach in a nutshell: Forward the emulator’s port so that it is accessible in the local network. Then connect the ADB to it.

On your desktop – the “server”:

  1. Store the executable of Trivial Portforward on the desktop system (e.g. directly in C:\trivial_portforward.exe).
  2. Create a virtual device to emulate (HowTo) and name it “EmulatedAndroid”.
  3. Create a batch file (an alternative to trivial_portforward using Windows’ built-in netsh is sketched after this list):
    <your-android-sdk-path>\tools\emulator -avd EmulatedAndroid &
    echo 'On the development machine: adb kill-server and then: adb connect <desktop-pc-name>:5585'
    C:\trivial_portforward 5585 127.0.0.1 5555

  4. If you execute this batch file on your desktop PC, it will open the emulator with the specified virtual device and forward port 5585 to the emulator’s local ADB port 5555.
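
If you would rather avoid the third-party tool, a comparable forwarding can likely be achieved with Windows’ built-in netsh port proxy (an untested alternative to trivial_portforward; run from an elevated command prompt):

    netsh interface portproxy add v4tov4 listenport=5585 listenaddress=0.0.0.0 connectport=5555 connectaddress=127.0.0.1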

Now on your laptop – the “client”:

  1. Now – given that both systems are in the same network – you can connect to the emulator from your laptop by typing the following in a terminal (a quick check of the connection is shown after this list):
    adb kill-server
    adb connect <desktop-pc-name>:5585

  2. Now you can upload apps, access the logcat and execute adb commands on your remote emulator like on any other Android device. And all of this without performance impairments on your development machine.
  3. If you are experiencing communication losses, increase the emulator timeout in the Eclipse settings to maybe 5000 ms (Window → Preferences → Android → DDMS → ADB connection time out (ms)).
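
To verify that the connection was established, list the attached devices on the laptop; the remote emulator should show up under the host name and port used above, roughly like this:

    adb devices
    List of devices attached
    <desktop-pc-name>:5585    device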

Hello World for Android computer vision

Every once in a while I start a new computer vision project on Android, and I always face the same question: “What do I need again to retrieve a camera image ready for processing?”. While there are great tutorials around, I just want a downloadable project with a minimum amount of code – no taking pictures, no setting resolutions, just the continuous retrieval of incoming camera frames.

So here they are – two different “Hello World” projects for computer vision. I will show you some excerpts from the code and then provide a download link for each project.

Pure Android API

The main problem to solve is how to get the camera image into a processable image format – in this case android.graphics.Bitmap.

@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width,
		int height) {
	if(camera != null) {
		camera.release();
		camera = null;
	}
	camera = Camera.open();
	try {
		camera.setPreviewDisplay(holder);
	} catch (IOException e) {
		e.printStackTrace();
	}
	camera.setPreviewCallback(new PreviewCallback() {

		public void onPreviewFrame(byte[] data, Camera camera) {
			System.out.println("Frame received!"+data.length);
			Size size = camera.getParameters().getPreviewSize();
			/*
			 * Directly constructing a bitmap from the data would be possible if the preview format
			 * had been set to RGB (params.setPreviewFormat() ) but some devices only support YUV.
			 * So we have to stick with it and convert the format
			 */
			int[] rgbData = convertYUV420_NV21toRGB8888(data, size.width, size.height);
			Bitmap bitmap = Bitmap.createBitmap(rgbData, size.width, size.height, Bitmap.Config.ARGB_8888);
			/*
			 * TODO: now process the bitmap
			 */
		}
	});
	camera.startPreview();
}

Notice the function convertYUV420_NV21toRGB8888(), which is needed since the internal YUV representation of the camera frames does not match any supported Bitmap format.
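
For reference, a minimal, unoptimized sketch of what such a conversion function can look like (the downloadable project may implement it differently):

public static int[] convertYUV420_NV21toRGB8888(byte[] data, int width, int height) {
	int size = width * height;
	int[] pixels = new int[size];
	for (int row = 0; row < height; row++) {
		for (int col = 0; col < width; col++) {
			int y = data[row * width + col] & 0xFF;
			// In NV21, each 2x2 block of luma pixels shares one interleaved V/U pair
			int uvIndex = size + (row / 2) * width + (col & ~1);
			int v = (data[uvIndex] & 0xFF) - 128;
			int u = (data[uvIndex + 1] & 0xFF) - 128;
			// Standard YUV to RGB conversion, clamped to the valid 0..255 range
			int r = clamp((int) (y + 1.402f * v));
			int g = clamp((int) (y - 0.344f * u - 0.714f * v));
			int b = clamp((int) (y + 1.772f * u));
			pixels[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
		}
	}
	return pixels;
}

private static int clamp(int value) {
	return Math.max(0, Math.min(255, value));
}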

Using OpenCV

This is even more straightforward. We just use OpenCV’s JavaCameraView. If you are new to Android + OpenCV, here is a good tutorial for you.

cameraView = (CameraBridgeViewBase) findViewById(R.id.cameraView);
cameraView.setCvCameraViewListener(this);
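
The incoming frames then arrive in the listener callback, already wrapped in OpenCV’s Mat type. A minimal sketch, assuming the Activity implements CameraBridgeViewBase.CvCameraViewListener2 (imports: org.opencv.android.CameraBridgeViewBase, org.opencv.core.Mat):

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
	// The current camera frame is already available as an OpenCV Mat in RGBA format
	Mat rgba = inputFrame.rgba();
	/*
	 * TODO: now process the Mat
	 */
	return rgba;
}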

xkcd widget

When I saw the xkcd comic “Now”, I immediately wanted it as a widget on my smartphone. A cool little gadget showing you the approximate time of day all around the world.

I searched through the Google Play Store and only found versions with a huge file size – since they just stored all possible images in the app files, the widgets reached a size of around 25 MB.

So I spent a few hours learning how to build Android widgets and compiled my own version – less than 1 MB in size and with a cool preview animation.

Download the xkcd widget and try it for yourself!


How to install and run an APK on Google Glass

Since Google Glass is currently only available to “Explorers”, the usability is quite limited and the device requires some hacking. As the Glass-specific OS is updated regularly this will change soon, but until then the following trick will come in handy:

Install and run an APK on Google Glass:

Well, currently the standard Glass launcher is quite limited and only allows starting a few pre-installed applications. You could either install a custom launcher like Launchy or do it without any modifications at all: in a nutshell, we will install the APK, find out the name of the app’s launchable activity and then start it. So no modifications and no prior knowledge of the application’s code are needed.

  1. Connect the Google Glass to your computer (you should have the Android SDK installed) and enable the debug mode on your device.
  2. Open a terminal and install the APK via adb install <apk path>.
  3. Find the Android tool aapt (located in <sdk>\build-tools\<version>).
  4. Retrieve the package name and the activity to launch: aapt dump badging <apk path> | grep launchable-activity (example output is shown after this list).
  5. Now you have all necessary information to launch the activity whenever you want: adb shell am start -a android.intent.action.MAIN -n <package name>/<activity name>.
  6. As an example with the password app Passdraw (currently not ported to Glass 🙂 ): adb shell am start -a android.intent.action.MAIN -n com.passdraw.free/com.passdraw.free.activities.LandingActivity
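
For illustration, the aapt command from step 4 prints output roughly like the following (shown here for the Passdraw example; the APK file name and the label are placeholders):

    aapt dump badging passdraw.apk | grep launchable-activity
    launchable-activity: name='com.passdraw.free.activities.LandingActivity'  label='Passdraw' icon=''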

How to connect Google Glass to Windows

I recently got my hands on a Google Glass, the Android-based head-mounted display developed by Google. While connecting to it and installing apps works like a charm on my Linux system, it was quite a hassle to do the same with Windows.

I found quite a nice tutorial which I had to adapt to Windows 8: in a nutshell, we have to convince the Google USB driver that it matches the Glass device and, because of that editing, we have to convince Windows 8 that it is okay for the driver’s signature to mismatch. Please proceed at your own risk.

  1. Connect the Google Glass to your PC and watch how the driver installation fails. If it does work: congratulations, you are done!
  2. Note the VID and the PID of your connected Glass. You can find them via Device Manager → Portable Devices → Glass 1 → Properties → Details → Hardware Ids.
  3. Open <sdk>\extras\google\usb_driver\android_winusb.inf
  4. Add the following lines to the sections [Google.NTx86] and [Google.NTamd64], replacing the zeroed VID and PID with the values from step 2:
    ;Google Glass
    %SingleAdbInterface% = USB_Install, USB\VID_0000&PID_0000&REV_0216
    %CompositeAdbInterface% = USB_Install, USB\VID_0000&PID_0000&MI_01
    
    %SingleAdbInterface% = USB_Install, USB\VID_0000&PID_0000&REV_0216
    %CompositeAdbInterface% = USB_Install, USB\VID_0000&PID_0000&MI_01
    
  5. Go to the device manager and update the drivers.
  6. If you are not running Windows 8, you are done. If you are, the following error will occur: “The hash for the file is not present in the specified catalog file. The file is likely corrupt or the victim of tampering”. This is because we have altered the .INF-file and now the signature does not match anymore.
  7. Go back to the file android_winusb.inf and search for the lines
    CatalogFile.NTx86   = androidwinusb86.cat
    CatalogFile.NTamd64 = androidwinusba64.cat
    

    and comment them out:

    ;CatalogFile.NTx86   = androidwinusb86.cat
    ;CatalogFile.NTamd64 = androidwinusba64.cat
    
  8. Now you will get a different error: “The third-party INF does not contain digital signature information”. Well, this security check is great, but since we know what we are doing …: do an “Advanced Startup” (just press the Windows key and type it in), then go to Troubleshoot → Advanced options → Startup Settings → Restart.
  9. Disable driver signature enforcement in the boot menu.
  10. Update your drivers again in the device manager and this time skip the driver signature enforcement.
  11. Google Glass should now be recognized correctly. Restart your computer if you want to re-enable the driver signature enforcement.

Passdraw

Passdraw is my first smartphone app on Google Play.

Passdraw is a special keyboard assisting you with entering your passwords – which is normally ridiculously annoying on smartphones. After you have completed the setup, just draw one of your secret paths whenever you need your password!
—————————————————–
+ No annoying switching between special characters and normal keyboard layout
+ Bystanders don’t get to see the original characters
+ This app does not know your password either
+ No extra permissions – your data stays with you!
+ No need for simple passwords
—————————————————–
More information at www.passdraw.com or directly on Google Play.

Update (December ’13): I am currently working on a complete redesign with an entirely new algorithm and a new user interface. So stay tuned!
