Microsoft Surface: remapping keys for power users

The Surface Pro 3 is a neat device, but it comes with a number of user experience flaws concerning the Cover and the Pen, which we will solve in this post:

  1. The Cover’s function keys F1-F12 can only be reached using the Fn button – especially annoying for developers.
  2. The Cover has no button for changing the screen brightness.
  3. The Surface Pen’s top button (the purple one) cannot be configured to open a custom program – I want it to open the Snipping Tool.
  4. The Surface Pen lacks a “back” button, e.g. for quickly correcting a wrong pen stroke in Photoshop, Mischief or similar drawing programs.
  5. I never use CapsLock – let’s map it to a different key, e.g. AltGr (this is of course not Surface-specific, but a nice productivity boost nonetheless).


Problems 1 and 2 are solved by simple keyboard shortcuts:

  1. Function key lock: Press Fn+CapsLock to toggle the behavior. Unfortunately, this introduces a new problem, which is solved further below:
    1. For F1-F8 I want the default to be the function keys themselves, but for F9-F12 I want the reverse as the default: Home, End, Page Up, Page Down.
  2. Secret shortcuts for screen brightness: Fn + Del/Backspace

The remaining problems are solved by using AutoHotKey with its extensive scripting capabilities and a lively community sharing knowledge thereof.
After installing it, copy the following script into the autostart folder (e.g. under the name EnableSurfaceProductivity.ahk) so it runs with every start. To test the effect immediately, double-click the file.

; Solution for 1b: Reverse behavior for F9-F12
; The dollar signs prevent the hooks from triggering themselves
$Home::SendInput, {F9}
$End::SendInput, {F10}
$PgUp::SendInput, {F11}
$PgDn::SendInput, {F12}
$F9::SendInput, {Home}
$F10::SendInput, {End}
$F11::SendInput, {PgUp}
$F12::SendInput, {PgDn}

; Solution 3: Open the Snipping Tool when double-pressing the purple button
; (the double press arrives as Win+F19). The script waits for the program
; to start and then simulates Ctrl+PrintScreen which directly jumps to the
; selection process
#F19::
Run, "C:\Windows\Sysnative\SnippingTool.exe"
WinWaitActive, Snipping Tool ; Make sure the window is active before sending keys
Send, ^{PrintScreen}
return

; Solution 4: Simulate Ctrl+Z when single-clicking the purple button
; (the single click arrives as Win+F20). Use {Left} or {Browser_Back}
; instead of ^z if you prefer a different "back" action
#F20::Send, ^z

; Solution 5: Map CapsLock to AltGr
CapsLock::RAlt

Git hook to prevent WIP commits to certain branches

Git hooks are a great way to automate your workflow and to check for faulty commits. I am using them to prevent work-in-progress commits (i.e. commits with a line starting with the string WIP) from reaching the master branch. But wait – this script differs from the sample found in .git/hooks/pre-push.sample in two ways:

  1. Only pushes to special heads are inspected. This still allows you to backup your WIP commit on the remote server or to share it with your colleagues but prevents integration into the master branch.
  2. Only commits which would actually be merged into the master are checked – and not all commits ever pushed. This way, even when a WIP commit was pushed to the master in the past, the script does not prevent you from pushing new commits. The pre-push.sample would explicitly disallow that.

We have two staging areas in our development environment: All pushes to ready/<any name> or directPatch/<any name> trigger our continuous integration process and eventually merge the changes into our master (which you cannot push to directly). Pushes to <developer acronym>/<any name> are always allowed and do not trigger any merging. So we want to check only the pushes to ready and directPatch. Of course, you might want to adapt the script to your needs:

  1. Changing the heads to be checked – see the if statement at the top of the while loop
  2. Changing the word to look for – see the --grep pattern of the git rev-list call

The following hook is activated by storing it in the file <your repository>/.git/hooks/pre-push (no file extension) and making it executable:


#!/bin/bash
# This hook is called with the following parameters:
# $1 -- Name of the remote to which the push is being done
# $2 -- URL to which the push is being done
# Information about the commits which are being pushed is
# supplied as lines to the standard input in the form:
#   <local ref> <local sha1> <remote ref> <remote sha1>
# This hook prevents the push of commits whose log message
# starts with "WIP" (work in progress) when they are pushed
# to refs/heads/ready or refs/heads/directPatch

z40=0000000000000000000000000000000000000000

IFS=' '
while read local_ref local_sha remote_ref remote_sha
do
	if [[ $remote_ref != refs/heads/ready/* ]] && [[ $remote_ref != refs/heads/directPatch/* ]]
	then
		# Do not check pushes to other heads
		continue
	fi

	if [ "$local_sha" = "$z40" ]
	then
		# Handle delete: nothing to check
		continue
	fi

	# Only inspect commits not yet merged into origin/master
	range="origin/master..$local_sha"

	# Check for WIP commit
	commit=`git rev-list -n 1 --grep '^WIP' "$range"`
	if [ -n "$commit" ]
	then
		echo "Found WIP commit in $local_ref: $commit, not pushing."
		exit 1
	fi
done

exit 0

Migrating Windows 7 to 8.1 without losing your apps

When trying to migrate from Windows 7 to 8.1 I learned that I would lose my installed applications. This was especially frustrating because I knew from the upgrade assistant that there is an option to keep them – it is just not available for my configuration. And even the official guide states that I would have to reinstall all apps.

The solution was: First migrate to Windows 8 (the option to keep your apps is available here) and then to 8.1 (which again allows you to keep your apps). My system is up and running and I did not have to reinstall anything. Voilà! The only question which remains is: why?

Master thesis

The final piece of my Master’s degree in Computer Science is the thesis “Monocular SLAM for Context-Aware Workflow Assistance”. The abstract is included below:

In this thesis, we propose the integration of contextual workflow knowledge into a SLAM tracking system for the purpose of procedural assistance using Augmented Reality. Augmented Reality is an intuitive way for presenting workflow knowledge (e.g. maintenance or repairing) step by step but requires sophisticated models of the scene appearance, the actions to perform and the spatial structure. For the latter we propose the integration with SLAM (Simultaneous Localization And Mapping) operating on images of a monocular camera.

We first develop a stand-alone SLAM system with a point cloud as map representation which is continuously extended and refined by triangulations obtained from new viewpoints using a three-step keyframe insertion procedure. In a second step, we integrate contextual knowledge which is automatically obtained from reference recordings in a so-called offline mapping step. This allows the tracking not only to cope with but to actively adapt to changes in the environment by explicitly including them in the tracking model. To aid the data acquisition and to merge multiple tracking models into a single one, we propose a novel method for combining offline acquired maps which is independent of the spatial structure, as opposed to ICP (Iterative Closest Point).

We show typical tracking results and evaluate the single components of our system by measuring their performance when exposed to noise.

I wrote my thesis in Prof. Stricker’s Augmented Vision group and was supervised by Nils Petersen.

BibTeX:

@mastersthesis{Hasper2014,
	author = {Hasper, Philipp},
	school = {TU Kaiserslautern},
	title  = {{Monocular SLAM for Context-Aware Workflow Assistance}},
	type   = {Masterthesis},
	year   = {2014}
}
Migrating Eigen2 to Eigen3: useful regular expressions

I’ve recently migrated a project from the Eigen2 library to Eigen3, based on the very useful staged migration path.

Here are two regular expressions which came in really handy. Of course, you have to manually check the results, but they save a lot of time.

 * Using the new linear solver interface

// Regular expression for finding
// Regular expression for replacing
$1$7 = $2.solve\($4\);


 * Use the static way for map creation (does not work with all types)

// Regular expression for finding
// Regular expression for replacing


OpenCV: Draw epipolar lines

Drawing epipolar lines in OpenCV is not hard, but it requires a sufficient amount of code. Here is an out-of-the-box function for your convenience which only needs the fundamental matrix and the matching points. And it even has an option to exclude outliers during drawing!

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>

template <typename T>
static float distancePointLine(const cv::Point_<T> point, const cv::Vec<T,3>& line)
{
	//Line is given as a*x + b*y + c = 0
	return std::fabs(line(0)*point.x + line(1)*point.y + line(2))
			/ std::sqrt(line(0)*line(0) + line(1)*line(1));
}

/**
 * \brief	Compute and draw the epipolar lines in two images
 *			associated to each other by a fundamental matrix
 * \param title			Title of the window to display
 * \param F					Fundamental matrix
 * \param img1			First image
 * \param img2			Second image
 * \param points1		Set of points in the first image
 * \param points2		Set of points in the second image matching to the first set
 * \param inlierDistance			Points with a high distance to the epipolar lines are
 *								not displayed. If it is negative, all points are displayed
 */
template <typename T1, typename T2>
static void drawEpipolarLines(const std::string& title, const cv::Matx<T1,3,3> F,
							  const cv::Mat& img1, const cv::Mat& img2,
							  const std::vector<cv::Point_<T2>> points1,
							  const std::vector<cv::Point_<T2>> points2,
							  const float inlierDistance = -1)
{
	CV_Assert(img1.size() == img2.size() && img1.type() == img2.type());
	cv::Mat outImg(img1.rows, img1.cols*2, CV_8UC3);
	cv::Rect rect1(0,0, img1.cols, img1.rows);
	cv::Rect rect2(img1.cols, 0, img1.cols, img1.rows);
	/*
	 * Allow color drawing
	 */
	if (img1.type() == CV_8U)
	{
		cv::cvtColor(img1, outImg(rect1), CV_GRAY2BGR);
		cv::cvtColor(img2, outImg(rect2), CV_GRAY2BGR);
	}
	else
	{
		img1.copyTo(outImg(rect1));
		img2.copyTo(outImg(rect2));
	}
	std::vector<cv::Vec<T2,3>> epilines1, epilines2;
	cv::computeCorrespondEpilines(points1, 1, F, epilines1); //Index starts with 1
	cv::computeCorrespondEpilines(points2, 2, F, epilines2);

	CV_Assert(points1.size() == points2.size() &&
			  points2.size() == epilines1.size() &&
			  epilines1.size() == epilines2.size());

	cv::RNG rng(0);
	for(size_t i=0; i<points1.size(); i++)
	{
		if(inlierDistance > 0)
		{
			if(distancePointLine(points1[i], epilines2[i]) > inlierDistance ||
				distancePointLine(points2[i], epilines1[i]) > inlierDistance)
			{
				//The point match is no inlier
				continue;
			}
		}
		/*
		 * Epipolar lines of the 1st point set are drawn in the 2nd image and vice-versa
		 */
		cv::Scalar color(rng(256),rng(256),rng(256));

		cv::line(outImg(rect2),
			cv::Point(0, -epilines1[i][2]/epilines1[i][1]),
			cv::Point(img1.cols, -(epilines1[i][2] + epilines1[i][0]*img1.cols)/epilines1[i][1]),
			color);
		cv::circle(outImg(rect1), points1[i], 3, color, -1, CV_AA);

		cv::line(outImg(rect1),
			cv::Point(0, -epilines2[i][2]/epilines2[i][1]),
			cv::Point(img2.cols, -(epilines2[i][2] + epilines2[i][0]*img2.cols)/epilines2[i][1]),
			color);
		cv::circle(outImg(rect2), points2[i], 3, color, -1, CV_AA);
	}
	cv::imshow(title, outImg);
}
The most important concepts of epipolar geometry. Author: Arne Nordmann, CC BY-SA 3.0

Remote Android emulator

I often use the Android emulator to check my apps with different display configurations and to stress-test them. But the problem is that it is really slow on my development laptop. So I installed the Android emulator on my desktop PC running Windows and connect to it over my LAN. The major advantage is that you can continue using your development machine while a “server” deals with emulating – one could even emulate several devices at once and still continue programming.

The approach in a nutshell: Forward the emulator’s port so that it is accessible in the local network. Then connect the ADB to it.

On your desktop – the “server”:

  1. Store the executable of Trivial Portforward on the desktop system (e.g. directly in C:\trivial_portforward.exe).
  2. Create a virtual device to emulate (HowTo) and name it “EmulatedAndroid”.
  3. Create a batch file (start launches the emulator without blocking, so the port forwarder runs alongside it):
    start "" <your-android-sdk-path>\tools\emulator.exe -avd EmulatedAndroid
    echo On the development machine: adb kill-server and then: adb connect <desktop-pc-name>:5585
    C:\trivial_portforward.exe 5585 5555
  4. If you execute this batch file on your desktop PC, it will open the emulator with the specified virtual device.

Now on your laptop – the “client”:

  1. Now – given that both systems are in the same network – you can connect to the emulator from your laptop by typing in a terminal:
    adb kill-server
    adb connect <desktop-pc-name>:5585
  2. Now you can upload apps, access the logcat and execute adb commands on your remote emulator like on any other Android device. And all without performance impairments on your workstation.
  3. If you are experiencing communication losses, increase the emulator timeout in the Eclipse settings to maybe 5000 ms (Window → Preferences → Android → DDMS → ADB connection time out (ms)).

Hello World for Android computer vision

Every once in a while I start a new computer vision project with Android, and I always face the same question: “What do I need again to retrieve a camera image ready for processing?” While there are great tutorials around, I just want a downloadable project with a minimum amount of code – no taking pictures, no setting resolutions, just the continuous retrieval of incoming camera frames.

So here they are – two different “Hello World” projects for computer vision. I will show you some excerpts from the code and then provide a download link for each project.

Pure Android API

The main problem to solve is how to get the camera image into a processable image format – in this case an Android Bitmap.

public void surfaceChanged(SurfaceHolder holder, int format, int width,
		int height) {
	if (camera != null) {
		camera.stopPreview();
		camera.release();
		camera = null;
	}
	camera = Camera.open();
	try {
		camera.setPreviewDisplay(holder);
	} catch (IOException e) {
		e.printStackTrace();
	}
	camera.setPreviewCallback(new PreviewCallback() {

		@Override
		public void onPreviewFrame(byte[] data, Camera camera) {
			System.out.println("Frame received! " + data.length);
			Size size = camera.getParameters().getPreviewSize();
			/*
			 * Directly constructing a bitmap from the data would be possible if the preview format
			 * had been set to RGB (params.setPreviewFormat()) but some devices only support YUV.
			 * So we have to stick with it and convert the format.
			 */
			int[] rgbData = convertYUV420_NV21toRGB8888(data, size.width, size.height);
			Bitmap bitmap = Bitmap.createBitmap(rgbData, size.width, size.height, Bitmap.Config.ARGB_8888);
			// TODO: now process the bitmap
		}
	});
	camera.startPreview();
}

Notice the function convertYUV420_NV21toRGB8888() which is needed since the internal representation of camera frames does not match any supported Bitmap format.


Using OpenCV

This is even more straightforward. We just use OpenCV’s JavaCameraView. If you are new to Android + OpenCV, here is a good tutorial for you.

cameraView = (CameraBridgeViewBase) findViewById(R.id.camera_view); // the resource id depends on your layout