Tag Archives: Augmented Reality

Master thesis

The final piece of my Master's in Computer Science is the thesis “Monocular SLAM for Context-Aware Workflow Assistance”. The abstract is included below:

In this thesis, we propose the integration of contextual workflow knowledge into a SLAM tracking system for the purpose of procedural assistance using Augmented Reality. Augmented Reality is an intuitive way to present workflow knowledge (e.g. maintenance or repair instructions) step by step, but it requires sophisticated models of the scene appearance, the actions to perform and the spatial structure. For the latter, we propose the integration with SLAM (Simultaneous Localization And Mapping) operating on images of a monocular camera.

We first develop a stand-alone SLAM system with a point cloud as map representation which is continuously extended and refined by triangulations obtained from new viewpoints using a three-step keyframe insertion procedure. In a second step, we integrate contextual knowledge which is automatically obtained from reference recordings in a so-called offline mapping step. This allows the tracking not only to cope with but to actively adapt to changes in the environment by explicitly including them in the tracking model. To aid the data acquisition and to merge multiple tracking models into a single one, we propose a novel method for combining offline-acquired maps which, as opposed to ICP (Iterative Closest Point), is independent of the spatial structure.

We show typical tracking results and evaluate the individual components of our system by measuring their performance when exposed to noise.

I wrote my thesis in Prof. Stricker’s Augmented Vision group and was supervised by Nils Petersen.

Master Thesis

@mastersthesis{Hasper2014msc,
author = {Hasper, Philipp},
school = {TU Kaiserslautern},
title = {{Monocular SLAM for Context-Aware Workflow Assistance}},
type = {Master's thesis},
year = {2014}
}

See you at the CeBIT 2014

Like last year, I will be an exhibitor at this year's CeBIT, taking place March 10-14 in Hannover. We will present the awesome AR-Handbook at Hall 9, Booth D44.

Our topic is Fast MRO (Fast Maintenance, Repair and Overhaul) – a comprehensive Augmented Reality Maintenance Information System that shows and explains technical details for simplified maintenance work right where it is needed. Using a John Deere tractor as an example, Fast MRO offers support for the exchange of defective consumables or the maintenance of lubrication parts. The system offers information about the position of machine elements, maintenance intervals or the use of certain components and guides the repairman step by step through the individual work instructions.

Remote Execution vs. Simplification for Mobile Real-time Computer Vision

As part of my work at the DFKI Kaiserslautern, I published a paper at VISAPP 2014 dealing with Remote Execution for mobile Augmented Reality:

Remote Execution vs. Simplification for Mobile Real-time Computer Vision. Philipp Hasper, Nils Petersen, Didier Stricker. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP) 2014. doi:10.5220/0004683801560161.

Mobile implementations of computationally complex algorithms are often prohibitive due to performance constraints. There are two possible solutions for this: (1) adopting a faster but less powerful approach, which results in a loss of accuracy or robustness, or (2) using remote data processing, which suffers from limited bandwidth and communication latencies and is difficult to implement in real-time interactive applications. Using the example of a mobile Augmented Reality application, we investigate those two approaches and compare them in terms of performance. We examine different workload balances ranging from extensive remote execution to pure onboard processing.

@inproceedings{Hasper2014,
author = {Hasper, Philipp and Petersen, Nils and Stricker, Didier},
booktitle = {Proceedings of the 9th International Conference on Computer Vision Theory and Applications},
doi = {10.5220/0004683801560161},
isbn = {978-989-758-003-1},
pages = {156--161},
publisher = {SCITEPRESS - Science and Technology Publications},
title = {{Remote Execution vs. Simplification for Mobile Real-time Computer Vision}},
year = {2014}
}
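As a back-of-envelope illustration of that tradeoff (the helper and all numbers below are hypothetical, not the paper's model): offloading a frame only pays off when upload time plus round-trip latency plus server compute stays below the onboard processing time.

```shell
#!/bin/sh
# Toy breakeven check for remote execution (illustrative sketch only).
# Offloading wins when transfer + latency + server compute beats the
# time needed to process the frame on the device itself.
offload_or_onboard() {
  # args: frame_kb bandwidth_kb_per_s latency_ms server_ms onboard_ms
  awk -v kb="$1" -v bw="$2" -v lat="$3" -v srv="$4" -v loc="$5" 'BEGIN {
    remote = kb / bw * 1000 + lat + srv   # upload time (ms) + latency + server
    if (remote < loc) print "remote"; else print "onboard"
  }'
}

# 50 KB frame, ~1000 KB/s uplink, 80 ms latency, 20 ms server, 300 ms onboard:
offload_or_onboard 50 1000 80 20 300   # prints "remote" (150 ms < 300 ms)
```

With a slow mobile uplink the same frame tips the other way, which is exactly the bandwidth sensitivity the paper's workload-balance experiments explore.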

How to install and run an APK on Google Glass

Since Google Glass is currently only available for “Explorers”, the usability is quite limited and the device requires some hacking. Because the Glass-specific OS is updated regularly, this will change soon, but until then the following trick will come in handy:

Install and run an APK on Google Glass:

Well, currently the standard Glass launcher is quite limited and only allows starting some pre-installed applications. You could either install a custom launcher like Launchy or do it without any modifications at all. In a nutshell, we will install the APK, find out the name of the app's launching activity and then launch it directly — so no modifications and no prior knowledge of the application's code are needed.

  1. Connect the Google Glass to your computer (you should have the Android SDK installed) and enable the debug mode on your device.
  2. Open a terminal and install the APK via adb install <apk path>.
  3. Find the Android SDK tool aapt (located in <sdk>\build-tools\<version>).
  4. Retrieve the package name and the activity to launch: aapt dump badging <apk path> | grep launchable-activity.
  5. Now you have all necessary information to launch the activity whenever you want: adb shell am start -a android.intent.action.MAIN -n <package name>/<activity name>.
  6. As an example with the password app Passdraw (currently not ported to Glass 🙂 ): adb shell am start -a android.intent.action.MAIN -n com.passdraw.free/com.passdraw.free.activities.LandingActivity
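Steps 4 and 5 can also be scripted. The helper below is a sketch that assumes aapt's usual `name='…'` quoting: it pulls the package and launchable activity out of the `aapt dump badging` output and prints the matching adb command.

```shell
#!/bin/sh
# Build the launch command (step 5) from `aapt dump badging` output (step 4).
launch_cmd_from_badging() {
  badging="$1"
  pkg=$(printf '%s\n' "$badging" | sed -n "s/^package: name='\([^']*\)'.*/\1/p")
  act=$(printf '%s\n' "$badging" | sed -n "s/^launchable-activity: name='\([^']*\)'.*/\1/p")
  printf 'adb shell am start -a android.intent.action.MAIN -n %s/%s\n' "$pkg" "$act"
}

# Example with the Passdraw values from step 6:
badging="package: name='com.passdraw.free'
launchable-activity: name='com.passdraw.free.activities.LandingActivity' label='Passdraw'"
launch_cmd_from_badging "$badging"
# prints: adb shell am start -a android.intent.action.MAIN -n com.passdraw.free/com.passdraw.free.activities.LandingActivity
```

In practice you would feed it the real output, e.g. `launch_cmd_from_badging "$(aapt dump badging <apk path>)"`.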

How to connect Google Glass to Windows

I recently got my hands on a Google Glass, the Android-based head-mounted display developed by Google. While connecting to it and installing apps works like a charm on my Linux system, it was quite a hassle to do the same with Windows.

I found quite a nice tutorial, which I had to adapt to Windows 8. In a nutshell, we have to convince the Google USB driver that it fits the Glass device and, because of that editing, convince Windows 8 that it is okay for the driver's signature to mismatch. Please proceed at your own risk.

  1. Connect the Google Glass to your PC and watch how the driver installation fails. If it does work: congratulations, you are done!
  2. Note the VID and the PID of your connected Glass. You can find them via Device Manager → Portable Devices → Glass 1 → Properties → Details → Hardware Ids.
  3. Open <sdk>\extras\google\usb_driver\android_winusb.inf
  4. Add the following lines, using the VID and PID from step 2, to both sections [Google.NTx86] and [Google.NTamd64]:
    ;Google Glass
    %SingleAdbInterface% = USB_Install, USB\VID_0000&PID_0000&REV_0216
    %CompositeAdbInterface% = USB_Install, USB\VID_0000&PID_0000&MI_01
    
  5. Go to the device manager and update the drivers.
  6. If you are not running Windows 8, you are done. If you are, the following error will occur: “The hash for the file is not present in the specified catalog file. The file is likely corrupt or the victim of tampering”. This is because we have altered the .INF-file and now the signature does not match anymore.
  7. Go back to the file android_winusb.inf and search for the lines
    CatalogFile.NTx86   = androidwinusb86.cat
    CatalogFile.NTamd64 = androidwinusba64.cat
    

    and comment them out:

    ;CatalogFile.NTx86   = androidwinusb86.cat
    ;CatalogFile.NTamd64 = androidwinusba64.cat
    
  8. Now you will get a different error: “The third-party INF does not contain digital signature information”. This security check is sensible in general, but since we know what we are doing: do an “Advanced Startup” (press the Windows key and type it in), then go to Troubleshoot → Advanced options → Startup Settings → Restart.
  9. Disable driver signature enforcement in the boot menu.
  10. Update your drivers again in the device manager and this time skip the driver signature enforcement.
  11. Google Glass should now be recognized correctly. Restart your computer if you want to re-enable the driver signature enforcement.

Bachelor thesis

I finished my Bachelor's in Computer Science with my thesis “Real-time mobile image processing using heterogeneous cloud computing”. I wrote it in German (the German title is “Mobile Echtzeit-Bildverarbeitung durch heterogenes Cloud-Computing”) but also included an English abstract:

Providing real-time image processing with computationally intensive components on low-end mobile devices is a problem not yet fully solved. We show that offloading complex computations otherwise done on a smartphone to a cloud system can result in a higher processing speed. The tradeoff between computational and network footprint is of particular importance since smartphones are often used in mobile networks. It is shown to be unsuitable to use the mobile device solely for image retrieval and to offload all further calculations (an approach referred to as thin-client). To hide the delay caused by the remote execution, the development of a so-called latency-hiding component is discussed and its effectiveness verified.

I wrote my thesis in Prof. Stricker’s Augmented Vision group and was supervised by Nils Petersen.
Bachelor thesis, Figure 4.3

@mastersthesis{Hasper2012,
author = {Hasper, Philipp},
school = {TU Kaiserslautern},
title = {{Mobile Echtzeit-Bildverarbeitung durch heterogenes Cloud-Computing}},
type = {Bachelor's thesis},
year = {2012}
}