Pebble Blueprint

Guys, guys, guys... I have been working on a project for a couple of weekends now, and to make a long story short: it's a watchface and watchapp generator for the Pebble Time on iOS. The basic idea is that it lets you put watchfaces and watchapps together easily and then deploy them to the Pebble attached to the iOS device.

Here is a video of the whole thing in action. Note that the iPad simulator uses a US keyboard layout and I only have my German one, so, yeah, you can watch me stumble over the keyboard quite a bit at times:

Also, my Pebble Time arrived yesterday, and this is what it looks like in real life:

Pebble Blueprint in real life

It took quite a bit of work to get a functioning LLVM/Clang cross compiler ready, but the whole thing basically works now. It obviously needs a ton of polish, but I think for a weekend project this is quite cool. The only big thing missing right now is an action system à la RPG Maker to allow more customization than just the expression system.

CLion, a couple of weeks after the EAP

I bought CLion after sporadically using it during the EAP phase. So far I’ve been using Xcode and Visual Studio as my IDEs of choice on OS X and Windows, and both are great, but when developing a cross-platform library like Rayne it definitely was a pain to keep both project files in sync. CLion promises not to have that issue, to be cross platform AND to let me use one single build system: CMake. If you are unaware of what CLion is: CLion is a C/C++ IDE by Jetbrains, the guys behind products like IntelliJ, AppCode, WebStorm and more. In short, they know IDEs.


CLion is my first Jetbrains product. I’ve heard good things about them, and I was super excited about a cross-platform C/C++ IDE. I started using it back when it was in the Early Access Program (EAP), but never did too much with it since it was still very beta-ish. The IDE itself is written in Java, but is quite performant. It's no Sublime Text, but definitely fast enough for everyday use. I do have to say though that I have a very beefy MacBook with a 2.8 GHz Haswell, 16 GB of RAM and a GeForce GT 750M with 2 GB of VRAM, so I would expect what is essentially a text editor with fluff to run fast.

The greatest thing about CLion is the sheer number of settings you have. You can tweak the editor from the color scheme through keyboard shortcuts to the way it formats your code. Everything can be changed, which might take a bit of setting up when the defaults don’t match your taste, but I found just changing issues as they arise to be sufficient. And! Code formatting changes can be saved on a per-project basis. This is unbelievably great, and neither Visual Studio nor especially Xcode get anywhere near it.

CLion Settings


As already mentioned, it’s cross-platform Java. If you have ever used a Java application on OS X, you will know that they have a certain degree of not “getting it right”. This may sound first-world-problem-ish, but I’m used to applications behaving a certain way, including keyboard shortcuts. CLion is on the better end of the spectrum; it is surprisingly good at pretending to be a native application, and it only falls apart when what usually is a window is no longer one. This is especially noticeable when switching into Mission Control and suddenly CLion is no longer visible. On the upside, it gets things like ctrl + a and ctrl + e right-ish: ctrl + a doesn’t move to the beginning of the line but rather to where the indentation ends; whether that is preferable is up to you. There probably even is a setting for it somewhere, I just haven’t found it yet. But all in all, it feels very native. Even on Windows, but mostly because everything feels native there.

CMake integration

CLion uses CMake as its build system, and CMake only. It does a pretty good job at that: it correctly extracts the targets from CMake and can keep track of them, so making changes without losing target-specific settings in CLion works well. The downside is that you have to touch and write the CMakeLists.txt yourself; CLion does not provide many smart tools to work with it. This is fine by me, but could potentially be bothersome for some people. I like being able to script the build system, instead of having a defined set of checkboxes like Xcode provides, even though it is somewhat more work. But really, that is about all there is to it. They have announced plans to support plain Makefiles, but for the time being it’s CMake, and the integration works well.
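For reference, a minimal CMakeLists.txt of the kind CLion picks its targets up from might look like this (the project and file names here are made up, not Rayne's actual layout):

```cmake
cmake_minimum_required(VERSION 2.8)
project(MyGame)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")

# Each target defined here shows up as its own build/run target in CLion
add_library(Engine STATIC Engine/Renderer.cpp Engine/Scene.cpp)
add_executable(MyGame main.cpp)
target_link_libraries(MyGame Engine)
```

Since CLion re-reads this file on change, renaming a target here is all it takes; no project file to keep in sync.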

Static analysis, aka inspections

CLion runs static analysis on the code at all times (of course, it can be disabled in the settings). This is a great source of warm knees, empty batteries and what the fuck were they thinking?! The good thing is, it works reasonably well once it works, and it has already caught some things for me. The bad thing is, it still breaks quite often, and I’m torn between turning an otherwise useful feature off or just ignoring the false positives. The whole problem is that instead of using a real compiler like Clang to parse the code, they have written their own parser, lexer and static analysis tool, and it fails spectacularly at times. Normal C++ idioms like scope guards trigger unused variable warnings. Side effects aren’t properly deduced either:

#include <atomic>
#include <thread>

void test()
{
    std::atomic<bool> end(false);

    auto f = [&]() {
        end = true;
    };

    std::thread worker(f);

    while(!end)
    {}

    // "Code is never reached" warning here, because the side effect of f is never taken into account
    worker.join();
}

It’s not the end of the world, but it is so incredibly annoying. And there seems to be very little care about this: I filed five bug reports about broken inspections over half a month ago, and so far there has been no sign of anyone even bothering to read them.

I want to love this feature so badly, but it just doesn’t work properly, especially in projects that make heavy use of lambdas it seems. Yes, C++11 is hard to parse properly, so for the love of god, use something that can actually do that.

On top of that, CLion tries to be helpful by automatically including files when you use an identifier that it thinks lives in that file but which you haven’t included in the current translation unit. That feature just doesn’t work. On OS X it constantly tries to include Cocoa and Foundation, two Objective-C frameworks that are neither linked through CMake nor make any sense in a C++ context. The worst part is that it never tells you that it did that, so if you are scrolled in far enough and don’t see the line numbers magically change, good luck ever finding out about it before hitting compile. It’s just annoying. It does seem to do it less often than during the EAP builds though, which is at least something.


I don’t need to talk about compiler integration; this is where the CMake integration shines, as it simply takes care of that. Debugger integration is sadly not its strength. It ships with GDB 7, and you can supply your own GDB, as long as it is version 7, and that’s it. I would really like to see LLDB integration. It’s planned for “late summer” according to Jetbrains, but I want it now, because the GDB integration sucks. And I’m not sure if it is just GDB not having much love for Mach-O binaries or CLion not getting it right, but half the time my symbols simply don’t resolve and I’m left with an unsymbolicated call stack. Also, half of the time breakpoints never trigger and are simply ignored. That sucks big time. It sucks so bad that I just fire up LLDB in the console and work from there. This is NOT good for an IDE.
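Working from the console looks roughly like this (the binary and symbol names are placeholders); not great, but at least the symbols resolve:

```
lldb ./build/MyGame
(lldb) breakpoint set --name RN::Renderer::Render
(lldb) run
(lldb) bt
```

It's a workaround, not a solution, but it beats staring at an unsymbolicated call stack.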

On Windows things work much better, and I haven’t run into these issues there. But then again, I don’t feel like booting up Windows just to have a functioning debugger.


CLion feels very mature in places and then again super beta in others. The debugger integration issue is a huge annoyance for me, and it would be a borderline dealbreaker if I weren’t trying to love CLion so much. Don’t get me wrong, I still recommend it, because I think it’s a good IDE for everything else, but come on. Maybe wait for the 1.1 or 1.2 if you can before dropping your money for a license, since updates aren’t tied to major versions but to time, so you will only get 1 year of free updates before having to drop money again. Don’t get me wrong, I don’t think that is a bad thing, I like supporting software I use often and regularly, but it might just be too early to buy it yet.

Definitely keep an eye on it though, if it is even remotely of interest to you. I’ll keep developing Rayne with it, since I like the IDE, and especially its customization abilities, quite a lot.

Integrating Crashlytics into Build Bots

TestFlight seemingly has no interest in its regular business anymore and broke crash report symbolication a long time ago. We are quite dependent on that though: we don’t just want to know how many times the app crashed, but where it crashed. So, a week and a bit ago we jumped ship to Crashlytics, which is a really nice platform for analyzing crashes. The only issue is that their dSYM upload requires a Run Script build phase, so their upload script runs as part of the build process. Now, you can add plenty of ifs around that to make sure that you don’t upload debug dSYMs, but still, chances are you will end up uploading more dSYMs than you need to. And I was on cruise ship wifi and am now on hotel wifi, both of which are shitty, and I don’t want Crashlytics using up bandwidth that I don’t have to upload dSYMs that we don’t need. We have a build server running Xcode bots, which uploads builds to TestFlight, and these are the builds for which dSYMs are needed. Local crashes I can debug using the debugger.
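The “plenty of ifs” approach looks something like this as a Run Script build phase (the echoed invocation is a placeholder for the real Crashlytics run script; Xcode sets CONFIGURATION for build phases, and the script defaults it for demonstration purposes):

```shell
#!/bin/sh
# Only upload dSYMs for Release builds; skip Debug entirely.
CONFIGURATION="${CONFIGURATION:-Debug}"

if [ "$CONFIGURATION" = "Release" ]; then
    # Placeholder for the actual upload invocation
    echo "uploading dSYMs for $CONFIGURATION"
else
    echo "skipping dSYM upload for $CONFIGURATION"
fi
```

Even with the guard, every Release build on every machine still uploads, which is exactly the bandwidth problem described above.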

So, I spent the day trying to figure out how the Crashlytics binary works, using the Hopper disassembler and lldb, after the naive way of just putting it into a post-integration script didn’t work. Actually, the start was quite easy, since the binary complained about missing environment variables:


After providing these, it bailed with:

Crashlytics: Use a Target Run Script Build Phase
Make sure the Crashlytics command is added to your project Target and not the scheme 'Post-actions'.
Then, Build your project to continue.
(Crashlytics error 602)

Looking that string up in Hopper led to the discovery that it also expects the SRCROOT variable to be set, and after providing that... Nothing. The binary exited without an error code, but I could see that there was no upload going on. Looking into Console for hints, I found a crash report from the Fabric app:

Assertion failed: (0), function -[CLSXcodeIntegration openURL:withReplyEvent:], file /Users/crashlytics/buildAgent/work/741cdaa878dfaeb/MacApp2_5/MacApp/Controllers/Integrations/CLSXcodeIntegration.m, line 81.

Okay, cool, someone put an assert(0) on line 81 of a source file I have no access to. Don’t put too much info in, buddy. So, lldb attached to the Fabric app and a breakpoint set. Turns out, openURL:withReplyEvent: is an AppleScript endpoint, and the URL parameter is not an NSURL. Apparently Crashlytics creates a plist with information about the build, copies the dSYM and app file into an intermediate directory, and then posts an Apple Event to the Fabric app, which opens the plist to find out what to do. That plist also contains the environment variables. However, stepping a bit further through the code and looking at it in Hopper as well, it expects a bunch more environment variables which the Crashlytics binary never complains about when they are missing.

Also, for some reason, someone thought it was a great idea to do the equivalent of this:

@try {
    // ...
}
@catch(NSException *e) {
    assert(0); // Line 81
}

Again, please, don’t try to be too helpful here...

So, long story short, here is the complete list of environment variables that need to be present in order to get Crashlytics and Fabric running:


On the upside, I’m getting quite good at working with lldb and Hopper. On the downside, I’m not sure I really want to be. Maybe this post will help someone encountering the same issues, or at least help future me.

Firedrake memory corruption bug

There was a bug that I couldn’t figure out for the life of me. It was somewhere deep in my hobby kernel Firedrake and it made zero sense.

It manifested as memory corruption; more specifically, at some point a pointer suddenly became zero. I tried to narrow it down with printf() debugging, but that didn’t get me very far, because at that point the scheduler is already running and regular task switches occur, which have the side effect of the kernel no longer running in consecutive order. Luckily, QEMU, my go-to emulator, has support for GDB. The easy solution is therefore to fire up GDB, attach it to the remote debugger exposed by QEMU and set a watchpoint on the address... And suddenly everything was fine; the pointer was no longer overwritten and retained its correct value.
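For reference, that workflow looks roughly like this (the kernel image name is a placeholder; `-s` exposes QEMU's GDB stub on port 1234 and `-S` halts the CPU until the debugger connects):

```
# Terminal 1: boot the kernel, frozen, with the GDB stub enabled
qemu-system-i386 -kernel firedrake.bin -s -S

# Terminal 2: attach and watch the address in question
gdb firedrake.bin
(gdb) target remote localhost:1234
(gdb) watch *(unsigned int *)0x18008
(gdb) continue
```

In theory, the watchpoint fires the moment anything writes to that address; in this case, as described below, it never did.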

I have an uncommitted .bochsrc file that I sometimes use when I want to understand what is truly going on on the CPU side, since Bochs is not only incredibly slow, but also verbose when it comes to the APIC, MSRs etc., which usually are more like black boxes. Bochs verified that the pointer is indeed overwritten, since it shows the same behaviour. It didn’t tell me why though, at least not out of the box.

I put the whole thing aside for days. I disabled the memory manager and just used whole pages for every allocation. I disabled reclaiming of memory and turned the free/delete functions into stubs. It somewhat worked, but still broke somewhere else. I rewrote the memory manager, which I had suspected of being broken for a long time already. It broke again.

Then I just decided to let Bochs trace all memory accesses, reads and writes. It took five minutes to get through GRUB and another two to get it to load the kernel and have it crash. I ended up with a 3 GB log file that took another two or so minutes to import into Sublime Text, and which made me glad I have an SSD and 16 GB of RAM in this laptop. It still took about 20 minutes to search the output for the address I was interested in, with Sublime Text hanging for a good 1-3 minutes when jumping around.

And then it clicked. The linear address 0x18008, the one that was getting overwritten, was previously mapped to 0x8008, the physical address that contains the SMP bootstrap location (i.e. the code that all non-bootstrap CPUs execute to be hoisted out of real mode, get into protected mode and then rendezvous with the Firedrake bootstrap CPU). The value at the physical address was 0x0. Later, 0x18008 is mapped to another location, but when I was rewriting the virtual memory interface, I forgot the code to invalidate the stale TLB entry when remapping virtual addresses. Writes were going to the new physical location, while reads were still served from the old one.

And that’s why no hardware breakpoints were helping and why the Bochs hardware watchpoints were useless. And I guess QEMU disables TLB simulation when GDB is attached, or something like that. Not that a GDB watchpoint would’ve helped; the memory was never actually overwritten in the first place, after all.

I feel incredibly stupid right now.

Printing with Wood and Metal on an Ultimaker 2

I've been printing lots of robots today on my Ultimaker 2, trying out different materials and figuring out what settings to use to print with them. I've used the special filaments from Colorfabb, who had the genius idea of mixing normal PLA filament with metal and wood to allow normal printers to print with different materials. And because pictures say more than 1000 words, or so I've heard, here is the result:

Print result

From left to right: Natural PLA, WoodFill, CopperFill, BronzeFill, GlowFill

General things I've noticed

All of these filaments are PLA based, although how much PLA is in them depends on the filament. So to get that out of the way: if you use these filaments, you are still technically printing with PLA. That being said, they don't feel like PLA! The WoodFill feels like wood, the CopperFill and BronzeFill feel like metal and have the right weight to them. It's amazing! The idea behind these is absolutely genius.

Another thing I've noticed is that, even though they are all PLA mixes, printing with them is different from PLA, and each requires different settings, both in the slicer and on the printer. Trying to print them like normal PLA generally does not work at all. Luckily though, Colorfabb has pre-made profiles for Cura, so at least the slicer settings are easy to get right (I'll link to them individually).


WoodFill

WoodFill was the trickiest to get printing properly. I had to try various things, because it's trivially easy to get it to clog up the nozzle. Also worth noting: Colorfabb used to sell WoodFill and now sells WoodFill Fine. The difference is that the fine one works with 0.4mm nozzles like the one found in the Ultimaker 2, but it also requires a heated bed (also found on the Ultimaker 2).

I ended up getting it to print reliably with a print speed of 70mm/s at 205°C with a flow rate of 105%. The trick is to keep the flow rate high; the Colorfabb guys also gave me the tip of increasing the layer height to 0.27mm to increase the flow even further. Getting it to print very fine details is almost impossible with this material, and it doesn't like sticking to itself or to the build plate. It's a brute force kind of material: squeeze it out, press it onto the existing structure and just pray. And don't print too slowly or decrease the flow rate, because then it will clog up the nozzle. As for build plate adhesion, glue works wonders here.

The reward for all the hassle is a great print! Seriously, even though it lacks very fine details, it just looks good. The seams that you get when printing with such big layer heights actually add to the wooden look, and it feels just like wood. Also, while printing, the room will smell like a wood workshop. Depending on you, that can be a plus or a negative though.

The profile can be found here.


CopperFill

CopperFill is the easiest to handle, right after stock PLA. You load the profile into Cura, set the print temperature to 200°C and print at ~50mm/s. CopperFill gives you much more leeway in terms of print speed though, and I haven't got it to clog up the nozzle at all.

There actually is not much more to say; the print will have a reddish colour, and it feels like metal and has just the right weight to it. It's a terrible conductor though, so not exactly usable for printing wires.

You really want to post-process the CopperFill though, so get some sandpaper and grind it down. Or do it like me, and do it partially, only to then decide that this is hard work and not fun at all. But if you do, be careful of the fine details: the material is kind of soft, and though it hardens a bit after the print, it's still easy to sand your details away. And if the structure is thin, it's easy to just break it off by accident.

The profile can be found here.


BronzeFill

As bronze is just a copper alloy, I figured it would print pretty much just like the CopperFill does. It does not. For starters, the temperature needs to be higher; I achieved the best results at 210°C, with the print bed heated to just 55°C. Print speed was again ~50mm/s, and just like the CopperFill, it has some leeway where it still looks good. Also noteworthy is that the BronzeFill really doesn't like to stay a strand and instead likes to drip out of the nozzle, especially when changing the filament and when the printer warms up and squeezes out the first bits of filament before starting to print.

Secondly, the smell is just awful. It doesn't linger and goes away fast, but it just doesn't smell good. Post-processing is pretty much the same as with the CopperFill: if you want it to shine, get some sandpaper. Or don't, because again, it's hard work and your details will suddenly be gone.

Edit: A thing I wanted to mention but forgot: the BronzeFill is a bitch to clean! If you change the filament, you'll have to print quite a bit to get it all out, and that's especially noticeable on lighter coloured filaments.
The profile can be found here.


GlowFill

This is just PLA. Print it like PLA. But it's so fucking cool, so I included it anyway. And I also included a picture of just the GlowFill:

GlowFill in the dark

Perfect for Halloween :)
Edit: It is aggressive to the nozzle though and wears it down quickly, so you may want to keep a spare one handy if you want to print a lot with GlowFill.