Wednesday, 4 October 2017

Automated installation of VS 2017 build tools

Visual Studio 2017 has re-done the whole installation procedure, with the goal of making what used to be very painful - preparing a UI-less build agent image for automated .NET builds - nice and simple. And fast. Well, it's not quite there yet. So as I was reading Chris' post on building AMI images for TeamCity build agents with Packer I was nodding along until I came to the bit where VS 2015 tools get installed. What about current tooling?

I wouldn't recommend using Chocolatey to install it, unfortunately, even though a package is available. The new installer has a nasty habit of exiting silently (or hanging) if something is amiss, and you'll want to be able to choose your VS workload packages, which Chocolatey doesn't support.

What can fail? The installer tends to abort if any of the folders it tries to create already exists. That's why you're likely to have more luck if you don't install the .NET 4.7 framework separately - likewise, any .targets files or task DLLs that the installer doesn't yet provide should be scripted for installation afterwards, not before. It took me a whole day to figure this out.

The command line parameters for the installer aren't too obvious either. "--wait" doesn't actually wait unless you wrap the call in a PowerShell script. "--quiet" prints no diagnostics (well, duh), while "--passive" displays a UI - there's no option for "print errors to the command line". If you're struggling, you'll end up re-creating your test VMs and switching between multiple runs of "--passive" and "--quiet" to see if things finally work. Oh, and the download link isn't easy to find either (seriously, it seems to be completely missing from the documentation - thankfully, StackOverflow helps). And getting the parameters in the wrong order ends up with the installer hanging.

The short PowerShell script that finally worked for me is:

# download the Build Tools bootstrapper
$Url = 'https://aka.ms/vs/15/release/vs_buildtools.exe'
$Exe = "vs_buildtools.exe"
$Dest = "c:\tmp\" + $Exe
$client = new-object System.Net.WebClient
$client.DownloadFile($Url,$Dest)
# workloads and components to install; --quiet keeps the UI away,
# --wait only behaves when combined with Start-Process -Wait
$Params = "--add Microsoft.VisualStudio.Workload.MSBuildTools `
--add Microsoft.VisualStudio.Workload.WebBuildTools `
--add Microsoft.Net.Component.4.7.SDK `
--add Microsoft.Net.Component.4.7.TargetingPack `
--add Microsoft.Net.ComponentGroup.4.7.DeveloperTools `
--quiet --wait"
Start-Process $Dest -ArgumentList $Params -Wait
Remove-Item $Dest
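
If you want the image build to fail fast when the installer itself fails, a small variation captures the process object and checks its exit code. A minimal sketch, assuming the usual installer convention that 0 means success and 3010 means success-but-reboot-required (verify against the current docs):

$proc = Start-Process $Dest -ArgumentList $Params -Wait -PassThru
if (($proc.ExitCode -ne 0) -and ($proc.ExitCode -ne 3010)) {
    throw "vs_buildtools.exe failed with exit code $($proc.ExitCode)"
}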

Is it faster than the VS 2015 installation? Not really: the old one had an offline version you could pre-load, while the new one is completely online (if you re-run it you'll get newer components!). And with VS 2015, a t2.micro instance was enough to run the AMI creation job - this one needs a t2.medium to finish the installation in a reasonable amount of time. At least it includes most of the things that were missing before (still waiting for dotnetcore-2.0 to be included).

Monday, 31 March 2014

Using zram for memory compression on Gentoo

After reading an excellent LWN article about memory compression in the Linux kernel and learning from a Google engineer that they employ zram to increase the available memory on their workstations (on top of the 48 GB of physical RAM already installed ...), I've decided to give it a go. There are currently three different memory-compression approaches being trialled in the Linux kernel; of those, zram is the simplest but also the most mature - it's battle tested, as it is enabled by default e.g. on Google Chromebooks, and it's also available as an option in Android 4.4.

zram works by presenting itself to the kernel as a swap device, while it is in fact backed by RAM. It has a fixed compression ratio of 50% (or, to be more exact, swapped-out pages are either stored two-to-one per actual RAM page used, or one-to-one if for some reason they don't compress). This simplifies access, keeping page offsets predictable. A recommended configuration reserves up to 100% of physical RAM for compressed swap - this memory is only taken as needed and is released back when the memory pressure subsides. That sizing also assumes the pessimistic scenario of incompressible pages - in practice, the zram devices should not take much more than 50% of their advertised capacity, allowing roughly a 150% memory load before swapping to disk would need to occur.
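
If you just want to try it out by hand before making the configuration below permanent, a quick session could look roughly like this (a sketch; the sysfs attribute and module option are the standard zram interface):

modprobe zram num_devices=1
echo 2048M > /sys/block/zram0/disksize    # advertised, uncompressed size
mkswap /dev/zram0
swapon -p 16383 /dev/zram0                # higher priority than disk swap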

Configuration starts with enabling the kernel module:

Device Drivers  --->
    [*] Staging drivers --->
        <M> Compressed RAM block device support

This is done as a module, so that configuration can be easily changed via /etc/modprobe.d/zram.conf:

options zram num_devices=4

I've got the module set to auto-load via /etc/modules-load.d/zram.conf containing just a single line:

zram

Also needed is an entry for udev telling it how to handle zram block devices and setting their size (in /etc/udev/rules.d/10-zram.rules):

KERNEL=="zram[0-9]*", SUBSYSTEM=="block", DRIVER=="", ACTION=="add", ATTR{disksize}=="0", ATTR{disksize}="2048M", RUN+="/sbin/mkswap $env{DEVNAME}"

And the last step is an /etc/fstab entry so that those block devices are actually used:

/dev/zram0  none  swap  sw,pri=16383  0 0
/dev/zram1  none  swap  sw,pri=16383  0 0
/dev/zram2  none  swap  sw,pri=16383  0 0
/dev/zram3  none  swap  sw,pri=16383  0 0
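
After a reboot (or after loading the module and re-triggering udev by hand), the devices should show up as active swap, and sysfs shows how well the data is compressing (attribute names as of the 3.x kernels this was written against):

swapon -s                              # each zram device listed with pri=16383
cat /sys/block/zram0/orig_data_size    # bytes stored, before compression
cat /sys/block/zram0/compr_data_size   # bytes actually occupied in RAM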

I've seen guides recommending creating ext4 volumes on zram devices for temporary folders. I would not advise that. Instead, create a standard tmpfs volume with the required capacity - it performs better, and memory that is no longer used is released straight back to the kernel, while the zram swap devices still pick up any pages that do get swapped out.
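
For example, an /etc/fstab entry for such a tmpfs /tmp (the 4G cap is just an illustrative value - pick what fits your workload):

tmpfs  /tmp  tmpfs  size=4G,mode=1777  0 0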

I've been using this setup since November and haven't had any issues with it. I highly recommend enabling this on your workstation as well - after all, there's no such thing as too much RAM.

Thursday, 20 June 2013

Foreign transfers to Poland and currency exchange

And now for something unrelated to programming. If you work abroad or are paying off a foreign-currency loan (euros/Swiss francs/etc.), this topic is probably familiar: you have money in an account in another country (or just in a foreign currency) and you need to transfer it to Poland or convert it, while of course paying as little commission as possible. I've already tried a few variants, so I'll briefly describe the method I currently use, plus a couple of alternatives.

Tons of money by Paul Falardeau

Let me start with the simplest situation, a foreign-currency loan: since September 2011, banks operating in Poland have been required to provide customers, free of charge, with a technical account that lets them pay their instalments without paying the spread. And that spread, depending on the bank, could reach as much as 6%. Yes, that's how much extra you hand over to your bank if you let it exchange the currency for you. Two years ago, when the KNF forced these changes on the banks, the internet filled up with online currency exchanges that let you swap money at close to the market rate, with a minimal commission. Of the ones I've tried, the best offer (probably thanks to having the highest turnover) is Walutomat, founded in Poznań by former Allegro employees. The company is registered as a currency exchange and is subject to the same Ministry of Finance oversight as physical exchange points, which for me is sufficient reassurance. The exchange commission is 0.2%, and deposits and withdrawals to most large banks are free. If you use a bank where Walutomat has its own account, the whole exchange usually takes about 4 hours, most of which is waiting for the bank to register the incoming transfer. The service confirms all operations with SMS codes and can notify you the same way when orders are executed and transfers arrive. The only drawback is that it doesn't handle international transfers (a KNF requirement).

If you often shop on foreign websites, it's worth getting a payment card attached to a foreign-currency account (Alior offers one, for example) and doing the currency conversion yourself instead of leaving it to Visa/Mastercard (around 4% commission).

Variant two: you work outside Poland, but within the euro zone. The situation is basically the same as in variant one, because thanks to SEPA transfers within the EU are free (in practice - the regulations only require that they be "no more expensive than domestic ones"). You'll need a foreign-currency account at a Polish bank (any decent one offers these for free). And that's all there is to it - a SEPA transfer should be booked the next business day, which sometimes leads to absurd situations, because it can arrive faster than a domestic one (Ireland, ahem). As for currency exchange, again I'd point to Walutomat; I haven't found a cheaper alternative. Definitely do not send a foreign-currency transfer to an account held in złoty, because the bank will charge up to 10% for the conversion.

Things get most interesting (read: most annoying) when you work outside the euro zone, e.g. in the United Kingdom. Splendid isolation and all that. International transfers from the UK are expensive, around £20 (or more, depending on the bank). WBK offers cheaper transfers (£2.50), but with low limits (£750) and the need to carry cash to the post office - a lot of hassle. On the other hand, if you earn really well, Citi offers free transfers between its branches in any country - but it charges a lot for an account that doesn't show sufficient monthly inflows (£1,800 plus 2 Direct Debits in the UK / 5,000 zł in Poland). The most convenient solution I've found so far is TransferWise (link with my referral ID, first exchange commission-free). The regular commission is higher than Walutomat's, at 0.5% (minimum £1), but the lack of a transfer fee makes it much more attractive. A quick calculation shows that below roughly £6,500 per transaction TransferWise works out cheaper (assuming £20 for the transfer). In theory an exchange can take up to 4 business days; mine have so far gone through in about 4 hours (from the transfer in the UK to the money arriving in my Polish account). The company was founded by Skype's first employee; they're based in Shoreditch, the breeding ground of London start-ups, and they're registered with the UK's Financial Services Authority as an international money-transfer intermediary. Since June they've also offered an alternative for collecting payments (PayPal-style services) called GetPaid.

Summing all these options up in one paragraph: if you need to exchange currency within Poland, use Walutomat. If you want to move money to or from the United Kingdom or the United States, use TransferWise.

Sunday, 19 May 2013

SquashFS Portage tree saving time and space

Gentoo Portage, as a package manager, has the annoying side effect of using quite a lot of disk space and being, in general, slow. As I was looking to reduce the number of small file writes that emerge --sync inflicts on my SSD, I came back to an old and dusty trick - keeping your Portage tree as a SquashFS file. It's much faster than the standard setup and uses less disk space (76MB vs almost 400MB). Interested? Then read on!

Squashes by Olivia Bee

Requirements:

  • SquashFS enabled in the kernel: File systems -> Miscellaneous filesystems -> <M> SquashFS 4.0 - Squashed file system support and [*] Include support for ZLIB compressed file systems
  • Installed sys-fs/squashfs-tools
  • Distfiles moved out of the portage tree, e.g. (in /etc/portage/make.conf): DISTDIR="/var/squashed/distfiles"

I'm also assuming that your /tmp folder is mounted as tmpfs (in-memory temporary file system) since one of the goals of this exercise is limiting the amount of writes emerge --sync inflicts on the SSD. You are using an SSD, right?

You will need an entry in /etc/fstab for /usr/portage:

/var/squashed/portage /usr/portage squashfs ro,noauto,x-systemd.automount 0 0

This uses a squashed portage tree stored as a file named /var/squashed/portage. If you are not using systemd then replace ro,noauto,x-systemd.automount with just ro,defaults.

Now execute mv /usr/portage/ /tmp/ and you are ready to start using the update script. Ah yes, forgot about this part! Here it is:

#!/bin/bash
# grab default portage settings
source /etc/portage/make.conf
# make a read-write copy of the tree
cp -a /usr/portage /tmp/portage
umount /usr/portage
# standard sync
rsync -avz --delete $SYNC /tmp/portage && rm /var/squashed/portage
mksquashfs /tmp/portage /var/squashed/portage && rm -r /tmp/portage
mount /usr/portage
# the following two are optional
eix-update
emerge -avuDN system world

And that's it. Since SquashFS is read-only, this script first makes a writeable copy of the tree (in theory this is doable with UnionFS as well, but all I managed to achieve with it were random kernel panics), then updates the copy through rsync and rebuilds the squashed file. Make sure you have a fast rsync mirror configured.
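
For reference, the /etc/portage/make.conf variables the script relies on could look something like this (the mirror URL is only an example - use whichever is fastest for you):

SYNC="rsync://rsync.europe.gentoo.org/gentoo-portage"
DISTDIR="/var/squashed/distfiles"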

For me, this decreased the on-disk space usage of the Portage tree from over 400MB to 76MB, cut the sync time at least in half and made all emerge/eix/equery operations much faster. The memory usage of a mounted tree will be about 80MB; if you really want to conserve RAM you can just call umount /usr/portage when you no longer need it.

Monday, 8 April 2013

Designing a public API

Disclaimer: a bit over a month ago I joined the API team at 7digital. This post was actually written before that and has stayed in edit mode far too long. I've decided not to update it, but instead to publish it as it is, with the intention of writing a follow-up once I have enough new interesting insights to share.

The greatest example I know of what comes from a good internal API is Amazon. If you're not familiar with the story the way Steve Yegge (now at Google) told it, I recommend you read the full version (mirrored, the original was removed). It's a massive post that was meant for internal circulation but went public by mistake. There's also a good summary available (still a long read). Steve followed up with an explanation after Google's PR division learnt that he had released his internal memo in public. If you're looking for an abridged version, there's a good re-cap available at API Evangelist.

motorway at night by Haxonite

I'd recommend you read all of those, but preferably not right now (apart from the 2-minute re-cap in the last link), as it would take the better part of an hour. A single-sentence summary: you won't get an API - a platform others can use - without using it first yourself, because a good API can't, unfortunately, be engineered up front; it has to grow.

So to start down the service-oriented route, you first have to take a look at how the various services your company already has interact with each other. I am sure you will find at least one core platform, even if it's not recognised as such, with existing internal consumers. It's probably a mess. Talk to anyone who worked on those projects (probably most programmers) and you'll hear plenty of cautionary tales about how an API can go wrong, especially when you try to plan it all up front and don't get everything right. And you won't - it's just not possible.

Some of the lessons I've learnt so far (I'm sure others can add more!):

  1. You need to publish your API.

    The last team I was with did this, sort of - we had NuGet packages (it's a .NET API, not a web one, OK?). Still, those packages contain the actual implementation, not just the surface interfaces, so they are prone to breaking. And they expose much more than is actually needed or should be used, so a whole lot of code is frozen in place (see 2.).
  2. You need a deprecation mechanism.

    Your API will change. Projects change, and the API needs to reflect this. It's easy to add (see the next point), but how do you remove? Consumers of the API don't update the definition packages; we've had cases where removing a call that had been marked as [Obsolete] for over a year broke existing code (see the sketch after this list).
  3. You need to listen to feedback from consumers.

    Internal consumers are the best, because you can chat with them in person. That's the theory, at least - I've seen project teams not talk to each other, and it becomes a huge problem. Because of problems and gaps in the API, we had projects doing terrible things like reading straight from another project's database or, even worse, modifying it. This won't (hopefully) happen with an external consumer, but if the other team prefers to muck around in your DB instead of asking for the API endpoint they need, you don't have a working API.
  4. Your API needs to perform.

    Part of the reason for the problems mentioned in 3. is that our API was slow at times. There were no bulk read/update methods (crucial for performance when working with large sets of items); we had bulk notification in the form of NServiceBus queues, but it had performance problems as well. If the API is not fast enough for what it's needed for, it won't be used - it's that simple.
  5. You need to know how your API is used.

    This last point is probably the most important. You won't know what you can remove (see 2.) or what is too slow (see 4.) if you don't have some kind of usage and performance measurement. Talk to your Systems team - I'm sure they will be happy to suggest a monitoring tool they already use themselves (and they are the most important users of your reporting endpoints). For Windows services, Performance Counters are a good start; most administrators should already be familiar with them. Make sure those reports are visible, and set up automatic alarms for warning conditions (if it's critical, it's already too late to act). Part of this is also having tests that mirror actual usage patterns (we had public interfaces that weren't referenced in tests at all) - if a public feature does not have an automated test then forget about it, it might as well not exist. Well, unless your idea of a test is "we deleted an unused feature and a month later found out another project broke" (see 2.).
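
To make the deprecation mechanism from point 2 concrete, here's a minimal C# sketch of the pattern - the type and member names are made up for illustration, not taken from any real package:

using System;
using System.Collections.Generic;

public sealed class Track
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public interface ICatalogueApi
{
    // The preferred call - a bulk read also helps with point 4.
    IReadOnlyList<Track> GetTracks(IEnumerable<int> ids);

    // Kept for existing consumers; the message tells them where to migrate.
    // Switching to [Obsolete("...", error: true)] later turns any remaining usage
    // into a compile error - but only for consumers who actually update the package.
    [Obsolete("Use GetTracks(ids) instead; this method will be removed in the next major version.")]
    Track GetTrack(int id);
}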

In summary, the shortest (although still long!) path to a useful public API is to use it internally. Consumers with a quick feedback cycle are required to create and maintain a service-oriented architecture, and there's no faster feedback than walking to your neighbour's desk.

Sunday, 16 December 2012

SSD, GPT, EFI, TLA, OMG!

I finally bought an SSD, so I took the drive change as an excuse to try out some other nifty new technologies as well: UEFI and GPT. Getting them to work (along with dual-booting two operating systems - Gentoo + Windows 7) wasn't trivial, so I'll describe what was required to get it all humming nicely.

The hardware part was easy. The laptop I have came with a 1TB 5.4k extra-slow hard drive plugged into its only SATA-3.0 port, but that's not a problem. There's another SATA-2.0 port, dedicated to a DVD drive - why would anyone need that? I've replaced the main drive with a fast Intel SSD (450MBps write, 500MBps read, 22.5K IOPS - seriously, they've become so cheap that if you're not using one you must be some kind of masochist who likes to stare blankly at the screen waiting for hard drive LEDs to blink), ordered a "Hard Drive Caddy" off eBay ($9 including postage, although it took 24 days to arrive from Hong Kong) and started the system installation.

HDD and SSD on an open laptop

Non-chronologically, but sticking to the hardware topic: the optical drive replacement caddy comes in three different sizes (for slot drives / slim 9.5mm / standard 12.7mm) and that's pretty much the only thing you have to check before you order one. Connectors and even the masking plastic bits are standardised, so the replacement operation is painless. The caddy itself weighs about 35g (as much as a small candy bar), so your laptop will end up a bit lighter than before.

DVD and an HDD in the caddy:

DVD and HDD in a replacement caddy

You'll want to remove the optical drive while it's ejected, as the release mechanism is electrical, and one of the two hooks holding the bezel is only accessible when the drive is open. I used a flat screwdriver to unhook it, but be careful, as the mask is quite flimsy and might break. Only a cosmetic problem, but still. Showing the hooks:

That's pretty much everything that's needed on the hardware side - now to the software. I was following a Superuser post, Make UEFI, GPT, Bootloader, SSD, USB, Linux and Windows work together, which describes the dual-boot installation procedure quite well. My first problem was that I couldn't get a UEFI boot to work from a DVD (when I still had it). I went for the easiest solution, an Ubuntu live USB, which managed to start in UEFI mode just fine.

There are quite a few "gotchas" here: you can't install a UEFI system if you're not already booted in UEFI mode (check the dmesg output for EFI messages). The starting payload needs to be 64-bit and reside on a FAT32 partition on a GPT disk (oversimplifying a bit, but those are the requirements if you want to dual-boot with Windows). A side note for inquiring minds: you'll also need a legal copy of Windows 7/8, as the pirate bootloaders require booting in BIOS mode. Oh, and your SATA controller needs to be set to AHCI mode, because otherwise TRIM commands won't reach your SSD and it will get slower and slower as it fills with unneeded (deleted, but not trimmed) data.
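
A quick way to confirm which mode the live system actually booted in (the sysfs check is the usual complement to grepping dmesg):

ls /sys/firmware/efi    # this directory only exists when booted via UEFI
dmesg | grep -i efi     # EFI messages in the kernel log confirm the same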

Once I had Ubuntu started, I proceeded with a mostly standard Gentoo installation. Make sure you do your GPT partitioning properly (see the Superuser post, although 100MB for the EFI boot partition might be too much - I have 16MB used on it and that's unlikely to change) and remember to mount that "extra" partition at /boot/efi before you install Grub2. The additional kernel options needed are listed on the Gentoo Wiki, and the Grub2 installation procedure for UEFI is documented there as well. Make sure that your Linux partitions are ext4 and have the discard option enabled.
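
The bootloader steps boil down to something like the following sketch - the device names are assumptions (adjust /dev/sda1 and the root entry to your own layout), and the grub2-* spelling is how Gentoo packaged Grub2 at the time (plain grub-install on some setups):

mount /dev/sda1 /boot/efi                 # the FAT32 EFI system partition
grub2-install --target=x86_64-efi --efi-directory=/boot/efi
grub2-mkconfig -o /boot/grub/grub.cfg

# example fstab line for an SSD-backed root with TRIM enabled
/dev/sda3  /  ext4  noatime,discard  0 1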

All of this resulted in my machine starting - from pressing the power button to logging into the Xfce desktop - in 13 seconds. Now it was time to break it by getting Windows installed. Again, the main hurdle proved to be starting the damn installer in UEFI mode (and you won't find out which mode it runs in until you try to install to a GPT disk and it refuses to continue because of unspecified errors). Finally I got it to work by using the USB stick I had created for Ubuntu, replacing all of the files on the drive with the Windows installation DVD contents and extracting the Windows bootloader. That was the convoluted part, because a "normal" Windows USB key will only start in BIOS mode.

  • Using 7zip, open file sources/install.wim from the Windows installation DVD and extract \1\Windows\Boot\EFI\bootmgfw.efi from it.
  • On your bootable USB, copy the folder efi/microsoft/boot to efi/boot.
  • Now take the file you extracted and place it in efi/boot as bootx64.efi.

This gave me a USB key that starts the Windows installer in UEFI mode. You might want to disconnect the second drive (or just disable it) for the installation, as sometimes Windows decides to put its startup partition on the second drive.

With the Windows installation done, I went back to the Ubuntu live USB and restored Grub2. The last catch with the whole process is that, due to some bug, it won't auto-detect Windows, so you need an entry in the /etc/grub.d/40_custom file:

menuentry "Windows 7 UEFI/GPT" {
 insmod part_gpt
 insmod search_fs_uuid
 insmod chain
 search --fs-uuid --no-floppy --set=root 6387-1BA8
 chainloader ($root)/EFI/Microsoft/Boot/bootmgfw.efi
}

The 6387-1BA8 identifier is the EFI partition's UUID; you can easily find it by doing ls -l /dev/disk/by-uuid/.

Dual-booting is usually much more trouble than it's worth, but I did enjoy getting this all to work together. Still, probably not a thing for the faint of heart ;-) I also have to admit that after two weeks I no longer notice how quick boot and application start-up are (Visual Studio 2012 takes less than a second to launch with a medium-size solution - it's too fast to measure in practice); it's just that every non-SSD computer now feels glacially slow.

In summary: why are you still wasting your time using a hard drive instead of an SSD? Replace your optical drive with a large HDD for data and put your operating system and programs on a fast SSD. The hardware upgrade is really straightforward to do!

Sunday, 30 September 2012

Handling native API in a managed application

Although Windows 8 and .NET 4.5 have already been released, bringing WinRT with them and promising the end of P/Invoke magic, there's still a lot of time left until programmers can really depend on that. For now, the most widely available way to interact with the underlying operating system from a C# application, when the framework doesn't suffice, remains P/Invoking the Win32 API. In this post I describe my attempt to wrap an interesting part of that API for managed use, pointing out several possible pitfalls.

rusted gears

Let's start with a disclaimer: almost everything you need from your .NET application is doable in clean, managed C# (or Visual Basic, or F#). There's usually no need to descend into P/Invoke realms, so please consider again whether you really have to break from the safe (and predictable) world of the Framework.

Now take a look at one of the use cases where the Framework does not deliver the necessary tooling: I have an application starting several child processes, which may in turn start other processes as well, over which I have no control. But I still need to turn the whole application off, even when one of the grandchild processes breaks in a bad way and stops responding. (If this is really your problem, then take a look at KillUtil.cs from CruiseControl.NET, as that is ultimately what I had to do.)

There is a very nice mechanism for managing child processes in Windows, called Job Objects. I found several partial attempts at wrapping it into a managed API, but nothing that really fitted my purpose. The entry point for grouping processes into jobs is the CreateJobObject function. This is a typical Win32 API call, requiring a structure and a string as parameters. Also, the meaning of the parameters might change depending on their values. Not really programmer-friendly. There are a couple of articles on how the native types map onto .NET constructs, but it's usually fastest to take a look at PInvoke.net and write your code based on the samples there. Keep in mind that it's a wiki and the examples often contain errors.

What kind of errors? For one, they might not consider 32/64-bit compatibility. If that's important to you then be sure to compile your application in both versions - if your P/Invoke signatures aren't right you'll see some ugly heap corruption exceptions. Another thing often missing from the samples is error checking. Native functions do not throw exceptions; they return status codes and update the global error status, in a couple of different ways. Checking how a particular function communicates failure is probably the trickiest part of wrapping. For this particular method I ended up with the following signature:

[DllImport("kernel32", SetLastError = true, CharSet = CharSet.Auto)]
private static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

The static extern modifiers are required by the P/Invoke mechanism; private is a good practice - calling those methods requires a bit of special handling on the managed side as well. You might also have noticed that I omitted the .dll part of the library name - this doesn't matter on Windows, but Mono will substitute a suitable extension based on the operating system it's running on. For the error reporting to work, it's critical that the status is checked as soon as the method returns. Thus the full call is as follows:

IntPtr result = CreateJobObject(IntPtr.Zero, null);
if (result == IntPtr.Zero)
    throw new Win32Exception();

On failure, this will read the last reported error status and throw a descriptive exception.

Every class holding unmanaged resources should be IDisposable and also include proper cleanup in its finalizer. Since I'm only storing an IntPtr here I'll skip the finalizer, because I might not want the job group to be closed in some scenarios. In general that's a bad pattern - it would be better to have a parameter controlling the cleanup instead of "forgetting" the Dispose() call on purpose.

There's quite a lot of tedious set-up code involved in job group control that I won't be discussing in detail (it's at the end of this post if you're interested), but there are a couple of tricks I'd like to point out. First, and pointed out multiple times in the P/Invoke documentation (yet still missing from some samples), is the [StructLayout(LayoutKind.Sequential)] attribute, instructing the runtime to lay out your structures in memory exactly as they are declared in the file. Without it, padding might be applied or the members might even get reordered because of memory access optimisation, which would break your native calls in ways that are difficult to diagnose (especially if the size of the structure still matched).

As I mentioned before, Win32 API calls often vary the meaning of their parameters based on their values, in some cases expecting differently sized structures. When this happens, information on the size of the structure is also required. Instead of counting bytes manually, you can rely on Marshal.SizeOf(typeof(JobObjectExtendedLimitInformation)) to do this automatically.
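
As a small illustration, here is one of the structures that the extended limit information embeds, declared with the sequential layout and measured with Marshal.SizeOf (the field list is the standard Win32 IO_COUNTERS definition):

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct IoCounters
{
    public ulong ReadOperationCount;
    public ulong WriteOperationCount;
    public ulong OtherOperationCount;
    public ulong ReadTransferCount;
    public ulong WriteTransferCount;
    public ulong OtherTransferCount;
}

class SizeOfDemo
{
    static void Main()
    {
        // six 8-byte fields, no padding surprises: prints 48
        Console.WriteLine(Marshal.SizeOf(typeof(IoCounters)));
    }
}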

The third tip is that native flags are best represented as enum values and OR'ed/XOR'ed like normal .NET enums:

[Flags]
private enum LimitFlags : ushort
{
    JobObjectLimitKillOnJobClose = 0x00002000
}

Wrapping an unmanaged API often reveals other problems with its usage. In this case, the first problem was that Windows 7 uses Compatibility Mode when launching Visual Studio, which wraps it - and every program started by it - in a job object. Since a process can't (at least not in Windows 7) belong to multiple job objects, my new job assignment would fail and the code would never work inside a debugger. As usual, StackOverflow proved helpful in diagnosing and solving this problem.

However, my use case is still not fulfilled: if I add my main process to the job group, it will be terminated as well when I close the group. If I don't, then a child process might spin off children of its own before it is added to the group. In native code, this would be handled by creating the child process as suspended and resuming it only after it has been added to the job object. Unfortunately for me, it turns out that Process.Start performs a lot of additional set-up that would be much too time-consuming to replicate. Thus I had to go back to the simple KillUtil approach.

I've covered a couple of the most common problems with calling native methods from a managed application and presented some useful patterns that make working with them easier. The only part missing is the complete wrapper for the API in question:
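
A minimal sketch of one possible shape - the class and member names here are mine, while the DllImport signatures, struct layouts and the extended-limit info class value (9) are the standard documented ones:

using System;
using System.ComponentModel;
using System.Diagnostics;
using System.Runtime.InteropServices;

public sealed class ChildProcessJob : IDisposable
{
    // JOBOBJECTINFOCLASS value for JobObjectExtendedLimitInformation
    private const int ExtendedLimitInformationClass = 9;
    // JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
    private const uint JobObjectLimitKillOnJobClose = 0x00002000;

    [StructLayout(LayoutKind.Sequential)]
    private struct JobObjectBasicLimitInformation
    {
        public long PerProcessUserTimeLimit;
        public long PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize;
        public UIntPtr MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass;
        public uint SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct IoCounters
    {
        public ulong ReadOperationCount;
        public ulong WriteOperationCount;
        public ulong OtherOperationCount;
        public ulong ReadTransferCount;
        public ulong WriteTransferCount;
        public ulong OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct JobObjectExtendedLimitInformation
    {
        public JobObjectBasicLimitInformation BasicLimitInformation;
        public IoCounters IoInfo;
        public UIntPtr ProcessMemoryLimit;
        public UIntPtr JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed;
        public UIntPtr PeakJobMemoryUsed;
    }

    [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Auto)]
    private static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);
    [DllImport("kernel32", SetLastError = true)]
    private static extern bool SetInformationJobObject(IntPtr hJob, int infoClass, IntPtr lpInfo, uint cbLength);
    [DllImport("kernel32", SetLastError = true)]
    private static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);
    [DllImport("kernel32", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr hObject);

    private readonly IntPtr handle;

    public ChildProcessJob()
    {
        handle = CreateJobObject(IntPtr.Zero, null);
        if (handle == IntPtr.Zero)
            throw new Win32Exception();

        // ask Windows to kill every process in the job when the last job handle is closed
        var info = new JobObjectExtendedLimitInformation();
        info.BasicLimitInformation.LimitFlags = JobObjectLimitKillOnJobClose;

        int length = Marshal.SizeOf(typeof(JobObjectExtendedLimitInformation));
        IntPtr infoPtr = Marshal.AllocHGlobal(length);
        try
        {
            Marshal.StructureToPtr(info, infoPtr, false);
            if (!SetInformationJobObject(handle, ExtendedLimitInformationClass, infoPtr, (uint)length))
                throw new Win32Exception();
        }
        finally
        {
            Marshal.FreeHGlobal(infoPtr);
        }
    }

    public void AddProcess(Process process)
    {
        if (!AssignProcessToJobObject(handle, process.Handle))
            throw new Win32Exception();
    }

    public void Dispose()
    {
        // closing the handle terminates the whole tree thanks to the kill-on-close flag
        CloseHandle(handle);
    }
}

Used together with Process.Start, the idea is: create the job first, start the child, call AddProcess with it, and rely on Dispose (or process exit) to tear the whole tree down.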