Setting up my infrastructure – Part 3: Bug tracker requirements

In my last blog post I explained why I want to automate as much as possible, and that I want a build server for this. The next thing that’s really important to me is to log every issue and idea I have, and to keep myself organized throughout the project.

For that I’m going to evaluate some tools, and before you start downloading and installing all sorts of tools to test them, you should know what you’re looking for.

So let’s start:
My personal requirements for a bug / task tracking tool:

  • Easy input of tasks / bugs / features / ideas
    I don’t want to spend a lot of time on ‘managing’ my task manager, and I certainly don’t want to overmanage myself.
  • Work log / spend time tracking
    Even though it’s a pet project, I’d like to see how much time a certain feature has cost. Additionally, if I can see my estimates vs. reality, and I log the reasons why I needed less/more time, I can improve my estimates.
  • Change logs / Release Notes report
    I’d love to be able to generate my release notes out of the tasks that have been fixed for a certain release, so I have to maintain this information only in a single place. Like in an additional field of the task, where I just enter the information that should be visible in the release notes.
  • Documentation
    Not necessarily required in the bug tracker itself, but if I can note some technical details (e.g. in custom fields), then I know where to look for the information when building up the real project documentation.
  • Integration with VCS / Build server
    In a perfect world I would be able to see the related task(s) from a commit, the related commits from a task, the builds within the build server affected by those commits and the other way round: I would be able to open the corresponding tasks directly from the build in the build server.
  • An easy API

If the tool also supported a little bit of analysis / reporting, that’d be totally great. I already mentioned the release notes generation above, but time spent on features vs. bugs would also be an interesting figure for my project. The last bullet point would hopefully make up for all the things I would like but that are not supported out of the box. Nevertheless, I don’t want to lock myself into a specific tool, so writing custom stuff for a single solution would only be the last option, since that time is definitely lost when switching tools.

In the next post I want to share my thoughts on what a CI / build server should be capable of, before we start with the real evaluation.

Continue with the next part, or see the other parts in this series:

[contentblock id=infrastructurelinks]

Setting up my infrastructure – Part 2: Automate everything

This post is the second part in the series about my pet project, and this is about automating stuff.

As I already mentioned, I want my personal project to be automated as much as it makes sense. That starts with automatic builds whenever I check in new code, continues with automated testing (starting with unit tests, later integration tests and, last but not least, automated UI tests) and of course includes code coverage analysis while testing.

To be able to do the automated UI testing, I need the UI running somewhere to test against, so that will also include automated deployment to one or more test environments.

I also want to automate the process of creating beta and of course release builds. If I’m going to release binary packages to the public, I also need to package them up in some sort of installer or NuGet packages. So when I start with that stuff, I’m going to automate that too.

Some of you might ask why I’m so bought into automating all that stuff and putting so much work into the backing infrastructure when this is ‘just a pet project’.

Well, there is mainly one reason: all those tasks are in fact pretty tedious, like running the whole test suite before committing, building, packaging up and deploying the project, etc.

When I work on this project, I want to concentrate on the actual work: getting things to work the way I want them to, seeing what I just did, and not spending a lot of time on boring and error-prone tasks.

So I feel that every minute I spend on my infrastructure will pay off later in the project. That is also why I’m going to choose my environment carefully and evaluate some products. More on that in my next blog post, where I want to introduce the build servers and bug trackers in my first evaluation round. I also want to explain the important things those tools need to be capable of – of course from my personal point of view and for this very pet project.

Continue with the next part, or see the other parts in this series:

[contentblock id=infrastructurelinks]

Setting up my infrastructure – Part 1: Basic tools

For my new pet project I want to use good and efficient tooling. Since I want to create a tool for me and other .NET developers and I feel at home on this platform, I’m going to use C# for the project.

I have my personal MSDN Professional subscription, and so I use Visual Studio 2012 Professional for development. I add my personal ReSharper licence for productivity and I chose Windows 8 Professional as my development OS (in a VM on my MacBook Air). Being totally in the Microsoft .NET ecosystem I’m also going to use MSBuild and MSTest.

Update: Talking about VMs on my Mac, I use VMware Fusion for that. I also have VMware Workstation running on my home server for my build server virtualization, but that will be part of another post.

For source code versioning I chose Git. Mainly because I feel that even if Mercurial currently has better tooling support on Windows, Git is more mainstream and its tooling is getting better. As Git clients I currently use the GitHub client and of course the official Git command line client. I host my sources on Bitbucket from Atlassian. They give you private repositories for free, and since I invited some people, I can also collaborate with up to 3 others if I want, without the need to pay for a private shared repository.

Besides that, I of course have the usual .NET developer tools, like The Regulator for working with regular expressions, LINQPad for small test thingies and dotPeek as my decompiler.

Now, besides that, I need additional tooling to keep track of my tasks, so I need a bug / issue / task tracker. And I don’t want to build releases manually or do manual testing, so I will need some sort of automated build & test tooling, which leads me to a continuous integration / build server. Choosing the best tool here will take some time, so I started to evaluate different solutions. More on that in a separate post.

So the toolset for my pet project right now is:

Continue with the next part, or see the other parts in this series:

[contentblock id=infrastructurelinks]

My new pet project

As already announced on Twitter, this year is the year of my pet project.

I’m going to develop something and, hopefully, will be able to release it this year. I can’t tell you much about it at this stage, but the main idea is to create some developer tooling where I couldn’t find anything useful on the market up to now and of course to try out new things and stuff.

I also want to improve my personal process of development with some experiments during this project. The first will be to set up all required infrastructure I consider important for such a project.

During my efforts I want to inform you about tools and techniques I use during this experiment, so stay tuned for more.

The ‘Apache on OS X Mountain Lion’ problem

Whoa, I just dove into severe problems with the Apache web server on my MacBook Air, running OS X 10.8 – Mountain Lion.

In preparation for my sessions “JavaScript” and “HTML5” at the EKON 16 conference in November, I wanted to set up the web server that is in fact included in the OS X installation.

In previous versions of OS X there was a ‘Web Sharing’ option in the System Preferences, but this was removed in Mountain Lion. There are several posts out there in the wild showing how to manually enable Apache and PHP. I found this instruction on the intertubes and read (but did not exactly follow) it.

In my megalomania I went: “I already did that once, it can’t be that difficult now. Let’s go.” (Several years ago I set up Apache with PHP 3.something on a Linux system.)

So this is what I did:
I skipped the first part of the instructions (starting Apache) and went directly to the part where I enabled the user-specific /Sites directory. I set up the directory with all required options, allow/deny rules etc., and THEN I tried to start the web server.
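For illustration, the per-user config lives in a file like /etc/apache2/users/Sebastian.conf, and what I set up looked roughly like this (a sketch from memory in Apache 2.2 syntax; user name and paths are of course mine, adjust to your system):

```apache
# /etc/apache2/users/Sebastian.conf - per-user Sites configuration
<Directory "/Users/Sebastian/Sites/">
    Options Indexes MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>
```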

Guess what? It didn’t work.

So the next thing was to look at the error logs (remembering from my Linux days that error logs are a good hint on where to look for my stupid errors). Sadly, the error log folder was empty. So I double-checked the Apache config for an alternate log folder configuration – and found none.

An experienced Apache administrator would guess that something is wrong with the config file itself, so that Apache would not even know about a configured error folder to put its logs into – but I instead went searching for Apache startup problems: Port already taken? Nope. Wrong host name? Nope. It took quite a bit of trial and error until I found this blog post about troubleshooting Apache on Mountain Lion. It pointed me to this little command:

sudo bash -x /usr/sbin/apachectl -k start

Starting Apache this way prints the messages directly to the console – and so I could see where Apache failed to start. It was in the user-specific /Sites config. The problem was a simple typo in the closing tag (I had hacked in ‘diectory’).

So far, so good. Apache launched and my user website was reachable at http://localhost/~Sebastian – but it responded with a 403 ‘Forbidden’ every time I accessed the folder. Strangely enough, it delivered files when I navigated to them directly (e.g. localhost/~Sebastian/test.html).

In several further attempts I found out that the default configuration for the root folder disallows directory listings and does not allow overriding this, and also how to enable PHP (that commented-out entry was hidden very well).
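Roughly the pieces of /private/etc/apache2/httpd.conf involved, sketched from memory (your defaults may differ):

```apache
# PHP is disabled by default; the LoadModule line just needs
# its leading '#' removed:
LoadModule php5_module libexec/apache2/libphp5.so

# The global default is restrictive and forbids overrides, so
# directory listings have to be allowed explicitly per directory:
<Directory "/Users/Sebastian/Sites/">
    Options Indexes FollowSymLinks
</Directory>
```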

Struggling with Apache configs distributed across several locations, Unix command lines, nano and an extremely bloated Apache documentation, I am really grateful that I usually work with IIS in my day-to-day business. Gladly I don’t have to set up mod_mono on my MacBook Air (yet).

Ask a Ninja: Do I need TypeScript?

If the .NET Ninja had been asked this question, this would have been the answer:

A few days ago Anders Hejlsberg showed a new thing currently brewing in the Microsoft labs: TypeScript.

TypeScript is:

  • JavaScript
  • + some (optional) language extensions
  • + a compiler (more of an extractor, in fact) that removes the extensions and emits vanilla JavaScript

The compiler itself is also written in TypeScript, so it can be compiled down to pure JavaScript and run wherever JavaScript will run too.

So, now that we know that TypeScript is a mere superset on top of normal JavaScript – what is in these additions that could be interesting?

  • Strong typing
  • Classes
  • Interfaces
  • Simple inheritance
  • Modules

Well, in fact that’s pretty much it. With some annotations in Pascal style (that is, colon + type identifier) you can declare that a specific variable, function argument or function return value needs to be of a certain type.

var testFunc = function(arg1: string) { return "Argument was: " + arg1; };

Now the TypeScript compiler knows that only strings should be passed into the function assigned to testFunc. And it can infer from the input argument and the operation within the function that the return value must also be a string. Now when you try to pass e.g. a number into this function, the compiler will warn you about it, and the same goes when you try to add a number to the return value of this function.

And not only the compiler: the full IDE support in Visual Studio will also highlight this as a potential problem. The IDE is even smart enough to restrict your IntelliSense autocompletion to valid types only. These simple annotations are a big step towards making JavaScript a bit safer when working with different types.
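To also illustrate the classes / interfaces / inheritance bullets from the list above, here is a minimal sketch (the names are made up for this example, they come from no real demo):

```typescript
// An interface describes the shape an implementation has to fulfill
interface Greeter {
    greet(name: string): string;
}

class NinjaGreeter implements Greeter {
    // 'private greeting' declares and assigns a member in one go
    constructor(private greeting: string) { }

    greet(name: string): string {
        return this.greeting + ", " + name + "!";
    }
}

// Simple inheritance: extend a class and call into it via 'super'
class PoliteNinjaGreeter extends NinjaGreeter {
    greet(name: string): string {
        return super.greet(name) + " Nice to meet you.";
    }
}

var greeter: Greeter = new PoliteNinjaGreeter("Hello");
console.log(greeter.greet("Ninja")); // prints: Hello, Ninja! Nice to meet you.
```

All of this compiles down to plain JavaScript constructor functions and prototype chains – the interface disappears completely, since it only exists for the compiler.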

TypeScript also allows you to annotate external libraries like jQuery, Prototype, Qooxdoo etc., and it comes with some of them already pre-annotated to give you a head start.

The other interesting thing is that the way of modularizing the scripts sticks closely to what is currently proposed for the next ECMAScript 6 standard. Of course this is only a specification draft for now, it will take some time to be finalized, and it is not certain that the spec will stay this way, but it is very likely that what you learn with TypeScript can later be used for vanilla JavaScript too.

Ask the Ninja: “So, do I *need* TypeScript?”
Ninja says:
Need as in totally and absolutely required? Of course not.

TypeScript is an addition to JavaScript that, used correctly, can help you avoid some nasty bugs. If you are a fan of strong typing and come to JavaScript from a strongly typed language on the .NET or Java platform, or even from Delphi, then TypeScript is targeted at you!

If you already are a happy JavaScript developer and make use of the dynamic typing features of the language, switch prototype chains on your objects as required and love applying and removing things at run-time, then there is nothing in TypeScript for you.

The Robocopy exit code and MSBuild postbuild error problem

The recent blog post of my friend Jeroen Pluimers about “Robocopy exit codes 0 and 1 usually mean success” reminded me of a problem I had with this fact a few years ago.

Robocopy behaves strangely…
Just to sum up his blog post again: Robocopy returns exit code 0 if it successfully had nothing to do (i.e. all target files were already up to date), and returns exit code 1 if it successfully copied something. Values larger than 1 hint at a potential problem, while values equal to or greater than 8 indicate failure. In fact, the return value is a bit mask, and only values with the 8 or 16 bit set indicate a real error situation.

… and doesn’t go well with MSBuild
Now, since exit codes other than 0 mean success and 0 is in fact pretty rare (it only indicates that nothing had to be done), robocopy is a very problematic thing to call in your MSBuild pre- and post-build targets.

MSBuild interprets the exit codes of the pre- and post-build calls, and if something is not okay, the whole build process errors out. And ‘not okay’ for MSBuild means any exit code other than zero. So when robocopy tells the caller “Hey, I copied your files and was successful in doing that: 1!”, MSBuild reads: “1? Oh darn, something went wrong. I need to abort”.

What to do about that?
To solve this problem, there are – like so often – many ways.

First of all, the not-so-elegant solution would be to wrap the call to robocopy in a custom batch file (.bat or .cmd). This in turn would call robocopy (just passing all arguments through one to one), examine the return code and, if robocopy indicates success (probably even with codes up to 7), call ‘exit 0’ itself. In any other case it could write a better description of the condition to the console and exit with 1 to indicate an error. See this table (also mentioned by Jeroen) for the list of conditions. This is not-so-elegant because you won’t get more detailed information about error conditions than what your script returns, and it is not very easy to configure. The even worse variant would be to hard-code the files to copy in your batch file, so that whenever your project changes in what to copy, you need to update the batch file.
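Such a wrapper could look roughly like this (a sketch, the file name is made up; ‘if errorlevel 8’ in batch means “exit code 8 or higher”):

```bat
@echo off
rem robocopy-wrapper.cmd - pass all arguments through to robocopy
rem and map its exit codes to what MSBuild expects:
rem 0 = success, 1 = failure.
robocopy %*
if errorlevel 8 (
    echo robocopy failed with exit code %ERRORLEVEL%
    exit /b 1
)
rem Exit codes 0 to 7 mean success (possibly with extra/mismatched files)
exit /b 0
```

In the post-build event you would then call the wrapper instead of robocopy directly.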

Then there is the question whether calling the external robocopy is required at all. You can do a lot with the normal Copy task that ships with MSBuild. It allows you to copy multiple files to multiple locations, create hardlinks instead of copying, skip files whose destination is already up to date, and a lot more. You can also control the Copy task much more easily using simple item groups and properties in MSBuild, so that you could simply add files to your copy step by adding some metadata to your project files.
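A sketch of what that could look like in a project file (the item and property names here are invented):

```xml
<!-- Collect the files to deploy via an item group with wildcards -->
<ItemGroup>
  <DeployFiles Include="$(OutputPath)**\*.dll;$(OutputPath)**\*.config" />
</ItemGroup>

<!-- Copy them after the build, preserving the folder structure -->
<Target Name="DeployOutput" AfterTargets="Build">
  <Copy SourceFiles="@(DeployFiles)"
        DestinationFolder="$(DeployDir)\%(RecursiveDir)"
        SkipUnchangedFiles="true" />
</Target>
```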

If you really, really need to rely on robocopy because you need its even more advanced features, then there is a full-featured MSBuild task wrapping the robocopy calls in an MSBuild-ish way in the MSBuild Extension Pack.

Conclusion
As so often, a task that seems so easy brings up some unexpected problems and strange error messages. Besides the three options mentioned above, there are of course a lot of other possible ways to achieve what you need to copy from your MSBuild project. This post is merely an explanation of the root cause of those problems and a hint at what you can do to avoid the issues arising from the strange exit-code behaviour of robocopy.