Setting up my infrastructure – Part 6: The evaluation candidates

In this post I want to introduce the evaluation candidates for the bug tracker and the continuous integration software I’m going to use for my pet project.

Since I don't want to spend too much time on my infrastructure, I'm only going to check out two or three candidates for each. I already have a list of bug trackers I'm definitely not going to use, so I start by filtering those out first; the Wikipedia comparison sheet of issue tracking systems is also a good reference for excluding applications. I'm going to start from the bug tracker side and from there make an inner join on the available CI servers, with integration possibility as the join condition.

My filters

Again my little disclaimer: This is my personal list of filters for my personal pet project. They may or may not apply to your use case or requirement catalog.

First of all, I want to host the solution myself and not depend on someone else's infrastructure. Next, as already mentioned, I want at least a minimal integration of bug tracker and CI server, so any pair of tools that know nothing about each other is out of scope, as are tools that don't integrate with any CI server at all.

You already know that I use a Windows Server for hosting, and I don't want to mess around too much with my IIS, so I'd like to stick with solutions that are either ASP.NET or PHP applications, or that are not hosted in the IIS directly. Besides that, I don't want to manually administer an extra Apache on the system. I don't know enough about it, and I don't want to spend my time learning to manage another web server when I already know how to manage my IIS; administering web servers is not my main business. I'd rather spend my time learning things that really push my skills forward and make me more specialized.

When thinking about the database, I want to use either MySQL/MariaDB or Microsoft SQL Server Express. I know how to manage both, as well as Oracle (which I don't want to set up and keep running myself without the help of an experienced Oracle DBA), and learning to set up and run yet another database is not on my to-do list for now.

Those restrictions already strike out a lot of possible systems, and the next one will make the list even shorter: I don’t want to use something that is not commercially maintained. There are several reasons for that. If there’s a bug that itches me, I don’t want to hope that the community is going to fix it. In several open source projects the normal answer to a bug report is “where’s the pull request for the fix?”. I don’t want to dig into the code of my bug tracker to fix issues myself. I’m willing to pay for my tooling even if I try to keep expenses low.

This is the last filter: The software should be affordable for a one-person show and scale up to a small team of about five before it gets more expensive.

The candidates

After applying my filters to the list of available bug trackers, only a few are left over. These are all commercial solutions where I can rely on support. I then filtered a bit more for products from companies that I feel are well known in the developer community, so I can additionally rely on fast help via StackExchange.

First of all, something I tested some time ago and which is indeed good software for keeping track of your project and your to-dos is FogBugz, but the self-hosted edition is too expensive for me (the entry point is a 5-user licence at 999 USD).

As I already use a lot of stuff from Atlassian, it would be logical to check out their solution too, which would be Jira. It has the same 10 USD for 10 users entry point and integrates with Stash, FishEye and Crucible. That would make it a first-class citizen in my current environment. They also have a build server, Bamboo, that would fit in nicely too. So Jira and Bamboo are my first candidates for the evaluation.

Besides that, I already use tooling from JetBrains (ReSharper, DotPeek), and they also offer a bug tracking tool called YouTrack and a build server named TeamCity. For both tools JetBrains offers free licences that restrict either the number of users or the number of build configurations. With 10 users and my single project I would stay in the free licence area for both, and upgrades are affordable for larger teams: a 10-user YouTrack licence starts at 450€, and going to 25 users is a mere 225€ upgrade. The TeamCity upgrade is more expensive, but it is also possible and allowed to set up more than one free TeamCity instance if that should really become necessary. This seems like a good package, so they are in the evaluation too.

So far I am very happy with SourceTree, ReSharper and DotPeek, and I have a feeling that both companies can deliver decent bug tracking and continuous integration software for my needs. That's why I chose to stop picking candidates at this point. Evaluating four products is already a non-trivial task, and if both products in a category should fail, I can still choose other candidates to check.

Continue with the next part, or see the other parts in this series.

Setting up my infrastructure – Part 5: Additional tools, server and hosting

In this post I’m going to mention all the other necessary stuff for a project like mine.

Preamble: This is actually a spare-time, private thing, and as such I don't want to spend too much money on it. I also don't know (yet?) how long it will take, so I don't want to pay too much, especially not for subscriptions to services.

So, where to start? I think source control is the most important thing for a software project, so let’s go.

Source control

I chose Git. In the first infrastructure post I already mentioned some of my versioning tooling (which in fact has already changed since then). I have a lot of experience with SVN, not yet so much with Git, but as I already mentioned, from an adoption and acceptance point of view Git seems to be the new mainstream source control tool. It is powerful, it is cross-platform, and GUI client support is growing. My other alternative would be Mercurial (Hg), but despite its better Windows GUI clients, adoption is not as good, and I want to be able to ask questions on StackOverflow and get help quickly.

So, I already said I was using Bitbucket from Atlassian for hosting free private repositories. By now this is only partially correct: I decided to self-host my repositories and use Bitbucket as an additional off-site backup. Why is that? I don't want to fully depend on a single point of failure (Bitbucket). They host in the cloud, and we all saw that even the big cloud players like Amazon with EC2 and Microsoft with Azure can encounter large-scale problems. Even if Atlassian took every precautionary measure to keep their service available (which is probably not the case, given that a lot of people are only using the free tier), something really stupid like an expired certificate on the cloud side could render the service unavailable for hours or even days.

My idea is the following: I mainly work on my self-hosted repository. Whenever my build server has a new successful build, it will automatically push that to the Bitbucket repo. This way I have a copy of the repo on my dev machine, Bitbucket with the latest fully working state (since you commit and push often, that should never be far behind my local copy) and of course my self-hosted repo. That should be enough safety in case something happens to my notebook, my server or Atlassian.
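Just to sketch the push step (remote name and URL are placeholders, not my actual setup): on the build server, Bitbucket is simply a second remote that gets a push after every green build.

git remote add bitbucket git@bitbucket.org:someuser/petproject.git
git push bitbucket master

A mirror push (git push --mirror bitbucket) would even take all branches and tags along in one go.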

Speaking of Atlassian, they have this great Git client SourceTree for Mac. They recently announced opening up a beta test for SourceTree for Windows via Twitter. Guess what? I signed up 😉

You see, I use Bitbucket from them, I use SourceTree on the Mac from them, and I'm eager to get experience with their SourceTree for Windows. Atlassian is very present in my Git-centric versioning environment, which is why I also started to use their product Stash. Stash is basically Bitbucket on my own server: I can create repositories, manage permissions (okay, currently I'm the only user) and have it automatically manage my branches. And it is very cheap at 10 USD per year for 10 potential users. So if my project succeeds and my development team grows beyond 10, I will surely have the money to upgrade.

Source quality

Now that you know the tooling I use to store my sources and manage them on my server and my development machine, I want to introduce another tool I bought and installed, even if its usefulness is (currently) questionable. I bought FishEye and Crucible from Atlassian too. At 10 USD each it was not a real investment, and I feel that FishEye lets me keep control over my code more easily. It allows fast searching through all the project code (in 5 repositories for 10 users) and lets me browse through the history of my code in a convenient way. Crucible as a code review tool is probably not of much use for a one-man show, but perhaps later on somebody wants to join my efforts on this project and potentially participate in the revenues, if it becomes successful. Crucible is the only tool where the 10 USD covers only 5 instead of 10 users.

Hosting

For a long time I had a hosted Linux root server (dune) at Strato for 49€ / month. It used to run my email server (I completely switched to Gmail for my domain a few years ago), my first blogs, some home pages and the discussion forums for my guilds back when I was still playing. Besides that I had a very small Windows Server at 1&1 (smarthost), which I got for 14€ / month as a special offer during my studies. But it was not powerful enough to replace all the services on dune.

As I already posted, this blog (and almost everything else hosted on dune and smarthost) has now moved to Gallifrey. Gallifrey is a big Windows Server 2012 'Level 4' V-Server at Strato, with 4 virtual CPU cores, 4 GB of RAM and a 250 GB HDD. That is enough power to host those little web sites, my blog and my complete build environment. I ordered Gallifrey when there was a 6-month free offer, and it comes at 29€ / month. So I cancelled dune and smarthost, which will in fact save me about 34€ / month while at the same time offering more power.

Backup

As already mentioned, my sources will be automatically backed up to Bitbucket. By now, I have also put the sources of this blog and all other homepages into Git repositories, which are automatically backed up the same way. All databases are dumped on a regular basis and copied to both my home server and a cloud storage. The same goes for the working directories with config files and changing contents: they are copied to a backup location, zipped and transferred together with the database dumps. All of that is triggered by a scheduled task on the V-Server.
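In case you are curious: the scheduled task itself is not much more than a few command lines along these lines (a rough sketch; paths, user and password are placeholders, and the transfer to the off-site locations happens in a follow-up step):

mysqldump --all-databases --user=backup --password=***** > C:\Backup\dumps\mysql-all.sql
"C:\Program Files\7-Zip\7z.exe" a C:\Backup\nightly.zip C:\Backup\dumps C:\Sites\configs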

Summary

So the toolset for my pet project is right now:

  • Git for source control, self-hosted with Stash
  • Bitbucket as an automatic off-site repository backup
  • SourceTree (on the Mac, soon on Windows) and the Git command line as clients
  • FishEye for code search and history browsing, Crucible for code reviews
  • Gallifrey, the Windows Server 2012 V-Server at Strato, as hosting machine
  • scheduled backups of databases and working directories to my home server and a cloud storage

Update: Fixed some typos. Thanks Manuel 🙂

Continue with the next part, or see the other parts in this series.

Setting up my infrastructure – Part 4: Build server requirements

Okay, after the short delay I want to continue with my preliminary thoughts for the tooling evaluation for my pet project. I already mentioned my requirements for a task and bug tracking tool to coordinate my work and keep me on track.

Now the second important thing is to stay in control of the code I'm going to produce, driven by those tasks. For that I want a good suite of unit and integration tests. The next logical step is having the tests run at every check-in, so a CI / build server is required. This also automatically opens up the possibility of deploying certain parts of my project to test, staging and eventually production environments. So this is something I totally want to go for.

My personal requirements for a CI / build server:

  • Work fine with .NET environments
    I want to write something for .NET developers. As such, my pet project is a .NET project itself. I like to use the things that come in the same box as .NET (MSBuild, MSTest). Whatever tool I choose, it should support these out of the box.
  • Work fine with Git
    Okay, that probably is a no-brainer. Git is the new SVN (just talking about adoption here, please don't kill me for that comparison), and I assume all tools out there support Git to some degree.
  • Can build branches automagically
    That probably comes with 'fine' Git support. Whenever I create a new branch and push it to the repo the CI server builds from, this branch should be built automatically too. This way I know for sure everything is working before merging stuff.
  • Easy to setup
    I actually want to work on my project, not spend all my spare time getting my infrastructure up and running.
  • Integrate with my bug tracker
    As already mentioned in the previous post, a two-way integration of CI server and task tracking would be extremely cool, but it is not an absolute must-have.
  • Allow extensions with reports easily
    I'm thinking about code coverage analysis, running FxCop and/or StyleCop on the build server and having their reports displayed directly with the build report. Not from the very beginning, but such things should be possible.

So, that’s pretty much it for the CI / build server.

The next post will bring some light into the darker areas of the infrastructure part, like where to host my Git repositories and what additional tooling may be nice when working with the code. This will also raise some questions about hosting in general as well as setup and tooling on the server, which may affect the tools that I'm going to evaluate.

Continue with the next part, or see the other parts in this series.

Some delay with my pet project

Just a short update about my pet project and the evaluation posts:

I’m a little bit late with my project. The reason is that I had some high load periods at my job and I had to finish an article for a magazine.

Now with the high load managed and the article written, I'm fully committed to my project again and will continue with my tool evaluation. Be prepared to read more soon.

Setting up my infrastructure – Part 3: Bug tracker requirements

In my last blog post I explained why I want to automate as much as possible and why I want a build server for that. The next thing that's really important to me is to log every issue and idea I have and to keep myself organized throughout the project.

For that I’m going to evaluate some tools, and before you start with downloading and installing all sort of tools and testing them, you should know what you’re looking for.

So let's start:
My personal requirements for a bug / task tracking tool:

  • Easy input of tasks / bugs / features / ideas
    I don’t want to spend a lot of time on ‘managing’ my task manager, and I certainly don’t want to overmanage myself.
  • Work log / time spent tracking
    Even though it’s a pet project, I’d like to see how much time a certain feature has cost. Additionally, if I can see my estimates vs. reality, and I log the reasons why I needed less/more time, I can improve my estimates.
  • Change logs / Release Notes report
    I’d love to be able to generate my release notes out of the tasks that have been fixed for a certain release, so I have to maintain this information only in a single place. Like in an additional field of the task, where I just enter the information that should be visible in the release notes.
  • Documentation
    Not necessarily required in the bug tracker itself, but if I can note some technical details (e.g. in custom fields), then I know where to look for the information when building up the real project documentation.
  • Integration with VCS / Build server
    In a perfect world I would be able to see the related task(s) from a commit, the related commits from a task, the builds within the build server affected by those commits and the other way round: I would be able to open the corresponding tasks directly from the build in the build server.
  • An easy API

If the tool supported a little bit of analysis / reporting, that would be totally great. I already mentioned the release notes generation above, but time spent on features vs. bugs would also be an interesting figure for my project. The last bullet point would hopefully make up for all the things I would like that are not supported out of the box. Nevertheless, I don't want to lock myself into a specific tool, so writing custom stuff for a single solution is only a last resort, since that time is definitely lost when switching tools.

In the next post I want to share my thoughts on what a CI / build server should be capable of, before we start with the real evaluation.

Continue with the next part, or see the other parts in this series.

Setting up my infrastructure – Part 2: Automate everything

This post is the second part in the series about my pet project, and this is about automating stuff.

As I already mentioned, I want my personal project to be automated as much as it makes sense. That starts with automatic builds whenever I check in some new code, automated testing starting with unit testing, later integration testing and then, last but not least, automated UI testing and of course code coverage analysis while testing.

To be able to do the automated UI testing, I need the UI to run somewhere to test, so that will include automated deployment to one or more test environments.

I also want to automate the process of creating beta and of course release builds. If I’m going to release binary packages to the public, I also need to package them up in some sort of installer or NuGet packages. So when I start with that stuff, I’m going to automate that too.

Some of you might ask why I'm so bought into automating all that stuff and putting so much work into the backing infrastructure when this is 'just a pet project'.

Well, there is mainly one reason: All those tasks are in fact pretty tedious, like running the whole test suite before committing, building, packaging up and deploying the project etc.

When I work on this project, I want to concentrate on the actual work, like getting things to work the way I want them to and seeing what I just did, and not spend a lot of time on boring and also error-prone tasks.

So I feel that every minute I spend on my infrastructure will pay off later in the project. That is also why I'm going to choose my environment carefully and evaluate some products. More on that in my next blog post, where I want to introduce the build server and bug tracker requirements for my first evaluation round. I also want to explain the important things those tools need to be capable of – from my personal point of view, of course, and for this very pet project.

Continue with the next part, or see the other parts in this series.

Setting up my infrastructure – Part 1: Basic tools

For my new pet project I want to use good and efficient tooling. Since I want to create a tool for me and other .NET developers and I feel at home on this platform, I’m going to use C# for the project.

I have my personal MSDN Professional subscription, and so I use Visual Studio 2012 Professional for development. I add my personal ReSharper licence for productivity and I chose Windows 8 Professional as my development OS (in a VM on my MacBook Air). Being totally in the Microsoft .NET ecosystem I’m also going to use MSBuild and MSTest.

Update: Talking about VM on my Mac, I use VMWare Fusion for that. I also have VMware Workstation running on my home server for my build server virtualization, but that will be part of another post.

For source code versioning I chose Git. Mainly because I feel that even if Mercurial currently has better tooling support on Windows, Git is more mainstream and its tooling is getting better. As Git clients I currently use the GitHub client and of course the official Git command line client. I host my sources on Bitbucket from Atlassian. They give you private repositories for free, and since I invited some guys I can also collaborate with 3 others if I want, without the need to pay for a private shared repository.

Besides that, I of course have the usual .NET developer tools like The Regulator for working with regular expressions, LinqPad for small test thingies and DotPeek as my decompiler.

Now, besides that I need additional tooling to keep track of my tasks, so I need a bug / issue / task tracker. And I don't want to build releases manually or do manual testing, so I will need some sort of automatic build & test tooling, which leads me to a continuous integration / build server. Choosing which tool is best here will take some time, so I have started to evaluate different solutions. More on that in a separate post.

So the toolset for my pet project is right now:

  • Visual Studio 2012 Professional (from my personal MSDN subscription) with ReSharper
  • Windows 8 Professional as development OS, running in VMWare Fusion on my MacBook Air
  • MSBuild and MSTest
  • Git, with the GitHub client and the official command line client
  • Bitbucket for hosting my repositories
  • The Regulator, LinqPad and DotPeek as additional tools

Continue with the next part, or see the other parts in this series.

My new pet project

As already announced on Twitter, this year is the year of my pet project.

I’m going to develop something and, hopefully, will be able to release it this year. I can’t tell you much about it at this stage, but the main idea is to create some developer tooling where I couldn’t find anything useful on the market up to now and of course to try out new things and stuff.

I also want to improve my personal process of development with some experiments during this project. The first will be to set up all required infrastructure I consider important for such a project.

During my efforts I want to inform you about tools and techniques I use during this experiment, so stay tuned for more.

The ‘Apache on OS X Mountain Lion’ problem

Whoah, I just ran into severe problems with the Apache web server on my MacBook Air, running OS X 10.8 – Mountain Lion.

In preparation for my sessions "JavaScript" and "HTML5" at the EKON 16 conference in November, I wanted to set up the web server that is in fact included in the OS X installation.

In previous versions of OS X there was a 'Web sharing' option in the system preferences, but this was removed in Mountain Lion. There are several posts out in the wild showing how to manually enable Apache and PHP. I found this instruction on the intertubes and read (but did not exactly follow) it.

In my megalomania I went: "I already did that, it can't be that difficult now. Let's go." (Several years ago I set up Apache with PHP 3.something on a Linux system.)

So this is what I did:
I skipped the first part of the instructions (starting Apache) and went directly to the part where I enabled the user-specific /Sites directory. I set up the directory with all the required options, allow/deny rules etc., and THEN I tried to start the web server.

Guess what? It didn’t work.

So the next thing was to look for the error logs (remembering that I once had experience with Linux, where the error logs were a good hint on where to look for my stupid errors). Sadly, the error log folder was empty. So I double-checked the Apache config for alternate log folder configurations – and found none.

An experienced Apache administrator would guess that there is something wrong with the config file itself, so that Apache would not even know about a configured error folder to put its logs into – but I first had to search for general Apache start-up problems: Port already taken? Nope. Wrong host name? Nope. It took quite a bit of trial and error until I found this blog post about troubleshooting Apache on Mountain Lion. It pointed me to this little command:

sudo bash -x /usr/sbin/apachectl -k start

Starting Apache this way prints the messages directly onto the console – and so I could see where Apache failed to start. It was in the user-specific /Sites config. The problem was a simple typo in the closing tag (I had hacked in 'diectory').

So far, so good. Apache launched, and my user website worked at my http://localhost/~Sebastian URL – but it responded with a 403 – 'Forbidden' every time I accessed the folder itself. Strangely enough, it delivered files when I navigated to them directly (e.g. localhost/~Sebastian/test.html).

In several further attempts I found out that the default configuration for the root folder disallowed directory listings without allowing overrides of that setting, and also how to enable PHP (that commented-out entry was hidden very well).
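For reference, the relevant pieces looked roughly like this after all the fixes (reconstructed from memory, so take it as a sketch rather than a copy of my actual files):

# /etc/apache2/users/Sebastian.conf – note the closing tag I had mistyped
<Directory "/Users/Sebastian/Sites/">
    Options Indexes MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

# the well-hidden line in /etc/apache2/httpd.conf that enables PHP once uncommented
LoadModule php5_module libexec/apache2/libphp5.so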

Struggling with Apache configs spread across several locations, Unix command lines, nano and an extremely bloated Apache documentation, I am really grateful that I usually work with IIS in my day-to-day business. Gladly I don't have to set up mod_mono on my MacBook Air (yet).

Ask a Ninja: Do I need Typescript?

If the .NET Ninja had been asked this question, this would be the answer:

A few days ago Anders Hejlsberg showed a new thing currently brewing in the Microsoft labs: TypeScript.

TypeScript is:

  • JavaScript
  • + some (optional) language extensions
  • + a compiler (more of an extractor, in fact) that removes the extensions and emits vanilla JavaScript

The compiler itself is also written in TypeScript, so it can be compiled down to pure JavaScript and run wherever JavaScript will run too.

So, now that we know that TypeScript is a mere superset on top of normal JavaScript – what is in these additions that could be interesting?

  • Strong typing
  • Classes
  • Interfaces
  • Simple inheritance
  • Modules

Well, in fact that’s pretty much it. With some annotations in Pascal-Style (that is, colon + type identifier) you can define that a specific variable, function argument or function return value needs to be of a certain type.

var testFunc = function(arg1: string) { return "Argument was: " + arg1; };

Now the TypeScript compiler knows that only strings should be passed into the function assigned to testFunc. And it can infer from the input argument and the operation within the function that the return value must also be a string. When you try to pass e.g. a number into this function, the compiler will warn you about it, and the same goes when you try to use the string return value in an arithmetic operation.

Actually, not only the compiler but also the full IDE support in Visual Studio will highlight this as a potential problem. The IDE is also smart enough to restrict the Intellisense autocompletion to valid types only. These simple annotations go a long way toward making JavaScript a bit safer when working with different types.
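To illustrate with a few examples of my own (not from the presentation), all of the following is caught at compile time:

var ok = testFunc("world");        // fine, and 'ok' is inferred to be a string
var fails = testFunc(42);          // error: a number is not a string
var alsoFails = testFunc("x") * 2; // error: the string result can't be used in an arithmetic operation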

TypeScript also allows you to annotate external libraries like jQuery, Prototype, Qooxdoo etc., and it comes with some of them already pre-annotated to give you a head start.
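And if a declaration is missing, you can write one yourself. A heavily simplified, hypothetical ambient declaration for a jQuery-like $ might look like this:

declare var $: {
    (selector: string): any;
    ajax(url: string, settings?: any): void;
};

From then on, the compiler and Intellisense know what $ is supposed to accept.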

The other interesting thing is that the way of modularizing the scripts sticks closely to what is currently proposed to become the ECMAScript 6 standard. Of course this is only a specification draft for now and will take some time to be finalized, and it is not certain that the spec will stay this way, but it is very likely that what you learn with TypeScript can later be used for vanilla JavaScript too.
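To give you an idea of the class, interface and module syntax, here is a small sketch of my own (again, not code from the announcement):

module Dojo {
    export interface Fighter {
        name: string;
        attack(): string;
    }

    export class Ninja implements Fighter {
        // 'public name' declares and assigns the property in one go
        constructor(public name: string) { }
        attack() { return this.name + " strikes from the shadows"; }
    }
}

var hanzo = new Dojo.Ninja("Hanzo");
alert(hanzo.attack());

The compiler turns this into plain JavaScript constructor functions and wraps the module in an immediately invoked function expression.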

Ask the Ninja: "So, do I *need* TypeScript?"
Ninja says:
Need as in totally and absolutely required? Of course not.

TypeScript is an addition to JavaScript that, used correctly, can help you avoid some nasty bugs. That holds especially if you are a fan of strong typing and come to JavaScript from other strongly typed languages on the .NET or Java platform, or even from Delphi. Then TypeScript is made for you!

If you are already a happy JavaScript developer who makes use of the dynamic typing features of the language, switches prototype chains on your objects as required and loves adding and removing things at run-time, then there is nothing in TypeScript for you.