Technical problems, a solution and Rackspace cloud monitoring

Some of you may have noticed that my blog experienced some technical difficulties yesterday morning.

For some reason I couldn't figure out, IIS still served static files, but anything that had to do with code (like this blog, my TeamCity, YouTrack, Stash and Fisheye applications) did not respond anymore. The sad thing was that I couldn't even RDP into my VM, so I had to trigger a reboot through the hoster's web interface.

What I really disliked was that I only noticed the problem when I wanted to log into my blog to check for comments and spam.
To improve that, I thought about monitoring my server, or rather the services it runs. So I asked Google to suggest some monitoring solutions that could help me out.

First hit

The first hit was Rackspace Cloud Monitoring. The price of 1.50 USD / month is great because I don't want to spend a lot on checking my private stuff, but anything up to about 5 € / month would be okay for me. The feature set described on their homepage was fine for me. What I really need is a service that makes a request against my blog, checks whether it returns a 200 status code, and alerts me if that is not the case.

So I signed up for a Rackspace cloud account. After a few minutes I got a call to verify my account, and the guy on the other end of the line offered help with getting started. I really like this approach, because it takes down the barriers.

My first and single difficulty

After I signed up and my account was activated, I logged into the management portal and looked for the monitoring options. Guess what? Nothing there. Their homepage stated it should be easy to configure the monitoring through the portal, but I could not find an option.

I tweeted about that and almost immediately I got a response with a link to a getting started video. Honestly, this was the point where I was really impressed. The Rackspace community obviously is very strong and willing to help. That's great.

So, watching the video I learned that I could set up monitoring for a VM that I host on Rackspace, but if I delete that VM the monitoring setup would vanish too. Nothing for me, because I don't need a VM but just the monitoring.

After tweeting about that I got another very helpful response, pointing me to the monitoring API and to a labs GUI for it.

I didn't want to use the API, because I actually wanted to click my simple 200-check together easily. So I tried out the labs GUI.

The setup

I didn't dig into the documentation before I started. I actually thought it should be possible to figure out how to set up simple HTTP monitoring just by clicking through it. The labs GUI is a very basic Twitter Bootstrap interface that simply exposes the functionality. Right now there is no real UX, but that's okay. It works 😉

First I created an 'Entity'. I thought this would be the thing to monitor, so I entered 'Gallifrey', the name of my server. Turned out I got it right. Additionally, I could install a monitoring agent on Gallifrey to have it send data about CPU, memory and disk usage to Rackspace, which I could use for my monitoring too.

Entities

For this entity I could now add a 'Check'. I named it 'Blog', as I wanted to check the blog on Gallifrey.

Here I could configure that this is an HTTP check, the URL to test and from which locations Rackspace should test it. I checked London and two U.S. locations, as three zones cost the same as a single one.

Now, this check alone won't help me. I need to tell the system what to do after a check and what the error and OK conditions are: enter 'alarms'.

Alarms are the actual thing I want: a mail whenever something goes wrong. An alarm is fed with the information from the check, evaluates it against rules I define (the 'something goes wrong' part) and knows where to mail the result.

I started with my status code check (see screenshot on the right).
Status code alert

For the check language I had to consult the documentation, but the samples are very self-explanatory, so I had this check running within minutes.
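The actual rule from the screenshot isn't reproduced here, but a status code alarm along the following lines does the job. Take this only as a sketch in the style of the duration rule below; it assumes the HTTP check exposes the response code as metric['code'], so double-check the metric name against the documentation samples:

if (metric['code'] != '200') {
  return CRITICAL, "Blog did not answer with HTTP 200 but with #{code}."
}
return OK, "Blog answered with HTTP 200."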

I then added another alarm that should notify me about the performance of my blog. For this I used this rule:

if (metric['duration'] > 2500) {
  return CRITICAL, "HTTP request took more than 2.5 seconds, it took #{duration} milliseconds."
} 
if (metric['duration'] > 1800) { 
  return WARNING, "HTTP request took more than 1.8 seconds, it took #{duration} milliseconds."
}
return OK, "Overall performance is okay: #{duration} milliseconds."

The values may seem a bit high, but since two of the three check locations aren't in Europe, I have to take some transatlantic latency into account. These values seem to work; with lower values I already got quite a few mails warning me that the performance seemed low 😉

Noticing the alerts

To really be notified I created a filter in my e-mail account that marks mails with 'critical' or 'warning' status as important. This way I get notified directly, because I don't let my phone notify me of every mail I receive.

Conclusion

Rackspace is very fast, easy to use and has a great community that helps you get started in minutes.

With just about 15 to 20 minutes of effort and a current investment of 1.50 USD / month I have a very easy to set up and hopefully reliable monitoring solution for my personal blog. This way I can react faster when something strange happens.

Disclaimer: I'm just a new customer of Rackspace and not related to them in any other way than that I'm paying them to monitor my blog.

 

Ask a Ninja: Where do you get your Ninja-skills from?

My second "Ask a Ninja" post is about where to get your skills from.

Well, first of all: training, experimenting, and using a good portion of your spare time to improve your skills. And then, of course, learning from others. Others who are willing to share their experience and their knowledge. Preferably in a medium that can be persisted (though one-on-one sessions are invaluable for sharing knowledge, too).

Because of that I have a quite impressive library full of technical books. In recent years I moved my library into electronic form. I have a lot of ebooks and carry most of my library with me all the time on my Kindle. That way I can look up and re-read the important things whenever necessary. You don't need to know everything, but you need to know where to find the information when it's required.

So, what's the point of this blog post you ask?

Well, I just stumbled upon an impressive library in ebook form that is available for free from Microsoft. It's a huge collection of free Microsoft eBooks for you. They cover SharePoint, SQL Server, Visual Studio, web development, Windows Store development, Azure and Windows.

So, if you want to improve your Ninja skills: go grab them while they're hot and start reading. And of course, spend some time experimenting with that knowledge to solidify what you just read. 😉

A small infrastructure upgrade

In my "Setting up my infrastructure" posts I explained why I chose JetBrains TeamCity and YouTrack over the Atlassian tools Jira and Bamboo. It was not because of the feature set but because the setup was way more hazzle free and it seems maintaining them would be easy.

Well - I was right 😉

Just yesterday JetBrains released a new major update of TeamCity: Version 8.0.

My feature request ticket indicated that it was included in the 8.0 release, and so I updated to check that out.

I downloaded the 400+ MB installer, which took a while even on my VServer with its good bandwidth. I then ran the new installer, which took care of everything else. Installation took a minute or so (including uninstalling the previous version). Starting up TeamCity 8 required a database update, so it asked me for the authentication token it wrote to its log files on startup. Digging that out took a minute, too. The database upgrade took about another minute, although I must admit that there is no real load on my TeamCity installation and almost no data in it. All my agents automatically updated themselves within another two or three minutes.

All in all it was not much more than perhaps 10 to 15 minutes for the update, most of it just waiting for something to complete. To be honest, I am very pleased with the experience and the required effort. JetBrains made administrating this build server really painless and smooth. Kudos.

And after I checked: yes, TeamCity 8.0 can now work with MSTest from the Visual Studio Agent installation and no longer requires either a full Visual Studio installation or a custom path to MSBuild.exe.

Unobtrusive MSBuild: Using Git information in your assemblies

For my current project I wanted to add some information from Git as well as some additional build environment info into my assemblies at compile time.

My usual approach to this is adding some additional steps in my build process. I learned a lot about MSBuild from Sayed I. Hashimi (@sayedihashimi) who wrote the book Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build (by the way a must-read if you want to dig into how MSBuild works). MSBuild is very powerful and easy to extend, and so I think it's the best way to solve this.

Since I developed some MSBuild tasks and targets for internal stuff at my workplace, I started to create MSBuild extensions in a way that I like to call unobtrusive MSBuild. My idea is to design my MSBuild project extensions so that they can be used by just adding a project import and optionally setting some configuration properties right before the import. This keeps them portable, reusable and flexible enough to be used in slightly different environments.
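To give you an idea of what that looks like for a consumer, here is a minimal sketch of such an import in a .csproj. The file and property names (GitInfo.targets, GitInfoEnabled) are made up for illustration; the point is only the pattern: a few optional properties, then a single conditional Import.

<!-- at the bottom of the consuming project file -->
<PropertyGroup>
  <!-- optional configuration picked up by the imported targets -->
  <GitInfoEnabled>true</GitInfoEnabled>
</PropertyGroup>
<Import Project="$(SolutionDir)build\GitInfo.targets"
        Condition="Exists('$(SolutionDir)build\GitInfo.targets')" />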

Ask a Ninja: Automated WordPress blog backup using Git

I thought I had posted this already, but the article list of my blog tells me otherwise. Early this year I posted how I moved this blog from the old server to the current one. After that I thought I could also automate the backup this way.

So, what are the required steps?

  • Create a dump of the database.
  • Add the dump and all local modifications to the local repository.
  • Commit the changes to the local repo.
  • Push to a remote repository.

In my case I like to play it safe and push to two remote repositories.

So, this is the script that will backup my blog and push it to my repos:

D:
cd D:\Webs\dotnetninja.de
SET PATH=%PATH%;D:\MariaDB\bin
del backup.sql
mysqldump --skip-dump-date -u backup blog_dotnetninja.de > backup.sql
git add .
git commit -m "Automatic backup"
git push origin
git push backup master
exit

To automate the backup I just created a simple scheduled task to execute this script once a day.
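If you prefer to script the task creation as well, something along these lines should do; the task name, script path and start time are just examples for my setup, and backup.cmd is whatever you called the script above:

:: register a daily task that runs the backup script at 03:00
schtasks /Create /TN "BlogBackup" /TR "D:\Webs\dotnetninja.de\backup.cmd" /SC DAILY /ST 03:00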
Restoring the blog from the backup is as easy as described in my blog post about the move.

Custom deployment scripts – with mstest – for Windows Azure Website git deployment

I just started another project. It is hosted on Windows Azure and I'm using Git deployment for this website.

This went very well, and I am extremely impressed by how easy it was to get started. Then I ran into a little problem.

Sidenote: My project relies on NuGet packages, and I personally have the strong opinion that compiled stuff does not belong in my source code versioning system. This is why I did not check NuGet.exe into Git, but just the NuGet.config and NuGet.Targets files, configured to download NuGet.exe when it's missing. Of course this makes my build dependent on a NuGet package server, but since I could host my own gallery on a custom domain and configure that domain in my NuGet.config, I could take control over this dependency at any time.
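If I remember the MSBuild-integrated package restore correctly, the switch that enables this lives in the NuGet.targets file and looks roughly like the following; treat the exact property name as an assumption and compare it against your own NuGet.targets:

<!-- in .nuget\NuGet.targets: let the restore targets fetch NuGet.exe when it is missing -->
<DownloadNuGetExe Condition=" '$(DownloadNuGetExe)' == '' ">true</DownloadNuGetExe>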

I wanted my project to incorporate information about the Git commit hash it was built from, the Git branch it was built from and other little details. For that, the MSBuild Community Tasks project offers some nice helpers. So I added the NuGet package of this project to my solution.

The Problems

Now there is a chicken-and-egg problem: when MSBuild encounters a UsingTask declaration, it automatically loads the assembly that contains the task. If that assembly is not there, using the task will fail. Now, the NuGet download of the packages (including the task library) happens as part of the build, that is, after the project files are loaded. So the freshly downloaded file was not found when importing the projects and... the build fails.

To avoid this problem, I cheated a little bit on MSBuild: I added another project to my solution that also has the MSBuild Community Tasks package listed in its packages.config. Then I manually set my web application project to be built after this 'BuildSupport' project. Now the BuildSupport project's build downloads the community task library, which is then available when the project import is evaluated in the web application's project file. It's just a small cheat, though.
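For reference, such a manual build order ends up as a project dependency in the solution file, roughly like this (the GUIDs are placeholders, your .sln contains the real ones, and the second GUID is the one of the BuildSupport project):

Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyApplication.Web", "MyApplication.Web\MyApplication.Web.csproj", "{11111111-0000-0000-0000-000000000000}"
	ProjectSection(ProjectDependencies) = postProject
		{22222222-0000-0000-0000-000000000000} = {22222222-0000-0000-0000-000000000000}
	EndProjectSection
EndProject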

Then the next problem: the BuildSupport project is not actually 'required' to build the website project, so the Git deployment build process simply does not build it. The task library is not downloaded prior to executing the actual build of the application, and so it fails. I could not get Azure to build the 'BuildSupport' project before the actual web application.

The Solution

After a little research I found that this can be achieved by using a custom deployment script.
I was a bit afraid that I would have to figure out how the actual deployment works to add a step just in front of the actual compile, but there is some infrastructure in place to help us out with that.

For a .NET developer this will feel strange, but you'll need node.js first. The Windows Azure Command Line Tools are a node.js package, and we'll need them to get started with the actual deployment script. So, after installing node.js, we're going to install the package:

npm install azure-cli -g

This will globally install the Azure CLI for use on our console. Now we navigate to our solution directory and let the Azure CLI generate the deployment script that will automatically run to deploy our application to Azure if we don't do anything custom:

azure site deploymentscript --aspWAP ApplicationFolder\Application.csproj -s Solution.sln

This will generate two files for you. First there is a .deployment file. This file is structured like an old-fashioned .ini configuration file and tells Azure that there is a custom deployment script and what its name is. Its content simply is:

[config]
command = deploy.cmd

It also reveals the second generated file, the actual deployment script called deploy.cmd. This is the interesting part for us. I'm not going to post the full script but will rather go through its sections.

First there is a check that node.js is available. It can be assumed to be available on Azure, but to test the deployment script locally you'll also need node.js. We just installed it, so we're all set, but the next person checking out the solution might be missing node.

Then the script defines some environment variables for folders, like where the build artifacts will be placed and where the actual files to deploy will end up. This defaults to /artifacts/wwwroot and can be overridden by setting the corresponding environment variables before the deployment.
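To try the script locally against throw-away folders you can, for example, set those variables before calling it. This is just a sketch and the paths are arbitrary; only the variable names match the ones used in deploy.cmd:

:: run the deployment script locally against temporary folders
SET DEPLOYMENT_SOURCE=C:\Projects\MySolution
SET DEPLOYMENT_TEMP=C:\Temp\artifacts
SET DEPLOYMENT_TARGET=C:\Temp\wwwroot
call deploy.cmd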

In a third step, the script checks whether Kudu is installed. Kudu is the actual deployment engine running on Azure, and it is also capable of running on your machine. After that, additional paths are configured.

In the fourth step the actual compiling and deployment work is done, and the fifth is just some error handling.

So, let's have a look at the actual important stuff in the file:

:: 1. Build to the temporary path
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\MyApplication.Web\MyApplication.Web.csproj" /nologo /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release /p:SolutionDir="%DEPLOYMENT_SOURCE%\.\\" %SCM_BUILD_ARGS%
IF !ERRORLEVEL! NEQ 0 goto error

:: 2. KuduSync
call %KUDU_SYNC_CMD% -v 50 -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
IF !ERRORLEVEL! NEQ 0 goto error

Now, that's actually very easy: MSBuild is called for the web application project, and then Kudu is launched to do the actual deployment.

What we want to achieve now is to build the full solution upfront, so that all required NuGet packages are downloaded before the actual project is built. And while we're already getting our hands dirty in a custom deployment script, why not run the unit tests of the project as part of the deployment as well? Then, if a test fails, the deployment fails too. I think that's a good idea.

So what I did was add these two steps just in front of the two default steps:

:: 1. Build solution
echo Build solution
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\MySolution.sln" /nologo /verbosity:m /t:Build /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release /p:SolutionDir="%DEPLOYMENT_SOURCE%\.\\" %SCM_BUILD_ARGS%
IF !ERRORLEVEL! NEQ 0 goto error

:: 2. Running tests
echo Running tests
vstest.console.exe "%DEPLOYMENT_SOURCE%\MyApplication.Web.Tests\bin\Release\MyApplication.Web.Tests.dll"
IF !ERRORLEVEL! NEQ 0 goto error

That's it. I just copied the build line and pointed it at my solution, and I added a call to the MSTest tooling to run my tests.

So, with very little tweaking I could remove all dependencies on binaries I would otherwise have to check in, and I have the Azure Git deployment run my unit tests on every deployment. That's what I call easy and convenient.

Why FireMonkey is so fundamentally wrong in every aspect of its being

A short time ago I had a harsh Twitter argument with Nick Hodges (@NickHodges) about the FireMonkey framework in Delphi XE4 (you may know that I started my professional career with Delphi and was a speaker at Delphi conferences).

It all started with the definition of 'native' or, even worse, 'true native', but let's start at the beginning. I know the audience of my blog is mostly .NET focused, so let me get you all set up with the required background information.

FireMonkey

So, let's start with what FireMonkey is, or aims to be. FireMonkey is an application development framework (or platform, as Embarcadero likes to call it) and contains components that should enable the developer to build cross-platform applications with a single code base for Windows, Mac OS X, iOS and, soon to come, Android. FireMonkey is written for Delphi and can also be used from C++ within Embarcadero's C++ Builder.

So the main idea is that you design your forms with FireMonkey components and controls, double-click on buttons to add your business logic, like Delphi developers have done for the last decades, and are then able to compile the application for Windows, for Mac OS X and for iOS without changing it.

And indeed, this works technically...

Architecture

...but this is also already the point where it starts getting wrong.

FireMonkey, by its RAD approach, encourages the developer to click his user interface together, double-click on controls and put all the logic onto the form, where it doesn't belong. I'm not going to argue with anybody about decoupling, testability of code and all the other Clean Code aspects and concepts. A good developer should have the inner urge to produce code at a certain minimum quality level, and putting everything on the form doesn't help here. So the basic concept that FireMonkey encourages is wrong.

New and not-so-advanced developers tend to adopt this bad style and start running in a direction that will end fatally. Good developers instead will start by building up a good architecture for their application, most probably working with tests, which makes the usage of some DI container a no-brainer. This most probably also leads to a well-decoupled architecture on the application frontend, perhaps introducing some MVC concept for the GUI. Only this enables them to take a good approach to real and thoughtful cross-platform development, but more on that in a minute.

Cross-Platform

Let's talk about cross platform development in a general way, before we go back to FireMonkey.

Every platform has its specifics. And a user, who is in fact the customer we want to sell our application to, chooses his platform for a reason. There are multiple approaches to making the user happy, and the most simple one is to integrate the app seamlessly into the environment (platform) the user chose, to please him.

Let's talk a little about UX. I'm thinking about the overall user experience with your application here, not only the look & feel of the GUI. It's the whole package: good guidance through the workflow, helping the user not to enter crap into your app, assisting him in solving problems when he does, making everything accessible for everyone, especially impaired users, and of course also response times and the like. As said, the whole package.

All platform vendors have thought about how their platform and devices should behave, how software should behave on the platform and what they expect from an application. They offer UX guidelines that describe what fits into the environment and how applications can fit seamlessly into the platform, providing an overall exquisite user experience to the people you want money from.

Comparing just Apple's (iOS and the Mac) and Google's (Android) UX guidelines, these being the currently relevant platforms for FireMonkey besides desktop Windows, shows you how different the platforms are. They are fundamentally different in how the user expects the control flow of applications to work. That leaves aside Windows RT for tablets and Windows Phone, which take a radically new approach to interacting with applications. But since Windows RT and Windows Phone are not (yet?) supported, we don't need to get into those details right now. Just this much for now: Delphi is marketed as the best/fastest/most productive dev tool for Windows. Why can't you target Windows RT with it? Or write Windows Store applications for Windows 8? Well, that's another topic. But taking into account that the main target audience for FireMonkey are Delphi (and as such mainly Windows) developers, this leads in a fatal direction:

FireMonkey encourages the following: the Windows developer designs his FireMonkey form for mobile devices just as he would design a Windows application form. This UX, designed for Windows, is then ported in a one-size-fits-all attempt onto the Mac (not quite so terribly bad), but also to iOS and later Android (extremely bad).

Why is this bad? Because the user chose his platform with something in mind. That something is the overall user experience with the device and of course with the applications he gets from the store within the platform itself. It's a closed ecosystem for his needs. He expects his iPhone/iPad applications to come in his beloved iOS style, or he expects his Android applications to come in an Android style. So again, we need to please our customers, because they are the ones buying our applications and giving us their money. So how can we make them happy? Give them what they expect.

Users expectations, and a sub-plot

That can be done only in one way: To embrace the platform and behave like a good citizen on that platform.
Developing a 'native' application is not the only way, but whatever technology you choose, the result should still integrate seamlessly into the environment.

A good example is Exfm. They have a music sharing service and used to publish a 'native' iOS application, native as in written in Objective-C with Xcode. It had a 4-star rating. Still, they rewrote the app, with HTML5 and JavaScript, based on PhoneGap. During the rewrite, and despite the fact that they were actually programming a web application that runs in a UIWebView browser, they focused incredibly on iOS detail behaviour, like the scroll bounce thingy, or the possibility to scroll-bounce elements that don't even need to be scrolled because they are not larger than the area displaying them. They even mimicked the iOS behaviour that you can tap and hold a button, move away from it, slide over the button again and lift your finger to trigger it. They did that with HTML5.

They did that for one reason: To behave like a good citizen on iOS. To please their users.

And now they have an HTML application that you can't tell apart from a native application using the native UI controls of iOS. The new app got a 4.5-star rating and more downloads than ever. Here's an article with a lot more detailed information about the little iOS details that make the app feel 'right' on iOS.

Now let's get back to FireMonkey.

Einheitsbrei

I'm trying to introduce a new word to my English-speaking friends: 'Einheitsbrei'. You already use some German words like Zeitgeist and Kindergarten, and now it's time for 'Einheitsbrei'.

Einheitsbrei is a word that could be translated as 'boring standard mash'. It is used deprecatingly and describes things that are boring, common, and don't have any specific characteristics or outstanding elements.

FireMonkey apps are Einheitsbrei. They are sub-standard on every platform and don't take into account the little elements, loved by users, of the platform they are running on. And in some cases they don't even get the basic things right.

Architecture, the second: Doing it better

In the first architecture section I described that a good application architecture very probably involves some sort of DI and MVC on the GUI part.

Nick said on Twitter that with Delphi for iOS I can call 'any API I want'.

So, let's take this for granted. If I can call 'any API I want' with Delphi for iOS, then I have full access to all native UI controls on iOS. And when I already have MVC in my application, then there is nothing that would hinder me as a developer from using the DI container to instantiate a view for iOS that uses the native iOS UI controls, and a view for Android that makes use of the native Android UI controls there.

With just a little more effort on the views, using the platform APIs and bypassing FireMonkey's UI controls, your app could behave like a good citizen on the very specific platform my customer chose for himself for a reason. Remember: that customer is the guy I need to make happy, because I want him to give me his money.

Conclusion

Yes, it's more effort. Yes, it will take longer. Yes, it will require you to learn about the platforms' UX design guidelines and about the platforms' native UI controls. But it's worth it. Like Exfm, who rewrote an existing app with the goal of making their iOS users more happy, and who have another one that makes their Android users happy.

FireMonkey instead encourages you to produce Einheitsbrei. And this is just so wrong. You will find out when you don't get the ratings required to have enough sales for your app. Users are cruel. They buy your app and rate it down when they don't like it. And they tell other users not to buy your app when they are not happy with it. They will, however, rate your app up and tell others to buy it too when they really like it. But only when the app's buttons are so good that they want to lick them.

Your app needs to be outstanding, of high quality and provide a well designed user experience to be successful and to be able to compete against other applications. Einheitsbrei doesn't sell. And FireMonkey, by design, encourages Einheitsbrei. You won't do yourself a favor by using it.

This is my opinion on why FireMonkey is just so wrong.

Setting up my infrastructure – Part 8: A little bit more evaluation

After a little break I'm back again with the next findings in my little pet project.

First of all I wanted to check out the build servers before diving into the bug trackers. So I set up the first assembly for my project, with just one class and a unit test for it. As I already mentioned in my requirements it is a .NET project and I want to have at least the very basics (building, unit testing etc.) covered.

My test case for this was very simple:

  • Set up a project in the build server, have it check out the sources of the project and let it build.
  • Once the build is okay, add the configuration for unit tests and let the tests run.
  • Check that it builds when new code is checked in, and check how it reacts when either the build breaks or unit tests fail.

So, this is what I encountered with the two systems:

Bamboo

Without digging too deep into the documentation (Documentation? Yes, this is the url. Check.), I set up a build plan in Bamboo to build my project. I set up a first stage that should check out the project from source control and build it.

Well, the setup was done quickly, but the build would not start. What I did was the following: I configured two local agents, both capable of running the build. The build plan had a single stage "Build" with two jobs: check out the sources and call MSBuild with my solution file.

When I manually selected 'Run', it was queued and 'waiting to be built' forever. When clicking on it and selecting the only stage with the hourglass beside it from the sidebar, it showed me this status: "Status: Job has not yet been queued. Waiting for prior stages to complete."

This was where I went: WTF? What prior stages? This was the only existing stage in the only existing plan.
After spending several hours in this situation I decided to create a user account on my Bamboo evaluation installation for someone to support me, and opened a support ticket with Atlassian.

A few hours later the support person logged in and just saw a failed build that had started about 20 minutes earlier.
After that the server behaved normally: a new check-in resulted in an almost immediate new build, like I would have expected.

So, why did the build fail? I used a Git repository on BitBucket for the tests and configured the version control settings to check out from Git. I thought that was the obvious way to do it. I was wrong, as Bamboo would not check out the sources. As I later found out, since the Git (hint! hint!) repo is hosted on BitBucket, I needed to select 'BitBucket' instead of 'Git' from the repository type selection to be able to use it. My dear. After that it worked.

So, after this little problem I went on to configure the unit tests. I added a new stage for the tests to the build plan and configured a new job that would call MsTest and run the tests.

That job failed on every attempt to run it. It always told me that the assembly was not found.
To make a long story (several hours over a few days!) short: obviously the different stages in a build plan are executed on the same agent, but in different working directories.

At the time of this evaluation there wasn't anything hinting at that in the Atlassian docs. By now there is a small article explaining that you need to configure artifacts to bring build results from one stage into another, but without explaining in detail how to set up those artifacts.

Back then, I tried to work around the different directories, but that required a lot of changes to my build scripts that wouldn't work on the dev machines afterwards, so I put the MsTest job into the build stage to have it executed.

Guess what? After changing the build stage it wouldn't start again, showing the same delayed-until-forever problem I already had, until after several hours of waiting it suddenly executed.

After all, my experience with Bamboo wasn't really turning me on.

TeamCity

I already mentioned that we use TeamCity at my workplace. So setting up the initial project was a bit new to me, but I knew what the settings were and where I had to tune things a little. The initial setup of my project with the two build steps, build and test, was done in a few minutes.

Of course everything worked from the very instant I clicked on Run.
Then I went on to activate code coverage reports for the test build step by simply selecting dotCover from a little drop-down in the test build step. After the next build I had a complete test coverage report in TeamCity.

Conclusion

I struggled a lot with Bamboo. It has a steep learning curve and the documentation is... well, let's simply say it could be better. A lot. The Atlassian tools in general, and Bamboo is no exception, are obviously powerful, but you have to spend a LOT of time with the system to get it running like you expect it to, and I don't want to spend too much time fiddling around with my toolset.

TeamCity on the other hand is streamlined and guides you through the process of configuring your builds. Everything I needed was set up in a matter of minutes and everything worked as expected right from the beginning.

Actually, at this point it was clear to me that I would use the JetBrains tools and not Atlassian's, but I still had a quick look at Jira and YouTrack, which I want to describe in a dedicated blog post.

See the other parts in this series.


I’m done with Drobo, too…

I made a mistake. A big mistake. Something I can correct, and which I will correct very soon.

My mistake? I already teased it in my last post about my pet project: I bought a Drobo S as a storage solution. The Drobo S is the predecessor of the current Drobo 5D.

The title of this post is a nod to the "I'm done with Drobo" post by Scott Kelby. In that post he describes the issues he had with his Drobo. He eventually ended up in a situation where all the drives in his Drobo were still okay, but the Drobo itself wasn't, and since his device was out of warranty he would have had to buy an extended support package to be able to access his data.

Well, my own situation is not (yet) that bad, but I have a strong feeling I may end up in a similar position.

Now, what are my issues with my Drobo S?

I am already on my second replacement unit. The very first Drobo I received after ordering had a problem with the drive bay in the middle and wouldn't recognize a disc in it. In the first replacement unit all five slots worked, and that was fine for almost a year.

Then, as I already mentioned in my previous post, my server suddenly started losing the connection to the Drobo. On a regular basis I came home and my home server would be missing the drive. Only a reboot of the host computer would (most probably, but not always) fix this.

This of course was very annoying, but not extremely critical, because I only had media files stored on the Drobo, which were available through a TVersity media server. I could not stream videos through my home when the drive was lost, but that was okay in the beginning. The connection was eSATA, because USB is too slow for streaming two full-HD streams at once.

It became more critical when I started to run my evaluation VM on that home server and placed the virtual hard disk of the server on drive D (my Drobo drive). A disconnect could leave the VM in an inconsistent state and possibly damage my infrastructure.

Then the disconnects started to happen more frequently over time, until I encountered the issue daily and even multiple times in a single evening. As a software developer I know how to troubleshoot and check for possible error sources: the Drobo also lost its eSATA connection to another machine. USB was fine on both, but as already mentioned, not an option because of its slowness.

The Drobo service tried hard to fix this and eventually sent me another replacement unit.
This was okay. Now, guess what happened next? The replacement unit started to show the same issues too. By now, even the USB connection gets dropped once in a while.

So, while I initially was extremely happy with my Drobo and its performance, I'm currently in a state of constant alert for when my Drobo will eventually fail and won't be accessible anymore, together with all the data I stored on it.

Of course I have a backup of the important data (honestly, my terabytes-large video archive isn't important enough to keep a backup of, so that would be a loss, but given the time I can spend on watching it, it wouldn't hurt that much). But the main idea of a large storage with a very fast connection directly attached to my home server is to have direct, instant, always-on access to the data. Something that I thought my Drobo could provide. But something that a Drobo obviously isn't capable of providing in a reliable way.

So I'm done with Drobo, because I can't trust my device to function properly any longer.
I need to check for alternatives soon. If anybody knows of a solution for my problem (currently holding about 6 TB of data, with more incoming, and offering very good performance and data throughput, just like a normal internal HDD), please tell me.

Setting up my infrastructure – Part 7: The evaluation begins: Installations

After I picked the evaluation candidates I first tried a test-setup on a development VM at home.

Download

For this I downloaded the evaluation products from the Atlassian homepage and the free installers from JetBrains. Please note the slight difference between a 'product' and an 'installer' download. I wanted to do a side-by-side installation of all tools on the same VM to compare them easily.

Just as a little side note: I will do a blog post on my hardware, which drove me crazy during the evaluation. Just this much for now: I have a Drobo storage attached to my server at home, and I had the virtual server hard disks on that drive. Now guess what happens when the host machine suddenly loses the connection to the Drobo. Regularly, and over and over again. But as said, this will be a separate blog post on its own.

So, after the download I ended up with two .exe installers for YouTrack and TeamCity, and with two zip archives for Jira and Bamboo. The Atlassian web site then directed me to a documentation link where I had to look for the installation instructions matching my setup.

Installation

All four products are Java-based.

JetBrains solved this very soundly by obviously packaging the required runtime directly into their programs. I did not need to install Java on the system before installing YouTrack and TeamCity. Both programs, as well as the first build agent of TeamCity, were installed as auto-starting Windows services automagically. They installed fine and directly started to run on their corresponding ports, which I could change in the installer.

Now the tricky part began: installing the Atlassian tools. First of all, the documentation suggested installing the 32-bit SDK, even on 64-bit machines. Just to get this straight: we're talking about software that aims to run in an enterprise production environment, where you would want it to be able to use a lot of RAM, and they suggest a 32-bit runtime. This was my first WTF moment with the Atlassian tools. I would have loved to choose a 64-bit runtime, and not the SDK but the actual runtime, but well...

So, I installed Java. The JDK. For 32 bit. I then had to unzip the zip file and choose an installation and an instance folder: second WTF moment. The instance folder is something like the working directory of the program. Okay, so I did. In a small side note in the installation documentation it is mentioned that there should be no space in any path name. "Any" means no spaces in the path to Java, the product's installation directory and the product's instance directory. Of course, Java is installed in "C:\Program Files\..." by default. With a space in it.

Being a software developer myself, I can only shake my head at such a ridiculous requirement. Software should be written in a way that it can cope with valid paths on the corresponding operating system. Especially software that is intended to help other software developers. Well, of course I ran into problems with my default Java installation location and had to uninstall Java and re-install it to another location.

The next tricky part was installing the Atlassian software as Windows services. You have to manually use a Java service wrapper tool for that. Oh, and I almost forgot: to configure Jira and Bamboo you need to manually edit configuration files, which are not really well documented...

After all that, I got all four systems to run. That is, I could open them in the browser and set them up.

So far, it's an extremely clear plus for YouTrack and TeamCity. Installation is very easy, with no hassle with config files, Java paths and service wrapper tools. The Atlassian stuff might be suited for enterprise use, with a dedicated person for setting up, configuring, fine-tuning and maintaining the system, but for a one-man show the overhead of a simple tool installation seems too much.

In the next post I'm going to describe the first functionality tests.

See the other parts in this series.
