Technical problems, a solution and Rackspace cloud monitoring

Some of you may have noticed that my blog experienced some technical difficulties yesterday morning.

For a reason I couldn't figure out, IIS still served static files, but anything that involved code (this blog, my TeamCity, YouTrack, Stash and Fisheye applications) did not respond anymore. The sad thing was that I couldn't even RDP into my VM, so I had to trigger a reboot through the hoster's web interface.

What I really disliked was that I noticed the problem only when I wanted to log into my blog to check for comments and spam.
To improve that, I thought about monitoring my server, or better, the services it runs. So I asked Google to suggest some monitoring solutions that could help me out.

First hit

The first hit was Rackspace Cloud Monitoring. The price of 1.50 USD / month is great, because I don't want to spend a lot on checking my private stuff; anything up to about 5 € / month would have been okay for me. The feature set described on their homepage fit my needs: what I really need is a service that makes a request against my blog, checks whether it returns a 200 status code, and alerts me if it doesn't.

So I signed up for a Rackspace cloud account. After a few minutes I got a call to verify my account, and the guy on the other end of the line offered help with getting started. I really like this approach, because it takes down the barriers.

My first and only difficulty

After I had signed up and my account was activated, I logged into the management portal and looked for the monitoring options. Guess what? Nothing there. Their homepage stated it should be easy to configure the monitoring through the portal, but I could not find an option.

I tweeted about that and almost immediately I got a response with a link to a getting started video. Honestly, this was the point where I was really impressed. The Rackspace community obviously is very strong and willing to help. That's great.

So, watching the video I learned that I could set up monitoring for a VM hosted on Rackspace, but if I deleted that VM the monitoring setup would vanish with it. That's nothing for me, because I don't need a VM, just the monitoring.

After tweeting about that I got another very helpful response, pointing me to the monitoring API and to a labs GUI:

I didn't want to use the API, because I wanted to simply click together my 200 check. So I tried out the labs GUI.

The setup

I didn't dig into the documentation before I started. I figured it should be possible to set up simple HTTP monitoring just by clicking through it. The labs GUI is a very basic Twitter Bootstrap interface that simply exposes the functionality. Right now there is no real UX, but that's okay. It works 😉

First I created an 'Entity'. I assumed this would be the thing to monitor, so I entered 'Gallifrey', the name of my server. It turned out I got that right. Additionally, I could install a monitoring agent on Gallifrey to have it send CPU, memory and disk usage data to Rackspace, which I could then use for monitoring as well.

Entities

For this entity I could now add a 'Check'. I named it 'Blog', as I wanted to check the blog on Gallifrey.

Here I could configure that this is an HTTP check, the URL to test, and the locations from which Rackspace should test it. I picked London and two U.S. locations, as three zones cost the same as a single one.

Now, this check alone won't help me. I need to tell the system what to do after a check and what the error and OK conditions are: enter 'alarms'.

Alarms are what I actually want: a mail whenever something goes wrong. An alarm is fed with the information from the check, evaluates it against rules I define (the 'something goes wrong' part) and knows where to mail the information to.

I started with my status code check (see screenshot on the right).
Status code alert

For the alarm language I had to consult the documentation, but the samples are very self-explanatory, so I had this check running within minutes.
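In essence it boils down to something like the following: a rough sketch in the same alarm syntax as the performance check below, where 'code' is the metric the HTTP check exposes for the response status (double-check the metric name against the documentation):

if (metric['code'] != '200') {
  return CRITICAL, "Blog did not answer with HTTP 200 but with #{code}."
}
return OK, "Blog answered with HTTP 200."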

I then added another alarm to notify me about the performance of my blog. For this I used this rule:

if (metric['duration'] > 2500) {
  return CRITICAL, "HTTP request took more than 2.5 seconds, it took #{duration} milliseconds."
} 
if (metric['duration'] > 1800) { 
  return WARNING, "HTTP request took more than 1.8 seconds, it took #{duration} milliseconds."
}
return OK, "Overall performance is okay: #{duration} milliseconds."

The values may seem a bit high, but since two of the three check locations aren't in Europe, I have to take some transatlantic latency into account. These values seem to work; with lower thresholds I already got quite a few mails warning me that the performance seemed low 😉

Noticing the alerts

To make sure I actually notice the alerts, I created a filter in my e-mail account that marks mails with a 'critical' or 'warning' status as important. This way I get notified immediately, because I don't let my phone notify me about every mail I receive.

Conclusion

Rackspace is very fast, easy to use and has a great community that helps you get started in minutes.

With about 15 to 20 minutes of effort and a current investment of 1.50 USD / month, I now have easy-to-set-up and hopefully reliable monitoring for my personal blog. This way I can react faster when something strange happens.

Disclaimer: I'm just a new customer of Rackspace and not related to them in any other way than that I'm paying them to monitor my blog.

 

Ask a Ninja: Where do you get your Ninja-skills from?

My second "Ask a Ninja" post is about where to get your skills from.

Well, first of all: training, experimenting, and using a good portion of your spare time to improve your skills. And then, of course, from others. Others who are willing to share their experience and their knowledge, preferably in a medium that can be persisted (although one-on-one sessions are invaluable for sharing knowledge too).

Because of that I have a quite impressive library full of technical books. In recent years I have moved my library into electronic form: I have a lot of ebooks and carry most of my library with me on my Kindle all the time. That way I can look up and re-read the important things whenever necessary. You don't need to know everything, but you need to know where to find the information when it's required.

So, what's the point of this blog post you ask?

Well, I just stumbled upon an impressive library in ebook form that is available for free from Microsoft: a huge collection of free Microsoft ebooks. They cover SharePoint, SQL Server, Visual Studio, web development, Windows Store development, Azure and Windows.

So, if you want to improve your Ninja skills: go grab them while they're hot and start reading. And of course, spend some time experimenting with the knowledge to cement what you just read. 😉

A small infrastructure upgrade

In my "Setting up my infrastructure" posts I explained why I chose JetBrains TeamCity and YouTrack over the Atlassian tools Jira and Bamboo. It was not because of the feature set but because the setup was way more hazzle free and it seems maintaining them would be easy.

Well - I was right 😉

Just yesterday JetBrains released a new major update of TeamCity: Version 8.0.

My feature request ticket indicated that the feature was included in the 8.0 release, so I updated to check that out.

I downloaded the 400+ MB installer, which took a while even on my VServer with its good bandwidth. I then ran the new installer, which took care of everything else. Installation took a minute or so (including uninstalling the previous version). Starting up TeamCity 8 required a database update, so it asked me for the authentication token it had written to its log files on startup; digging that out took another minute. The database upgrade took about a minute as well, although I must admit there is no real load on my TeamCity installation and almost no data in it. All my agents updated themselves automatically within another two or three minutes.

All in all, the update took not much more than 10 to 15 minutes, most of it spent waiting for something to complete. To be honest, I am very pleased with the experience and the required effort. JetBrains made administrating this build server really painless and smooth. Kudos.

And I checked: yes, TeamCity 8.0 can now work with MSTest from the Visual Studio Agent installation and no longer requires a full Visual Studio installation or a custom path to MSBuild.exe.

Unobtrusive MSBuild: Using Git information in your assemblies

For my current project I wanted to add some information from Git as well as some additional build environment info into my assemblies at compile time.

My usual approach to this is adding some additional steps in my build process. I learned a lot about MSBuild from Sayed I. Hashimi (@sayedihashimi) who wrote the book Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build (by the way a must-read if you want to dig into how MSBuild works). MSBuild is very powerful and easy to extend, and so I think it's the best way to solve this.

Since I have developed some MSBuild tasks and targets for internal use at my workplace, I have started to write my MSBuild extensions in a way that I like to call unobtrusive MSBuild. The idea is to design my MSBuild project extensions so that they can be used by just adding a project import and, optionally, setting some configuration properties right before that import. This keeps them portable, reusable and flexible enough to be used in slightly different environments. Continue reading "Unobtrusive MSBuild: Using Git information in your assemblies"
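As a rough illustration of the pattern (the file and property names here are made up for illustration, not the actual extension from the linked post):

<!-- Optional configuration properties, set right before the import -->
<PropertyGroup>
  <GitInfoAttributeNamespace>MyCompany.MyProduct</GitInfoAttributeNamespace>
</PropertyGroup>
<!-- The single line a consuming project has to add -->
<Import Project="$(SolutionDir)build\GitVersionInfo.targets" />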

Ask a Ninja: Automated WordPress blog backup using Git

I thought I had posted this already, but the article list of my blog tells me otherwise. Early this year I posted how I moved this blog from the old server to the current one. After that I realized I could also automate the backup the same way.

So, what are the required steps?

  • Create a dump of the database.
  • Add the dump and all local modifications to the local repository.
  • Commit the changes to the local repo.
  • Push to a remote repository.

In my case I like to be on the safe side and push to two remote repositories.

So, this is the script that backs up my blog and pushes it to my repos:

:: Switch to the blog's directory
D:
cd D:\Webs\dotnetninja.de
:: Make mysqldump available on the PATH
SET PATH=%PATH%;D:\MariaDB\bin
:: Recreate the database dump
del backup.sql
mysqldump --skip-dump-date -u backup blog_dotnetninja.de > backup.sql
:: Commit the dump and all local modifications, then push to both remotes
git add .
git commit -m "Automatic backup"
git push origin
git push backup master
exit

To automate the backup I just created a simple scheduled task to execute this script once a day.
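On Windows this is a one-liner; roughly like the following, where the task name, script path and time are just examples:

schtasks /Create /SC DAILY /ST 03:00 /TN "BlogBackup" /TR "D:\Webs\dotnetninja.de\backup.cmd"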
Restoring the blog from the backup is as easy as described in my blog post about the move.

Custom deployment scripts – with mstest – for Windows Azure Website git deployment

I just started another project. It is hosted on Windows Azure and I'm using Git deployment for this website.

This worked very well, and I am extremely impressed by how easy it was to get started with it. Then I ran into a little problem.

Sidenote: My project relies on NuGet packages, and I personally hold the strong opinion that compiled artifacts do not belong in my source code versioning system. This is why I did not check NuGet.exe into Git, but only the NuGet.config and NuGet.targets files, configured to download NuGet.exe when it is missing. Of course this makes my build dependent on a NuGet package server, but since I could host my own gallery on a custom domain and configure that domain in my NuGet.config, I could take control of this dependency at any time.
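For reference, the switch for this lives in NuGet.targets; in the classic NuGet 2.x package restore targets it looks roughly like this (verify the exact names and defaults against the copy in your repository):

<!-- Classic NuGet.targets package restore switches; names and defaults may differ per version -->
<PropertyGroup>
  <RestorePackages Condition=" '$(RestorePackages)' == '' ">true</RestorePackages>
  <DownloadNuGetExe Condition=" '$(DownloadNuGetExe)' == '' ">true</DownloadNuGetExe>
</PropertyGroup>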

I wanted my project to incorporate information about the Git commit hash and branch it was built from, as well as other little details. The MSBuild Community Tasks project offers some nice helpers for that, so I added its NuGet package to my solution.

The Problems

Now there is a chicken-and-egg problem: when MSBuild encounters a UsingTask declaration, it loads the assembly that contains the task. If that assembly is not there, using the task fails. The NuGet download of the packages (including the task library) happens as part of the build, that is, after the project files have been loaded. So the freshly downloaded assembly is not found when the projects are imported and... the build fails.
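To make the problem concrete, the import in question looks roughly like this; the version number and paths below are placeholders, not my actual values:

<!-- Placeholder version/path: the imported .targets file declares UsingTask entries
     that point at an assembly inside the packages folder, which only exists
     after the NuGet restore has run. -->
<PropertyGroup>
  <MSBuildCommunityTasksPath>$(SolutionDir)packages\MSBuildTasks.1.4.0\tools</MSBuildCommunityTasksPath>
</PropertyGroup>
<Import Project="$(MSBuildCommunityTasksPath)\MSBuild.Community.Tasks.Targets" />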

To avoid this problem, I cheated a little bit on MSBuild: I added another project to my solution that also has the MSBuild Community Tasks package listed in its packages.config. Then I manually set my web application project to be built after this 'BuildSupport' project. Now the BuildSupport project's build downloads the community task library, which is then already in place when the import in the web application's project file is evaluated. It's just a small cheat, though.
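In the solution file, such a manual build dependency ends up looking roughly like this (project names and GUIDs are placeholders):

Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyApplication.Web", "MyApplication.Web\MyApplication.Web.csproj", "{11111111-1111-1111-1111-111111111111}"
	ProjectSection(ProjectDependencies) = postProject
		{22222222-2222-2222-2222-222222222222} = {22222222-2222-2222-2222-222222222222}
	EndProjectSection
EndProject

The GUID in the dependency line is the BuildSupport project's GUID.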

Then the next problem: the BuildSupport project is not actually 'required' to build the website project, so the Git deployment build process simply does not build it. The task library is therefore not downloaded before the actual build of the application runs, and the build fails. I could not get Azure to build the 'BuildSupport' project before the actual web application.

The Solution

After a little bit of research I found that this can be achieved with a custom deployment script.
I was a bit afraid I would have to figure out how the actual deployment works just to add a step in front of the compile, but there is some infrastructure in place to help us out with that.

For a .NET developer this will feel strange, but you'll need node.js first. The Windows Azure Command Line Tools are a node.js package, and we need them to get started with the actual deployment script. So, after installing node.js, we install the package:

npm install azure-cli -g

This installs the Azure CLI globally for use on the console. Now we navigate to our solution directory and let the Azure CLI generate the deployment script that Azure would otherwise run automatically to deploy our application:

azure site deploymentscript --aspWAP ApplicationFolder\Application.csproj -s Solution.sln

This generates two files. First there is a .deployment file, structured like an old-fashioned .ini configuration file, which tells Azure that there is a custom deployment script and what its name is. Its content is simply:

[config]
command = deploy.cmd

It also reveals the second generated file, the actual deployment script called deploy.cmd. This is the interesting part for us. I'm not posting the full script here, but will rather walk through its sections.

First there is a check that node.js is available. On Azure it is assumed to be there, but to test the deployment script locally you'll also need node.js. We just installed it, so we're all set, but the next person checking out the solution might be missing it.

Then the script defines some environment variables for folders, such as where the build artifacts will be placed and where the actual files to deploy will end up. This defaults to /artifacts/wwwroot and can be overridden by setting the corresponding environment variables before the deployment.
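That also makes it easy to try the script locally: override those variables before calling it, roughly like this (the paths here are made up):

:: Local test run with hypothetical paths
SET DEPLOYMENT_SOURCE=C:\Dev\MySolution
SET DEPLOYMENT_TEMP=C:\Temp\artifacts
SET DEPLOYMENT_TARGET=C:\Temp\wwwroot
call deploy.cmd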

In a third step, the script checks whether Kudu is installed. Kudu is the actual deployment engine running on Azure, and it is also capable of running on your machine. After that, additional paths are configured.

In the fourth step the actual compiling and deployment work is done, and the fifth is just some error handling.

So, let's have a look at the actual important stuff in the file:

:: 1. Build to the temporary path
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%MyApplication.WebMyApplication.Web.csproj" /nologo /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release /p:SolutionDir="%DEPLOYMENT_SOURCE%.\" %SCM_BUILD_ARGS%
IF !ERRORLEVEL! NEQ 0 goto error

:: 2. KuduSync
call %KUDU_SYNC_CMD% -v 50 -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
IF !ERRORLEVEL! NEQ 0 goto error

Now, that's actually very easy: MSBuild is called for the web application project, and then KuduSync is launched to do the actual deployment.

What we want to achieve now is to build the full solution up front, so that all required NuGet packages are downloaded before the actual project is built. And while we are getting our hands dirty in a custom deployment script anyway, why not also run the project's unit tests as part of the deployment? If a test fails, the deployment fails too. I think that's a good idea.

So what I did was add these two steps just in front of the two default ones:

:: 1. Build solution
echo Build solution
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%MySolution.sln" /nologo /verbosity:m /t:Build /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release /p:SolutionDir="%DEPLOYMENT_SOURCE%.\" %SCM_BUILD_ARGS%
IF !ERRORLEVEL! NEQ 0 goto error

:: 2. Running tests
echo Running tests
vstest.console.exe "%DEPLOYMENT_SOURCE%\MyApplication.Web.Tests\bin\Release\MyApplication.Web.Tests.dll"
IF !ERRORLEVEL! NEQ 0 goto error

That's it. I just copied the build line and pointed it to my solution, and I added a call to the MsTest tooling to run my tests.

So, with very little tweaking I could remove all dependencies on binaries I would otherwise have to check in, and I have the Azure Git deployment run my unit tests on every deployment. That's what I call easy and convenient.