Project Linker for Xamarin Studio

When working on C# projects that target multiple platforms and need a high code reuse ratio, like those that must run on iOS, Android, Windows Phone and any other desktop platform where .NET might run (Windows itself, Linux and Mac), it is common to face a problem: how do you share code between the different projects while maintaining your sanity?

PCL (Portable Class Libraries) is a possible solution, however it is a fairly new feature even in Visual Studio (it is fully supported starting with the 2012 version), and Xamarin Studio / MonoDevelop still has some heavy lifting to do. PCL also imposes some restrictions on what you are allowed to do.

One common approach is to have a “Core” project that targets the full .NET framework, and a series of “linked” projects with compiler flags specific to each platform. This is very well described on the “Sharing Code Options” page at Xamarin’s website. However, doing that by hand is unproductive and error-prone. Visual Studio users can make use of an extension, but what about Xamarin Studio?

Wait no more: it is now possible to do project linking in Xamarin Studio / MonoDevelop as well, with the Project Linker extension I created.

This addin was developed to automatically create and maintain links from a source project to a set of target projects, in order to make code sharing between different platforms a friendlier task. It is based on the Visual Studio Project Linker extension.

The main reason behind the development of this addin was that I was working on a project targeting multiple platforms using the great MvvmCross library, including iOS, Android, Windows, Mac and Linux, and PCL support was very lacking in Xamarin Studio; even in Visual Studio 2012 it had its problems. Also, PCL programming imposes some restrictions that I found harder to work around than using a third-party plugin to do the file linking between the projects.

Features

  • Automatically replicates “new”, “delete” and “rename” operations from the source to the target projects
  • Allows you to perform a manual synchronization, which is useful if you use both Visual Studio and Xamarin Studio, or if someone else who does not have the plugin changes the project

You can install the extension directly from within Xamarin Studio, using the “Add-in Manager” tool. You can find more instructions and information at http://rafaelsteil.github.io/monodevelop-project-linker/

How to fix “VT-x locked or unavailable in MSR” in VirtualBox

While following the tutorial on how to install OS X Mountain Lion in VirtualBox with iAtkos from the good folks at MacBreaker, I came across the error “VT-x locked or unavailable in MSR”, which was totally new to me. Heck, I didn’t even know what it was supposed to mean.

It turns out that VT-x is a virtualization technology created by Intel to allow the, well, virtualization of other software, and 64-bit guests require it. It must be supported by the processor, and unless you have a very old computer, the odds are that you already have support for it. It may just be disabled.
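
By the way, if you happen to be on Linux and want to check whether the CPU advertises the feature before rebooting into the BIOS, a quick check is to look for the vmx flag (or svm, on AMD) in /proc/cpuinfo:

# Prints a number greater than zero if the CPU exposes VT-x (vmx) or AMD-V (svm)
grep -Ec '(vmx|svm)' /proc/cpuinfo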

VT-x usually (always?) comes disabled, so it is necessary to enable it in your computer’s BIOS. Restart it, press the key to enter the BIOS setup (mine was F2; sometimes it is F10 or even DEL), and look there for “VT”, “VT-x”, “Virtualization Technology” (which was my case) or something like that.

Turn it on and POWER OFF your computer – a simple restart is not enough. Pull the power cord out. Once that is done, your VirtualBox image should work fine.

Windows drivers for Samsung Android Devices

I can’t understand why, but it appears that even getting Android devices to run in development mode on Windows is a pain in the ass. Maybe I didn’t read the manual, but who does anyway? It’s USB plug-and-play after all, except that the plug-and-play part does not work as expected.

I had this Samsung Galaxy Tab 2 7″ device to which I needed to deploy some applications from Windows, but no matter what I did, the USB drivers never installed properly.

After some Googling, I discovered that I needed to install a piece of software called “Samsung Kies”, which does the black magic that allows the device to work properly. In my case I first searched for “gt-p3110 windows 7 driver”, where gt-p3110 is the device model, and then found the page http://www.samsung.com/uk/support/model/GT-P3110TSABTU-downloads, where it is available for download.
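
Once Kies (and with it the USB drivers) is installed, a quick sanity check – assuming you have the Android SDK tools on your PATH – is to ask ADB whether it can see the device:

# The device should show up with its serial number and the state "device"
adb devices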

It sounds dumb now that I understand it, but it took a good amount of time to find the first time!

Fixing “Gem bundler is not installed” when using Capistrano

Sometimes (just sometimes, otherwise it wouldn’t be Rails) you may get the following error when deploying an app with Capistrano:

 ERROR: Gem bundler is not installed, run `gem install bundler` first.

The problem is, you do have bundler installed, but it still fails with cap. To solve this problem, run the following command on the server, as the user Capistrano runs as:

rvm wrapper `rvm current` bundle bundle
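
To double-check that the wrapper took effect, you can log in as that same user (I am assuming a user called deploy here, adjust it to your setup) and confirm that bundler resolves:

# Should print the path to the bundle wrapper and its version
su - deploy -c 'which bundle && bundle -v'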

After that you should be fine. Thanks to Diego Plentz for the tip *cof* hack *cof*.

Apache remote logging with rsyslog

One of the challenges of running a cluster of webservers is deciding where to store the log files for post-processing, assuming they are relevant to you for any reason. The first approach most people think of is to keep them locally and sync them to a central server using rsync from time to time. However, such an approach can still lead to missing entries if the machine goes down between synchronizations, which is especially true when using Amazon Auto Scaling to dynamically add and remove servers from the grid (which is our case).

A much better approach is to log everything to a remote server on the spot, instead of relying on local files. The most common solution is to use rsyslog, which comes by default with the majority of Linux distributions, or syslog-ng, which claims to be a better tool than rsyslog.

In order to make it work with rsyslog, three things are necessary: configure the webserver (Apache, in my case) to delegate logging to an external program (/usr/bin/logger), configure the log server to accept connections on a UDP or TCP port, and configure the client (which should be the same machine where the webserver is running) to send the information to the server.

To configure Apache, open its configuration file and change the CustomLog directive so that it pipes the contents to an external program, like this:

CustomLog "|/usr/bin/logger -t apache -p local6.info" combined
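
Apache only picks up the changed directive after a reload; a graceful restart avoids dropping in-flight requests (on Debian-based systems the binary may be called apache2ctl instead):

# Re-read the configuration without interrupting requests that are being served
sudo apachectl graceful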

The next step is to configure the rsyslog client, whose configuration is usually located at /etc/rsyslog.conf, where we need to at least specify that the level “local6.info” should be sent remotely – you may choose to send everything to the remote log server, but in my case I only want to send the Apache entries. It works like this:

# Syntax:
# <level> @<IP>:<port>
local6.info @10.11.12.13:514

For “<level>” you can pass “*.*” to send everything. A single “at” symbol (@) means the connection will be made via UDP, while two (@@) mean TCP.

The last step is the server, also in /etc/rsyslog.conf, where we need to enable the UDP module and specify where the Apache logs should be written. The following is the minimum you need:

$ModLoad imuxsock.so
$ModLoad imklog.so

# Provides UDP syslog reception
$ModLoad imudp.so

# The port where to listen
$UDPServerRun 514

# Write all apache logs to this file (please note the comma)
$template apacheAccess,"/var/log/apache_access_log"

# If the log's tag is "apache" and matches
# the defined level, send it to a specific file
if $syslogtag == 'apache' then {
    local6.info ?apacheAccess
    & ~
}

That’s all you need.
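
Before trusting it with production traffic, you can test the whole path by hand from the client machine: restart rsyslog so the new rules take effect, then emit a message with the same logger invocation Apache uses.

# Restart rsyslog (use systemctl restart rsyslog on systemd-based distributions)
sudo service rsyslog restart

# Send a test entry; it should appear in /var/log/apache_access_log on the log server
logger -t apache -p local6.info "test entry from $(hostname)"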

One last thing: officially, each line is limited to 4 KB by rsyslog, although I have heard that the kernel ring buffer size also plays a role. In any case, content bigger than that will be truncated.

Zero downtime deploy script for Jetty

One of the challenges when developing Java web applications is deploying new versions of the app without any downtime perceptible to the end users – in fact, this affects virtually any platform, although in Java it can be trickier than for PHP or Rails, for example. The problem is that most servlet containers need to first shut down the context in order to load it again, an operation that can take several seconds to complete (or, in the worst case, several minutes, depending on how your webapp is built).

When you have a cluster of servers serving the same app it may not be such a big problem, as one possible approach is to deploy the new version one box at a time. On the other hand, it is fairly common to have a single machine (whatever its size) with a single webserver doing all the work, and therein lies a monster.

In order to address this issue, I have created a bash script that does some tricks with the Jetty and Apache configuration files, allowing us to deploy a new version of the application and switch to it (as well as switch back to the older version if necessary) with no noticeable downtime. Although it was created with the production environment where I work in mind, it is easy to adapt it to your needs (or vice-versa). The script assumes the following:

  • Jetty’s hot deploy feature should be disabled (basically, set “scanInterval” to 0 in jetty-contexts.xml)
  • Apache is in front of Jetty through mod_proxy
  • Your app is deployed as an open directory (i.e. not as a war), ideally using Capistrano or another similar tool
  • The ports 8080 and 8081 are available
  • The environment variable JETTY_HOME points to the Jetty installation directory
  • The environment variable APACHE_HOST_CONF points to the Apache configuration file for the host you are dealing with (ideally not httpd.conf itself, but something like “example.conf”)

It works this way: you use the script “jetty_deploy.sh” as the workhorse in place of the usual “jetty.sh”. To start a new instance, run “jetty_deploy.sh start_new”, and the script will change the proper configuration files to listen on the “opposite” port (e.g. 8081 if 8080 is the current one, or vice-versa), start a new Jetty server and wait until the context fully starts. After that it will restart Apache, which will then proxy all requests to the new Jetty server. If something goes wrong you can use “jetty_deploy.sh rollback”, and if everything is OK, you can stop the previous, old instance by running “jetty_deploy.sh stop_previous”. Simple as that.
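
For reference, here is a minimal sketch of the kind of mod_proxy vhost the script rewrites – the server name and backend address are placeholders, and your actual APACHE_HOST_CONF will certainly look different:

<VirtualHost *:80>
    ServerName example.com
    ProxyPreserveHost On
    # The backend port below is what jetty_deploy.sh flips between 8080 and 8081
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>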

The project is freely available at https://github.com/rafaelsteil/jetty-zero-downtime-deploy; please make sure you read the instructions in the file “jetty_config”. In fact, it is advisable to understand how “jetty_deploy.sh” does its job before relying on it.

Accessing a local machine from the outside using a reverse proxy

This is really neat: have you ever wanted to access your local machine, or any other computer located on your local network, from the outside world? I have, many times. The first and most common approach is to enable a DMZ in the router so that all traffic that comes through the public IP address (for example, your modem’s IP) goes to a specific local computer. This works fairly well, but you have to have access to the router’s configuration, and it is limited to a single destination machine.

If you are in an environment you don’t control, like your company, changing the router configuration is not an option, so how do we solve this problem? Using reverse tunneling over SSH.

For those new to the concept of tunneling, it is a trick you can do with virtually any SSH client so that all traffic from a local port goes to a remote server and port. It is also extremely useful for bypassing corporate firewall rules and for using your remote server as a SOCKS proxy (so you can access Hulu from a blocked country). You can find more details at http://www.revsys.com/writings/quicktips/ssh-tunnel.html, http://ocaoimh.ie/2008/02/13/how-to-use-ssh-as-a-proxy-server/ and Google.

But the main question here is the opposite: how do we allow anyone to access our local machines? The answer is a reverse tunnel. It is very simple to get it up and running, but it took me some good hours to find the correct approach, partly because I didn’t know what to search for in the beginning. The steps are:

  • Change a setting in your server’s sshd_config
  • Choose a remote port to connect to
  • Choose the local (destination) port to route the traffic
  • Open the reverse tunnel

Say I have a webserver running on my localhost at port 8080, and I want that when someone hits “example.com” on port 15000, they see my local webserver. In order to do that, first go to your remote server and add the following line to sshd_config (probably /etc/ssh/sshd_config):

GatewayPorts yes

This is strictly necessary, otherwise the tunnel will only work from the server to your machine (and not from any address to your machine). You can close the SSH session if you want.
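
One detail: sshd only reads sshd_config at startup, so after changing it you need to restart or reload the daemon on the server. The exact command varies by distribution; these are the common ones:

# Debian/Ubuntu
sudo service ssh restart

# CentOS/RHEL (or: sudo systemctl reload sshd on systemd-based systems)
sudo service sshd restart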

Now, from your local machine, open the tunnel:

ssh -N -R "*:15000:localhost:8080" username@example.com

type the SSH password, and that’s all. The “-N” argument just makes the ssh client hang around instead of opening a remote shell, “-R” is for the reverse tunnel itself, “*:” tells it to listen on all interfaces (strictly necessary, otherwise it will only listen on the loopback interface), “15000” is the port at example.com that users will connect to, “localhost” is your own local machine, and “8080” is the port on your local machine that will receive the traffic.
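
With the tunnel up, a quick way to confirm everything is wired correctly is to hit the public endpoint from any other machine (or even from the server itself):

# Should return the same response as http://localhost:8080 on your local machine
curl http://example.com:15000/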

This is very useful for development purposes, like when I had to test Amazon SNS using an HTTP endpoint.

Get the public / local IP of your EC2 instance via command line

While you can get the public and private IP addresses of your Amazon EC2 instance via the AWS Console, it can be extremely useful to query them from anywhere you can make an HTTP request, such as a shell script. The operation is really simple: just make a GET request to the following URLs from within your EC2 instance:

Local IP:

curl http://169.254.169.254/latest/meta-data/local-ipv4

Public IP:

curl http://169.254.169.254/latest/meta-data/public-ipv4

I often use this feature to pre-configure services and update configuration files, as in EC2 you get a new public IP address whenever an instance is stopped and started (unless you attach an Elastic IP).
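
As an illustration of that kind of use – the file path and placeholder token below are made up for the example – a bootstrap script can grab the address and inject it into a config template:

#!/bin/bash
# Fetch this instance's private IP from the metadata service
LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# Replace a placeholder token in a (hypothetical) configuration template
sed -i "s/__LOCAL_IP__/$LOCAL_IP/g" /etc/myapp/myapp.conf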


Fixing java.lang.OutOfMemoryError: PermGen space in Tomcat

If you have ever used Tomcat for development purposes for more than 10 minutes (especially within Eclipse), you have certainly encountered the infamous message “OutOfMemoryError: PermGen space”, after which the only solution is to restart Tomcat.

Despite the fact that you should be aware that such a message may indicate a deeper problem, there is a quick and dirty fix via JVM options. I came to it after browsing the Interwebs for a while, and your mileage may vary. The solution consists of setting the following JVM arguments:

-server
-XX:+DisableExplicitGC
-XX:MaxPermSize=256m
-XX:PermSize=256m
-XX:MaxNewSize=256m
-XX:NewSize=256m
-XX:+UseConcMarkSweepGC
-XX:+CMSClassUnloadingEnabled
-XX:+CMSPermGenSweepingEnabled

I have added them all for the sake of laziness, although the last three are said to be used with caution (especially the last one).

Setting up within Eclipse

Even in 2012 I still use the Sysdeo Tomcat Plugin because of its simplicity over Eclipse’s WTP servers. Open Window -> Preferences (Command + , if you are using a Mac) and go to Tomcat -> JVM Settings -> Append to JVM parameters.

Setting up Tomcat standalone

Another approach is to add the arguments directly to catalina.sh. Locate it in Tomcat’s bin directory and set the JAVA_OPTS variable, like this:

JAVA_OPTS="-XX:+DisableExplicitGC -XX:MaxPermSize=256m -XX:PermSize=256m
-XX:MaxNewSize=256m -XX:NewSize=256m -XX:+UseConcMarkSweepGC
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"
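
If you prefer not to touch catalina.sh itself, another option is to put the same variable in a bin/setenv.sh file, which catalina.sh sources automatically when present – the flags below are just the ones from above:

# $CATALINA_HOME/bin/setenv.sh
JAVA_OPTS="-XX:+DisableExplicitGC -XX:MaxPermSize=256m -XX:PermSize=256m \
-XX:MaxNewSize=256m -XX:NewSize=256m -XX:+UseConcMarkSweepGC \
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"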

Just restart Tomcat and you should be good to go.