Dropbox and SELinux

OK, so Dropbox isn't 100% Open Source, but I'm a pragmatic kinda guy and I do love Dropbox. However, Dropbox doesn't seem to like SELinux.

I know it's so tempting to reach for the “turn off SELinux” switch but wait, it's actually very simple to make SELinux allow Dropbox to work.

It turns out that Dropbox tries to do some naughty stuff that SELinux is there to protect us from – namely executing code from writable memory. This sort of thing is usually done by programs trying to do malicious things on the system, and happily SELinux protects us from it – but that protection prevents Dropbox from running.

How to Fix It

There is a nice and simple way to fix this and no, I don't mean disabling SELinux 😉

There is a boolean that you could flip to turn off this protection – namely allow_execstack:

sudo setsebool allow_execstack 1

However, this goes way too far, as you are now allowing any process to execute code from the stack, which isn't a good idea.
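
If you have already flipped it, you can check the boolean's state and turn the protection back on easily enough (getsebool and setsebool are part of the standard SELinux tools):

getsebool allow_execstack          # shows "allow_execstack --> on/off"
sudo setsebool allow_execstack 0   # re-enable the protection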

The best way is to tell SELinux that you just want Dropbox to be able to do this and nothing else. The way you do that is to label the executable file, in this case /usr/bin/dropbox, with the type unconfined_execmem_exec_t.

You could do this with a quick chcon, but that's not the best way, as the label wouldn't survive a filesystem relabel. The following two lines will fix Dropbox to work with SELinux:

sudo semanage fcontext -a -t unconfined_execmem_exec_t /usr/bin/dropbox
sudo restorecon -v /usr/bin/dropbox

Now if you take a look at the SELinux context of the file, you can see it's got the right label:

ls -lZ /usr/bin/dropbox
-rwxr-xr-x. root root system_u:object_r:unconfined_execmem_exec_t:s0 /usr/bin/dropbox

If you spend a little time understanding the basics of SELinux (file contexts and booleans), you will find it is quite straightforward to work on a system with SELinux turned on.

If you are interested in learning more about this stuff, check out Dan Walsh's blog.

OSG

Operating System Choice for Critical Systems

It NEVER ceases to amaze me that, when selecting an operating system for a critically important role, people still choose Windows. Now this isn't a rant about how Linux or BSD are better or more secure than Microsoft Windows. I think that's quite an easy argument to make, but one thing that is not up for debate is that Microsoft Windows is the most targeted operating system when it comes to malware.

So why, for the love of all things good in the world, do you choose the most targeted OS for your critical systems? Here are just three recent incidents/reports that prompted this rant:

1. The investigation into the recent Spanair crash noted that a critical ground system, designed to spot problems and alert people, was actually switched off as it was infected with malware.

http://www.technewsdaily.com/malware-implicated-in-fatal-spanair-crash-1078/

2. The latest worm currently doing the rounds, allegedly targeted at Iran's nuclear reactor. Iran has admitted that some of their systems are indeed infected with this malware. It's a nuclear reactor, for gawd's sake.

http://www.computerworld.com/s/article/9188147/Iran_admits_Stuxnet_worm_infected_PCs_at_nuclear_reactor

3. My favorite, though, was the recent announcement about an infection in a United States military network – their worst infection ever – caused by an infected USB drive:

“That code spread undetected on both classified and unclassified systems, establishing what amounted to a digital beachhead, from which data could be transferred to servers under foreign control.”

http://www.itpro.co.uk/626428/infected-usb-caused-biggest-us-military-breach-ever

For gawd's sake people, if it's a critical system, don't choose the most malware-targeted operating system. It makes no sense at all.

VMware Left Me

Was it me? I don't know. I was loyal, but they left me anyway – well, that's how it feels.

Long Time Fan
I've been a long-time (read: 2000/2001) fan of VMware – they were the first and, you could argue, still are the best in their space. I'm a Linux fan, have been for a while, and one of the reasons I liked VMware was because the software I bought from them (yes, I paid for Workstation and upgrades) was available for my OS of choice. What's more, they took the time to make sure that the windows matched the GTK2 look. This to me meant that they liked their Linux users; they gave a crap about us.

I was so disappointed when I moved my home server from VMware Server 1 to VMware Server 2, as the Linux client had gone. At least it had been replaced with a web interface, which seemed like a good idea – that way any operating system can manage the server. The interface came in for some criticism, but it did everything I needed for the most part and I could manage my home VM server while out and about.

Times change and VMware came out with their free version of ESX – namely ESXi. Now, while ESX had a decent web interface, ESXi did not. Your only choice of a graphical interface now meant you had to run Windows. So I stayed with Server 2.0.

Recently I became aware of “VMware Go”, which was a “new web interface for ESXi users”. Yay, I thought, good times! Alas no; when I went to log in I was presented with a message that said “Your browser must be at least Firefox 3 or higher, or IE v7 or v8 to use this site”. That's odd, I thought, as I am running 3.5.5. What I very quickly realised is that this wasn't to do with the browser, it was to do with the OS. I tried the site from my dual-boot laptop (the only place I have Windows left these days) and I was able to get in with Firefox 3.5.5 on Windows, but running the wizard prompts you to download components like the .NET framework and other such single-platform technology. How utterly disappointing.

End of the Road
What did we do, VMware? Why did you abandon us? Well anyway, I guess it's the end of the road then, old friend. Be happy.

I'm off to migrate my stuff to Xen or KVM. I'm not sure which yet; Xen has Amazon using it and Citrix seem committed to open source. In fact, Ian Pratt was on FLOSS Weekly earlier in the year, so they seem to have the right mindset. On the other hand, the Red Hat roadmap points to KVM.

Anyway, watch this space. I'm going to take my time deciding which to choose – I am on the rebound after all 🙂

OSG

Open Source Music on Hold

I have been working on a new project for work that I thought I would share with you. At work, our Music on Hold devices (the things that provide music when you are put on hold) have been going faulty regularly. The device we currently use is a Fortune 2000 MOH from Rocom, which retails for about £260. If you are considering one of these devices, please read on.

Faulty by Design
The Rocom devices seem to last about 12-18 months before going faulty. I suspect it's the flash cartridge, but being a proprietary design means it's not easily replaceable.

Requirements
Ideally the device will have no moving parts; we did away with the original devices (which were literally CD players) because they were unreliable and not remotely manageable.

All we really need it to do is:

  • play music on a loop
  • automatically start after a power interruption
  • be remotely manageable

Open source
The continued failure of device after device (we have about 70 of them across EMEA) got me thinking: there must be a better way. I looked at Shuttle PCs, but they failed the moving-parts criterion (well, there are ways, but it didn't seem a good fit). Then my mind went to a very small fanless PC that I bought a couple of years back from Aleutia, so I took a look at their website to see if it was viable. The original device has now become the Aleutia T1.

While the original device ran Puppy Linux, all the current ones run Ubuntu. Great; so the device is small, fanless, runs a very good open source operating system, and has a network port. So far so good.

Next I needed to work out if it would automatically start after a power outage. I dropped a quick email to the guys at Aleutia to see if this was possible and they very quickly responded to confirm that there is a BIOS setting for exactly this requirement. The final part was playing music on a loop. I was expecting this to be quite easy to achieve and I wasn't wrong.

The method of music playback I have gone for is MPD (Music Player Daemon), which is easily installable (it's in the Ubuntu repos). I quickly installed MPD and uploaded an MP3 to its music folder. Finally I added the MP3 to the playlist, set it to repeat, and I was in business. Within 30 minutes of unpacking the T1 I had it playing back music.
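
For anyone wanting to reproduce this, the whole thing boils down to a handful of commands. Here's a minimal sketch, assuming the stock Ubuntu mpd/mpc packages and their default music directory (moh.mp3 is a placeholder filename):

sudo apt-get install mpd mpc
sudo cp moh.mp3 /var/lib/mpd/music/   # default music_directory on Ubuntu
mpc update                            # rescan the music directory
mpc add moh.mp3                       # add the track to the current playlist
mpc repeat on                         # loop forever
mpc play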

I shut down the T1 and removed the power adaptor to test its ability to power on automatically. No sooner had I applied the power than the device booted up and, once it had, MPD started playing the music – WIN.

Final Steps
Now that I had a working device up and running, I needed to think about how it would be supported within our company. I guess other people wouldn't be happy with SSHing into it to control it (which is really very simple, actually). What I needed was a front end. Needless to say, there are many front ends written for MPD. I went with a very simple web front end called MPDPlayer – it's one of the many open source front ends listed on the MPD wiki.

I've done a little customisation of this and added a file upload button so that the whole process can be managed from the web interface.

Test Test Test
I'm now in the process of testing and I do seem to have come across a bug where playback stops after a number of days. I could just schedule a reboot of the device every night (see the sketch below), but I would prefer this to be a last resort. The MPD forums have given me some info on how to debug MPD, so I shall pursue that.
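
For reference, that last-resort nightly reboot would only be a one-line cron job. A sketch, as an /etc/crontab entry (the 3am schedule is just an example):

# reboot the box at 3am every day – last resort only
0 3 * * * root /sbin/shutdown -r now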

Conclusion
So what's the catch? Well, there doesn't seem to be one. This solution comes in nearly £100 cheaper than a Rocom and carries a three-year warranty rather than Rocom's 12-month one. Finally, as it uses a standard compact flash card, if it does go faulty we can very easily replace it.

Overall I'm really pleased with how easy it has been to “scratch my own itch” using existing open source projects. I intend to contribute my file upload button back to the MPDPlayer guys in true Open Source fashion. I'm also hoping that this experience will open my company's mind to using more open source solutions in the future.

As ever, I welcome your comments.

OSG

Downtime, DR and the Cloud

Some of you may have noticed that this site was down for a little while. It seems my hosting company were victims of a massive incursion by malicious hackers and, at the time of writing, my original server still hasn’t been restored after 24 hours downtime.

While you have to feel sorry for them and all the extra work they have been doing to rectify the issue, now is a good time to go back over those age-old questions. Do you have a DR plan? Are you backing up? Is your documentation up to date? Have you tested a restore? Luckily I was in the process of documenting my setup when this happened, so my pain hasn't been as great as I imagine some others are experiencing.

I think it's also worth mentioning that, as I had no ETA for when my sites would be restored (or even whether they could be restored by the provider), I moved everything onto Amazon's EC2 offering. This seems like an ideal platform for just such an occurrence: if you don't know how long your main site will be down, you can very quickly get servers back online, and then, when and if your original platform is ready, you can move back, having only paid for the hours/bandwidth you used.

If your on-line presence is important to you, and I can't think of many businesses to which this doesn't apply, I would encourage you to look at adding something like Amazon's cloud offering to your DR strategy – and don't forget to test. Remember, you only pay for the hours you use, and this is from as little as $0.10 an hour.

OSG

Wave Goodbye to Email

For some time now I have been of the opinion that email is broken. It worked well in its day, but now over 90% of email traversing the internet is spam. Sure, there are pretty good anti-spam and anti-virus systems, but I honestly think we are just postponing the inevitable. I have had this conversation with friends many times and mostly they disagree, but I honestly think we need something to replace email.

Google Wave

I've just watched the 80-minute talk about Google's Wave and I think they really could be onto something. It combines rich interaction with very social features and it's kind of an opt-in model, like Facebook or Twitter, where you have to add people to your system. This means no unsolicited waves.

They have been working on this for two years and it looks really good. Don't take my word for it; go and check out the video here.

Finally, and I have saved the best until last, they will be releasing it as open source, so you can set up your own Wave platform, and it has federation built right in so it will interoperate with other Wave platforms brilliantly.

Let's hope this finally kills off email – it had a good innings, but it's time for it to go now.

OSG

First Steps in the Cloud

I've been a cloud *client* for quite some time, firstly with Gmail and Google Docs, later with Dropbox and Amazon's S3 storage (via Jungledisk). I'm also a fan of virtualisation and, while listening to a recent FLOSS Weekly netcast with Ian Pratt, I found out that Amazon's EC2 (Elastic Compute Cloud) is indeed based on Xen. Having also had an interesting chat with one of the guys from Citrix recently, I decided it was time I took a look at Amazon's offering.

EC2 offers you the ability to “stand up” multiple servers almost instantly, configure and run them, and only ever pay for the number of hours they are up. A server instance starts at $0.10 an hour – this is for their “small Linux instance”, which has 1.7GB of RAM and 350GB of disk space. They also offer Windows instances, which are slightly more but still amazingly low priced. This makes it extremely cost effective to use for large proof-of-concept work or for full-time production. Anyway, let me walk you through my first steps in/on Amazon's cloud.

First of all you have to have an Amazon account; as I already had one, all I needed to do was “sign up” for the EC2 service (remember, you pay for what you use in server/hours). Two clicks later and I'm ready to go.

In my eagerness to get started I overlooked the “Getting Started” video on the front page and decided to see how far I could get without reading the documentation. If you want the short answer: I had my first box up and running in less than 5 minutes. For the more detailed version, read on.

There are a couple of steps to complete before you get your box up and running, and the interface holds your hand nicely through these. I'm impressed with the level of security that is set up right out of the box. The two steps you need to do (apart from choosing your instance) are both security related. Firstly, you need to select or create the security group – in other words, the firewall settings. There are suggested entries there already and customising it is very simple.

Secondly, you will need to generate a keypair that you will need to administer the boxes. Again, the wizard walks you through this step. Once those two steps are done and you have chosen your instance type, you click on create and after a minute or so you can see your first instance change its status to starting.

Cool, let's see the console then.

The first instance I chose to create was a Fedora box, so when I hit the “Console” button I was provided with details on how to connect to the instance. For now, you connect to the DNS name that Amazon gives you, which maps to a local IP address within Amazon's cloud. You can also rent “Elastic IP” addresses for $0.01 per hour, but I decided the funky DNS name and private IP were fine for my testing. So I SSH to the DNS name, referencing the file that contains my keypair. They provide the exact syntax that you need to use, but it's pretty straightforward. You are not prompted for a password, as you are using the (more secure) keypair. And that's it – you have a bash console on your box.
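
For the curious, the connection boils down to a single command. A sketch with placeholder names (the .pem file and hostname will be whatever Amazon gave you; the Fedora AMIs of this vintage log you in as root):

ssh -i ~/.ssh/my-keypair.pem root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com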

I yum-installed an Apache server, hit the page in my browser, and there was the default web page. I then went on to set up a WordPress install just as I would on a hosted server. Everything went to plan.
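
In case you want to replicate that first test, it really was just the following (run as root on the Fedora instance):

yum install httpd        # install Apache
service httpd start      # start the web server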

As my first hour approached its end I shut down the instance and went out. Upon my return I wanted to try a Windows host. Interestingly, the previous instance had disappeared. It seems that if you shut down an instance, after a certain period of time the disk space is reclaimed. If you want to keep instances around when they are shut down, you can do this by using Amazon's EBS (Elastic Block Store), which is $0.10 per GB per month.

Anyway, as I mentioned above, I decided to try a Windows box next. I selected the Server 2003 and SQL Server 2005 instance. This time the suggested firewall settings were as follows:

  • Remote Desktop (3389)
  • HTTP (80)
  • SQL Monitor (1434)

I accepted the defaults, but if I was going to use it “in production” I would close the SQL port. I clicked the button to fire up the instance and a minute or two later it changed its status to “running”. Hitting the console button this time brings up a box explaining how to connect to the server, namely via RDP. Again, security is there right out of the box because the local Administrator password is randomly set and then encrypted in the instance's log file. To get at this password you have to right-click on the instance in Amazon's control panel and select “decrypt password”. You are prompted to paste your key into a dialog box and a few seconds later your password is displayed.

Pointing your RDP client at the DNS name of the instance and using these credentials gets you logged onto your server – it's as easy as that. This would make testing things like large-scale Exchange setups, which involve many servers talking to each other, really easy, and you wouldn't have to stump up for the hardware required to do this in your own lab.

This (EC2) is just one of the services that Amazon offer. I've been very impressed with my first steps in the cloud; things couldn't have been any easier to get up and running and I'm pleased to see that security has been part of the core design. When you consider that the underlying technology is open source, I think it's something we (the Open Source community) can be proud of.

OSG

Update:
There is talk on the net about Amazon open sourcing its cloud tools – this would be great news and very beneficial for The Cloud as a whole. It's so nice to see people aren't trying to lock down, or lock you into, their offerings – let's hope it turns out to be true.

More Screenies

[screenshots missing – see the edit below]

Edit:
Sorry about the lost screenshots; this was due to a major incident at my previous hosting provider. At least I had the databases backed up :-/

Securing Remote Admin

Once you start running/administering your own server, live on the internet, you really need to think about securing access to it. In this post I'm going to look at the different ways you can achieve this, and the pros and cons of each.

Firstly, let's think about what it is we are trying to prevent, and what it is we are not trying to prevent. For this discussion I'm going to assume the server we are running is a simple webserver hosting a blog. We therefore want the world to be able to view our blog, but not to be able to log on to the server and perform admin tasks.

We will assume that the webserver listens on the standard web ports of 80 & 443, that it connects to a MySQL server on the same box (running on 3306), and that we administer the box via SSH, again on the standard port – port 22.

Firstly, let's take care of the low-hanging fruit. MySQL will, by default, bind to the server's live IP address and so expose port 3306 to the world (by default it won't allow remote logins, but it's still a port that is exposed which we don't need to have exposed). Opening the config file for MySQL (/etc/my.cnf), we can simply add the line “bind-address=127.0.0.1” and restart the service to make it bind to the loopback instead. (We will assume that you did this before you set up your blog software, so the blog software was configured to use 127.0.0.1 as well.)
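
To make that concrete, here is the relevant fragment. A sketch – the file lives at /etc/my.cnf on Red Hat-style systems (Debian/Ubuntu keep it at /etc/mysql/my.cnf), and the service name varies by distro:

# /etc/my.cnf
[mysqld]
bind-address = 127.0.0.1

sudo service mysqld restart   # the service is named 'mysql' on Debian/Ubuntu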

Good, so that's one port taken care of. This just leaves the web ports (80 & 443) and SSH (port 22) open to the world.

The next thing we need to decide is just how secure we want to be – remember, there is an inverse relationship between ease of use and great security; it's always a trade-off.

Relaxed Approach
We may decide that this is only our blog, that we have a regular off-site backup of the database, and so we aren't really too concerned about security. If this is the case we could probably stop right here, making sure that we:

  • use a strong root password
  • keep the webserver up to date with the latest patches
  • keep the blog software up to date with the latest patches

Pros – very little work to set up. You can admin from anywhere and it doesn't require additional software (SSH keys)
Cons – you are exposing port 22 to the world and could potentially be at risk from a zero-day attack or someone just guessing/brute-forcing your password

Restricting Access
We may decide that doing a little more to secure remote access is a worthy investment, but we don't want to go crazy. Here are some of the things we could do.

Limited range of IPs allowed – use a firewall (typically IPTables) to only allow a few IP addresses access to port 22. This assumes you will always connect from one of these IPs and never need to admin the box from anywhere else.
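
As a sketch, the IPTables rules for this could look something like the following (203.0.113.10 is a placeholder for your trusted address):

iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT   # allow SSH from the trusted IP
iptables -A INPUT -p tcp --dport 22 -j DROP                     # drop SSH from everyone else
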
Automated, proactive blocking of rogue IPs – if we need to make sure we can admin the box from anywhere (let's say we travel a lot and don't want to limit access down to a few IPs), we could use tools that watch for, and react to, brute-force password attempts.

The two programs I would recommend here are Fail2ban, which watches your logs and, if it sees a certain number of failed password attempts, adds a firewall rule to block the source IP; and DenyHosts, which does a similar job but, instead of adding a firewall rule, adds the source IP to /etc/hosts.deny. The nice thing about DenyHosts is that it gives you the ability to sync your entries with other people's. Let's face it, if someone is brute-forcing your box, they are almost certainly doing it to someone else's as well. There is nothing stopping you using both Fail2ban and DenyHosts at the same time for a belt-and-braces approach.
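
To give you a flavour of the Fail2ban side, here is a minimal jail definition. A sketch using option names from Fail2ban's stock jail.conf (logpath is /var/log/secure on Red Hat-style systems, /var/log/auth.log on Debian-style ones):

[ssh-iptables]
enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
logpath  = /var/log/secure
maxretry = 5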

Pros – this is much more secure; you have heavily restricted the number of users pounding on your box, while keeping the ability to admin it yourself
Cons – takes a little more work to set up and you could potentially lock your own IP address out if you are not careful

Higher Security Approach
So we have decided that the security of our box is very important, and we are going to put extra effort into securing it.

Limit to SSH keys only – we can disable the ability to log on using a username and password full stop, limiting it to SSH keys only. This means that even though the port may be open to the world, it's immune to password brute-forcing. You could combine this with the “Restricting Access” approach if you want to go the extra step.
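
The change itself is two lines in /etc/ssh/sshd_config, followed by a restart of the SSH daemon (the service name varies by distro):

# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no

sudo service sshd restart   # 'ssh' on Debian/Ubuntu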

Pros – you have eliminated the attacker's ability to brute-force/guess your password, drastically reducing your exposure to a breach
Cons – requires that you have the corresponding SSH key with you when you need to access your server

Paranoid Approach
No matter what, you just aren't comfortable with the admin port being visible; you want to retain the ability to remotely admin the box, but you don't even want people to be able to see or connect to the admin port. Sound impossible? Not so; we can use one of these two methods to make this happen.

First off is Port Knocking. This means that port 22 is totally firewalled off until the box receives a certain sequence of packets to a predefined set of ports – so maybe the sequence is tcp/6880, udp/3399, tcp/8881. If the box receives these packets in this sequence, it will open port 22 to the source address for a limited time – at which point you connect.
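
Assuming the popular knockd implementation, sending that sequence from its bundled knock client looks something like this (myserver.example.com is a placeholder):

knock -v myserver.example.com 6880 3399:udp 8881   # ports are tcp unless marked otherwise
ssh myserver.example.com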

The downside of this, for the ultra paranoid, is that if someone sniffs the network at the same time that you send the sequence, then they know your sequence and could replay it to enable visibility of port 22 for themselves. This is where the second approach comes in – SPA.

SPA, or Single Packet Authorization, evolved from port knocking. It addresses the weaknesses (capture and replay) and adds some functionality. In a nutshell, you send a single packet to your server with an encrypted payload that describes what you want to do. So, for example, you may say that you want to enable port 22 on server x and port 2222 on server y – this request is encrypted and sent to the server. The server receives the SPA packet and, if you have used the correct password to encrypt it, decrypts the contents and acts on them. It is immune to a replay attack, as the contents of the packet include a timestamp in the encrypted payload. I really like this approach and use it to gain access to my home network.

The software I use to do this is called fwknop, and more information can be found here.
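
As a rough sketch of what day-to-day access looks like (flags as documented for the fwknop client – check the man page for your version; the hostname is a placeholder):

fwknop -A tcp/22 -R -D myserver.example.com   # ask for tcp/22 to be opened to our external IP
ssh myserver.example.com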

Pros – you are as secure as is humanly possible; it doesn't get more secure than this unless you disconnect the box from the internet and bury it in a bunker
Cons – you need to have the client software installed on the machine you want to admin from, in order to send the SPA packet

Feedback
If you feel I have missed anything, made mistakes, or just want to let me know about your methods of securing remote access – please use the comments box to give me your feedback.

OSG

Next, we should probably think about installing a HIDS, but I will save that for a future post.

Open Source Disk Imaging

Disk imaging is used extensively within the IT departments of most companies. It enables them to quickly build desktops and laptops to a repeatable standard, and to back up critical devices in order to recover quickly from a hard disk failure. In the past this has required some fairly expensive and proprietary software. These images are generally stored on a server, but engineers can, and regularly do, carry a handful of them around with them.

The individual components required to do this with open source software do exist, but until recently no-one seemed to have tied them together with a nice, web-based front end. Enter FOG – a free, open source computer cloning system, which does exactly that. FOG is a Linux-based server that lets you back up and restore disk images for desktops/laptops and servers without the need even to carry a boot floppy/CD – as it uses PXE to boot from the network.

If setting this up sounds complicated, they do provide a VMware virtual appliance for you to download and use for your initial testing – however, due to the large storage and IO demands, the VMware appliance isn't recommended for large-scale production environments.

My initial tests are very encouraging, so if disk imaging is something you are interested in, I wholeheartedly recommend checking this project out – kudos to Chuck Syperski and Jian Zhang for creating it.

OSG