Open Source Music on Hold

I have been working on a new project for work that I thought I would share with you. At work, our Music on Hold devices (the things that play music when you are put on hold) have been going faulty regularly. The device we currently use is a Fortune 2000 MOH from Rocom, which retails for about £260. If you are considering one of these devices, please read on.

Faulty by Design
The Rocom devices seem to last about 12-18 months before going faulty. I suspect it's the flash cartridge, but being a proprietary design means it's not easily replaceable.

Requirements
Ideally the device will have no moving parts; we did away with the original devices (which were literally CD players) because they were unreliable and not remotely manageable.

All we really need it to do is:

  • play music on a loop
  • automatically start after a power interruption
  • be remotely manageable

Open source
The continued failure of device after device (we have about 70 of them across EMEA) got me thinking: there must be a better way. I looked at Shuttle PCs, but they failed the moving-parts criterion (well, there are ways around it, but it didn't seem a good fit). Then my mind went to a very small fanless PC that I bought a couple of years back from Aleutia, so I took a look at their website to see if it was viable. The original device has now become the Aleutia T1.

While the original device ran Puppy Linux, all the current ones run Ubuntu. Great, so the device is small, fanless, runs a very good open source operating system, and has a network port. So far so good.

Next I needed to work out if it would automatically start after a power outage. I dropped a quick email to the guys at Aleutia to see if this was possible, and they very quickly responded to confirm that there is a BIOS setting for exactly this requirement. The final part was playing music on a loop. I was expecting it to be quite easy to achieve, and I wasn't wrong.

The method of music playback I have gone for is MPD (Music Player Daemon), which is easily installable (it's in the Ubuntu repos). I quickly installed MPD and uploaded an MP3 to the music folder. Finally I added the MP3 to the playlist and set it to repeat, and I was in business. Within 30 minutes of unpacking the T1 I had it playing back music.
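
For anyone wanting to replicate this, the whole thing boils down to a handful of commands. Here's a rough sketch, assuming Ubuntu's default MPD music directory and the mpc command-line client (the MP3 filename is just an example):

    # Install MPD and the mpc command-line client
    sudo apt-get install mpd mpc
    # Drop the hold music into MPD's music directory
    sudo cp holdmusic.mp3 /var/lib/mpd/music/
    # Rescan the library, queue the track, then loop it forever
    mpc update
    mpc add holdmusic.mp3
    mpc repeat on
    mpc play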

I shut down the T1 and removed the power adaptor to test its ability to power on automatically. No sooner had I applied the power than the device booted up, and once it had booted, MPD started playing the music – WIN.

Final Steps
Now that I had a working device up and running, I needed to think about how it would be supported within our company. I guess other people wouldn't be happy with SSHing into it to control it (which is really very simple actually). What I needed was a front end. Needless to say, there are many front ends written for MPD. I went with a very simple web front end called MPDPlayer – it's one of the many open source front ends listed on the wiki.

I've done a little customisation of this and added a file upload button so that the whole process can be managed from the web interface.

Test Test Test
I'm now in the process of testing, and I do seem to have come across a bug where playback stops after a number of days. I could just schedule a reboot of the device every night, but I would prefer that to be a last resort. The MPD forums have given me some info on how to debug MPD, so I shall pursue that.
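
For anyone following along, the rough debugging approach is to run MPD in the foreground with verbose logging and wait for the failure, with a one-line cron job as the last-resort fallback. A sketch, assuming MPD's standard command-line options:

    # Debugging: stop the daemon, then run MPD in the foreground, verbosely
    sudo /etc/init.d/mpd stop
    sudo mpd --no-daemon --stdout --verbose /etc/mpd.conf

    # Last resort: root crontab entry to reboot at 4am every night
    0 4 * * * /sbin/shutdown -r now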

Conclusion
So what's the catch? Well, there doesn't seem to be one. This solution comes in nearly £100 cheaper than a Rocom and comes with a three-year warranty rather than Rocom's 12-month one. Finally, as it uses a standard CompactFlash card, if it does go faulty we can very easily replace it.

Overall I'm really pleased with how easy it has been to “scratch my own itch” using existing open source projects. I intend to contribute my file upload button back to the MPDPlayer guys in true open source fashion. I'm also hoping that this experience will open my company's mind to using more open source solutions in future.

As ever, I welcome your comments.

OSG

Server Move

A few days ago I moved the server again. If you remember, after my last hosting provider was broken into by malicious hackers and I had no ETA for my server being available, I moved OSG into Amazon's EC2 infrastructure.

The process was quite straightforward, and their clear pricing meant that I could be sure roughly how much it would cost. The only part I couldn't work out was the bandwidth costs. Anyway, I decided to leave OSG with Amazon for one month so that I could get an idea of the total cost of hosting a server instance with them.

It's been about a month now and the costs are in. The bandwidth costs were tiny (probably due to the very small amount of traffic that my site gets), so my calculations were spot on.

Does it compare?

How does it compare? Well, that's not a straightforward comparison, as my old server had 512MB of RAM and the smallest Amazon instance has 1.7GB. So while the Amazon instance is more expensive, when you compare it to a server with the same amount of RAM it's actually a very good price.

That said, I don't need all that extra RAM, so I have elected to move OSG back out of the Amazon cloud and back to a hosting provider.

Where did I go?

I have moved over to Linode – these guys seem to have good feedback on http://www.webhostingtalk.com/ and my experience with them has certainly been very good so far compared to my previous hosting provider.

I will no doubt do a mini write-up on that in a week or two, but please let me know if anything isn't working.

OSG

Downtime, DR and the Cloud

Some of you may have noticed that this site was down for a little while. It seems my hosting company were victims of a massive incursion by malicious hackers and, at the time of writing, my original server still hasn’t been restored after 24 hours downtime.

While you have to feel sorry for them and all the extra work they have been doing to rectify the issue, now is a good time to go back over those age-old questions: do you have a DR plan? Are you backing up? Is your documentation up to date? Have you tested a restore? Luckily I was in the process of documenting my setup when this happened, and so my pain hasn't been as great as I imagine some others are experiencing.

I think it's also worth mentioning that, as I had no ETA for when my sites would be restored (or even whether they could be restored by the provider), I moved everything into Amazon's EC2 offering. This seems like an ideal platform for just such an occurrence: if you don't know how long your main site will be down, you can very quickly get servers back online, and then, when and if your original platform is ready, you can move back, having only paid for the hours/bandwidth that you used.

If your on-line presence is important to you, and I can't think of many businesses to which this doesn't apply, I would encourage you to look at adding something like Amazon's cloud offering to your DR strategy – and don't forget to test. Remember, you only pay for the hours that you use, and this is from as little as $0.10 an hour.

OSG

Wave Goodbye to Email

For some time now I have been of the opinion that email is broken. It worked well in its day, but now over 90% of email traversing the internet is spam. Sure, there are pretty good anti-spam and anti-virus systems, but I honestly think we are just postponing the inevitable. I have had this conversation with friends many times, and mostly they disagree, but I honestly think we need something to replace email.

Google Wave

I've just watched the 80-minute talk about Google's Wave, and I think they really could be onto something. It combines rich interaction with very social features, and it's kind of an opt-in model, like Facebook or Twitter, where you have to add people to your system. This means no unsolicited waves.

They have been working on this for two years and it looks really good. Don't take my word for it, go and check out the video here.

Finally, and I have saved the best until last: they will be releasing it as open source so that you can set up your own Wave platform, and it has federation built right in, so it will interoperate with other Wave platforms brilliantly.

Let's hope this finally kills off email – it had a good innings, but it's time for it to go now.

OSG

Golden Rules of Distro Choice

I've been an open source operating system user for a good number of years now, and about 3 or 4 years ago, after extensive testing, I came up with my “three golden rules of OS choice”.

They were:

  • CentOS on a server
  • Fedora on a desktop
  • Ubuntu on a laptop

Two out of those three are obviously Red Hat based, so you may ask: why Ubuntu on the laptop? I love Fedora; they always seem to get the good stuff ahead of any other distro and always seem to be innovating in the areas that I'm interested in. That said, all that bling/goodness comes with a performance hit, and laptops are generally less well endowed. So I have always opted for Ubuntu on a laptop, as it feels lighter and they seem to pay a little more attention to performance tweaks.

So those were my rules, and whenever I broke them I always seemed to regret it. That said, with the advent of Fedora 9, I broke my own rules (which, as I say, usually never ends well) and put Fedora on the laptop – and it worked well.

I've been running this way ever since (and upgraded to F10 when it came out), so I've been thinking it's time to re-evaluate the golden rules. With this in mind, and with everyone saying how fast Ubuntu 9.04 is, I decided to revert to the original ruleset and get Ubuntu back on the laptop.

Things haven't gone well.

There are things that have come on a long way in Ubuntu – the installer is now much better (much more like anaconda 😉 ), the boot-up time and general performance are undoubtedly superb, and the theme is starting to look really good these days, although still brown. But, and here's the killer, I can't have my system locking up three to four times a night and needing me to hold my finger on the power button.

Intel Graphics

Yes, before you say it, I have Intel graphics, but that's hardly uncommon in a laptop. It's hard to understand how this got out the door in this state (see here).

So while I had high hopes, I just can't get past my system being unusable – that said, it couldn't ever stay on my laptop anyway, as it doesn't have whole-disk encryption. This is really important to me, especially as it's a laptop, and it has been available in the last two releases of Fedora. I know there is the ability to encrypt individual files now, but that's not what I need.

So overall, my instinct is that this (Ubuntu 9.04) is a great release if you don't have Intel graphics; if you do, it will be worth hanging back until that's resolved. It certainly felt much more sprightly than Fedora, but some of that will no doubt be down to the fact that it doesn't have the encryption overhead.

Here are my revised Golden Rules:

  • CentOS on a server
  • Fedora on a desktop
  • Fedora on a laptop

OSG

Additional note: I've been advised that turning off desktop effects may stop the lock-ups – I will give it a try and see if it works.

First Steps in the Cloud

I've been a cloud *client* for quite some time, firstly with Gmail and Google Docs, later with Dropbox and Amazon's S3 storage (via Jungledisk). I'm also a fan of virtualisation, and while listening to a recent FLOSS Weekly netcast with Ian Pratt, I found out that Amazon's EC2 (Elastic Compute Cloud) is indeed based on Xen. Having also had an interesting chat with one of the guys from Citrix recently, I decided it was time I took a look at Amazon's offering.

EC2 offers you the ability to “stand up” multiple servers almost instantly, configure and run them, and only ever pay for the number of hours they are up. A server instance starts at $0.10 an hour – this is for their “small Linux instance”, which has 1.7GB of RAM and 350GB of disk space. They also offer Windows instances, which are slightly more expensive but still amazingly low priced. This makes it extremely cost effective for large proof-of-concept work or for full-time production. Anyway, let me walk you through my first steps in Amazon's cloud.

First of all you have to have an Amazon account; as I already had one, all I needed to do was “sign up” for the EC2 service (remember, you pay for what you use in server hours). Two clicks later and I was ready to go.

In my eagerness to get started I overlooked the “Getting Started” video on the front page and decided to see how far I could get without reading the documentation. If you want the short answer: I had my first box up and running in less than 5 minutes. For the more detailed version, read on.

There are a couple of steps to complete before you get your box up and running, and the interface holds your hand nicely through these. I'm impressed with the level of security that is set up right out of the box. The two steps you need to complete (apart from choosing your instance) are both security related. Firstly, you need to select or create the security group – in other words, the firewall settings. There are suggested entries there already, and customising them is very simple.

Secondly, you need to generate a keypair that you will use to administer the boxes. Again, the wizard walks you through this step. Once those two steps are done and you have chosen your instance type, you click create, and after a minute or so you can see your first instance change its status to starting.

Cool, let's see the console then.

The first instance I chose to create was a Fedora box, so when I hit the “Console” button I was provided with details on how to connect to the instance. For now, you connect to the DNS name that Amazon give you, which maps to a local IP address within Amazon's cloud. You can also rent “Elastic IP” addresses for $0.01 per hour; I decided the funky DNS name and private IP were fine for my testing. So I SSHed to the DNS name, referencing the file that contains my keypair. They provide the exact syntax that you need to use, but it's pretty straightforward. You are not prompted for a password, as you are using the more secure keypairs. And that's it – you have a bash console on your box.
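
If you're curious, the connection is just standard SSH with the -i flag pointing at the keypair file you downloaded earlier – something like this (the hostname and key filename here are made up for illustration):

    # Log straight in as root using the keypair – no password prompt
    ssh -i my-ec2-keypair.pem root@ec2-67-202-0-0.compute-1.amazonaws.com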

I yum installed an Apache server, hit the page in my browser, and there was the default webpage. I then went on to set up a WordPress install just as I would on a hosted server. Everything went to plan.
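
For reference, the Apache part was literally a couple of commands on the Fedora instance – a sketch of roughly what I ran:

    # Install Apache, start it, and make it survive a reboot
    yum -y install httpd
    /etc/init.d/httpd start
    chkconfig httpd on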

As my first hour approached its end, I shut down the instance and went out. Upon my return I wanted to try a Windows host. Interestingly, the previous instance had disappeared. It seems that if you shut down an instance, after a certain period of time the disk space is reclaimed. If you want to keep instances around when they are shut down, you can do so by using Amazon's EBS (Elastic Block Store), which is $0.10 per GB per month.

Anyway, as I mentioned above, I decided to try a Windows box next. I selected the Server 2003 and SQL Server 2005 instance. This time the suggested firewall settings were as follows:

  • Remote Desktop (3389)
  • HTTP (80)
  • SQL Monitor (1434)

I accepted the defaults, but if I was going to use it “in production” I would close the SQL port. I clicked the button to fire up the instance, and a minute or two later it changed its status to “running”. Hitting the console button this time brings up a box explaining how to connect to the server, namely via RDP. Again, security is there right out of the box, because the local Administrator password is randomly set and then encrypted in the instance's log file. To get at this password, you have to right-click on the instance in Amazon's control panel and select “decrypt password”. You are prompted to paste your key into a dialog box, and a few seconds later your password is displayed.

Pointing your RDP client at the DNS name of the instance and using these credentials gets you logged onto your server – it's as easy as that. This would make testing things like large-scale Exchange setups, which involve many servers talking to each other, really easy, and you wouldn't have to stump up for the hardware required to do this in your own lab.

This (EC2) is just one of the services that Amazon offer. I've been very impressed with my first steps in the cloud; things couldn't have been any easier to get up and running, and I'm pleased to see that security has been part of the core design. When you consider that the underlying technology is open source, I think it's something we (the open source community) can be proud of.

OSG

Update:
There is talk on the net about Amazon open sourcing its cloud tools – this would be great news and very beneficial for The Cloud as a whole. It's so nice to see people aren't trying to lock down or lock you into their offerings – let's hope it turns out to be true.

More Screenies


Edit:
Sorry about the lost screenshots; this was due to a major incident at my previous hosting provider. At least I had the databases backed up :-/

Linux Epic Fail

When the phenomenon of the netbook first hit, there was little doubt it was disruptive; it really shook the marketplace up. The majority of netbooks took advantage of a Linux-based OS to keep the price as low as possible.

This was an amazing opportunity to really drastically change people's OS habits: if they could be shown that Linux is actually just as useful as the OS they (at least technically) have to pay for, it could well influence their future OS decisions for their desktops and laptops. It also meant that manufacturers were forced to write Linux drivers for their hardware, which can only be a good thing.

However, Microsoft were unlikely to take this lying down. I had hoped that if Linux could get enough momentum during those first months, then it would be unstoppable, but it seems that we have lost that battle. February's figures show that 96% of netbooks now ship with Windows on them.

This is a truly Epic Fail. I think we all need to take a good hard look at how and why such an opportunity was squandered; then, in the unlikely event that we ever have another opportunity like this, maybe, just maybe, the results could be different.

I'd be interested in your thoughts on this, so please leave me some feedback.

OSG

Gnome 3.0 – Make it pretty!

For the whole time I've been running Linux (circa Red Hat 6) I have always used the Gnome desktop. The reason I'm running it is no doubt because that's what came as the default in Red Hat at the time, and it has been the default on my OS ever since, as I now run Fedora.

Don't get me wrong, I have tried other desktops, but I always seem to end up coming back to Gnome. However, let's face it, it's not pretty. In fact it looks like something you could easily have been running 10 years ago. Sure, compiz-fusion makes it a little nicer, but the phrase about lipstick and pigs comes to mind.

So about a month ago I decided to give KDE a decent try and, to my utter surprise, I'm still using it as my default desktop a month later. You see, once you have a nice-looking desktop it's hard to go back to Gnome. Don't get me wrong, I don't think KDE is perfect, but at least it looks like it's from this decade.

Gnome recently spoke about their plans for Gnome 3.0. To be honest, while they are interesting and exciting, the main thing I want from Gnome is: please can you make it look nice? I mean really nice.

I realise this is a little shallow, but if you look at products like Apple's then you can see how important this stuff is. We spend hours in front of our computers these days, so let's make it as pleasant as possible, eh? I'm not advocating form over function, but we already have a very functional desktop; let's smarten it up before we try to start adding even more functionality, please?

OSG

Securing Remote Admin

Once you start running/administering your own server, live on the internet, you really need to think about securing access to it. In this post, I'm going to look at the different ways that you can achieve this and the pros and cons of each.

Firstly, let's think about what it is that we are trying to prevent, and what it is that we are not trying to prevent. For this discussion I'm going to assume the server we are running is a simple webserver hosting a blog. We therefore want the world to be able to view our blog, but not to be able to log on to the server and perform admin tasks.

We will assume that the webserver listens on the standard web ports of 80 & 443, that it connects to a MySQL server on the same box (running on 3306), and that we administer the box via SSH, again on the standard port – port 22.

Firstly, let's take care of the low-hanging fruit. MySQL will, by default, bind to the server's live IP address and so expose port 3306 to the world (by default it won't allow remote logins, but it's still an exposed port that we don't need to have exposed). Opening the config file for MySQL (/etc/my.cnf), we can simply add the line “bind-address=127.0.0.1” and restart the service to make it bind to the loopback instead. (We will assume that you did this before you set up your blog software, so the blog software was configured to use 127.0.0.1 as well.)
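
The relevant snippet looks like this (assuming the Red Hat style /etc/my.cnf layout – adjust the path for your distro):

    [mysqld]
    # Bind to loopback only – port 3306 is no longer visible to the world
    bind-address=127.0.0.1

A quick “netstat -tln | grep 3306” after restarting MySQL will confirm it is now listening on 127.0.0.1 only.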

Good, so that's one port taken care of. This just leaves the web ports (80 & 443) and SSH (port 22) open to the world.

The next thing we need to decide is just exactly how secure we want to be – remember, there is an inverse relationship between ease of use and great security; it's always a trade-off.

Relaxed Approach
We may decide that this is only our blog, that we have a regular off-site backup of the database, and so we aren't really too concerned about security. If this is the case we could probably stop right here, making sure that we:

  • use a strong root password
  • keep the webserver up to date with the latest patches
  • keep the blog software up to date with the latest patches

Pros – very little work to set up; can admin from anywhere; doesn't require additional software (SSH keys)
Cons – you are exposing port 22 to the world and could potentially be at risk from a zero-day attack or someone just guessing/brute-forcing your password

Restricting Access
We may decide that doing a little more to secure remote access is a worthy investment, but we don't want to go crazy. Here are some of the things we could do.

Limited range of IPs allowed – Use a firewall (typically iptables) to allow only a few IP addresses access to port 22. This assumes you will always connect from one of these IPs and never need to admin the box from anywhere else (see the sketch below).
Automated, proactive blocking of rogue IPs – If we need to make sure that we can admin the box from anywhere (let's say we travel a lot and don't want to limit access down to a few IPs), we could use tools that watch for, and react to, brute-force password attempts.
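
Here's a sketch of the first option with iptables (the addresses are placeholders for your own trusted IPs):

    # Allow SSH from two trusted addresses, drop it from everyone else
    iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -s 198.51.100.20 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP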

The two programs I would recommend here are Fail2ban, which watches your logs and, if it sees a certain number of failed password attempts, adds a firewall rule to block the source IP, and DenyHosts, which does a similar job but adds the source IP to /etc/hosts.deny instead of a firewall rule. The nice thing about DenyHosts is that it gives you the ability to sync your entries with other people's. Let's face it, if someone is brute-forcing your box, they are almost certainly doing it to someone else's as well. There is nothing stopping you using both Fail2ban and DenyHosts at the same time for a belt-and-braces approach.
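
To give you a feel for how little configuration Fail2ban needs, here's roughly what the SSH jail looks like in jail.conf on a Red Hat style box (the numbers are examples – tune them to taste):

    [ssh-iptables]
    enabled  = true
    filter   = sshd
    action   = iptables[name=SSH, port=22, protocol=tcp]
    logpath  = /var/log/secure
    # Ban an IP for an hour after five failed attempts
    maxretry = 5
    bantime  = 3600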

Pros – this is much more secure; you have heavily restricted the number of users pounding on your box while keeping the ability to admin it yourself
Cons – takes a little more work to set up, and you could potentially lock your own IP address out if you are not careful

Higher Security Approach
So we have decided that the security of our box is very important, and we are going to put extra effort into securing it.

Limit to SSH keys only – we can disable the ability to log on using a username and password full stop, limiting logons to SSH keys only. This means that even though the port may be open to the world, it's immune to password brute-forcing. You could combine this with the “Restricting Access” approach if you want to go the extra step.
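
This is just a couple of directives in /etc/ssh/sshd_config – but do test a key-based logon in a second terminal before restarting sshd, or you risk locking yourself out:

    # Keys only – no passwords accepted at all
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes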

Pros – you have eliminated the attacker's ability to brute-force/guess your password, drastically reducing your exposure to a breach
Cons – requires that you have the corresponding SSH key with you whenever you need to access your server

Paranoid Approach
No matter what, you just aren't comfortable with the admin port being visible; you want to retain the ability to remotely admin the box, but you don't even want people to be able to see or connect to the admin port. Sound impossible? Not so – we can use one of two methods to make this happen.

First off is port knocking. This means that port 22 is totally firewalled off until the box receives a certain sequence of packets on a predefined set of ports – so maybe the sequence is tcp/6880, udp/3399, tcp/8881. If the box receives these packets in this sequence, it will open port 22 to the source address for a limited time, at which point you connect.
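
I won't pick a winner here, but knockd is a popular implementation; its config for the example sequence above would look roughly like this (a sketch – check the man page before relying on it):

    [options]
        logfile = /var/log/knockd.log

    [openSSH]
        sequence      = 6880:tcp,3399:udp,8881:tcp
        seq_timeout   = 10
        tcpflags      = syn
        # Open port 22 to the knocker, then close it again 30 seconds later
        start_command = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        cmd_timeout   = 30
        stop_command  = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT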

The downside of this, for the ultra paranoid, is that if someone sniffs the network at the same time that you send the sequence, they then know your sequence and could replay it, enabling visibility of port 22 for themselves. This is where the second approach comes in – SPA.

SPA, or Single Packet Authorization, evolved from port knocking. It addresses the weaknesses (capture and replay) and adds some functionality. In a nutshell, you send a single packet to your server with an encrypted payload that describes what you want to do. So, for example, you may say that you want to enable port 22 on server x and port 2222 on server y – this request is encrypted and sent to the server. The server receives the SPA packet and, if you have used the correct password to encrypt it, decrypts the contents and acts on them. It is immune to a replay attack, as the contents of the packet include a timestamp in the encrypted payload. I really like this approach and use it to gain access to my home network.

The software I use to do this is called fwknop, and more information can be found here.
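
By way of example, asking the server to open SSH to you is a single command with the fwknop client (the hostname is a placeholder):

    # Send an SPA packet requesting access to tcp/22; -R resolves our
    # external IP so the server opens the port to the right address
    fwknop -A tcp/22 -R -D myserver.example.com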

Pros – you are as secure as is humanly possible; it doesn't get more secure than this unless you disconnect the box from the internet and bury it in a bunker
Cons – you need to have the client software installed on the machine you want to admin from, in order to send the SPA packet

Feedback
If you feel I have missed anything, made mistakes, or just want to let me know about your own methods of securing remote access, please use the comments box to give me your feedback.

OSG

Next we should probably think about installing a HIDS, but I will save that for a future post.

Another SSL Attack

A short time ago I mentioned the vulnerability in certificates that are signed with MD5. Well, I have just finished watching the presentation from Black Hat DC 2009 that details a different attack on SSL. It's a very simple attack, and the takeaway here is that you don't have to defeat SSL to defeat SSL!

Check out the presentation over at SecurityTube.net. Be sure to watch the video, not just read the slides – it makes a lot more sense with the audio. Official video link here.

Edit: there is a five-minute chat with the presenter on YouTube here.

OSG