Dropbox and SELinux

OK, so Dropbox isn't 100% Open Source, but I'm a pragmatic kinda guy and I do love Dropbox. However, Dropbox doesn't seem to like SELinux.

I know it's tempting to reach for the “turn off SELinux” switch, but wait – it's actually very simple to make SELinux allow Dropbox to work.

It turns out that Dropbox tries to do something naughty that SELinux is there to protect us from – namely executing code from writable memory. This is the sort of thing malicious programs do, and happily SELinux blocks it – but that also prevents Dropbox from running.

How to Fix It

There is a nice and simple way to fix this, and no, I don't mean disabling SELinux 😉

There is a boolean you could flip that turns off this protection globally – namely allow_execstack:

sudo setsebool allow_execstack 1
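If you want to see the current state of that boolean before touching it, getsebool will tell you (and note that setsebool changes made without -P don't survive a reboot):

getsebool allow_execstack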

However, this goes way too far, as you are now allowing any process to execute from the stack, which isn't a good idea.

The best way is to tell SELinux that you want Dropbox, and nothing else, to be able to do this. You do that by labelling the executable file, in this case /usr/bin/dropbox, with the execmem executable type.

You could do this with a quick chcon, but that isn't the best way to do it; the following two lines will make Dropbox work with SELinux:

sudo semanage fcontext -a -t unconfined_execmem_exec_t /usr/bin/dropbox
sudo restorecon -v /usr/bin/dropbox

Now, if you take a look at the SELinux context of the file, you can see it has the right label:

ls -lZ /usr/bin/dropbox
-rwxr-xr-x. root root system_u:object_r:execmem_exec_t:s0 /usr/bin/dropbox
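If you ever want to undo the change, removing the custom file context mapping and relabelling should (I believe) put things back the way they were:

sudo semanage fcontext -d /usr/bin/dropbox
sudo restorecon -v /usr/bin/dropbox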

If you spend a little time understanding the basics of SELinux (file contexts and booleans), you will find it is quite straightforward to work on a system with SELinux turned on.

If you are interested in learning more about this stuff, check out Dan Walsh's blog.

OSG

Operating System Choice for Critical Systems

It NEVER ceases to amaze me that, when selecting an operating system for a critically important role, people still choose Windows. Now, this isn't a rant about how Linux or BSD are better or more secure than Microsoft Windows. I think that's quite an easy argument to make, but one thing that is not up for debate is that Microsoft Windows is the most targeted operating system when it comes to malware.

So why, for the love of all things good in the world, would you choose the most targeted OS for your critical systems? Here are just three recent incidents/reports that prompted this rant.

1. The investigation into the recent Spanair crash noted that a critical ground system, designed to spot problems and alert people, was actually switched off because it was infected with malware.

http://www.technewsdaily.com/malware-implicated-in-fatal-spanair-crash-1078/

2. The latest worm currently doing the rounds (Stuxnet), allegedly targeted at Iran's nuclear reactor. Iran has admitted that some of their systems are indeed infected with this malware. It's a nuclear reactor, for gawd sake.

http://www.computerworld.com/s/article/9188147/Iran_admits_Stuxnet_worm_infected_PCs_at_nuclear_reactor

3. My favorite, though, was the recent announcement about an infection in a United States military network – their worst infection ever – which was caused by an infected USB drive.

“That code spread undetected on both classified and unclassified systems, establishing what amounted to a digital beachhead, from which data could be transferred to servers under foreign control.”

http://www.itpro.co.uk/626428/infected-usb-caused-biggest-us-military-breach-ever

For gawd sake, people, if it's a critical system, don't choose the most malware-targeted operating system. It makes no sense at all.

Give Google a Break

Google are developing a new operating system, aimed squarely at the netbook market. The ethos behind it, as with most things at Google in the last 12 months, is speed – they want it to take no more than 7 seconds to boot.

Once logged in, you will only have access to a web browser – Google's Chrome browser, as you might expect. There will be no desktop or other apps; everything will be done from the browser. They are going to build in functionality for working offline, for when you are not connected to the net.

Many people, even in the Linux world, seem to be opposed to this, but I can only see it as a good thing. Under the hood it's based on Linux – Google have said they have been working with Ubuntu in this respect. Google have stated that Chrome OS will be Open Source and have released the current dev version on Chromium.org. From my point of view, I think it's going to be good for the Linux platform; the improvements in boot speed and hardware drivers alone can only be good.

I really don't know why Google seem to face so much opposition. I understand people's concerns about a company that knows so much about its users, but they are the only company with a “don't be evil” motto. What's more, Google are also a very transparent company; the information that they hold on you can easily be found and deleted if you so wish. For example, if you want to view or delete your web history, just go here and do so.

I do wonder how many people know about the Data Liberation Front, a team of Google engineers who work solely on making sure that you can get your data into or out of as many Google products as possible, as simply as possible.

I really do feel that Google are a friend of open source. Their Android phone OS is Open Source, and while I know there was some concern over their reaction to the CyanogenMod project, when you read into it you can understand their point of view – plus they worked with the Cyanogen developer to come up with a workaround.

Also, let's not forget the Google Summer of Code, a great contribution they make to Open Source each year. I'm sure it's not entirely altruistic, but nevertheless it is a very valuable contribution.

Recently Google caused some more negative ripples with their acquisition of the EtherPad project. I think anyone who has tried both Wave and EtherPad will understand why Google wanted it: EtherPad's real-time document editing is much better than the current Google Wave client's. So the EtherPad team have been pulled off EtherPad and put to work on Wave. The controversy was not so much about this, but about the fact that they closed EtherPad, a product that many people use and find invaluable, giving people about a month's notice to transition away from it. The thing I will say is that as soon as they became aware of the community's concern, they re-examined the decision and re-opened EtherPad – in a matter of days. They then said, in a very open way, “what were we thinking?”.

UPDATE: They have also released the source code for EtherPad under the Apache Licence.

This brings me on to Google Wave. I know that people who have been able to try this out are not that overwhelmed by it. What I will say is that it's very early days in this product's development. I would also say that Wave is really about the underlying protocol that lets you collaborate on document editing; the current Wave client is just the first implementation of a client – there will be others. In other words, think of Wave as SMTP and the current client as Outlook Express. There will be better clients.

My main point about Wave, though, is how Google have gone about this. They said from the outset that they wanted to create an open protocol, just like SMTP. They built federation in and designed it to be extensible, so that people can develop their own plugins. This shows that they are a company that just seems to get it; they understand why openness is important.

So what's the point of this article? Well, what I'm really saying is: give Google a break. Yes, they have a lot of information about us, and it's right to be concerned, but their every action to date seems to have been honorable. Let's save the paranoia for companies that treat us and our data appallingly on a daily basis.

I'd love to hear your opinion on this subject – please leave a comment or use the contact form.

OSG

Downtime, DR and the Cloud

Some of you may have noticed that this site was down for a little while. It seems my hosting company were victims of a massive incursion by malicious hackers and, at the time of writing, my original server still hasn't been restored after 24 hours of downtime.

While you have to feel sorry for them and all the extra work they have been doing to rectify the issue, now is a good time to go back over those age-old questions: do you have a DR plan? Are you backing up? Is your documentation up to date? Have you tested a restore? Luckily, I was in the process of documenting my setup when this happened, so my pain hasn't been as great as I imagine some others are experiencing.

I think it's also worth mentioning that, as I had no ETA for when my sites would be restored (or even whether the provider could restore them), I moved everything onto Amazon's EC2 offering. This seems like an ideal platform for just such an occurrence: if you don't know how long your main site will be down, you can very quickly get servers back online, and then, if and when your original platform is ready, you can move back, having only paid for the hours/bandwidth you actually used.

If your online presence is important to you – and I can't think of many businesses to which this doesn't apply – I would encourage you to look at adding something like Amazon's cloud offering to your DR strategy. And don't forget to test; remember, you only pay for the hours that you use, from as little as $0.10 an hour.

OSG

First Steps in the Cloud

I've been a cloud *client* for quite some time, firstly with Gmail and Google Docs, later with Dropbox and Amazon's S3 storage (via Jungle Disk). I'm also a fan of virtualisation and, while listening to a recent FLOSS Weekly netcast with Ian Pratt, I found out that Amazon's EC2 (Elastic Compute Cloud) is indeed based on Xen. Having also had an interesting chat with one of the guys from Citrix recently, I decided it was time I took a look at Amazon's offering.

EC2 offers you the ability to “stand up” multiple servers almost instantly, configure and run them, and only ever pay for the number of hours they are up. A server instance starts at $0.10 an hour – this is for their “small” Linux instance, which has 1.7 GB of RAM and 350 GB of disk space. They also offer Windows instances, which cost slightly more but are still amazingly low priced. This makes it extremely cost effective for large proof-of-concept work or for full-time production. Anyway, let me walk you through my first steps in Amazon's cloud.

First of all, you need an Amazon account; as I already had one, all I needed to do was “sign up” for the EC2 service (remember, you pay for what you use in server hours). Two clicks later and I'm ready to go.

In my eagerness to get started, I overlooked the “Getting Started” video on the front page and decided to see how far I could get without reading the documentation. If you want the short answer: I had my first box up and running in less than 5 minutes. For the more detailed version, read on.

There are a couple of steps to complete before you get your box up and running, and the interface holds your hand nicely through these. I'm impressed with the level of security that is set up right out of the box. The two steps you need to complete (apart from choosing your instance) are both security related. Firstly, you need to select or create a security group – in other words, the firewall settings. There are suggested entries there already, and customising them is very simple.

Secondly, you will need to generate a keypair, which you will use to administer the boxes. Again, the wizard walks you through this step. Once those two steps are done and you have chosen your instance type, you click on create, and after a minute or so you can see your first instance change its status to starting.

Cool, let's see the console then.

The first instance I chose to create was a Fedora box, so when I hit the “Console” button I was provided with details on how to connect to the instance. For now, you connect to the DNS name that Amazon gives you, which maps to a local IP address within Amazon's cloud. You can also rent “Elastic IP” addresses for $0.01 per hour, but I decided the funky DNS name and private IP were fine for my testing. So I SSH to the DNS name, referencing the file that contains the keypair. They provide the exact syntax you need to use, but it's pretty straightforward. You are not prompted for a password, as you are using the more secure keypairs. And that's it – you have a bash console on your box.
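For reference, the connection ends up looking something like the line below (the key file name and hostname here are made-up placeholders – use the ones Amazon gives you):

ssh -i ~/my-ec2-keypair.pem root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com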

I yum-installed an Apache server, hit the page in my browser, and there was the default web page. I then went on to set up a WordPress install just as I would on a hosted server. Everything went to plan.
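In case it helps, installing and starting Apache on a Fedora box like this is just a couple of commands (run them as root, or prefix with sudo):

yum install httpd
service httpd start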

As my first hour approached its end, I shut down the instance and went out. Upon my return I wanted to try a Windows host. Interestingly, the previous instance had disappeared: it seems that if an instance is shut down for a certain period of time, its disk space is reclaimed. If you want to keep instances around when they are shut down, you can do so by using Amazon's EBS (Elastic Block Store), which costs $0.10 per GB per month.

Anyway, as I mentioned above, I decided to try a Windows box next. I selected the Server 2003 and SQL Server 2005 instance. This time the suggested firewall settings were as follows:

  • Remote Desktop (3389)
  • HTTP (80)
  • SQL Monitor (1434)

I accepted the defaults, but if I were going to use it “in production” I would close the SQL port. I clicked the button to fire up the instance, and a minute or two later it changed its status to “running”. Hitting the console button this time brings up a box explaining how to connect to the server, namely via RDP. Again, security is there right out of the box, because the local Administrator password is randomly set and then stored encrypted in the instance's log file. To get at this password you have to right-click on the instance in Amazon's control panel and select “decrypt password”. You are prompted to paste your key into a dialog box, and a few seconds later your password is displayed.

Pointing your RDP client at the DNS name of the instance and using these credentials gets you logged on to your server – it's as easy as that. This would make testing things like large-scale Exchange setups, which involve many servers talking to each other, really easy, and you wouldn't have to stump up for the hardware required to do this in your own lab.

This (EC2) is just one of the services that Amazon offer. I've been very impressed with my first steps in the cloud: things couldn't have been any easier to get up and running, and I'm pleased to see that security has been part of the core design. When you consider that the underlying technology is Open Source, I think it's something we (the Open Source community) can be proud of.

OSG

Update:
There is talk on the net about Amazon open-sourcing its cloud tools – this would be great news and very beneficial for the cloud as a whole. It's so nice to see people aren't trying to lock down, or lock you into, their offerings – let's hope it turns out to be true.

More Screenies


Edit:
Sorry about the lost screenshots; this was due to a major incident at my previous hosting provider. At least I had the databases backed up :-/

Securing Remote Admin

Once you start running/administering your own server, live on the internet, you really need to think about securing access to it. In this post, I'm going to look at the different ways you can achieve this and the pros and cons of each.

Firstly, let's think about what it is that we are trying to prevent and what it is that we are not trying to prevent. For this discussion I'm going to assume the server we are running is a simple web server hosting a blog. We therefore want the world to be able to view our blog, but not to be able to log on to the server and perform admin tasks.

We will assume that the web server listens on the standard web ports of 80 & 443, that it connects to a MySQL server on the same box (running on 3306), and that we administer the box via SSH, again on the standard port – port 22.

Firstly, let's take care of the low-hanging fruit. MySQL will, by default, bind to the server's live IP address and so expose port 3306 to the world (by default it won't allow remote logins, but it's still a port exposed that we don't need to have exposed). Opening the config file for MySQL (/etc/my.cnf), we can simply add the line “bind-address=127.0.0.1” and restart the service to make it bind to the loopback interface instead. (We will assume that you did this before you set up your blog software, so the blog software was configured to use 127.0.0.1 as well.)
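The relevant bit of /etc/my.cnf ends up looking something like this (the rest of the file is left alone); the restart command assumes a Red Hat style init script:

[mysqld]
bind-address = 127.0.0.1

service mysqld restart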

Good, so that's one port taken care of. This just leaves the web ports (80 & 443) and SSH (port 22) open to the world.

The next thing we need to decide is exactly how secure we want to be – remember, there is an inverse relationship between ease of use and security; it's always a trade-off.

Relaxed Approach
We may decide that this is only our blog, that we have a regular off-site backup of the database, and so we aren't really too concerned about security. If this is the case, we could probably stop right here, making sure that we:

  • Use a strong root password
  • Keep the web server up to date with the latest patches
  • Keep the blog software up to date with the latest patches

Pros – very little work to set up; you can admin from anywhere and it doesn't require additional software (SSH keys).
Cons – you are exposing port 22 to the world and could potentially be at risk from a zero-day attack or someone simply guessing/brute-forcing your password.

Restricting Access
We may decide that doing a little more to secure remote access is a worthy investment, but we don't want to go crazy. Here are some of the things we could do.

Limited range of allowed IPs – use a firewall (iptables, typically) to allow only a few IP addresses access to port 22 (there is a quick sketch of the rules below). This assumes you will always connect from one of these IPs and never need to admin the box from anywhere else.
Automated, proactive blocking of rogue IPs – if we need to be able to admin the box from anywhere (let's say we travel a lot and don't want to limit access down to a few IPs), we can use tools that watch for, and react to, brute-force password attempts.
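Assuming iptables, the “limited range of IPs” approach boils down to a couple of rules like these (1.2.3.4 is a placeholder for your trusted address, and you would normally save the rules so they persist across reboots):

iptables -A INPUT -p tcp -s 1.2.3.4 --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP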

The two programs I would recommend here are Fail2ban, which watches your logs and, if it sees a certain number of failed password attempts, adds a firewall rule to block the source IP; and DenyHosts, which does a similar job but, instead of adding a firewall rule, adds the source IP to /etc/hosts.deny. The nice thing about DenyHosts is that it gives you the ability to sync your entries with other people's. Let's face it, if someone is brute-forcing your box, they are almost certainly doing it to someone else's as well. There is nothing stopping you using both Fail2ban and DenyHosts at the same time for a belt-and-braces approach.
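To give a feel for the Fail2ban side, an SSH jail only needs a handful of lines – the values below are just illustrative, and the log path assumes a Red Hat style system (Debian/Ubuntu boxes log to /var/log/auth.log instead):

[ssh-iptables]
enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
logpath  = /var/log/secure
maxretry = 5
bantime  = 3600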

Pros – this is much more secure; you have heavily restricted the number of users pounding on your box while retaining the ability to admin it yourself.
Cons – it takes a little more work to set up, and you could potentially lock your own IP address out if you are not careful.

Higher Security Approach
So we have decided that the security of our box is very important, and we are going to put extra effort into securing it.

Limit to SSH keys only – we can disable the ability to log on using a username and password full stop, limiting it to SSH keys only. This means that even though the port may be open to the world, it is immune to password brute forcing. You could combine this with the “Restricting Access” approach if you want to go the extra step.
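In practice this comes down to a couple of directives in /etc/ssh/sshd_config, followed by a restart of the SSH daemon (service sshd restart on a Red Hat style box) – and do make sure your key already works before you do this, or you will lock yourself out:

PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes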

Pros – you have eliminated the attacker's ability to brute-force/guess your password, drastically reducing your exposure to a breach.
Cons – requires that you have the corresponding SSH key with you whenever you need to access your server.

Paranoid Approach
No matter what, you just aren't comfortable with the admin port being visible; you want to retain the ability to remotely admin the box, but you don't even want people to be able to see or connect to the admin port. Sound impossible? Not so – we can use one of these two methods to make this happen.

First off is port knocking. This means that port 22 is totally firewalled off until the box receives a certain sequence of packets on a predefined set of ports – so maybe the sequence is tcp/6880, udp/3399, tcp/8881. If the box receives these packets in this sequence, it will open port 22 to the source address for a limited time – at which point you connect.
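One popular way to implement this (not the only one – just an example) is the knockd daemon; a config for the sequence described above might look roughly like this:

[openSSH]
    sequence    = 6880:tcp,3399:udp,8881:tcp
    seq_timeout = 15
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT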

The downside of this for the ultra-paranoid is that if someone sniffs the network at the same time that you send the sequence, they then know your sequence and could replay it, opening port 22 for themselves. This is where the second approach comes in – SPA.

SPA, or Single Packet Authorization, evolved from port knocking. It addresses the weaknesses (capture and replay) and adds some functionality. In a nutshell, you send a single packet to your server with an encrypted payload that describes what you want to do. So, for example, you may say that you want to enable port 22 on server x and port 2222 on server y – this request is encrypted and sent to the server. The server receives the SPA packet and, if you have used the correct password to encrypt it, decrypts the contents and acts on them. It is immune to a replay attack, as the contents of the packet include a timestamp within the encrypted payload. I really like this approach and use it to gain access to my home network.

The software I use to do this is called fwknop, and more information can be found here.
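On the client side, sending the SPA packet ends up being a one-liner along these lines (the allowed source IP and hostname here are placeholders):

fwknop -A tcp/22 -a 1.2.3.4 -D myserver.example.com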

Pros – you are about as secure as is humanly possible; it doesn't get much more secure than this unless you disconnect the box from the internet and bury it in a bunker.
Cons – you need to have the client software installed on the machine you want to admin from, in order to send the SPA packet.

Feedback
If you feel I have missed anything, made mistakes, or just want to let me know about your own methods of securing remote access, please use the comments box to give me your feedback.

OSG

Next we should probably think about installing a HIDS, but I will save that for a future post.

Another SSL Attack

A short time ago I mentioned the vulnerability in certificates that are signed with MD5. Well, I have just finished watching the presentation from Black Hat DC 2009 that details a different attack on SSL. It's a very simple attack, and the takeaway here is that you don't have to defeat SSL to defeat SSL!

Check out the presentation over at SecurityTube.net. Be sure to watch the video, not just read the slides – it makes a lot more sense with the audio. Official video link here.

Edit: There is a five-minute chat with the presenter on YouTube here.

OSG

SSL Certificate Vulnerability

This is huge, make no mistake. To my knowledge, there has never been an exploit against PKI this big. PKI is not a perfect system by a long way, but up until now, if you were careful, you could have a reasonable expectation of your HTTPS connection being secure.

The latest MITM attack, disclosed at 25C3, has changed that; now you would have to be very careful indeed to have an expectation of privacy/confidentiality. Make no mistake, a large portion of the blame lies at the feet of those certificate providers who are still using MD5 hashes instead of SHA. The MD5 flaw/vulnerability (the increased likelihood of collisions) has been known for a long time – in fact, Schneier's post makes it plain that attacks against MD5 were no longer theoretical, and that was in 2005.

The thing that, to me, makes this worse is that it's not just smaller certificate authorities that are still using MD5 – Thawte and RSA Data Security are two of the biggest providers of certs and they still use MD5 (according to the presentation).
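If you are curious what hash a particular site's certificate is signed with, openssl will tell you (example.com here is just a placeholder):

echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -text | grep 'Signature Algorithm'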

One thing that did surprise me is that the CRL used to check for revoked certificates is obtained from a URL within the certificate itself – so if you are spoofing a cert, you could theoretically put your own spoofed CRL location in as well. That's a pretty large hole from where I'm sitting.

A detailed explanation of this exploit/vulnerability is available here, and their slides are here.

OSG

Additional Note

It's worth considering this post, which points out that not all CAs use an incrementing serial number, so not all are vulnerable to this attack. It's a valid point, but it only takes one vulnerable CA for this to work, and while we do need to stop using consecutive serial numbers, I think we also need to stop using MD5, for gawd sake.

Additional Note 2

SSL Blacklist (a Firefox extension) has been updated to check for certs that use MD5 as their signature algorithm (this doesn't mean they are bad per se – see the note above). The extension is available here.

New book arrives

Like any true geek, I'm always elated when I get a package from Amazon, and yesterday was no different. My latest book arrived on the doorstep – Fyodor's Nmap Network Scanning.

Some may say that the info for this tool is already available on the net, but to be honest my decision to buy this book was, in part, so that Fyodor would get some money back for the excellent tool that he has created and regularly updates. I found out that the book had gone into print when I heard his talk at Defcon and decided there and then that I must have it.

It's both interesting and very encouraging to read that he, as an open source author, chose to use open-source tools to write the book rather than bowing to the pressure to use proprietary software – kudos for that, dude. Now to find some time to read it 🙂

OSG

Privacy through Add-ons – LeetKey

While at work the other day, I wanted to send a friend an email but I didn’t want it going through the corporate systems. It used to be that just using webmail was generally enough to give you this protection (using corporate email is obviously out of the question), but in these days of increasing surveillance and paranoia over Data Leakage I know that every packet is being inspected at the gateway for certain text patterns.

I could have used a webmail provider that allows HTTPS connections, I guess, such as Gmail (making sure to check the certificate), but I wanted more than that.

Now, there are plenty of solutions if you are happy installing encryption software on your computer, but I wanted something that wouldn't alert people looking at my add/remove programs list. For example, I could use FireGPG, but that needs you to have GPG installed along with copies of your public and private keys. Freenigma (under new ownership, BTW) is similar in that it requires public/private keys to have been set up in advance, and it didn't quite fit what I needed.

Some may suggest Hushmail, but things like this are nearly always blocked at the proxy when you work at a large company.

All I wanted was a little quick (and strong) symmetric encryption, so I could simply text the password to my friend. I briefly considered coding something, but that is way beyond my skill level, and I realised that if I wanted it, surely I'm not the only one who did. I continued my search and eventually found the solution in a not particularly obvious Firefox add-on – LeetKey.

LeetKey lets you select some text on a web page and convert it into (and back from) Elite Speak, but it doesn't end there. You can also do other conversions/transforms with it – such as Morse code, Base64, etc. It also lets you do 128-bit AES encryption/decryption! Fantastic – that is exactly what I was looking for. So now, with nothing more than a Firefox add-on, I can send private emails to people on the fly.
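As an aside, if you ever want the same sort of quick password-based AES from a shell rather than the browser, openssl can do it – note this is just an illustration; the output format is openssl's own and is not interchangeable with LeetKey's:

echo 'my secret message' | openssl enc -aes-128-cbc -a -salt -pass pass:monkey

(To decrypt, pipe the result back through the same command with -d added.)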

Why not install it and use the AES Decrypt function, with the password “monkey”, to decrypt the following text:

Tez4dx4BITAhITAhLVHp2QXdITEzIcy+ffJYvkHEzectITAhXdBaQlGVbROAZ50XE4ZVLcO6c7/R
fRt4a/LfsJW2ll0ttU0hMTIhsG4j06o7rMxFJSX9oy1khtpfhsuZ4yWbWWBb/rnTLa8hNDUhZemZ
btKlJEZoyXJ8vpItD+kG6PzAVymrPfk+7WtYgC0UWOk5+u4hMTIhe2LUuGNhwKQYLVXhyg8Yj8Sf
A0quyQOnfiQt7RxTfLKx0kRUQsenITkhuq0hMTIhLXWRdVwoszcvTKW6eHZKITEzIVctuo4I5Lc0
J03k2dR1zmIHTC3ALuKJn5KIlPiZVSohMTAhjAavLbBCp7hwJhhjxKQW2UWSKWQtVeKhicPd4bIv
lq5RkSE5IR/aLViFkhgjoosFhtnqNjjGPQgtLBw7ITQ1IXH7PJbPNcT5f/U49S2nD9t1z194Mtd9
uG2m4PkrLf7bBPBtvi8BP7Te2LKx6LotFRtQw6NmGdXfa+0uU0udwy3ubptSITkhBtyWHfQ3ciO2
yW0ta9VHnFdUenKSDpl4r5InIy0hMzMhT1839g7XHb2HVUr+e2UPLb60RA51buqYf/5j1xQhMzMh
ziE5IS06RyDjd9hNbV7sfVFN6WabLfgbc9OaJJjAY5qcatXe1nUtcGLnOzcxnBNUa3iJi7tBVi33
2SA1tI6w0M1VnwL1aCT7LepdnUHXvI1RTZad9mT7bkstNnSfOK3+cz8eHSExMSF78snObi1VXqiq
hHHm0+mfQBv+EP/GLSUo8HxR1Ut4Ht0hOSHDE90C4C1MZSExMyE+OcOpqjfxIoCiITEwIao4Lf+R
gCoSKoqK9mlMKzjndWMt7ei6I/dN47eLbRmlpGiMZy23vjzLJFPnYv1APRF9ITAh8CExMyEtHY8D
h6MRITE2MCH9YoyLv8DoQkYtaPCiFhYrqmhdwq2UKowXFC2+sF/VGCxzLI9vl6ObITEzIY/9Lf6I
ITE2MCGE7uZvAiExMSEmcgY5uNj3LUk4w02JRyExMyHvE/iVjbSQVx8tlZzqJwTe0KHaasoCDtFZ
ry1kITAhiUWxBa5S9GerK3PNMaktOP4hMTEh7fyzB00TSkm7iyExNjAhUIAtQMwPZDDlXUpxnh+U
ho6ZwC1ahoTpSPYcUvVWaJ0hNDUhq6aaLfBtLrksE/f6KeAuifjXHZgtMQhDLMlH8tvDbF68SxXr
wi27ZEnYu4cuKwWusUOHgknTLRwbbGFG8iDjdNDfMDfJkEItq+vW2iNkITMzITuvQv1+kRDhOy0+
8GnQFUfcyGH7oiExMCHv2jgdLc2ZcRbC6XBbxDIvXyf/twMtrzyreZPg+ONidy6l2hH8zS1S++lY
sl7n4G5q7ViRxyk+LWohMzMhn5DX/Q==

Have Fun, OSG