Playing With IPFS


I recently had some time off and, as we are in lockdown, it was the ideal opportunity to do some geeky stuff. I took a look at IPFS - and frankly I was so impressed that I moved my blog over to it for its entire storage. Let's dive into it.

IPFS was initially released to the world in February 2015 but has recently picked up a lot of momentum as a popular browser (Brave) added native support for it. In fact, I am a Brave user and that's probably how I was reminded of its existence.

IPFS is a big topic and I won't cover everything in this one post, but there should be enough to get you started. I suspect I will return to this topic in future posts.

The What

Let's start with the “what”. The official one-liner is:

IPFS is a distributed system for storing and accessing files, websites, applications, and data

The Why

So why would/should you be interested in IPFS? The reasons are many, and yours may be a different subset to mine. For me, decentralisation is critical to the health of the internet; we have to start trying to undo the “siloisation” of the web.


Typing “benefits of decentralisation” into Google shows this entry (for me):

  • Users don’t have to put trust in a central authority.
  • There is less likely to be a single point of failure.
  • There is less censorship.
  • Decentralized networks are more likely to be open development platforms.
  • There is potential for network ownership alignment.
  • Decentralized networks can be more meritocratic.

You can read more in the original post, The Benefits of Decentralization (ironic that it's hosted on a centralised platform, I know).

Another point is that it can really help when a dramatic scaling event happens. Let's say you have a low-traffic blog (such as this) and you have spec'd the VM to handle the expected load. What happens if your load is suddenly 1000x what you expected? There is a post here about a guy who moved his blog to IPFS and was able to withstand the sudden interest that came from one of his posts going viral on Hacker News/Reddit. What's even more interesting is that he later reversed his decision (post here), citing the “dangers of unrestricted speech”.

I am not saying that moving to IPFS is 100% great with no downsides, but for me the benefits outweigh the drawbacks - so I am going to test it out and see how it works for me and my use case.

The How

There are several ways to implement this - from just using IPFS as the storage layer and building a traditional webserver on top (very simple to implement) - all the way to moving everything over.

Client Side

Naturally I want people who don't have access to IPFS to be able to read my blog, so my implementation takes this into account. There are also a couple of ways to achieve this. As for how to consume IPFS, the easiest way is with a browser plugin, which is available for all the common web browsers. The browser I use (Brave) has IPFS support built in - see their Jan 2021 announcement.

You can also install the Desktop app (which runs a node), which naturally I did. If you decide to do this, it's best to tell your browser add-on to use this node's API instead of its own built-in one (by default the IPFS API listens on 127.0.0.1:5001). The app is available for every OS, with many options for us Linux users too.

Server infrastructure

This part I didn't feel the need to do just yet - it's detailed here should you be interested. The nice part about setting the cluster up is that it comes with an example docker-compose file, which makes the process very easy.

For me, and I suspect most people, the Desktop app is more than enough; the main reason to run your own server is to fully control which files you want to pin. Files in IPFS may be garbage collected unless they are pinned:

Nodes on the IPFS network can automatically cache resources they download, and keep those resources available for other nodes. This system depends on nodes being willing and able to cache and share resources with the network. Storage is finite, so nodes need to clear out some of their previously cached resources to make room for new resources. This process is called garbage collection.

To ensure that data persists on IPFS, and is not deleted during garbage collection, data can be pinned to one or more IPFS nodes. Pinning gives you control over disk space and data retention. As such, you should use that control to pin any content you wish to keep on IPFS indefinitely.
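As a quick sketch of what pinning looks like from the CLI (these commands need a running IPFS daemon, and `<CID>` is a placeholder for whatever content hash you want to keep):

```shell
# Pin a CID recursively so garbage collection will not remove it
ipfs pin add <CID>

# List everything pinned recursively on this node
ipfs pin ls --type=recursive

# Remove the pin again if you change your mind
ipfs pin rm <CID>
```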

There are commercial services that will pin this data for you if you don't want to run your own cluster. I use one of the most popular ones, Pinata - they offer 1 GB of pinned data for free. If you don't want to use a third-party provider, then you either leave your desktop always on running the Desktop app, or you run your own cluster. For me, Pinata was a good place to start.

Let's get some content into the IPFS cloud

Getting files and folders into IPFS is very simple. You can use the Desktop app of course, but I'm a CLI guy and I prefer to do it that way. In order to use the CLI you need to install the client. The documentation shows how simple this is. For us Linux folks (all OSes are supported) it's as simple as downloading a tar file and moving a single binary into our $PATH.

If this is the first time you are using IPFS (and you haven't run the Desktop app yet), you will need to initialize the repository. You only need to do this once:

$ ipfs init

> initializing ipfs node at /Users/jbenet/.ipfs
> generating 2048-bit RSA keypair...done
> peer identity: Qmcpo2iLBikrdf1d6QU6vXuNb6P7hwrbNPW9kLAH8eG67z
> to get started, enter:
>   ipfs cat /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme

OK, let's get a file uploaded into IPFS. Start off by creating a file to upload:

$ echo "some content" > mytextfile.txt
$ cat mytextfile.txt
some content

Upload it to IPFS:

$ ipfs add mytextfile.txt
added QmZtmD2qt6fJot32nabSP3CUjicnypEBz7bHVDhPQt9aAy mytextfile.txt

Let's just verify that the hash does point to a file containing that content:

$ ipfs cat QmZtmD2qt6fJot32nabSP3CUjicnypEBz7bHVDhPQt9aAy
some content

So you can give this hash to anyone you want to have access to the contents, and they can grab it from IPFS.
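For example, the recipient can fetch it either through their own node or, with no IPFS software at all, through a public HTTP gateway (ipfs.io shown here is just one of several; retrieval depends on the content still being available on the network):

```shell
# Fetch via a local IPFS node
ipfs cat QmZtmD2qt6fJot32nabSP3CUjicnypEBz7bHVDhPQt9aAy

# Or fetch via a public HTTP gateway, no local node required
curl https://ipfs.io/ipfs/QmZtmD2qt6fJot32nabSP3CUjicnypEBz7bHVDhPQt9aAy
```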

My use case for IPFS is to host all my website files, though, and it would be quite tedious to add each file individually. Luckily you can upload directories just as easily. Let's say you have a local directory, called public/, containing all your website files.

You could use your Desktop app (Add > Folder) or you can just use the client:

$ ipfs add -r public

You will see it recurse through the directory and its subdirectories, displaying the hash of each file; the last hash displayed is the hash of the directory itself - this is the one we want.
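If you only want that final directory hash, there is a handy flag for exactly this (a sketch, assuming a running daemon and a public/ directory):

```shell
# -r recurses into the directory; -Q (quieter) prints only the final root hash
DIR_HASH=$(ipfs add -rQ public)
echo "$DIR_HASH"
```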

You can list the files in the directory like this:

$ ipfs ls <hash of directory>
QmVGST4SiTQ8ePCVeUSeP3AEj4gCHb985yEr8DAkBbKwk3 5541  404.html
QmUP7XpkpP7DUrB5QhLQuu7rdNDTf2tbNbgdsWPAiD4v6G -     about/
QmWiU9tA3bnZtRtcS8r5rYeG6F9qfAzewXPkwUbB75Yh17 -     blog/
QmVQeCnTjjSzZ5uNCyGoFf1aSjPxeC1QK7uF3RozDFWsmZ 21556 index.html

Now that we have the hash of the directory, we can use this for our blog. People with IPFS-enabled browsers could go to ipfs://<hash of directory> and they will see your website. The only problem is that no one else will.

One option is to still serve your content from a traditional website, BUT add some info, such as a DNS record or an HTTP header, to inform clients with IPFS capabilities that they can also reach the site via IPFS.

To do this you only need to create a DNS TXT record called _dnslink with the content set to something like this:

dnslink=/ipfs/<hash of directory>
When your client visits a site, it will also query that site's DNS for a _dnslink record and, if it finds one, it will inform the user.
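You can sanity-check the record yourself once it has propagated (substitute your own domain for example.com, which is just a placeholder here):

```shell
# The TXT record lives on the _dnslink subdomain of your site
dig +short TXT _dnslink.example.com

# An IPFS node can resolve the DNSLink for you as well
ipfs resolve /ipns/example.com
```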

More info on DNSLink can be found here and here.

HTTP header

If you like, you can also inject an additional HTTP header (x-ipfs-path) into your regular web pages; this also notifies IPFS-aware browsers that the site is available over IPFS. Browsers like Brave would then show an “Open in IPFS” button.
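Checking whether a site already sends this header is a one-liner (the domain is a placeholder; network access required):

```shell
# Print response headers and look for x-ipfs-path
curl -sI https://example.com/ | grep -i x-ipfs-path
```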

Info on this topic can be found here.

I decided against doing this at the moment.

How is OSG hosting its site in IPFS?

For my initial experimentation with IPFS I decided to use a third-party service: Cloudflare. I had already been using their free service for DDoS protection etc., so it made total sense to test their IPFS gateway. Setting this up is pretty simple. In addition to the DNSLink TXT record, you simply need to create a CNAME for your site (in DNS) that points to their IPFS gateway.

I should say that you absolutely don't need a third-party service for this; you just need something listening on port 443 that will serve up pages from IPFS. You could easily use your own webserver for that. One advantage of using this currently-free service is that I can spin down the VM that usually runs OSG and save that money every month.

More info on Cloudflare's IPFS gateway can be found here. Just like IPFS's, Cloudflare's documentation is excellent.

I shall end the article here; suffice it to say that so far I'm very impressed with IPFS. I suspect I have much more to discover, and you can bet I will follow up on this in future articles.