Cross-posted from my long-content blog.
A while ago I got a mini PC and turned it into a home web server, and it has turned out to be a remarkably effective way to host a website. My long-content blog is hosted on this mini PC, which is connected to my standard home Internet connection. And it's not just my website: I host a lot of other services on it to improve the privacy of my data.
It’s a good alternative to the increasing centralization of the Internet.
I decided to do some testing to figure out how much traffic my setup can handle, and thereby confirm whether a small, cheap mini PC connected to a home Internet connection is enough to host someone's whole personal Internet presence.
The server I am using is a fairly inexpensive Beelink mini PC. It has 8 GB of RAM and a 256 GB mSATA SSD. The exact model I bought doesn't seem to be for sale anymore, but a roughly equivalent device from the same manufacturer is going for about $170 on Amazon right now.
I feel like this is a good example of the performance range to expect from the kind of device someone would build a home server on. It's a fairly attainable level of computing power to set aside for this purpose, particularly once you consider the expense of cloud services or web hosting, or the indirect costs that come with putting your data where it can be harvested or sold to advertisers.
The Software

My home server runs Debian Linux with the Caddy web server. Most of the other services on the server run in Docker containers. Almost everything on it is freely downloadable open source software.
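For a static site, the Caddy side of this only takes a few lines. Here's a minimal sketch, assuming a placeholder domain and path rather than my actual config:

```
blog.example.com {
	# serve the static files the site generator outputs;
	# Caddy obtains and renews the HTTPS certificate automatically
	root * /var/www/blog/public
	encode gzip
	file_server
}
```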
The Internet Connection

My Internet connection is a 1 Gbit/s symmetrical fiber connection. I have also bought a block of static IP addresses, though that isn't strictly necessary for hosting a web server; there are many tunneling services that will give your server a good way to receive connections from the outside Internet. One such service I've experimented with in the past is Cloudflare Tunnel.
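For reference, Cloudflare Tunnel works by running a small daemon (cloudflared) on the server that dials out to Cloudflare, so no inbound ports need to be opened. A minimal config sketch, with a placeholder hostname and tunnel ID:

```
# ~/.cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: blog.example.com
    service: http://localhost:80
  - service: http_status:404   # catch-all for unmatched hostnames
```

The tunnel is then started with `cloudflared tunnel run`.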
Despite those past experiments, my setup is not behind any proxying service or CDN; connections go directly from users to the server.
The main reasons to get a static IP address block are flexibility, the ability to host services that require ports other than standard HTTP or HTTPS, and having an alternative to centralized services that would otherwise have to be used.
Right now, the website you're reading is built with the Hugo static site generator. This produces a fairly lightweight website, lighter than, say, a WordPress blog, although in the past I've successfully hosted a WordPress site on the same server.
While I haven’t done the same level of stress testing that I’ve done with the Hugo site, I feel that WordPress is definitely usable for a personal website on this server.
How I Tested the Maximum Load the Server Can Take
I used two services to load test the server: the first is LoadForge, and the second is Loadster. Both are paid commercial services for testing how much traffic a website can take.
I configured both services to simulate the following usage pattern: first the user opens a post on the blog, and then they click through to the homepage and load it.
I picked this usage pattern because it roughly describes what users would do during what is probably the highest level of traffic a normal person's blog will ever encounter: a post going viral and suddenly drawing a large influx of visitors from external websites. Something like the famed "Reddit hug".
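Both services let you script a flow like this. LoadForge, for instance, uses Locust-style Python scripts, so the pattern I tested looks roughly like the following sketch (the post URL is a placeholder, not an actual permalink from my blog):

```python
from locust import HttpUser, task, between

class BlogReader(HttpUser):
    # simulated "think time" between page views; an assumed range, not measured
    wait_time = between(1, 3)

    @task
    def post_then_homepage(self):
        # the user lands on a post (placeholder URL)...
        self.client.get("/posts/example-post/")
        # ...then clicks through to the homepage
        self.client.get("/")
```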
As tested by both services, my home server can handle about 300 HTTPS requests per second. If the load goes much beyond that, the rate of errors returned by the server increases dramatically, and response times slow drastically.
The limiting factor is pretty clearly the server's ability to handle that many simultaneous requests. Internet bandwidth didn't seem to matter much: during testing, bandwidth usage never exceeded 50 megabits per second, which at 300 requests per second works out to an average of only about 20 KB per response. So while I have a fairly high-end Internet connection, there's a lot of leeway, and most people who want to host their own blog on a home server could do pretty well on a slower connection.
Based on watching the performance of unrelated tasks on a different computer on the network, I don't think the router or the modem was a bottleneck either. However, I haven't been able to pin down what the bottleneck actually is; neither RAM nor CPU usage seemed to hit the server's limits.
I don’t really have the resources to test the exact parameters and limits more, since the server load testing services I have found to be reliable are quite expensive to run. I don’t really have the budget to throw more resources towards this experimentation and I already have
In any case, 300 requests per second is enough for basically any plausible use. That's enough to have a website that can withstand getting posted on the front page of Reddit. According to one source I found, the 99th percentile of load from being posted on the front page of Reddit is about 25 users per second, and for that particular website each user made about 15 requests to the server, putting its peak load at roughly 375 requests per second (25 users × 15 requests).

That is still a bit over what my server benchmarked at during the load testing.
However, based on my tweaking and experimentation, a well optimized blog can probably stay substantially below that, as long as most users don't dig deep into the archive. For example, a page load on my site causes only three requests, so 25 users per second would generate only about 75 requests per second, well within the server's limits. Additionally, tools like CDNs would substantially improve capacity.
So, my conclusion from this testing is that, yes, a well optimized self-hosted blog can run on a standard home Internet connection using a cheap computer as the server.
Hosting text-heavy content in a decentralized way is therefore basically a solved problem. The computing power and Internet connectivity available to the typical person mean that anyone can self-host a website without needing to rent server space, use a content silo, or pay someone else to host it.
However, once you include a lot of rich multimedia, the bandwidth requirements start to skyrocket, and depending on how the website is structured there can be many more requests to the HTTP server. I think recent advances in decentralized Internet technology might come into play for higher bandwidth content. Sharing large files effectively in a distributed way is the wheelhouse of technologies like IPFS, while the task that standard HTTP handles easily, namely hosting lots of small text files, is the Achilles' heel of IPFS and its kin. I feel there is good potential for a mixed solution combining traditional technologies with some of these newer ones.