Experience with a development server 7x cheaper than Linode/DO

 
Job Title:Sarcastic Architect
Hobbies:Thinking Aloud, Arguing with Managers, Annoying HRs,
Calling a Spade a Spade, Keeping Tongue in Cheek
 
 

Preamble

As I wrote a few weeks ago, I am currently developing an open-source ithare::obf library. And apparently, to make sure it works more or less consistently, a Damn Lot(tm) of randomized testing (preferably under different compilers) is necessary. As a result, last week I found myself searching for a cheap Linux box to run my randomized tests on.

Is “server for $8/year” === “too good to be true”?

Of course, I could go to Linode or DigitalOcean, and get a box-with-1G-RAM for $5/month (and box-with-2G-RAM for $10/month). However, before doing so, I Googled for “cheap Linux server”, and got to lowendbox.com; there, I found a “special” (~=”not available from vendor site home page”) deal from WootHosting (NB: this is not a link to the deal, as the special might already be over, but feel free to look for it on lowendbox).

The deal said it is a box-with-1G-RAM for $8/YEAR…

My first reaction was “hey, it is too good to be true”. However, as this is a box which is certainly not mission-critical, and the money at risk was rather minor, I decided to give it a try1 – and to share whatever-experience-I’ll-have on this blog.

So, let the comparison between WootHosting’s ultra-cheap special deal and Linode/DO begin! Note that WootHosting is not the only one out there with such ultra-cheap deals (more can be found on the same lowendbox site) – but it is the one which I stumbled upon, so it is the one I will speak about.


1 though as mentioned below, I went on a shopping spree and upgraded it to 2x vCPUs and 2G of RAM, so the whole thing cost me a whopping $17/year

 

Woot vs Linode/DO: Pricing

First, let’s compare pricing:

|                 | Linode                 | Digital Ocean          | WootHosting | Linode-or-DO / WootHosting |
|-----------------|------------------------|------------------------|-------------|----------------------------|
| 1x vCPU, 1G RAM | $5/month (=$60/year)   | $5/month (=$60/year)   | $8/year     | 7.5x                       |
| 1x vCPU, 2G RAM | $10/month (=$120/year) | $10/month (=$120/year) | $15/year2   | 8x                         |

Very obviously, WootHosting has won this round hands down.


2 $17/year with 2x vCPU – and this is what I settled for

 

Setup

Payment and setup with Woot went smoothly. The only issue was that they didn’t add my “addons” (those upgrades to 2x vCPU and 2G RAM) automatically – but they mentioned in their e-mail that I should contact their support to get the addons activated, and after I wrote the e-mail – they activated them in 3 minutes.3

Oh, and a funny observation –

RAM and CPU were added without reboot4

Not that it is really useful – but I indeed found it rather interesting (last time I’ve seen such dynamic expansion was under Stratus VOS on an unkillable box costing about half a million bucks, so finding it on the opposite side of the price spectrum was rather entertaining <wink />).
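If you want to verify what the container actually sees after such a hot upgrade, a couple of perfectly standard commands suffice (this is generic Linux, nothing Woot-specific):

```shell
# Number of vCPUs currently visible to this container/VM
nproc

# Total and available RAM (the hot-added memory should show up here)
free -h

# Uptime confirms that no reboot happened while resources were being added
uptime
```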

After the setup, SSH credentials were sent to me by e-mail, which is convenient, though security-wise you should remember to change the password right away.
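For completeness, here is a minimal sketch of the first-login hygiene I mean here (the second step is optional, but e-mailed passwords really do call for key-based auth):

```shell
# 1) Change the password that was e-mailed to you, immediately:
passwd

# 2) (Optional, safer) switch to key-based login and disable password auth.
#    On YOUR machine (substituting the server's IP):
#      ssh-copy-id root@YOUR.SERVER.IP
#    Then, on the server, set in /etc/ssh/sshd_config:
#      PasswordAuthentication no
#    and reload sshd:
systemctl reload sshd   # or: service ssh reload, on older init systems
```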


3 Having had lots of experience with hosters during my career – I have to say that it was Really Good(tm) even by the-best-hoster-standards
4 most likely – via “hotplug” feature of OpenVZ

 

Experience

Now, to the most interesting part – experience. I don’t have much experience with Linode, but I have LOTS of experience with remote server boxes, and (what’s more important now) quite a bit of experience with DigitalOcean. Here go my observations about Woot so far (and probably, quite a few of them will apply to other OpenVZ-based ultra-cheap hosters):

  • Woot uses OpenVZ as their virtualization technology (“OpenVZ uses a single patched Linux kernel and therefore can run only Linux. All OpenVZ containers share the same architecture and kernel version.” — Wikipedia). I won’t go into details, but the most easily-observable consequence is that as OpenVZ containers share the same kernel, we CANNOT upgrade the kernel of our otherwise-separate install.
  • Kernels available on Woot are rather old, and therefore the newest Debian-like distro I was able to find there was Ubuntu 16.04.
    • Not that it was a problem per se – but as the whole ithare::obf is about C++17 – it meant that I had to spend some time looking for newer-compiler-packages-for-Ubuntu-16.04 (and, as noted below, compiling from source is not an option at least for Clang <sad-face />).
      • This, in turn, required me to install newer glibc – which complained about older-kernel, but apparently works at least for that-very-limited-use-I-need-from-it. <phew />
  • The whole thing feels significantly less responsive than normal remote server or DigitalOcean box (~=”there is enough delay to feel ‘it is lagging’, though most of the time it is within 0.3 seconds or so”). Again, not a big deal for running many-hour loads – but can be mildly-irritating if trying to patch your code right there. Still, was good enough for my purposes.
  • CPU-wise, Woot’s box performed pretty well (about 20-30% slower than DO’s box).
  • However, as soon as the disk becomes involved – DO’s SSD started to dominate (and for compile of large projects with lots of small files, speeds could differ by 3-5x).
  • For this package by Woot, I don’t think there is an option to upgrade it beyond 2G RAM. Which rules out things such as compiling-clang-from-source <sad-face />.
    • OTOH, I found in a real-world case that OpenVZ’s 2G indeed go further than KVM’s 2G (KVM is the virtualization used by DO).
      • It should be so in theory – simply because the kernel’s memory is not a part of OpenVZ’s memory allocation, but is a part of KVM’s memory allocation.
      • I observed it in practice, when the same test (under the same compiler) was successfully compiled under Woot’s-box-with-OpenVZ-and-2G-RAM, but ran out of memory and failed under DO’s-box-with-KVM-and-2G-RAM.
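For the record, the route I mean by “newer-compiler-packages-for-Ubuntu-16.04” looks roughly like this (a sketch using the usual community PPA and the official LLVM apt repository; double-check the current repository names and versions before relying on it):

```shell
# Newer GCC from the ubuntu-toolchain-r PPA
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install -y gcc-7 g++-7

# Newer Clang from the official LLVM apt repository (xenial = Ubuntu 16.04)
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
echo "deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-5.0 main" | \
    sudo tee /etc/apt/sources.list.d/llvm.list
sudo apt-get update
sudo apt-get install -y clang-5.0
```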

Summary

So far, I am satisfied with my experience with Woot’s ultra-cheap box; for my purposes (CPU-bound non-time-critical testing) it is good enough – and is darn cheap <smile />.

Overall, IMO, such ultra-cheap OpenVZ-based deals can be interesting, provided that:

  • price difference is big enough
  • the whole thing is NOT mission-critical in any way
  • there is no need to upgrade kernel
  • responsiveness is NOT an issue5
  • disk access speeds are not too important

5 not sure whether this issue is specific to Woot, or characteristic of all OpenVZ-based boxes

 


Acknowledgement

Cartoons by Sergey GordeevIRL from Gordeev Animation Graphics, Prague.


Comments

  1. says

    The one thing you didn’t (and really couldn’t) cover is up time. A Small Orange was an $8/year type of host, had a great interface and the best customer service I’ve had with a host. Then they had downtime issues like I’ve never seen before. It was down most of a week at one point. Of course, they were recently bought by EIG at the time, which I’m sure had a ton to do with it. That doesn’t seem to be the case with woot. Also I’m guessing this is private use, so up time probably isn’t on your radar. Still, up time is a great corner to cut, because the customer is already reliant on you when they experience issues. If they are cutting a corner, I’d bet it’s up time. Be sure to update this article if your downtime ever gets unreasonable.

    • "No Bugs" Hare says

      > up time probably isn’t on your radar. Still, up time is a great corner to cut because the customer is already reliant on you when they experience issues.

      Sure; as already said in the OP – I won’t ever consider such ultra-cheap servers for anything mission-critical.

  2. - says

    > Kernels available on Woot, are rather old

    This is an issue with OpenVZ in general, as OpenVZ compatible kernels haven’t been updated in a long time.

    > The whole thing feels significantly less responsive than normal remote server or DigitalOcean box

    Do you mean delays in SSH input or such? If so, I suspect this is due to location (i.e. network latency) more than to the box itself.
    If you’re referring to how long it takes to build stuff or similar, it’s possibly either due to disk (most likely) or the box being heavily oversold (as most OpenVZ hosts are).
    I generally avoid LEBs (low end boxes) which don’t use SSD disks as the disk almost always gets abused/becomes a bottleneck, unless I really need the space or couldn’t care about disk speed.

    On the overselling part, usually these hosts won’t allow you to fully use the CPU/disk, because it’s shared amongst many others. If you do, how each host handles it varies, but can vary from them warning you about over-usage, throttling the CPU or restarting your VPS.
    Another thing to note with OpenVZ is that it’s literally just a container on a host – the host can easily see all of your processes and files, so if you’re worried about privacy or concerned about hacks, that’s a thing to take into consideration.

    I’ve been running off LEBs for many years now, and do run personal production sites off it. I usually use smaller plans (typically 128-512MB RAM) though often prefer KVM virtualization over OpenVZ (so pricing ends up being similar). Done well, I find that they can be just as reliable as some AWS servers I run (and literally hundreds of times cheaper to boot), but selecting the right host and getting everything stable is a bit of an art in itself.
    Keeping with big name brands is a safe choice, but you pay for it. Still, it’s not a large sum of money, so I can see why everyone does it.

    • "No Bugs" Hare says

      Hi! Always good to hear from somebody-more-knowledgeable-than-myself :-).

      > This is an issue with OpenVZ in general, as OpenVZ compatible kernels haven’t been updated in a long time.

      I didn’t think about it – but of course (as they need patched kernel)…

      > Do you mean delays in SSH input or such? If so, I suspect this is more due to location than the box itself (i.e. network latency), than anything else.

      Yes, it is delays in SSH, but no, it does not feel like network latency (it is much more erratic than good ol’ network latency, and also a from-here-to-Miami RTT won’t reach the observed half-a-second+). FWIW, it feels more like working on a box which has an allocated:physical memory ratio of 10:1, with TONS of swapping going on in the background.

      > so if you’re worried about privacy or concerned about hacks, that’s a thing to take into consideration.

      As it is for testing of an open-source project, I don’t :-). OTOH, the same privacy concern stands for ANY VM (nothing prevents hoster from making a snapshot of your VM and restoring a copy for themselves); moreover, nothing prevents a backdoor-which-got-onto-hoster-admin’s-laptop from doing the same thing(!). From this perspective, dedicated servers are MUCH more secure BTW (it will take a lot of effort for a virus to connect cables physically ;-)).

      > but selecting the right host and getting everything stable is a bit of an art in itself.

      Exactly – plus for smaller companies these things tend to change much more often than I’d like them to 🙁 . That’s why I don’t suggest it for anything mission-critical…

      > I find that they can be just as reliable as some AWS servers I run

      Doesn’t sound good for AWS reliability ;-( . BTW, do you have any stats on MTBFs you’re experiencing? (FWIW, on good ol’ dedicated servers, for a 4S/4U box from Big Three manufacturer, MTBF is 3-5+ years, both in theory and in practice; for 2S/1U it is more like 2-3years)?

      > literally hundreds of times cheaper to boot

      AWS loses price-wise even to dedicated servers (about 4x hardware-wise, and about 30x(!!) traffic-wise, their $.10/G traffic pricing is nothing but atrocious). In spite of all the hype of clouds being “cheap”, only highly elastic loads (like “one day a week”) make sense there (or handling stable load on dedicated servers, plus “load spikes” on AWS).

      • - says

        > FWIW, it feels more like working on a box which has allocated:physical memory ratio of 10:1 with TONS of swapping going in background.

        That doesn’t sound good… If interested, you could perhaps try a disk I/O benchmark to see if that actually is the case.
        Note that OpenVZ doesn’t use free RAM for disk caching, since the RAM is all shared. From experience, disks are almost always the bottleneck on LEBs, hence I generally recommend getting servers with SSDs. KVM/Xen virtualization is also nice, where RAM can be used for disk cache and allows you to run your own kernel. You’ll pay a bit more than $8/year, but I doubt it’ll break the bank 🙂
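A quick-and-dirty way to run the suggested disk check (a sketch; `fio` gives far better numbers, but plain `dd` with `conv=fdatasync` is enough to spot a badly oversold disk):

```shell
# Sequential write test: write 256MB, forcing the data to actually reach the
# disk (without conv=fdatasync, dd would mostly be measuring the RAM cache)
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync

# Clean up the test file afterwards
rm -f /tmp/ddtest
```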

        > OTOH, the same privacy concern stands for ANY VM (nothing prevents hoster from making a snapshot of your VM and restoring a copy for themselves)

        100% true! (until SEV-like technologies become common across cloud providers) I mostly point this out because there’s a bit of a difference between plainly seeing a client’s processes in the host, vs them needing to dump RAM and parse it or restore it to see what’s going on. With OpenVZ, a host can trivially see and manipulate your processes, even accidentally so, whilst on KVM/Xen/ESXi etc, it actually requires some deliberate effort to do such a thing.
        Hosts generally aren’t in the business of caring what you do, but LEBs can be hit/miss. Some may kill processes they deem to be consuming too many resources, for example. Others scan the process list for names of programs they don’t want you running (e.g. torrent clients, Tor etc).

        > Doesn’t sound good for AWS reliability ;-(

        It’s all anecdotal experience amongst a very small sample size, so take it with a large grain of salt. I’m not exactly sure what AWS does, but personally, dedicated servers (with zero redundancy) have been the most reliable in my experience, often running flawlessly for years on end. I haven’t run AWS for anywhere near as long, but I’ve already had a few servers experience unexpected reboots in the past few months.
        AWS seems to be mostly about providing tools to help you mitigate these issues (such as load balancers, automatic failover etc), but ultimately doesn’t actually prevent them. They *do* offer fast recovery though, since disks (EBS) can just be attached to another server instance in the event of failure. Most LEB providers don’t do anything like this, so whilst I find that AWS servers go down every now and then, you will likely get less downtime on AWS due to faster recovery.
        However, if you don’t use any AWS features (or implement them yourself), decent LEB providers can be comparable in terms of how often they fail. They’ll probably cost more than $8/year, but can still be less than Digital Ocean pricing.

        I don’t have any stats or similar – this is mostly personal experience from running a few of my own servers, plus a few servers for work, and hence don’t really have enough scale to give any meaningful statistics anyway unfortunately.
        I’ve been through many providers, so can’t remember everything, but off the top of my head, I’ve had good experiences with RAMNode and Scaleway. I imagine OVH’s and Hetzner’s VPS line should be quite good too. These aren’t close to $8/year, but won’t be as oversold as that.

        > and about 30x(!!) traffic-wise

        Hmm, that sounds rather low…
        Let’s see… I’ve pushed 40TB/month off one of these: https://www.online.net/en/dedicated-server/dedibox-sc
        40TB outgoing is probably somewhere around US$3600 on AWS. 300x seems more like it =P

        • "No Bugs" Hare says

          > They *do* offer fast recovery though, since disks (EBS) can just be attached to another server instance in the event of failure.

          Which means that we’re back to the bad ol’ cluster-with-shared-disk system, which usually caused much more trouble than it was worth :-(; at the very least – all the in-memory state is lost in case of crash (which forces you into ugly architectures with everything-going-to-DB, which in turn causes Uber-like troubles with DB scalability, and so on). Real fault-tolerant systems (such as checkpoint-based VM fault tolerance, available both in Xen and VMWare) should allow to work in clouds reliably – but I didn’t see a cloud provider which supports it (yet?)

          > whilst I find that AWS servers go down every now and then, you will likely get less downtime on AWS due to faster recovery.

          Yes, but then why not use dedicated box with even-less downtimes? 😉

          Or from a different perspective: IF I am doing home- or open-source stuff – it is THEN when I care about this level of pricing (and this, in turn, brings LEBs into the picture); however, if I’d work for a company which tries to save $30-or-so/month by going into LEBs for mission-critical stuff – I’d rather look for a different job 😉 . Even if the companies run thousands of such boxes – it is still better-and-cheaper to rent them at wholesale prices 🙂 .

          > I’ve had good experiences with RAMNode and Scaleway. I imagine OVH’s and Hetzner’s VPS line should be quite good too. These aren’t close to $8/year, but won’t be as oversold as that.

          I used to know two good providers which do NOT cut corners (nor do they oversell), but which have non-exaggerated pricing. One was ThePlanet (which was bought by SoftLayer, which was bought by IBM); they’re still good, but their pricing went through the roof (SoftLayer bought them exactly because it wasn’t able to compete price-wise). The second one is LeaseWeb – and they’re still here; they run an extremely good network (their peering tends to be very good, and I’ve even seen them fixing peering if one of their customers complains(!)), AND right today you can get a real HP-box-with-Xeon-16G-RAM-and-RAID from them for $44/month (which is not exactly LEB, but it is an obvious bargain for the kind of hardware-and-connectivity-we’re-speaking-about; HP boxes are among those which have ECC, redundant fans/power, good-and-supported drivers, and RAID – all of which allows improving MTBF to those 2-3 years I mentioned). In general, it is something along the lines of LeaseWeb which I suggest for anywhere-serious business for anywhere-mission-critical stuff (if a business cannot afford $44/month – I doubt it qualifies as serious).

          As for Scaleway – they’re about 2x cheaper than LeaseWeb; at their price, I would even forgive them running Atoms rather than Xeons 😉 – BUT running without all-those-redundancy features is a significant problem for mission-critical stuff 🙁 ; paying twice more to have twice fewer failures is usually well worth the price. Still, Scaleway MIGHT work well for certain cases (with dozens and hundreds of disposable boxes running) – but I have my serious suspicions about them overselling their traffic capacity, which makes it quite risky if you go beyond simple web sites; also a Big Question is their policy in case of DDoS (“DDoS protection” sounds good. On paper. Until your ISP just null-routes your box-under-attack to protect their other customers, which no doubt is written in their ToS-which-nobody-ever-reads).

          > I’ve pushed 40TB/month off one of these:

          Could be, but for mission-critical stuff there is a difference between “was able to push 40T” and “have those gigabits of traffic available whenever you need them”. For AWS and Leaseweb-class ISPs, it is the latter, and comparing it to people-who-are-outright-overselling-it-from-the-very-beginning (with wholesale pricing going in-the-very-best-case at about $1K/10GBit/month, selling 1GBit/s @EUR9/month is possible only with a HUUUUUUUUUUGE overselling, which will inevitably hit sooner rather than later, and migration of mission-critical stuff is rarely worth this kind of savings) is comparing apples-to-oranges.

          So, if comparing apples-to-apples (which is like “kinda-guaranteed capacity”), we’re speaking about the difference between, say, LeaseWeb (@EUR25/10T), and AWS (@$0.1/G ~= $1000/10T), or about 30x :-).

          EDIT. To summarize, I can see several realistic scenarios:
          – serious mission-critical business stuff. My current choice for it is dedicated-boxes from something-like-Leaseweb (MAYBE assisted with the cloud to handle load spikes). OTOH, for serious mission-critical we’ll have to organize redundancy (at least as a standby box), which happens to be a headache :-(, which does open the door to cloud stuff (for smaller businesses, when maintenance costs for dedicated start to get too high).
          – occasional calculations (such as calculating salaries – or doing pretty-much-anything-else – once a month). My choice: cloud all the way.
          – personal- or personal-business web sites, which are kinda-critical, but which will survive being a day offline once a few years. My current choice here ranges from Linode/DO to Scaleway etc.
          – some testing-only stuff (which I needed for my open-source testing, but didn’t want to run locally). LEBs can happen to be good enough.

  3. a-square says

    Have you considered precompiling clang on your machine? E.g. you could download the source package for Debian Buster, debuild it inside an Ubuntu 16.04 Docker instance (maybe you’ll have to edit dependencies so that they match the ones for the native clang package) and then upload it to your box.

    https://packages.debian.org/buster/allpackages?format=txt.gz (look for clang-5.0)

    The whole thing should take less than 3 hours

    • "No Bugs" Hare says

      Yes, it should be possible, but TBH, for compiled-from-source clang I was lazy enough to take an easier route: I got DigitalOcean instance, bumped it in size to 8G to compile Clang, and got it resized back to 1G to run my tests with an already-compiled-from-source Clang. Sure, it costs more, but TBH, I _really_hate_ dealing with cross-box stuff in Linux (glibc versions alone tend to give me the creeps…).

      I am still using Woot for pre-packaged GCC 7/Clang 4/Clang5…
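For reference, the compile-Clang-from-source route mentioned above looks roughly like this (a sketch for LLVM/Clang 5.0 specifically; URLs and version numbers would need adjusting for other releases, and the build really does want those 8G of RAM):

```shell
# Fetch and unpack LLVM and Clang 5.0 sources
wget https://releases.llvm.org/5.0.0/llvm-5.0.0.src.tar.xz
wget https://releases.llvm.org/5.0.0/cfe-5.0.0.src.tar.xz
tar xf llvm-5.0.0.src.tar.xz
tar xf cfe-5.0.0.src.tar.xz
mv cfe-5.0.0.src llvm-5.0.0.src/tools/clang

# Out-of-tree Release build; this takes hours and lots of RAM
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_INSTALL_PREFIX=/opt/clang-5.0 \
      ../llvm-5.0.0.src
make -j"$(nproc)" && sudo make install
```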
