slim and cheap server

Of course 1.50 US$ a GB is ridiculous.

But the whole concept of a 1U quad-drive cheap-o system seems intriguing: RAID cards are still expensive, and they certainly deliver the best solution in many cases. But 3 TB (4 x 750 GB) of cheap ’scratch space’ for data that can be recreated could certainly live in a 1U box at a pretty sweet price point. Sacrifice 25% of the storage to parity and you have safe space.
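
A quick sketch of the capacity arithmetic, in Python - reading "sacrificing 25%" as giving one of the four drives over to parity (e.g. software RAID 5), which is an assumption the post doesn't spell out:

    # Hypothetical 1U box with four 750 GB drives, one drive's worth
    # of capacity given up to parity (assumed RAID 5).
    DRIVES = 4
    DRIVE_GB = 750

    raw_gb = DRIVES * DRIVE_GB             # 3000 GB raw - the "3TB"
    usable_gb = (DRIVES - 1) * DRIVE_GB    # 2250 GB left as scratch space
    print(raw_gb, usable_gb)               # 3000 2250
    print(1 - usable_gb / raw_gb)          # 0.25 -> the 25% sacrificed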

And as long as Moore’s Law keeps deflating disk and system prices, the best strategy is still to buy as little storage as late as possible. To paraphrase Einstein: just not too late, or too little.

9 Responses to “slim and cheap server”

  1. scottt Says:

    I think that an interesting aspect of these particular boxes is their low power consumption…

    Here in California the average cost of electricity is 9 cents per kilowatt-hour. This means that it costs $788.40 to burn a single kilowatt continuously for a whole year ($1576.80 if you’re burning that power in a closed machine room - thermodynamics is a bitch…)

    Let’s assume that an average server draws 200 watts, and that the air conditioning removing its heat is 100% efficient and so needs another 200 watts to do its job - that’s 400 watts all up, or $315.36 to run that box for a year.

    Using the 80 watts quoted in the article, on the other hand, would cost $126.14 to run for a year (again counting a matching cooling load) - almost 200 bucks less…

    If you’ve got a few dozen of these servers scattered about the place you could soon be saving the cost of another server …
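
    A minimal sketch of that arithmetic in Python, using the 9 c/kWh rate and the doubled-for-cooling wattages above; the 36-server fleet at the end is just an illustrative number, not something from the comment:

        RATE_PER_KWH = 0.09       # $ per kWh - the California average quoted above
        HOURS_PER_YEAR = 24 * 365

        def yearly_cost(watts):
            """Dollars to run a constant load of `watts` for a year."""
            return watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

        print(yearly_cost(1000))   # ~788.40 : one kilowatt, all year
        print(yearly_cost(400))    # ~315.36 : 200 W box + 200 W of cooling
        print(yearly_cost(160))    # ~126.14 : 80 W box + 80 W of cooling

        # "A few dozen of these servers": saving per box times, say, 36 boxes.
        print((yearly_cost(400) - yearly_cost(160)) * 36)   # ~6811.78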

  2. Daniel Page Says:

    Datacentre power consumption is going to become more and more important - this is why AMD’s and Intel’s new warhorses are lower-speed, bigger-cache, lower-wattage systems.

    Personal question: Do you really need a 400 watt power supply for a 1u colocation server with 1 HDD, 1 CD ROM, a puny graphics card, and no external peripherals?

    Just remember that your average laptop is generally pulling 19 volts at about 3.5 amps: 19 * 3.5 = 66.5 watts… and 20 laptops will probably make less noise than any 1U rackmount server…

  3. scottt Says:

    Hey Daniel - if that question is not rhetorical and is in fact directed at me then here’s a response …

    The 400W figure was a napkin estimate of the total power required for a basic server, including the power required to remove its heat from wherever it is located - so it might have a 200W PSU in the box, and then there will be another 200W of HVAC cooling load. All back of a napkin - a wet napkin at that.

    Another interesting thing happening along these lines is distributed DC systems - basically a setup where one mamma jamma big power supply provides the DC power to a whole rack of systems, thereby taking the PSU inefficiency hit only once. The problem is that, thanks to Ohm’s law, the expense is shifted to the expensive heavy cabling needed to carry all that high current. It’s interesting to note that the Googlites choose their PSUs very carefully, paying a premium for supplies with the highest power factor - they figure that they recoup that pretty quickly (there is a link to an interesting story about this somewhere but I don’t have time to find it…)
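
    A rough sketch of the Ohm's law point, in Python - the 12 V and 48 V bus voltages, the 4 kW rack load and the cable resistance below are made-up illustrative numbers, not figures from the comment:

        # Same power delivered at a lower voltage means higher current,
        # and resistive loss in the cabling grows as I^2 * R.
        POWER_W = 4000        # hypothetical rack's worth of load
        CABLE_OHMS = 0.01     # hypothetical round-trip cable resistance

        for volts in (120, 48, 12):
            amps = POWER_W / volts
            loss_w = amps ** 2 * CABLE_OHMS
            print(f"{volts:>3} V: {amps:6.1f} A, {loss_w:7.1f} W lost in the cable")

        # -> 120 V needs ~33 A (~11 W lost); 48 V needs ~83 A (~69 W lost);
        #    12 V needs ~333 A (~1111 W lost) - hence the fat, expensive copper.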

    Oh - and finally, would you really want to be serving your website off a laptop with a teeny weeny 2.5 inch hard drive and a laundry list of other design concessions made to keep it portable? I get what you mean as far as the power consumption goes, but who cares about the noise? Real servers spend their lives in real datacenters, which are cold, noisy, horrible places anyway - and most NOC or IT engineers are either already deaf or pretending not to hear you…

  4. Daniel Page Says:

    No, it was not directed at you!

    On the “portable-system-as-a-server-on-65-watts” point: it was just an example of what could be done if a really (power-)economical system were to be built - the biggest power consumer in a portable system is a toss-up between the screen and the CPU, so dump the screen and the 2.5 inch disk and replace it with a slim-form 7200 RPM SATA drive… Another interesting angle would be a 0.5U form factor, potentially 8 machines on a 1U shelf… Such a setup would of course be limited, as you say, to basic web serving, email and light DB and scripting, but it would be great for a cheap entry-level dedicated hosted system where RAM and bandwidth are more important than real storage capacity and number-crunching horsepower - on the basis that what you need is not always what you want…

    On the other hand, when you start hosting a web service like the one I manage - 200 concurrent users, a 2 GB SQL database, 4 GB of images processed per day - a dual-processor Intel Xeon and a 15,000 RPM RAID 5 setup is not too much… Take it home for maintenance and your lights dim, you get soggy undercooked fries when you turn it on, and the neighbours start banging on the ceiling ’cause of the noise. Yup, I have caught colocation ‘flu before, working in a datacenter in summer and leaving my coat in the car… You know you are starting to suffer from exposure when you see penguins walking between the racks ;-)

    Working along the lines of small and light hosting servers, I saw a company that rents out Mac minis (macminicolo.net): 4U high, probably 6 per shelf (3 at the front, 3 at the back), so 84 per rack with power and network up the middle… probably about 70 W per system… and probably half the price of a blade server setup - you up your server/rack density (and therefore customer income) for the same power consumption and heat dissipation…
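
    A back-of-the-envelope sketch of that density/power arithmetic in Python, using the rough per-shelf and per-system figures above (the 14-shelf count is simply what 84-per-rack implies):

        MINIS_PER_SHELF = 6       # 3 at the front, 3 at the back
        SHELVES_PER_RACK = 14     # implied by the 84-per-rack figure
        WATTS_PER_MINI = 70       # rough per-system draw quoted above

        minis_per_rack = MINIS_PER_SHELF * SHELVES_PER_RACK
        print(minis_per_rack)                     # 84 systems in one rack
        print(minis_per_rack * WATTS_PER_MINI)    # 5880 W, i.e. ~5.9 kW per rack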

    A shared DC setup is really the basis of blade servers, and it’s a great idea that allows a lot of flexibility, but then you also have a single point of failure… Lose the DC PSU and you lose the whole rack, not just one machine. OK, you can plan for redundancy, but you can never win them all even with redundant systems: here in France one of the largest colocation centres (RedBus) had a major outage when power failed during maintenance. The batteries took over, but the maintenance took too long and the batteries died, causing a site outage; power went to the generators… one generator died, causing another site outage while a backup generator spun up… and there was yet another outage when mains power was restored, only to be cut again for two minutes several seconds later, before the backup batteries had had time to recharge…

    We never did get an explanation from their NOC about that screw-up…

  5. scottt Says:

    Hey Daniel - it’s funny how much the context changes the way you look at things, and hence the concessions you are willing (and/or able) to make. We both deal with very similar technology, but in different applications - to you a rack full of machines is serving content for many paying customers; to me a rack full of machines is rendering images for my colleagues.

    Consequently I make different concessions than you do - for example (and you might want to skip this part as you may find it disturbing…) I recently removed the entire render farm from the UPS and put it on dirty power - if I lose power then I have bigger things to worry about than some frames having to be re-rendered. Also, the entire render farm will shut itself off in the event of an air conditioning failure - ’cos no clients are going to call me wondering where the hell their website went…

    Where I put in the extra effort is in protecting the content created by a couple dozen creative people who labour away for hours creating data files that can have literally hundreds (or for feature films thousands) of man hours invested in them - so that’s where I end up building in as much redundancy as I can without crippling their workflow.

    I am currently experimenting with doing away with hard drives in our render nodes and replacing them with flash storage - this will drop another 10W per machine and also remove one of the most failure-prone components. With our farm of dual-core Opterons we already see a 100% difference in power consumption between quiescent and peak loads - each node burns about 100W just sitting there and 200W if you fully load it - how’s that for “Cool and Quiet”…

  6. Daniel Says:

    Hi Scott,

    As I said, it all depends on your needs - image processing needs horsepower and bandwidth (my major customer is an online “photo-lab”), but I also rent out a couple of servers for tiny sites… For those sites an hour of downtime per day would never be noticed, so processing power is a moot point… but others need the availability. Even so, if given the choice of systems, my smaller “homepage hosting” clients want the bigger disk and the faster CPU… just like people want a Hummer or a Land Cruiser rather than a Prius or a Smart to go shopping downtown… want vs. need… though I readily admit that there are a few people about who do have a real, legitimate need for Hummers and Land Cruisers…

    As you said about running direct from the mains: if that works for you and you cannot justify the need for filtered or backed-up power, no problem. In any case, if there is a major power failure it will generally also hit your routers - and if your servers are set to auto-start, they will generally be back up before your telco’s router/modem has had time to re-sync, so the platform would have been, to all intents and purposes, offline whatever happened (if you run over an internet link). Still, running your server farm through a UPS (depending on how big your farm is) is generally a good idea, if only to gracefully shut the systems down if the cut lasts more than a minute - I know of some assembled hardware at a customer running Windows 2000 Server that will systematically corrupt its registry and bluescreen if power-crashed… and you can do without having to break out the recovery CDs and manually copy the registry backups about from the recovery console while your client’s graphic artists are baying for your blood (and Bill’s) because their G4 Macs restarted without incident and they cannot access their projects on the file server…

    On the subject of replacing your HDDs with flash drives, be careful about what you install on: a dedicated 16 or 32 GB flash drive (expensive but robust), an IDE-to-PC Card flash adapter, or a “boot from USB key” solution (cheap but subject to “write wear”).

    You should be OK with a dedicated flash drive, but if you are thinking about a DIY job with an IDE adaptor or booting from USB, beware of flash life: flash supports (virtually) unlimited reads but a limited number of writes to any one block - OK, the fatal number is about a million writes on the better cards and keys, so for average cut/paste jobs they will last a lifetime…

    A dedicated flash drive should have hardware onboard that spreads writes around (wear levelling), allowing writes to the same file to be distributed over the whole device, giving the drive uniform wear and multiplying its life (and giving it much better write endurance) - but this is not the case with flash cards and keys. Windows can be logging at about 1 write/sec (split between physical and cache access), plus swapfile use; under Linux, you have logs and the swap partition…

    If you do not take extra precautions (disabling logs or writing them to a ramdisk, mounting drives read-only) you could potentially get “bad sectors” on the “drive” in about 10 days on a log-intensive / swap-intensive system… Not good news!
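
    A rough sketch of that lifetime estimate in Python, assuming the worst case described above - no wear levelling, so a single busy block takes every write - and the ~1 write/sec and ~1-million-cycle figures quoted above:

        BLOCK_ENDURANCE = 1_000_000   # ~1 million writes per block, as above
        WRITES_PER_SEC = 1            # a chatty log plus swap, as above

        seconds_to_wear_out = BLOCK_ENDURANCE / WRITES_PER_SEC
        print(seconds_to_wear_out / 86400)   # ~11.6 days before that block dies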

  7. scottt Says:

    Thanks for the thoughts - I was concerned about the write lifecycle of the flash media. When we run the nodes in Linux we can shut off all logging and disable swap (generally, if we hit swap during renders the show is over anyway…) but when we run in Windows there is not much to be done about the pagefile - disabling that seems to turn the OS into an expensive, locked screensaver.

    I wonder whether there is a driver out there somewhere that can get in between the OS and some cheap flash storage and enforce that kind of wear levelling on the device? Going the ramdisk route is possible, but of course you’re then in a catch-22, as you want to leave all the RAM you can for the application.

    I’m just glad that this will all be so much easier when Vista finally ships….

  8. Daniel Page Says:

    What OS are you running on the servers?

    I remember that for one customer running Win2k Server, I got the OS + PHP + MySQL (default install) + a DB update app running in less than 50 MB of RAM (Windows alone took 39 MB!) - so a copy-and-run-from-ramdrive setup is definitely possible!

    Another solution: leave the OS on the flash card, but redirect all system logs to a 20 MB ramdrive that gets created at system startup. That is the easy part… From there, you will also need to identify other services that silently log text files to %windir%\system32 and stop or redirect them.
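
    A minimal sketch of one way to spot those silent loggers, in Python; the one-hour window and the .log/.txt extensions are just illustrative assumptions:

        # List files under %windir%\system32 written to within the last hour -
        # a rough way to spot services that are quietly logging there.
        import os
        import time

        windir = os.environ.get("WINDIR", r"C:\Windows")
        cutoff = time.time() - 3600

        for dirpath, _dirs, files in os.walk(os.path.join(windir, "system32")):
            for name in files:
                if name.lower().endswith((".log", ".txt")):
                    path = os.path.join(dirpath, name)
                    try:
                        if os.path.getmtime(path) > cutoff:
                            print(path)
                    except OSError:
                        pass   # locked or vanished file - skip it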

    When I get time (hahaha) I want to do a sort of BartPE CD that runs off a flash drive (USB key or CompactFlash-to-IDE adaptor), with flash life as the main objective… You can already do this with puppyOS (Linux as a live CD or live key - www.puppyos.com), but I’d prefer to build my own Linux + Samba as a competitor to Windows Embedded Storage Server, and post instructions in the English section of my blog (I’m a Brit in France, and I do not even have an English homepage!)

  9. scottt Says:

    We dual-boot XP and Fedora on the render nodes depending upon which particular renderer we want to use - predictably enough, the Windows-based stuff requires installs with hundreds of megabytes of DLLs.

    Let me know if you ever find time to put together your idea - I’ve been looking for a more efficient way to run a Linux MP3 server at home. I got a bit carried away with RAID arrays and such, but now that 750 GB SATA drives are available I’m looking for a more energy-efficient (and quieter) alternative…

    I’m also a Brit, lost in Los Angeles ….
