Sunday, September 6, 2015

Rebuilding the Network

Back in May of this year I revealed that I was in the process of planning out a network rebuild as a consequence of our home network aging out (the newest network infrastructure components were at least 10 years old).

You can read that post by hitting this link for the Home Network Revival Story post.


While most of us are not likely to admit to anything like hero-worship easily, I have to admit that it is the Zen-like awareness of the character Chris Stevens - and specifically his on-air persona as "Chris in the Morning," the DJ for radio station KBHR (which he announces as "Kay-Bear") - that I have in mind here...  Proof that figuring out what you want and then making it happen by recycling as much as possible is just plain cool, man...

Surveying the Reality of the Network Situation

When you begin a new journey it usually helps to start at the beginning - and in our case, for the re-imagining of our network, the beginning is where the wires attach to the house.

Since we previously ran our consulting business from offices in our basement that we had finished off with walls, doors, paneling, and a finished ceiling with proper lighting, the place where the wires come into the house was the most logical place to deploy our network racks.

So in that spot there was (past tense) a Relay Rack (a two-post 19” rack used exclusively for network hardware), beside which were two matching 7-foot-tall standard box-style 19” equipment racks of the sort commonly known as “Server Racks.”

The pair of Server Racks we chose back in the day were the then-current model of Tripp-Lite 42U rack enclosures. We bought two of them, with side-panels, but no doors (front or back), no wheels, and no handles. Just the standard leveler bolting scheme that allowed us to drive in concrete receivers so that we could bolt the racks to the floor.

For the Relay Rack we chose a Tripp-Lite SR-2Post-45U Open Frame Rack, with a solid steel 4-way-mount base plate that allowed it to be properly secured to the concrete floor so that it could safely contain up to 800lb of hardware (for which it is rated).

Even when it looks like it is still good, old network hardware is just that - old.

Age and Treachery Require Flexibility!

As we contemplated an entire re-design of our network in order to bring it, kicking and screaming, into the 21st Century, we had to make some tough decisions. The main driver for many of them was the fact that between 1995 (when the original network was planned out) and 2015, I lost the use of my legs and now require a wheelchair to get around.

As a result, in 2002 we moved my office to the ground floor (the basement office was no longer accessible to me), and the room designated as my new office was specially adapted.

Because I had converted my old and no longer manageable career in NetSec Engineering into writing - with more than half of my writing on the video games beat - that room was given the well known (and patented) Ikea treatment in order to make it as friendly to my needs as possible.

What that translates to is simple enough. Against the front wall of the room we installed a custom-designed and modular solution based on the "Besta" line of shelving and storage units from Ikea. 

Basically that included a primary Besta unit with a large open TV shelf off-center to the right and a bank of shelves along its left side. We then added a pair of large shelf banks, one of which was laid down on its side and bolted underneath the main unit to function as its base; that base provides five identical box-shaped storage openings that match the five similar openings along the bottom of the main unit.

Doing that raised the main unit so that when we stood the second bank of storage shelves on end along the left side of the assembled main unit, their heights matched. The result was a combined unit that easily accepts our 52” flat-screen TV and still offers plenty of storage.

The base of the TV space has a row of two very wide but low shelves meant to hold entertainment kit like DVD players, game consoles, and other modular gear, while the eight box-shaped openings plus the five additional openings in the bank laid on its side provide fully customizable storage space.

The total number of storage openings is 18, and they are sized to take the modular storage inserts from Ikea - conversion kits with backs and doors that turn an opening into enclosed storage, drawer inserts, and filing container inserts, just to name a few.

We went with a selection of different inserts for the office so that I had plenty of file space, as well as proper shelves for books and game cases.

Right, so now you know what the central focus of the office - what we call the E-Center - consists of. Positioned directly in front of the E-Center is a large flat bed-like platform upon which I can sit, lie, or recline while I work, with my notebook PC and game controllers easily accessible.

To the left of that work space is a small desk on which the A/V rendering server is placed, with its keyboard and mouse accessible to use from the work space. Finally along the left-hand wall is a proper desk for my wife's use.

It was into this fully functional and well-designed work space that we inserted - to the right of the E-Center, and thus occupying the right-front corner of the room - one of the 42U Server Racks, into which would be installed all of the hardware that, for various reasons, I might need direct access to.


The New MBR Rack Space and LAG

We decided to call this the MBR Rack, so the E-Center became the MBR Center.  Into the MBR Rack we initially installed a pair of shelves and some vented spacer plates - perforated plates that cover the front of the rack between pieces of hardware while still allowing ventilation, so that proper air-flow is ensured.

A 4U spacer plate was installed into the top of the MBR Rack, and directly under that we installed the first of a matched pair of Netgear GS724TV2 Managed Smart Gigabit Ethernet Switches that would serve as the backbone of our new network (bearing in mind that the original network was left in place and fully functional as we created the new one!).

So the pair of GS724TV2 Switches were installed - one in the MBR Rack, one in the NOC Relay Rack - and then fully and properly configured. That meant dialing in the correct settings and making two custom CAT6 Ethernet cables that connect Ports 23 and 24 on the MBR Switch to Ports 23 and 24 on the NOC Switch.

These two switches were then configured to treat the pair of CAT6 cables as a single 2Gbps backbone connecting the upstairs and downstairs switches - as otherwise a single Gigabit cable would be all that connected them.

All fully-managed and most smart-managed Gigabit Ethernet switches today offer a backbone feature of this sort, either via dedicated LAG configurations (LAG = Link Aggregation) or, as was the case for our switches, a dedicated aggregated Trunk of two cables and two ports for a total of 2Gbps of backbone bandwidth.

The reason we chose the Netgear GS724TV2 switches was a combination of price (they can be had on eBay dirt cheap) and the decision on Netgear's part to use Trunking in place of proper modern LAG.

Trunking is preferable for us largely because in theory - and if needed - we could use up to 8 ports (for a total of 8Gbps) for our backbone aggregation, whereas many of the LAG implementations on switches in this class limit it to just two ports - or 2Gbps.

Because we work with A/V files on a regular basis, and because file storage is planned to be centralized on our network, we need at minimum a 2Gbps backbone and quite possibly 4Gbps - so the ability to expand that as needed? Yeah, that was key.

With the two switches and the backbone connection in place, we tested the data transfer speeds and came to the conclusion that, as long as no more than three users are accessing A/V media or working with A/V files across the network at any given time, the 2Gbps backbone should be sufficient.

But just in case, we made a third CAT6 cable and connected Port 22 to Port 22 after disabling Port 22 on both switches (to ensure that there is no accidental looping), so a third 1Gbps link exists that can be added to the existing 2Gbps backbone to bring it up to 3Gbps if we ever need to. Having the raw cable on hand, there really was no good reason not to do it.
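For anyone who wants to sanity-check that kind of capacity planning, here is a back-of-the-envelope sketch in Python. The per-user rates and the headroom factor are assumptions rather than measurements from our network, and keep in mind that link aggregation spreads separate flows across the member links - any single transfer is still capped at 1Gbps.

```python
# Back-of-the-envelope check of the backbone math described above.
# The per-user rates below are illustrative assumptions, not measured values.

GIGABIT_MBPS = 1000  # capacity of one aggregated link, in Mbps

def backbone_ok(active_users, per_user_mbps, aggregated_links=2, headroom=0.75):
    """Return True if the aggregated backbone can absorb the offered load.

    headroom is the fraction of raw capacity we are willing to commit to A/V
    traffic, leaving the rest for ordinary LAN chatter (an assumption).
    """
    capacity_mbps = aggregated_links * GIGABIT_MBPS * headroom
    demand_mbps = active_users * per_user_mbps
    return demand_mbps <= capacity_mbps

# Three people hammering A/V files at an assumed ~500 Mbps each:
print(backbone_ok(3, 500))                      # True  -> the 2Gbps trunk holds
# A fourth heavy user is when patching in the spare Port 22 link pays off:
print(backbone_ok(4, 500, aggregated_links=3))  # True  -> 3Gbps trunk
```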

Proper Termination

The network downstairs was originally a free-flow cable configuration but, once that got too messy, we ended up adding a CAT5E Patch Panel to it ten years ago, and then custom cabling to Keystone boxes installed at the baseboards for a neater cable management scheme.

So all of the systems that connect to the NOC end of the backbone do so via properly terminated permanent CAT 5E cable runs that utilize a patch panel and patch cables for their end-point connections.

Since we were re-imagining our network and moving the Server Rack and a Switch upstairs into the office anyway, it seemed like a good idea to duplicate that standard upstairs. So we purchased an industry-standard CAT6 Patch Panel and properly terminated all of the cable runs that connect to it using the standard box-and-keystone configuration, so that, at strategic points in the walls, there are nice clean Ethernet jacks in place of spaghetti cable runs.

In the MBR Rack, installed just below the Ethernet Switch, is a 48-Port CAT6 Patch Panel that is almost identical to the 48-Port CAT5E Patch Panel in the NOC Relay Rack. So how is that for symmetry?

Power Management

The NOC Relay Rack was originally planned and deployed with a standard, grounded, 8-Port PDU that connected to a 2400VA UPS installed in the base of the Server Rack next to it, which allowed the various bits of kit in the Relay Rack to have their power independently controlled using the toggle switches on the PDU.

Of course at the time I was AB (able-bodied) and thus able to walk over to the rack and flick a switch - but once I was crippled, and with the hardware now completely inaccessible to me, if I needed a device attached to that rack reset, someone else had to go downstairs and flick the switch off, then on again.

Clearly that was not an ideal circumstance - and with so much of the hardware failing as it aged out, I was having to send switch-flickers downstairs several times a day, EVERY day!

So when the Server Rack was moved upstairs, we detached the mechanical PDU from the Relay Rack and brought it upstairs with the Server Rack, installing it at what would ordinarily be waist-high level in the rack. All of the network kit was then powered via that mechanical PDU.

Downstairs in the Relay Rack things were different. Changed. Better!

Via eBay we purchased an APC model AP7900 Network-Attached 8-Port PDU, complete with the same style of web-based admin access that the switches use, so that I can cycle the power to any connected device remotely.

Yep, that meant that even if I needed something attached to the NOC Relay Rack reset, I would not have to dispatch a human to do it - I could do it myself! And how cool is that?

The AP7900 is a pretty well-known high-quality piece of equipment. Purchased brand new from APC you are looking at $500 a crack - but used on eBay? Less than $90! So that is totally win-win and within the budget.

In fact I am seriously considering buying a second AP7900 for the MBR Rack and I would if that was not so decadent and lazy a decision :)
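Beyond the web UI, PDUs in this class can typically also be driven over SNMP, which makes remote power-cycling scriptable. Here is a minimal sketch using Python and the pysnmp library; the community string and the outlet-control OID are assumptions based on APC's standard PowerNet MIB, so verify them against your own unit's firmware documentation before relying on this.

```python
# Minimal sketch: power-cycle one outlet on a networked PDU over SNMP.
# ASSUMPTIONS: the PDU speaks SNMPv1/v2c with a writable community string
# and exposes APC's PowerNet MIB outlet-control table (sPDUOutletCtl).
# Verify the OID and community against your own unit before using this.
from pysnmp.hlapi import (
    setCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, Integer,
)

PDU_HOST = "192.168.1.50"          # hypothetical address of the PDU
WRITE_COMMUNITY = "private"        # assumption - change to your community
OUTLET_CTL_BASE = "1.3.6.1.4.1.318.1.1.4.4.2.1.3"  # sPDUOutletCtl (PowerNet MIB)
REBOOT = 3                         # 1 = on, 2 = off, 3 = reboot (per the MIB)

def reboot_outlet(outlet_number: int) -> None:
    """Send an SNMP SET asking the PDU to reboot the given outlet."""
    error_indication, error_status, _, _ = next(setCmd(
        SnmpEngine(),
        CommunityData(WRITE_COMMUNITY),
        UdpTransportTarget((PDU_HOST, 161)),
        ContextData(),
        ObjectType(
            ObjectIdentity(f"{OUTLET_CTL_BASE}.{outlet_number}"),
            Integer(REBOOT),
        ),
    ))
    if error_indication or error_status:
        raise RuntimeError(f"SNMP set failed: {error_indication or error_status}")

if __name__ == "__main__":
    reboot_outlet(4)  # e.g. bounce whatever is plugged into outlet 4
```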


Servers and Such

The Server Rack downstairs is part of the solution for our re-imagined network, because it is home to a new server we purchased and then, over the course of four months (as we could afford it), upgraded until it was ready to be deployed.

I am speaking of the Dell PowerEdge 2950 II server we call “Monster” that functions as the VM Host for our network.

Virtual servers are a new concept for us here - and when you stop to consider that the network we are building exists to provide Internet access to the residents of our farmhouse and work resources for a journalist, yeah, it probably IS overkill.

But when we did the math - combining the average cost of ownership with the electric bill for running four generic PC-based servers whose PSUs range from 450W to 900W (the file server uses a 900W PSU due to the number of drives in it) - it actually makes total sense!

The 2950 II has a pair of redundant 750W PSUs in it, and even so they end up drawing less than half the juice that the four original servers they replace did.
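For the curious, here is the rough shape of that math as a Python sketch. The draw figures and the electricity rate are illustrative assumptions, not measurements - a PSU's rating is a ceiling, not what the box actually pulls - so plug in your own numbers.

```python
# Rough annual electricity cost comparison: four standalone servers vs one
# consolidated VM host. All wattages and the rate are illustrative
# assumptions, NOT measurements - a PSU's label is a ceiling, not a draw.

HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.15  # assumed $/kWh - use your utility's actual rate

def annual_cost(avg_watts: float) -> float:
    """Annual electricity cost for a box drawing avg_watts continuously."""
    return avg_watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

# Assumed average draws for the four old servers (roughly half of PSU rating):
old_servers = [225, 250, 300, 450]   # the 450 is the drive-heavy file server
consolidated = 300                   # assumed average draw for the 2950

print(f"Old fleet:  ${sum(annual_cost(w) for w in old_servers):,.0f}/yr")
print(f"VM host:    ${annual_cost(consolidated):,.0f}/yr")
```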

With our new (well, actually used and refurbished) PowerEdge 2950 II - which we purchased from a server rehab outfit in New York City called TechMikeNY, whom we originally found on eBay (but who also has their own store-site online) - we basically got very, very lucky.

The process of finding reliable, reputable, honest, and knowledgeable suppliers for specialized computer hardware in this day and age is often something of a crap-shoot. And considering that we not only needed a reliable and savvy refurbisher but also a supplier who would be willing to answer all of our questions and provide solid and sound advice pretty much whenever we needed it... Well, that is almost an impossible task these days!

To make this absolutely clear, we have no reservations whatsoever in recommending TechMikeNY as a supplier, whether via eBay OR via their website sales center. The blokes working there know their hardware inside and out, can (and more importantly WILL) answer any and every question that you ask - even the really stupid ones - and provide high-quality refurbishing services for servers that they back up 100%.

The MIT Flea - held the third Sunday of the month, April through October - is where we obtain much of our used and refurbished hardware, as well as new batteries for the UPSes.

Adventures in Server Acquisition

We felt so comfortable - and happy - with the service they provided when we originally bought the server as an eBay transaction (a good route to use for a first purchase, since eBay adds a layer of insurance thanks to the eBay and PayPal policy of going to bat for the buyer whenever issues crop up) that we had no problems or concerns with hopping from eBay to their web storefront to do the rest of the purchasing.

At their well-made website we bought the rest of the kit we needed -- which in this case consisted of the following (with source noted):


MONSTER (Our VM Host Server Build)

PowerEdge 2950 II Initial Server Purchase @ eBay -- Total Cost: $250.74
(Buy-It-Now Price = $215.99 plus Standard Shipping = $34.75)
eBay Order and Purchase Number 221295119685 check it out for yourself!

This was basically a starter-package for a Commercial-Grade Enterprise-Level Virtual Server Host System that included the following bits and bobs:
  • Model: PowerEdge 2950 III
  • Processors: 2 x Intel 2.33GHz E5345 Quad Core
  • Memory: 8 x 2GB PC2-5300, FB
  • Hard Drives: 2 x 1TB 7.2K SATA 3.5"
  • Drive Bays: 6 x hot-swap 3.5"
  • Power Supplies: 2 x Dell 750 Watt (redundant PSUs)
  • Number of Power Cords: x2
  • RAID Controller: PERC 5i w/battery backup installed
  • Network Interface: 2 x Broadcom Gigabit Ethernet
  • Video Card: ATI Technologies Inc ES1000
  • Optical Drive: DVDRW-CDROM+
  • Remote Access Controller: DRAC5
  • OS: Windows Server 2008 R2 Evaluation Edition
  • Warranty: Standard 30 Day Included
Yes, all of that and only $250?! Not only is that a heck of a deal, it is an awesome server! The thing is, it is just what we went looking for - a starter kit. We knew going in that we would need to buy additional bits and bobs before we could actually deploy this as our server solution.

So happy were we with the initial purchase that when we found out that TechMikeNY was in the process of building its own web-based storefront, we waited for them to complete it and roll it out rather than doing the rest of the purchases via eBay (because why? Because we could save a few bucks on eBay fees that is why!).

When their storefront finally deployed we bought pretty much most of the rest of the kit we needed, which in case you are contemplating doing the same thing we did, was:
  • x3 1TB 7.2K SATA 3.5" @ $49 each;
  • x3 HP Drive Caddies @ $15 each;
  • x1 Dell PowerEdge Rapid Rails Kit @ $35;
  • x1 Intel Pro Dell X3959 Dual Port Gigabit Ethernet NIC Card PCI-E D33682 Card @ $36.99
Basically we needed the additional drives in order to create a pair of RAID volumes. The first is a 1TB volume consisting of a pair of mirrored drives (plus a dedicated Hot Spare drive) that holds the OS, the VM Host configuration, and the first virtual server - a network utilities server.

The second volume is an identical 1TB mirrored pair that hosts the second pair of virtual servers - our Wiki and Database Server and a testbed server for online game servers.

To make that work we needed to add another dual-port Gigabit Ethernet card, since each virtual server gets its own dedicated Ethernet port and we therefore needed four ports in total.

Finally there are the Rails - without which you cannot install this beast into a rack!

The only thing we still need (there was not enough in the budget for it at the time) is one more of the 1TB 7.2K SATA 3.5" drives and an HP drive caddy, to install as the designated Hot Spare for the second volume.
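To keep the drive-bay bookkeeping straight, here is a small sketch of how the six hot-swap bays are meant to be allocated; the bay numbers are my own illustrative labels and are not meant to match the PERC controller's actual slot numbering.

```python
# Sketch of the planned drive-bay allocation for Monster's six hot-swap bays.
# Bay numbers here are illustrative labels only - they are not meant to match
# the PERC controller's actual slot numbering.

bays = {
    0: ("Volume 1", "1TB mirror member"),      # OS + VM host + utilities VM
    1: ("Volume 1", "1TB mirror member"),
    2: ("Volume 1", "1TB hot spare"),
    3: ("Volume 2", "1TB mirror member"),      # Wiki/DB VM + game testbed VM
    4: ("Volume 2", "1TB mirror member"),
    5: ("Volume 2", "1TB hot spare (still to be purchased)"),
}

drives_on_hand = 2 + 3   # 2 from the original listing + 3 bought afterward
drives_needed = len(bays)
print(f"Bays: {drives_needed}, drives on hand: {drives_on_hand}, "
      f"still to buy: {drives_needed - drives_on_hand}")
```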

See, the way this works is that the server has to be installed in the NOC Rack, far away from humans, due to the incredible noise that Enterprise-Level servers tend to make thanks to their massive cooling requirements - trust me, this Monster sounds like a 737 taking off when its fans kick in!

So yeah, since it has to be racked away from anywhere that humans are likely to be found, having pre-configured and deployed hot spares makes total sense! A hot spare does just what it sounds like: if something goes wrong with one of the two drives that make up the mirrored volume, the RAID controller immediately sounds an audible alarm (that nobody will hear) and emails a warning notice to the admin address (which of course I am likely to see soon enough), while at the same time activating the Hot Spare and rebuilding the array onto it!

That way we never lose the mirror and the insurance that it offers, and the server can take the necessary steps to ensure the continued operation of the volume. I get the email and dispatch one of the kids to pull the bad drive - the drive caddies each have indicator lights that show which drive has gone bad or crashed, and there is also a display panel on the front of the server that will tell them which slot the bad drive is in...

There is a drawer in my wife's office desk that, among other things, contains our supply of replacement drives for the different RAID-equipped systems on our network - which basically means Monster plus the A/V rendering server, which we built using fast 1.5TB SATA 2 drives. I remove the bad drive from the caddy and swap in a new replacement, then the kid takes that shoe/caddy back downstairs and locks it back into the server, whereupon the RAID controller marks IT as the Hot Spare for the volume, and all is right in the world.

In the interim I go online and RMA the bad drive, then Yvonne boxes it up and ships it off to Western Digital, who sends back a new or refurbished 1TB 7200RPM replacement that goes into the RAID drawer in the desk, ready to start the whole process over again if it is ever needed.


Servers at the MBR

So Monster is fully deployed and our virtual host is providing the virtual servers to our very real network, and in the meantime we have to complete the deployment of the MBR Rack. That involves a few minor and some major tasks!

First there is the KVM Switch that we moved up from the NOC Server Rack (Monster has its DRAC card, so it does not need a head or a foot - we can access it via the DRAC), which means the KVM Switch is no longer needed downstairs.

Back in the day that KVM Switch - a Belkin OmniView 8-Port Switch - was the console interface for all of the servers in the racks, which basically meant all of the network resources plus our crack cluster. Today things are a bit different.

For one thing, the ancient 15” SVGA monitor that sits on a shelf in the NOC Rack is still sitting on that shelf - at least until we can recycle or sell it - because we will not be using either that old monitor OR the mini-keyboard and mouse that sit on a retractable shelf in the NOC Rack.

In its place we bought a used and refurbished KVM unit (basically a folding display with built-in keyboard and touchpad) that is built as a 2U drawer by APC, which we installed in the MBR Rack. Below the KVM is the OmniView KVM Switch, which is connected to the APC KVM and to each server to be installed in the rack.

Now, previously the Wiki Server plopped on top of the E-Center was the only server upstairs, other than the A/V Server, which is not installed in either the E-Center or the Rack. But that will change over the course of the next two months or so.


New Servers and the KVM

The guts of what was the Wiki Server are to be swapped into a new 19” rackmount chassis with high-density mounts for hot-swappable hard drives, and that box will become our NAS Server - NAS meaning Network-Attached Storage.

In this case that server will be built using the FreeNAS suite - which is basically a small, turnkey, appliance-style OS and utility that fits on an 8GB thumb drive. That way every hard drive actually installed in the server is available as storage space, with none given over to the OS.

The FreeNAS thumb drive is plugged into a USB port on the motherboard INSIDE the new case, which means it is nowhere that it can be tampered with or lost. Inside the case we have currently installed all of the spare SATA drives that we had lying around, arranged in pairs of identical drives, so that we can make use of the ZFS-based RAID mirroring that FreeNAS provides.

Eventually ALL of the drives in that box will be intentionally-purchased large-format drives - probably 2TB minimum, and eventually 3TB. Though really, when you think about it, since FreeNAS sets aside a chunk of each data drive for swap space anyway, we could probably get away with using 4TB drives, since it is only the portion of each drive actually usable as data space that matters (the swap does not count)...

Huh, that is certainly something to think about!

Actually, how much hard drive space you can use for NAS is effectively limited by the amount of RAM in the system, as the FreeNAS guidelines call for a minimum of 1GB of RAM per TB of disk space. So, for example, if you have a server with 16GB of RAM in it, you are effectively limited to 16TB of PHYSICAL disk space. Note the emphasis on “physical” there, because the RAM-to-HDD ratio applies to the raw disk capacity, not to the usable storage space you end up with after mirroring and overhead.

So for now that means we will have 16TB of physical hard drives in the server and 16GB of RAM. With the drives arranged as mirrored pairs, that translates to roughly 8TB of usable NAS space. Which is just fine - heck, 8TB of NAS is a LOT of storage space!
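Here is that sizing arithmetic as a quick Python sketch. The 1GB-of-RAM-per-TB figure is the FreeNAS/ZFS rule of thumb mentioned above, and the assumption that mirrored pairs leave roughly half the raw capacity usable is mine - adjust for your own pool layout.

```python
# Quick sizing sketch for a FreeNAS/ZFS box using the 1GB-RAM-per-TB rule of
# thumb. The "half the raw capacity is usable" figure assumes mirrored pairs;
# other pool layouts (RAIDZ, etc.) will land elsewhere.

def nas_sizing(ram_gb: int, installed_raw_tb: int, usable_fraction: float = 0.5):
    """Return (raw TB the RAM comfortably supports, estimated usable TB)."""
    supported_raw_tb = min(ram_gb, installed_raw_tb)  # 1GB RAM per raw TB
    usable_tb = supported_raw_tb * usable_fraction
    return supported_raw_tb, usable_tb

# Today: 16GB RAM and 16TB of raw disk -> roughly 8TB of usable NAS space.
print(nas_sizing(16, 16))    # (16, 8.0)
# The eventual upgrade: 64GB RAM and 64TB raw -> roughly 32TB usable.
print(nas_sizing(64, 64))    # (64, 32.0)
```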

Eventually, though, we will increase the RAM in the server to 64GB and install 64TB of physical drive space, for roughly 32TB of NAS. And that will be so sweet! Not only will that give the family plenty of space for saving photos and movies they take with their phones and cameras, but it will provide more than adequate space for the automatic backup scripts that back up each of their PCs every other night. So yeah, that is the plan.
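The backup scripts themselves are not written up here, but as a flavor of what they might look like, here is a minimal sketch that mirrors a PC's home directory to a share on the NAS with rsync; the hostnames, paths, and schedule are all hypothetical placeholders, not our actual setup.

```python
# Minimal sketch of a per-PC backup job that mirrors a home directory to the
# NAS over rsync/SSH. Hostname, share path, and source directory are all
# hypothetical placeholders - this is illustrative, not our actual script.
import subprocess

NAS_TARGET = "backup@nas.local:/mnt/tank/backups"   # hypothetical NAS share
SOURCE_DIR = "/home/kid1/"                          # hypothetical source

def run_backup() -> None:
    dest = f"{NAS_TARGET}/kid1-pc/"
    # -a preserves permissions/timestamps, --delete mirrors removals;
    # --link-dest could be added later for space-efficient incrementals.
    subprocess.run(["rsync", "-a", "--delete", SOURCE_DIR, dest], check=True)

if __name__ == "__main__":
    run_backup()   # schedule this every other night via cron or Task Scheduler
```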


Hollywood...

Almost the final step in the network upgrade will be building a new server called “Hollywood,” which will live in a standard 19” rackmount case, be installed in the MBR Rack, and be attached to the KVM. Its name should clue you in on its function...

Hollywood is the home-built Media Server I am creating using Ubuntu Server, a pair of TV capture/tuner cards, and as much RAM and hard drive space as I can cram in!

Using a subscription-based TV schedule database available online, the family will be able to access Hollywood either via its built-in web server or by sending the server an email message telling it what shows they want it to record.
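The email side of that could be handled by a small script that polls a dedicated mailbox and turns subject lines into recording requests. The sketch below is purely illustrative - the mailbox address, the "RECORD: <show name>" subject convention, and the queue file are hypothetical, and the post does not say which software Hollywood will actually use for this.

```python
# Illustrative sketch: poll a dedicated mailbox and turn messages whose
# subjects look like "RECORD: Show Name" into entries in a recording queue.
# Mailbox, credentials, subject convention, and queue file are hypothetical.
import email
import imaplib

IMAP_HOST = "mail.example.net"      # hypothetical mail server
MAILBOX_USER = "hollywood"          # hypothetical dedicated account
MAILBOX_PASS = "change-me"
QUEUE_FILE = "/var/lib/hollywood/record-queue.txt"

def collect_requests() -> list[str]:
    """Return show titles from unread 'RECORD:' emails."""
    shows = []
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(MAILBOX_USER, MAILBOX_PASS)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            subject = msg.get("Subject", "")
            if subject.upper().startswith("RECORD:"):
                shows.append(subject.split(":", 1)[1].strip())
    return shows

if __name__ == "__main__":
    with open(QUEUE_FILE, "a") as queue:
        for title in collect_requests():
            queue.write(title + "\n")   # the recorder daemon reads this later
```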

Hollywood will record the shows they ask it to - minus commercials - which means the kids will never miss their favorite shows, thanks to time-shifting. See, they have reached the point in their young-adult lives where having a job as well as going to uni is part of the deal. This way they still get their TV fix.

In addition, and thanks to some clever scripting and several apps, we can pop any DVD from our collection into the optical drive on Hollywood and it will automagically rip the contents to the hard drive and add it to the program database and its related web display on the server.
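As a flavor of what that "clever scripting" might look like, here is a minimal sketch that hands a loaded disc to HandBrakeCLI and records the result in a small SQLite catalog; the device path, output directory, preset, and database schema are my own illustrative choices, not the actual tooling on Hollywood.

```python
# Illustrative sketch: rip the disc in the drive with HandBrakeCLI and log the
# result to a small SQLite catalog. Device path, output directory, preset,
# and schema are hypothetical - the post does not name the actual tools used.
import sqlite3
import subprocess
from pathlib import Path

DVD_DEVICE = "/dev/sr0"                      # typical Linux optical device
LIBRARY_DIR = Path("/srv/media/movies")      # hypothetical library location
CATALOG_DB = Path("/srv/media/catalog.db")   # hypothetical catalog database

def rip_disc(title: str) -> Path:
    """Rip the main feature of the loaded disc to the media library."""
    output = LIBRARY_DIR / f"{title}.mp4"
    subprocess.run(
        ["HandBrakeCLI", "-i", DVD_DEVICE, "-o", str(output),
         "--main-feature", "--preset", "Fast 1080p30"],
        check=True,
    )
    return output

def catalog(title: str, path: Path) -> None:
    """Record the ripped title so the web front-end can list it."""
    with sqlite3.connect(CATALOG_DB) as db:
        db.execute("CREATE TABLE IF NOT EXISTS library (title TEXT, path TEXT)")
        db.execute("INSERT INTO library VALUES (?, ?)", (title, str(path)))

if __name__ == "__main__":
    name = input("Disc title: ")
    catalog(name, rip_disc(name))
```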

Once a title has been added, the source media can be safely stored and thus never risk being scratched! Not only that, but the show/movie/etc can now be watched from any network-attached device in the house that is capable of playing standard video formats!

So the family can watch Game of Thrones using their iPad or iPhone ON the throne (um, yeah, that is kinda gross but still...), or on the TV in the living room, or on their notebook or desktop PCs while lying in bed, or even on a portable device while sitting in the sun on the deck out back, or... Well, you get the idea...

So there you go - we went from an aging, largely failing network to a fast, reliable, modern, and useful one that offers all of the modern conveniences! Yeah, technology is good. But what is even better is that almost all of these improvements - the new network included - came from re-using existing kit, buying refurbished kit, and making most of it myself, from cables to servers.

The consequence of all that is a new network that was done on the cheap. And what could be wrong with that?!
