Monday, April 3, 2017

Retro Games, Rotation, and the Gamer

- or -
 
A Question of Retro Games, 
Game Play Rotation Lists, 
& Modern Gamers


Well unless Angry Birds happens to be in his Game Play Rotation List that is!
The Most Dangerous Gamer (Comic)
by Nicole Wakelin on December 10, 2012
 

PREFACE


There has never been a better time to recover from lazy gamer syndrome or its counterpart - no-time-to-play-itis - than today.  Now.  Bear with me, all will become clear.  But first we begin the lesson... 
 
The Importance of Context

Context is wicked important.  So are ideas like “logic” or “expression” or even “thought” and “emotion,” just to name a few.  One position on these matters can be found in the school of Epistemology -- the philosophical science and discipline under which we study and define how we know what we know - and the best ways to both communicate and illustrate those points.

At its most basic, “Epistemology” is defined as the study of the nature and scope of knowledge, justified belief, and the related systems that extend from there. Epistemology analyzes the nature of knowledge -- and how it relates to similar notions such as truth, belief, and justification -- and then defines those words and terms in useful ways, so that we can carry on a dialogue together.

The discipline also addresses our means of producing knowledge, and skepticism about different claims therein. I find this immensely appropriate and even poetic when I consider the alternate worlds that I have most recently existed in -- in particular that of Japan and its northern-most island, Hokkaido, in the world of Hitman (2016), and the world that exists within the construct of the game “Thief,” which was for all practical purposes created in the late 1990s and refined in 2014, and depicts an industrial-age society on some alien world.

Sure, those are fictional worlds - or are they? I can tell you that at times they felt very real to me - and in particular the moral codes that appear to have usurped that of the courts and Common Law in them.

Then there is the Darwinian approach to moral justification - something akin to Python Law rather than Common Law - when it comes to the significance of and importance for “getting even,” or revenge - two themes that play significant roles in both of those manufactured worlds.

And that despite the fact that humanity - let alone an individual citizen of one of the many different tribes that humans call “nation-states,” under which the species has been divided - often seeks that sort of satisfaction under conditions of grave danger. I'm just saying.

To have meaningful exchanges about these - and other - topics, we all need to agree on basic foundation points, like the actual meaning of phrases such as “Retro Games,” “Game Play Rotation List(s),” and even “Modern Gamer(s)” - and what about “Preface?” That being so, for the record, as I write this I am working from the following foundation points:

Retro Games = Any game that is older than the current season - but can be a very old game too.

Game Play Rotation List(s) = Any game title you play regularly but especially one you have yet to complete to your satisfaction.

Modern Gamer(s) = Me. You. Any gamer currently gaming even if they began their gaming career in the 1970s. As long as they are still gaming and doing it on modern hardware, they are a Modern Gamer.

Preface = The bits that come before the meat of the story.

See? That wasn't so difficult, now was it?

The Meat Part

Moore's law is an observation made by Gordon Moore back in the day that the number of transistors in a dense integrated circuit doubles approximately every two years. His observation turned out to be spot-on accurate, which is why they named it after him. It probably didn't hurt that Gordon Moore was also a co-founder of Fairchild Semiconductor and a little tech company called Intel.

The paper that Moore wrote and published in 1965 described the doubling - every year - in the number of components per integrated circuit, and projected that the rate of growth would continue for at least another decade - which turned out to be a very conservative time estimate, hindsight being 20/20 and all.

Borrowing from his experience I would like to introduce to you:

Boots-Faubert's Law

So yeah, this is the paper (well, article, not so much a paper, but still) that I am writing and publishing, and that history will draw upon to phrase what will become known as Boots-Faubert's Law of Game Play Rotation - a simple law in gaming which dictates that the typical Game Play Rotation List for a gamer will double in size every 12 months, as more games are added to the list thanks to two basic principles:

(1) The wizards at game studios continue to pump out games at a staggering rate, many of which are classified as “must-play” titles; and

(2) The average gamer will not have sufficient time in any given year to spend on completing these games, which will cause a backlog of incomplete games (and games they never got a chance to start playing in the first place) due to the lack of sufficient time to play them all.

The reasoning for this has to do with how big the video game industry has grown, and the fact that it continues to grow, with new studios appearing practically every day.
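To make the Law concrete, here is a toy projection of how a backlog grows under it. The starting size (six titles) and the four-year horizon are my own made-up illustration numbers, not figures from any survey:

```python
# A toy projection of Boots-Faubert's Law of Game Play Rotation.
# The starting backlog and horizon are hypothetical illustration values.
def projected_backlog(start_size: int, years: int) -> list[int]:
    """Backlog size at the end of each year if the list doubles every 12 months."""
    sizes, size = [], start_size
    for _ in range(years):
        size *= 2  # the Law: the list doubles every 12 months
        sizes.append(size)
    return sizes

print(projected_backlog(6, 4))  # six must-play titles, four seasons later...
```

Running that shows a six-game backlog becoming a 96-game backlog in four seasons, which is the whole point of the Law.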

2014

A good example of this trend and its effect can be found in the year 2014. Bear in mind that a decade ago the typical gaming season - which runs from September through May - generally produced around six AAA titles in the “must-play” category, and so was certainly within reach of the typical gamer. Which is why, back then, we didn't really have Game Play Rotation Lists of the sort we have now.

Fast forward to 2014 however, and the situation has changed. Peruse this sampling of just the primary “must-play” titles for that year:
  1. 2014 FIFA World Cup Brazil
  2. Alien: Isolation
  3. Assassin's Creed Rogue
  4. Assassin's Creed Unity
  5. Batman: Arkham Origins Blackgate
  6. Borderlands: The Pre-Sequel!
  7. Bound by Flame
  8. Call of Duty: Advanced Warfare
  9. CastleStorm: Definitive Edition
  10. Castlevania: Lords of Shadow 2
  11. Chariot
  12. Child of Light
  13. Dark Souls II
  14. Defense Grid 2
  15. Destiny
  16. Diablo III: Ultimate Evil Edition
  17. Dragon Age: Inquisition
  18. Dragon Ball Z: Battle of Z
  19. EA Sports UFC
  20. The Elder Scrolls Online
  21. Elite: Dangerous
  22. Escape Dead Island
  23. The Evil Within
  24. Fable Anniversary
  25. Far Cry 4
  26. Fez
  27. FIFA 15
  28. Final Fantasy XIV: A Realm Reborn
  29. Forza Horizon 2
  30. Goat Simulator
  31. Grand Theft Auto Online
  32. Grand Theft Auto V
  33. Guacamelee! Super Turbo Championship Edition
  34. Halo: The Master Chief Collection
  35. Halo: Spartan Assault
  36. Hearthstone: Heroes of Warcraft
  37. How to Survive
  38. Infamous: First Light
  39. Infamous: Second Son
  40. The Last of Us: Left Behind
  41. LEGO Batman 3: Beyond Gotham
  42. LEGO: The Hobbit
  43. The LEGO Movie Videogame
  44. Lightning Returns: Final Fantasy XIII
  45. LittleBigPlanet 3
  46. Madden NFL 15
  47. Mario Kart 8
  48. Mario Golf: World Tour
  49. Metal Gear Solid V: Ground Zeroes
  50. Metro Redux
  51. Middle-earth: Shadow of Mordor
  52. Minecraft for X1 / PS4
  53. MLB 14: The Show
  54. NASCAR '14
  55. NBA 2K15
  56. Need for Speed Rivals: Complete Edition
  57. Persona 4 Arena Ultimax
  58. Pinball FX 2
  59. Plants vs. Zombies: Garden Warfare
  60. Pokémon Battle Trozei
  61. Pokémon Omega Ruby and Alpha Sapphire
  62. Risen 3: Titan Lords
  63. The Sims 4
  64. Skylanders: Trap Team
  65. Sleeping Dogs: Definitive Edition
  66. Sniper Elite III
  67. South Park: The Stick of Truth
  68. Sunset Overdrive
  69. Super Smash Bros. for Nintendo 3DS
  70. Terraria
  71. Thief
  72. Titanfall
  73. Tomb Raider: Definitive Edition
  74. Transformers: Rise of the Dark Spark
  75. Tropico 5
  76. Valiant Hearts: The Great War
  77. The Walking Dead
  78. Warriors Orochi 3 Ultimate
  79. Watch_Dogs
  80. The Wolf Among Us
  81. Wolfenstein: The New Order
  82. World of Tanks: Xbox 360 Edition
  83. World of Warcraft: Warlords of Draenor
  84. Worms Battlegrounds
  85. WWE 2K15
While not every gamer is going to like every genre - so there will be some selective removals depending on personal taste - the list above contains 85 games! And it does not help matters that some of those titles don't really include official endings - particularly the MMOs.

Sure, I could have summarized that list - but then it would not have contained the gut-punching impact that the full list delivers. And if you think that is a lot of games to be released in one year, consider the fact that the list only presents the AAA games - there were three times that number of lesser and niche titles released in 2014 as well.

This is why the average gamer's Game Play Rotation List is going to continue to grow with each passing season.

Another Problem

If you think that the practical limits that usually apply - like only being able to afford X number of games in any given year - are helpful, consider this new problem: Microsoft has started GIVING games away for FREE to members of Xbox LIVE Gold.

Consider it - today when I checked the list of free Gold games - under the Games with Gold program - I found the following titles:
Ryse: Son of Rome
Evolve Ultimate Edition
Darksiders

So there you have three more titles I want to play. I WANT to play mind you. But I guarantee you that I won't have the time to fully play them to my satisfaction, so as sure as Bob's Your Uncle those three titles will end up being added to my Game Play Rotation List.

What's the Solution, Kenneth?

I don't know about you lot, but the idea of my GPRL simply ballooning forever bothers me. There are loads of entertainment withering away in there, just waiting for me to play them!

Fortunately I have a solution. I say we set aside Sunday afternoon through early evening for ME time. Game Time. We dedicate ourselves to removing titles from our GPRLs by really digging into a game every Sunday. Set Sunday aside for gaming! Free the Games! YEAH!

Saturday, March 25, 2017

. . . Status Update and Buzzard Luck.

I have good news and I have bad news. The good news is that I am not dead. The bad news is that at various points over the past 8 weeks or so I found myself almost wishing I was.

You lot have sent me piles of email asking after me - why I suddenly went silent, and why my progress on current active projects was so sporadic and almost random. I did have some health issues, but that is, at this point, largely in the past. Which is to say that I am on the mend and feeling a LOT better.

I sincerely apologise to those of you who have been waiting very patiently for the final expansion section for Hitman (2016) - which is long overdue as a result of my illness - but it is (I swear) very, very imminent, as I am in the last few hours of wrapping up the video edits and inserting the video illustrations for the main active project (Watch_Dogs 2) as I write this.


Once the WD2 WTG project is completed I will immediately segue into completing Hitman - which only has the Japan-expansion left to it.

I am not going to go too in-depth on the health issues other than to reveal that I experienced a combination of a cold that turned into an URI and a bad reaction to some new meds -- simple as that.

So thank you all kindly for the emails and your concern - it is very much appreciated. Please be relieved to know that I am better and I have resumed work.

Cheers!

Chris

Sunday, November 13, 2016

Planning and Building a Home Network for the Video Gamer

Creating a Gamer-Oriented Home Network
A Network Tech Series Feature (Chapter 1)

by CM Boots-Faubert

The realm of computers and network tech today is so far advanced and so well integrated into the modern home that the vocabulary of the average person includes words like firewall, gateway, router, and phrases like cloud computing, content curation, MAC Address, and virtual private network.

Shorthand like DNS, HTTP, ICMP, IP, IPv4, IPv6, ISP, LAN, NAT, TCP, UDP, URL, WAN and WiFi won't present the tech-savvy engineer with a challenge, but today it is also well-embedded in the vocabulary of waitresses, auto mechanics, and even the local parish priest - but especially youngsters and college-aged students. Which is why, when Uncle Ralph and Aunt Molly have a problem with their home network, they tend to turn to a nephew or grandchild first before seeking professional help.

These words, phrases, and shorthand have crept into the everyday vocabulary of non-technical people, this is true, but unlike said engineer, their interest in the underlying meanings pretty much terminates at the point where they cease to be useful in their lives.

So while they know what the words, phrases, and shorthand mean, often the relationships that exist between them are simply beyond their need to know, so they don't know them.

There are logical reasons behind that expansion in vocabulary, and the broader technical understanding of the average person - reasons that can easily be traced to the evolution of technology, and specifically computer and network technology - in the modern home.

Consider this: the presence of a wireless computer network in the average home today is so expected and unremarkable that the lack of such a service is more remarkable than its presence.

When your daughter has her friends over on the weekend for a slumber party -- which, by the way, they don't actually call a slumber party anymore; they call it a LAN Party -- the first question that the gaggle of tween guests in your home is likely to ask is “What's the WiFi password?” as they pull out their iPads, laptops, and smartphones.

The circumstances that create this scenario - a scenario that unfolds a lot in the world these days - did not happen overnight. Or in a vacuum. In fact we can easily track the various circumstances and events that led up to it.

The Evolution of the Home Network
In 1990 two events occurred that helped to set in motion a movement that would eventually lead to the Internet in its modern form. The first was the death of ARPANET, which died not because it needed to, but because in 1985 the directors of the National Science Foundation arrived at the conclusion that, if they were going to obtain the level of network and data services that they required, they would have to create it themselves.

After years of argument, the board of directors of the National Science Foundation proceeded to do just that, authorizing the establishment of a new network in 1986 - first by creating a very large telecommunications network (called “The Backbone”), through which they connected six strategic member networks - five of which happened to host Supercomputer Centers.

These were - starting from the East and heading West - (1) The John von Neumann Supercomputer Center at Princeton University, (2) The Cornell Theory Center at Cornell University, (3) The Pittsburgh Supercomputing Center (PSC) of Carnegie Mellon University, the University of Pittsburgh, and Westinghouse Corporation, (4) The National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign, (5) The National Center for Atmospheric Research (NCAR), and (6) The San Diego Supercomputer Center (SDSC) at the University of California, San Diego (UCSD).

The establishment of this new network - which they named NSFNet - resulted in the first high-speed national network to be created without direct input from DARPA - using a series of six backbone sites that were interconnected via leased 56kb/s dedicated always-connected lines.

That may not sound all that fast now, but back in 1986 the best that the average user could hope for in terms of connectivity was a 9600 baud modem connecting via POTS - Plain Old Telephone Service - a single pair of copper wires, otherwise known as a phone line.

Using the V.32 standard for full-duplex connections - capable of 9600 bit/s at 2400 baud - V.32 modems allowed for connection and transfer speeds of up to 9.6 Kbps - a figure that probably means nothing to you. Yet.

Here are some numbers that will mean something to you: that 9.6 Kbps actually translates to 4.32 MB/hr - or 103 MB/day. Now compare that to the typical modern high-speed Internet connection of around 9 MB/s -- which translates to around 72 Mbps, or 540 MB/min, which totals 32.4 GB/hr, or 777 GB per day.

So roughly translated, our ideal net user in 1986 would require roughly five days to transfer one minute's worth of modern traffic. If they were connected to that seemingly snail's-pace backbone of the original NSFNet, they'd only need around twenty-one hours or so, which is way better - WAY better - than five days! So yeah, it's not super fast but, at the time, it was.
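If you want to check those conversions yourself, the arithmetic is simple enough to script. This sketch uses decimal units (1 MB = 1,000,000 bytes) and the link speeds named above:

```python
# Re-running the bandwidth arithmetic from the text (decimal units).
def mb_per_hour(bits_per_sec: float) -> float:
    """Convert a raw line rate in bit/s into megabytes per hour."""
    return bits_per_sec / 8 * 3600 / 1e6

v32_rate = mb_per_hour(9600)        # ~4.32 MB/hr for a V.32 modem
one_modern_minute = 9 * 60          # a 9 MB/s link moves 540 MB per minute

hours_on_v32 = one_modern_minute / v32_rate            # ~125 hr (~5.2 days)
hours_on_56k = one_modern_minute / mb_per_hour(56000)  # ~21.4 hr on the backbone

print(round(v32_rate, 2), round(hours_on_v32), round(hours_on_56k, 1))
```

Which confirms the gap: roughly five days over the modem versus under a day over the 56 kb/s backbone, for the same 540 MB.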

The new NSFNet rapidly attracted partner networks, with the Canadian National Network connecting almost immediately after its creation, and hundreds of other networks of all sizes joining it over the course of the following two years, at which point the original Backbone failed to maintain the required speeds to service what had become a Global Network - or Internet.

After NSFNET began to accept foreign networks for permanent connection membership, and the new Global Internet became official in 1990, the decision was made to upgrade the Backbone - which they did, to the tune of a T-1 connection between each of the Primary Nodes (a T-1 connection is 1.5 Mbps). To help reduce stress on the Backbone, NSFNet was divided into Regional Networks so that, for example, when a user in London requested a page or program that was stored on a UK system, their request would not travel across the main Backbone but used only the Regional Net.

Eventually the Backbone was again upgraded - this time to a T-3 Connection (45 Mb/s) shortly after issuing the license for paid (ISP-based) access to the network - but now we are getting ahead of the story here.


The World is On Fire
The second major event in The Year That Changed Everything (1990) was a small company in Massachusetts called Software Tool & Die (AKA The World) connecting THEIR network to NSFNET.

The World was the first Commercial Internet Service Provider (ISP) and provided anyone willing to pay for an account access to the Internet. The shitstorm that followed from government agencies and universities eventually forced the NSF to grant provisional permission and license to The World to offer ISP services, and within a year that license was extended to ISPs all over the country and, eventually, the world itself. The modern Commercial Internet was born.

As regular people began to use the Internet, larger Value-Added networks like CompuServe and AOL also turned their attention to it, and Internet Access very quickly became a thing that forward-looking real estate companies added to their buildings in places like New York, Boston, and Los Angeles to attract what they thought of as upwardly mobile and thus desirable tenants.

The typical apartment lease form featured a comprehensive Utilities Section, which in 1990 and before, included specifications on who was responsible for electricity, water, and gas services, and reasonable limitations when the landlord or building owner provided some or all of those services.

In 1995 those forms began to include something called an Internet Access Lease Addendum -- a clause that spelled out both access terms for tenants and any use restrictions placed upon the building's 'Net Connection -- like upload and download limits, or using the residential connection for commercial purposes.

These additions to the average lease agreement spelled out the various technical details - whether the building network included a proxy web server, what sort of firewall was used, and what steps the residents needed to take to register their device(s) with the Internet Service Coordinator for the building.

By 2010 the Internet Access Lease Addendum was fully integrated into the Utilities Section in most markets, but thanks to the always-evolving computer threats the average tenant often refused to rely upon whatever firewall protections the building implemented, choosing instead to purchase their own WiFi Firewall Router that they registered with the building coordinator as the “computer” for their apartment.

In response to this new demand, hardware manufacturers all over the world began to design and manufacture a plethora of new devices that, in addition to offering an ever-evolving level of firewall protections, NAT services, and DHCP, also included slots to install hard drives for their network-accessible automatic backup software. They even started to make Internet-connected refrigerators - so yeah, you can easily lay responsibility for The Internet of Things on The Year That Changed Everything.

To put this in perspective for you, there is a high-rise building in New York City that has fully integrated the Internet into their infrastructure. In each of the flats is a dedicated screen by their entry doors that, in addition to displaying an image of who is standing outside the door in the hall, offers menus that display information on a variety of building conditions.

At the tap of the screen tenants can learn the current temperature at street-level, the air and water temperature for the building pool and hot tubs, whether the sauna is turned on and, if so, its current temp, and they can call up a view of their assigned parking spot in case they want to check on their car.

The interface allows them to summon the concierge, send a text to the doorman or the deskman, and even order groceries from a limited menu of necessaries - milk, bread, bottled water, that sort of thing - provided by a nearby store that offers delivery service to the building.

Using the building's wireless Ethernet service they also have access to a Wiki Server that offers the current calendar of co-op events, as well as a number of maintenance services.

Despite all of that convenience at their fingertips, to maintain fair and impartial network access and speeds, these internal networks often limited residents to one or two Internet-connected devices, and also imposed limits on data use and on access to certain ports or services.

To address those restrictions, the more tech-savvy residents either created their own private networks or hired someone to do it for them. These non-routed 10.10 or 192.168 private networks hid behind the officially registered IP address of their NAT-capable firewall-router, so that from the building network side it appeared that there was only one device, while LAN and WAN access was available to all of the devices on their private network.
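Those 10.10 and 192.168 networks work because they fall inside the RFC 1918 private address ranges, which are never routed on the public Internet. Python's standard `ipaddress` module can show the distinction (the three addresses below are just examples I picked):

```python
import ipaddress

# RFC 1918 private addresses are what let a whole flat full of devices
# hide behind the single registered address of a NAT firewall-router.
for addr in ["10.10.0.5", "192.168.1.20", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "->", "private" if ip.is_private else "public")
```

The first two print as private; the last (a public DNS server) prints as public, which is why it can be reached from anywhere while your NAT'd devices cannot.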

Enter the Gamer
While that sort of solution works great for the average user, whose needs are restricted to email, web surfing, streaming music or video via services like Netflix and Hulu, or providing their kids with a connection for their iPads and smartphones, it did not work very well for gamers, who often found that the network services provided by their building or co-op tended to feature restrictions on large data transfers and the lack of Open Network Address Translation (open NAT).

The typical video game - whether a console or PC game - often has an aggressive patching and updating model, and most of the games that included online multi-player required open-NAT in order to channel their services via specific ports from their servers to specific ports on the client end.
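In practice, "open NAT" simply means that the ports a game's servers expect to reach are actually reachable from outside. A minimal sketch of that idea - the host and port here are placeholders, and real matchmaking services use their own (often UDP-based) probes:

```python
import socket

# A toy reachability probe: with open NAT the game's service port
# answers from outside; with strict NAT the connection attempt fails.
def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

When a probe like this fails from the server side, games typically fall back to relayed or restricted matchmaking - the symptom gamers in locked-down buildings kept running into.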

In most commercial settings those services were intentionally blocked for the protection of their clients, and bandwidth limitations were often applied to any user who exceeded the monthly allotment, which averages between 10 and 20 GB per month.

Basically gamers found that access to desirable services and games -- including a plethora of online multi-player games as well as MMORPGs -- was severely restricted or simply blocked. They also found that the typical game updates and patches could easily eat up their bandwidth allotment with updates to just four or five titles. For example the most recent patch to Tom Clancy's The Division totaled 5.39 GB - so you do the math.
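Doing that math is sobering. Using the 5.39 GB Division patch against the 10 and 20 GB caps mentioned above:

```python
# "You do the math": how many patches the size of that Division update
# fit inside the monthly data caps mentioned in the text.
PATCH_GB = 5.39
for cap_gb in (10, 20):
    fits = int(cap_gb // PATCH_GB)
    left = cap_gb - fits * PATCH_GB
    print(f"{cap_gb} GB cap: {fits} patch(es), {left:.2f} GB left for everything else")
```

One patch on a 10 GB cap, three on a 20 GB cap - and that is before any email, streaming, or actual online play touches the connection.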

The nature of network services is such that there really is no work-around in this case, which is why most gamers who live in net-connected buildings still tend to contract their own personal net connection from the local ISP - which in recent years pretty much means either a Cable TV modem or high-speed Internet services from the Telephone Company.

In some areas, if the network owner was fortunate enough to obtain cellular Internet services when the wireless phone companies were offering unlimited all-you-can-eat contracts for a set price, you'll see gamers whose firewall router terminates in a cellphone, but that's uncommon today.

The reason that this class of netizen chooses to go their own way in terms of net access comes down to their need for open NAT, the ability to assign specific ports to specific IP addresses inside their network - either directly or by passing through a virtual DMZ - and the need to download huge amounts of data in the form of games, game patches, and updates.

Whether the gamer lives in a Net-connected apartment or a house in the middle of the boondocks, the basic needs for creating a network are the same - which is where we begin in this article in our Network Tech Series.

Part I - Planning Your Gamer-Oriented Computer Network

The sexy part of building a computer network is when you sit down to pick your hardware. That's when the typical gamer gets to shine a light on their tech-savvy chops, and maybe brag a little on their choices for hardware infrastructure.

You may be interested to know that, despite the fact that it is sexy, the process of creating a new computer network - whether it is a standard data network or a gamer's network - does not begin with picking hardware.

It properly starts with the actual network design, which is a process that usually takes place on paper, and covers a number of crucial elements including the three most important decisions that must be made. Of course that presumes that the gamer is following standards of network design.

We've seen more than a few networks that we can only describe as Frankenstein Networks - examples where the gamer started with a net connection in their living room consisting of a cable modem and a router, with or without a firewall, to which many things were added piecemeal over time until it turned into a disaster.


A Sample Frankenstein Network
The result is what we jokingly refer to as a Distributed Network. An example of this is the network that belonged to a friend who asked us to help fix their Frankenstein Network. The problem with that is that fixing is not really the best approach. The best approach is to throw it all out and design a proper network from scratch, reusing anything already present that still fits the plan.

To help you understand this let's take a look at the network in question.

What we found when we came to survey it was this: the WAN connection was (A) a cable modem in the master bedroom, which was connected to (B) an older firewall router with four ports and no WiFi. To get WiFi they ran a 60' Cat-5 cable to the other side of the house, where they plugged that into (C) a LinkSys WiFi Router.

There were four client systems in the house - (D) a PC in the living room that was connected to the WiFi router by Cat-5 cable, (E & F ) laptops in the two bedrooms belonging to their kids which connect to the network via WiFi, and (G) a PC in the master bedroom connecting to the firewall router by Cat-5 cable.

There were also some game consoles - three in the living room - but there were only three ports left open on the WiFi Router, so they had purchased an (H) 8-port Ethernet Hub and plugged that into the WiFi Router, plugging their (I) Xbox 360, (J) PlayStation 3, and (K) Wii into the hub. Later they added an (L) Xbox One and (M) PS4 to it.

When they got into playing a specific game a few years ago they ended up building their own (N) game server which, because there was no room elsewhere, they placed in their garage, and connected it to the network by running another 60' Cat-5 cable through the attic to the master bedroom, which was plugged into the firewall router.

At some point they had a near break-in at their house, so they bought an (O) IP Security Camera System, which they ended up sticking in the garage, buying a surplus (P) 10bT Ethernet Switch that they placed in the garage and plugged the game server and IP camera server into. They then placed the (Q/R/S/T) four cameras that it came with at various locations outside and inside their home, with one connected to the hub in the living room, one connected to the last available port on the firewall, and the other two connected to (U) an Ethernet hub that they placed in the attic and connected to the living room hub.

It was a mess, but it got the job done (sort of). It did have a number of problems, not the least being lots of collisions and, due to one of the cables getting crushed, some cross-talk on that link. They had no way to know that though, because none of the hardware that they were using was managed hardware so it was incapable of telling them a problem existed.

So let's begin with an inventory of the network...
  • A Black Box Cable Modem provided by the Cable Company (10bT)
  • Netgear RP 114 “Web Safe” Router (10bT / 100bT)
  • Linksys WRT54G WiFi Router (10bT / 100bT)
  • 3Com Unmanaged Switch (10bT)
  • Generic 16-Port Ethernet Hub (10bT / 100bT)
  • Game Server PC (10bT / 100bT / 1000bT)
  • Security Camera Appliance (10bT / 100bT)
  • Security Cameras (x4) (10bT / 100bT)
Network Clients
  • PC A (10bT / 100bT)
  • PC B (10bT)
  • Laptop A (10bT / 100bT / 1000bT)
  • Laptop B (10bT / 100bT / 1000bT)
  • Nintendo Wii (802.11 b/g WiFi)
  • PlayStation 3 (10bT / 100bT / 1000bT)
  • PlayStation 4 (10bT / 100bT / 1000bT)
  • Xbox 360 (10bT / 100bT)
  • Xbox One (10bT / 100bT / 1000bT)
The three major issues that we identified beyond the mess that the physical network represented are:

(1) Divergent Ethernet Speeds
(2) Ancient Hardware
(3) Lack of reporting capability

Our Hardware Recommendations
This is a useful teaching experience for you - because it demonstrates the decision making process as it applies to network design.

The very first step in this process after the inventory was creating a network plan. That meant drawing a layout of the physical structure, and then determining the best place to start the network from. In this case, because of other issues that the network owner had - and their desire to go in a commercial direction in terms of format (they had already purchased a rack at the Flea) - the direction the plan took was dictated by some of those issues.

Considering that almost all of the hardware on their network was ancient, it shouldn't be a surprise that we recommended replacing it all - including the cabling. Fortunately for them, I have the tools and the know-how to custom-create Ethernet cable, and a box of Cat-6 cable in my basement, which eliminates what can be a significant expense.

We also live near Boston, Massachusetts, which means that we have access to the MIT Flea Market - an electronics, radio, computer, and networking flea market that runs from April to October one Sunday each month. The deals that you can get at the MIT Flea include relatively modern hardware for dirt cheap dollars, so when you know what you are looking for, you can find some awesome kit at rock-bottom prices!

Using the layout of their house we created a network map, which first centralized the network services in one manageable location (the garage). That offered the capability of monitoring the network for problems, and it also made regular maintenance easier: instead of using the cable modem provided by the cable company - which they did not have administrative access to - replacing it with their own model gave them interface access, which is necessary if you need to troubleshoot a problem.

The server rack that they had purchased at the Flea prior to consulting me turned out to be a heck of a deal. They somehow bought an APC 42U NetShelter rack for $100 - a rack that sells new for ten times that amount. Unfortunately it was just the bare rack, lacking the front and back door enclosures, but we were able to track down used ones at the very next Flea.

We needed the enclosures because they wanted a rack-mounted server capable of supporting VPN and RAID - a single-server solution for everything their network needed, which basically came down to the game server, plus a media server and a Wiki-style web server that they could use to organize their business.

What we ended up recommending to them was to replace their kit with the following:
  • x1 SB6183 SURFboard Cable Modem ($81.99 via eBay) 1000bT
  • x2 Netgear GS724T Smartswitch ($100 via eBay) 1000bT
  • x1 Netgear Centria N900 Dual Band Gigabit Wireless Router ($55 via eBay) 1000bT
  • x1 Dell PowerEdge 2950 II RM Server with rails ($250 via techmikeny.com) 1000bT
  • x4 WD 2TB Drive w/2950 Caddies ($60 via techmikeny.com)
Owning their own cable modem meant that they could return the one that was costing them $10 a month in rental fees, so the new cable modem paid for itself in less than 9 months. Beyond that, the new cable modem offered them full Gigabit Ethernet on the LAN side of their connection - the ancient cable modem they had been using since they first obtained their Internet connection was a 10bT connection, which was ludicrous considering the speed of their Internet package.
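
The payback arithmetic above is simple enough to sketch. A quick calculation, using the prices from this build (your ISP's rental fee may differ):

```python
# Months for a purchased cable modem to pay for itself in avoided rental fees.
# Prices are from this build; your ISP's rental fee may differ.
import math

modem_cost = 81.99      # SB6183 SURFboard via eBay
rental_fee = 10.00      # monthly modem rental charged by the ISP

months_to_break_even = math.ceil(modem_cost / rental_fee)
print(months_to_break_even)  # 9 - after that the modem is pure savings
```

Every month past the break-even point is money that stays in your pocket, which is why owning the modem is almost always the right call when your ISP allows it.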

The matched pair of GS724T switches were set up at the two ends of the network: one in the garage rack, and one in the living room entertainment center that contained the game consoles and the cable modem. The two GS724Ts were configured so that ports 22, 23, and 24 created a 3 Gb/s trunk backbone to allow for multiple streaming clients.
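
A back-of-the-envelope check shows why a 3 x 1 Gb/s trunk is comfortable headroom for streaming. The client count and per-stream bitrate below are illustrative assumptions, not measurements from this network:

```python
# Sanity check: can a 3 x 1 Gb/s trunk carry the expected streaming load
# between the two switches? Client count and bitrate are assumptions.
trunk_ports = 3
port_speed_mbps = 1000                      # each GS724T port is Gigabit
trunk_capacity = trunk_ports * port_speed_mbps

clients = 6                                 # simultaneous streams (assumed)
stream_mbps = 40                            # Blu-ray-class bitrate (assumed)
load = clients * stream_mbps

print(trunk_capacity, load, load < trunk_capacity)  # 3000 240 True
```

One caveat: link aggregation balances whole flows across the member ports, so any single client stream still tops out at one port's speed. The trunk helps with many concurrent streams, not with making one stream faster.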

The WiFi router was placed in the living room, as that offered the best overall coverage for its users.

All of the Ethernet Cable was custom made Cat-6, with cable run management via the basement to reduce the mess and clutter it originally presented.

The 2950 II was installed in the Garage Rack, and configured as a VM Server. To the network it appeared to be four different servers - the Game Server, Media Server, Wiki Server, and a Loghost with direct email capability. The logs for all of the network devices were sent to the Loghost, and any alarm conditions generated an email to the owner's account.
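
The loghost's alert rule can be sketched as a simple filter: scan incoming syslog lines and flag the ones that should trigger an email. The keyword list and the `is_alarm()` helper below are illustrative - in practice this job is done with standard syslog tooling rather than a custom script:

```python
# Minimal sketch of a loghost alert rule: flag syslog lines that look like
# alarm conditions. Keywords and helper name are illustrative assumptions.
ALARM_KEYWORDS = ("crit", "alert", "emerg", "link down", "raid degraded")

def is_alarm(line: str) -> bool:
    """Return True if a syslog line looks like an alarm condition."""
    text = line.lower()
    return any(keyword in text for keyword in ALARM_KEYWORDS)

sample = "Oct 12 03:14:07 switch1 trapd: link down on port 17"
if is_alarm(sample):
    print("would email the owner:", sample)
```

Tools like rsyslog filters or logwatch implement exactly this pattern; the point is only that the loghost turns raw device logs into actionable mail instead of noise nobody reads.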

We used mostly free utilities to make the networked VMs easier to manage, including FreeNAS/Plex for the media server and Webmin to manage the other three servers. We also used a free OS - Ubuntu Linux - for the bulk of the VMs, though the game server required Windows Server.

The network that we started with was quirky, slow, and difficult to manage. The network we ended up with was streamlined, incredibly fast in comparison, and very easy to manage. In the end the total cost for upgrading and replacing the network? $1,247.50 (though I did not charge anything for my help or the Ethernet cables).

They were able to recover almost $200 of that from selling off the hardware we replaced via Craigslist.

Proper Network Design Elements
When you approach the design of a new network, there are specific elements that need to be planned out. Those are:
  • Cable Pathing and Management
  • Network Device Placement
  • Network Service Location
Before we progress any further we need to define what those three important decisions mean.


Cable Pathing and Management: Don't be confused by the term Cable Pathing and Management - it means exactly what it sounds like it means, which is determining how you will manage and place the physical network cables that will connect your systems to the central device space.

If you were thinking that installing and managing physical cable was only going to be necessary for the actual physical cable that connects your firewall and router to the WAN side of the connection, prepare to be disappointed. Because if you are serious about building your own home network that meets gamer-class efficiency and speeds, you are not going to be using WiFi as your primary network connection. The latency will kill you.

As of this writing, the standard for Ethernet cable used in home networks is called Category 6 - though a second generation of that category, called Category 6a (Cat-6a), is also available. This is the standard for Gigabit Ethernet.

Previously when 100bT speeds were the standard, Category 5 (Cat-5) was the prevailing standard, but with the wider introduction of Gigabit Ethernet, Cat-6 has taken over as the default standard. The reasons for that are simple enough.

Ethernet Cable Technical Differences
Since the original adoption of cables for computer networking, a standards committee has routinely specified the minimum technical requirements for these cables, because the performance characteristics of said cables operate in a very narrow range.

While the differences in cable specifications are not as easy to see as physical changes in a cable, the specs for each are crucial to their proper function. Each category of cable is certified to perform at set ranges, and it is the minimum - not the maximum - speeds that network engineers are concerned with, because the ability to at least reach and maintain the minimal traffic load is critical to a network functioning at all.

In terms of cable standards, Ethernet Cable is measured by specific requirements which include a standard length for measurement, operating MHz, the aforementioned minimum operational speeds, and finally the capability of offering Power-Over-Ethernet (PoE) without that service negatively impacting the data-side.

Here are the specs for the modern cables that you will find in commercial and home networks right now:



Category   Length (m)            10Mb/s   100Mb/s   1Gb/s   10Gb/s   PoE   MHz
Cat-5      100                     X        X                         X    100
Cat-5e     100                     X        X          X              X    100
Cat-6      100 (55 for 10Gb/s)     X        X          X       X      X    250
Cat-6a     100                     X        X          X       X      X    500

It's no coincidence that the category number and MHz rating climb together - each new category brings more stringent testing requirements for eliminating crosstalk, as well as added isolation between the wire pairs.

That said, with Ethernet YMMV. We've seen cables used in ways that are not in line with the specifications - networks with runs longer than 100m, and networks that used Cat-5 instead of Cat-5e for Gigabit Ethernet connections - that totally got away with it.

The reason is that the Cat-5 wire being used just happened to be of higher quality than usual. Cat-5e is not a different design, mind you - it is Cat-5 cable that has been held to more stringent crosstalk testing standards than are generally applied to Cat-5.

You can often get away with longer runs, and with standard Cat-5, as long as it is high-quality cable - but cable used that way may not deliver the expected results. It may work, but at lower efficiency.
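
The spec table above boils down to a simple lookup that answers the practical question: is this run within spec for the speed I want? A minimal sketch, using the lengths from the table:

```python
# The cable spec table expressed as data, plus a helper that checks whether
# a planned run is within spec. Lengths (in meters) are from the table above.
MAX_RUN_M = {
    # (category, speed in Mb/s): maximum certified run length
    ("cat5",  100):   100,
    ("cat5e", 1000):  100,
    ("cat6",  1000):  100,
    ("cat6",  10000): 55,    # Cat-6 only reaches 10 Gb/s on short runs
    ("cat6a", 10000): 100,
}

def run_within_spec(category: str, speed_mbps: int, length_m: float) -> bool:
    limit = MAX_RUN_M.get((category, speed_mbps))
    return limit is not None and length_m <= limit

print(run_within_spec("cat6", 10000, 70))   # False - too long for 10 Gb/s
print(run_within_spec("cat6a", 10000, 70))  # True
```

Going outside the spec is exactly the "YMMV" territory described above - it may work with high-quality cable, but the standard no longer guarantees it.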

Conversely, just because you're using Cat-6 cable doesn't mean you are actually obtaining 1000bT network speeds, because every connection on your network must support Gigabit Ethernet to achieve that. And just as Cat-5e is retested Cat-5, Cat-6a is Cat-6 cable certified to 500 MHz communication (compared to Cat-6's 250 MHz). The point of certifying the higher communication frequency was to eliminate alien crosstalk - which is what allows Cat-6a to sustain 10 Gb/s over a full-length run.

If you are using older hardware - and especially if you are using dumb hubs - the entire network will slow down to the speed of its slowest member. If a server on your network only offers 100bT, every 1000bT client connecting to it has to step down its speed to talk to it. That is something you need to consider when planning out your network.

You also need to test all new cable runs to verify that they are hitting the certified speeds. If you have a bad run, the network devices are not going to simply slow down to, say, 900bT to talk on it - they will step down to the next standard level, which is 100bT.
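
The step-down behavior described above can be modeled in a few lines: Ethernet links negotiate down to the next *standard* rate, never to an arbitrary speed. This is a sketch of the rate-selection logic, not any vendor's actual implementation:

```python
# Model of Ethernet rate negotiation: both ends settle on the highest
# standard rate at or below what the link can actually carry.
STANDARD_RATES = [10000, 1000, 100, 10]  # Mb/s, highest first

def negotiated_rate(ceiling_mbps: int) -> int:
    """Highest standard Ethernet rate at or below the link's ceiling."""
    return max(rate for rate in STANDARD_RATES if rate <= ceiling_mbps)

# A 1000bT client talking to a 100bT server steps down to the server's rate:
print(negotiated_rate(min(1000, 100)))  # 100

# A bad Cat-6 run that can only carry ~900 Mb/s doesn't run at "900bT" -
# the link falls back to the next standard rate down:
print(negotiated_rate(900))  # 100
```

This is why a single marginal run hurts so much: you don't lose 10% of your gigabit speed, you lose 90% of it.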

Network Device Placement: When you plan out the placement of your network devices, at least part of the decision process needs to include environmental requirements and how the devices will be deployed: whether or not users will require physical access, and whether the connection environment will change frequently.

Network Service Location: When you can't tailor your device placement to the service location, special care must be taken to ensure that the cable runs from the service location to the device placement are 100% correct and functional, as anything less will have a major negative impact on the network.

WiFi is a Convenience: Another issue that you need to come to terms with is that WiFi networking is simply a convenience. The rapidity at which a WiFi router can be over-saturated is laughable. If you have systems on your network that need to move large amounts of data, or that depend on maintaining the highest speeds possible, you want to be using cable, not WiFi.

Adding WiFi capability to a network is largely a courtesy to unsophisticated users - like your kids who just want to jump online with their iPads or smartphones to check their email. It's really not appropriate for gaming or streaming.

To Rack or Not to Rack?
We personally know more than a few gamers who started this process by purchasing 19” Computer Racks of varying heights as the foundation for their home network; the example we gave in the Frankenstein Network is a case in point.

For the most part they don't do this because they need to - they do it because they WANT to. It looks cool. They like it. It makes them feel like they have a boss network. So here is the thing - despite all that if you can afford it, go ahead and do it!

I use racks for my own home network but that is something of a special case. In addition to a pair of server racks I have a relay rack for my network devices, which are ALL basically rack-mounted kit. If you do decide to go that route, understand that you do NOT have to replace the systems you want to rack with rack-mounted systems. That would be wicked expensive.

You can either purchase standard rack-mounted shelves to place the generic PC cases on, or for about the same price, you can buy a rack-mounted PC case and swap the guts of your PC into it. If you are curious go to eBay and do a search for Rack-mounted PC Case. Prices range from $50 to $500 though the lower-end cases will not come with a power supply. So yeah, it is doable. And yeah, it does look cool.

In addition to looking cool, a fully-enclosed rack will also provide noise management - which means you can use them to reduce the noise of servers and network appliances to levels that make placement acceptable in your house, rather than needing to stick them in a garage or basement.

That said though, racks exist to be home to rack-mounted components, like Ethernet Switches, Routers, Load Balancers, and Servers, not your Xbox 360. Just saying.

Figuring out your needs means knowing how many rack units you will require. A rack unit is a unit of measure used to describe the height of a server, network switch or other similar device mounted in a 19-inch or 23-inch rack (though 19-inch is the most common width).

One rack unit is 44.45 mm (1.75 in) high. One rack unit is commonly designated as "1U"; similarly, 2 rack units are "2U" and so on. The size of a piece of rack mounted equipment is usually described as a number in "U" - so counting up the U for the kit you have will tell you how tall of a rack you might want or need.
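
Counting up the U is just addition. Here is a quick tally for kit like the build described earlier - the per-device heights are typical figures and given as assumptions:

```python
# Tally rack units to size a rack. Heights are typical for this class of
# hardware and are assumptions, not measured values.
U_MM = 44.45  # one rack unit in millimeters

kit = {
    "Dell PowerEdge 2950 II": 2,   # 2U server
    "Netgear GS724T switch":  1,   # 1U
    "Cable modem on a shelf": 1,   # 1U shelf (assumed)
    "Patch panel":            1,   # 1U (assumed)
}

total_u = sum(kit.values())
print(total_u, "U =", round(total_u * U_MM, 2), "mm")  # 5 U = 222.25 mm
```

Always leave spare U for airflow and future additions - which is also why a $100 42U NetShelter is a great deal even if you only fill a fraction of it.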

If you are going to go with a rack for in-house use, and you plan to use it for noise management, I recommend you choose a half-rack, as that is a LOT easier to find space for or camouflage.

Completing Your Network Plan
Now that you know the basics of planning, it's time to complete your network plan. Start by sketching a schematic of your house or apartment, then noting where each piece of hardware will go.

Make a list of all the hardware and network-connected devices you will need to accommodate, and then work out where they will best fit into the new network plan.

One of the most important decisions you will need to make is whether or not you require a backbone. If your home is large and a significant amount of client hardware is located somewhere distant from the Internet Connection where it enters the home, then you will need a backbone.

Planning, designing, and implementing a network backbone is the subject of the next chapter in this series. Hopefully the contents of this chapter have offered you sufficient information to begin planning your new network. While you are doing that, as a gamer, remember - this is supposed to be fun.