
Network Infrastructure Question

  • 18-06-2008 11:06am
    #1
    Registered Users Posts: 156 ✭✭ gizzymo


    Hi all, and thanks in advance to anyone who replies to this message.

    I work for a firm that does landscape architecture, 3D visualisation and town planning.

    At the moment, each dept has its own server, as the 3D vis team often open and save massive files, which I would have thought would slow down a single shared server too much, even if it had multiple NICs etc.

    The Landscape Architecture team also deal with some pretty massive volumes of data regularly. For instance, a single AutoCAD drawing of 50 megs could easily contain links to 20 others of a similar size and pull them all in at load time.
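    To put rough numbers on that (a back-of-envelope sketch; the throughput figure is just an assumption, not a measurement):

    ```python
    # Rough estimate: pulling one AutoCAD master plus its xrefs over gigabit.
    # All figures here are illustrative assumptions.
    DRAWING_MB = 50        # one drawing, from the post above
    XREF_COUNT = 20        # linked drawings pulled in at load time
    LINK_MB_PER_S = 110    # assumed sustained gigabit throughput after overhead

    total_mb = DRAWING_MB * (1 + XREF_COUNT)   # master + 20 xrefs
    seconds = total_mb / LINK_MB_PER_S
    print(f"~{total_mb} MB per open, ~{seconds:.0f} s on an otherwise idle link")
    # ~1050 MB per open, ~10 s -- and that's one user; several people opening
    # drawings at once contend for the same server NICs and disks.
    ```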

    The planning dept's files are pretty small, and they piggyback on a share on the main PDC/Exchange server. It's all Word files for them, nothing cosmic.


    So for the moment, I think having separate servers for Landscape Arch and 3D Vis makes sense.

    However, here is the problem: from the business perspective, the work that these depts do is becoming increasingly interlinked. As such, they are keen to amalgamate all the data onto one network drive. Off the top of my head, unless it was some kind of cluster, this would require amalgamating the data onto one server. If that were the case, it would truly need to be a beast, with not only several load-balancing NICs but a killer quad CPU and tonnes of RAM. Am I correct on this?

    I know some form of cluster may be in order, but they are not keen on spending huge money. Any suggestions, in general, on a path to start exploring?

    (P.S. The file servers are all Server 2003; the plan is to keep the Exchange server separate from this, as it is already.)

    Thanks in advance!

    Pete


Comments

  • Closed Accounts Posts: 695 ✭✭✭FusionNet


    I think you've hit the nail on the head with a lot of your ideas. Keeping the Exchange server separate is definitely a good idea.

    As for the next steps, I only have experience in one or two of the areas you're asking about, so I won't try to answer anything else.

    My first concern would be making sure your cabling can handle the data. There's no point looking at expensive new servers when you may have weaknesses in the cabling. So if you're not at Cat6 level, or even Cat7, I'd consider that as part of the spend. And presumably you already have gigabit managed switches connected with fibre?

    After that, I'd also consider the budget. Try to find out what your max spend is, as there's no point looking at amazing systems you just can't afford. If you gave some of the guys here a guesstimate of what you can afford to spend in one go, or over three years, I think that would greatly help.


  • Closed Accounts Posts: 164 ✭✭ob


    First off, how many users do you have? Are they all in the one location, i.e. all connecting to the same switch stack?

    What is the performance currently like?

    I don't think the server needs to be a beast, but I think the storage system does.

    I think it's worth looking at a SAN for something like this, and make sure your server has top-end NICs, and plenty of them.

    If you didn't want to go the SAN route, you could still use Distributed File System (DFS) to make the share appear in one place while still distributing the load among file servers.
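    As a toy illustration of the namespace idea (the paths here are made up, and real DFS resolves these referrals transparently at the SMB layer rather than in client code):

    ```python
    # Hypothetical DFS-style namespace: one logical root whose folders map
    # to shares on different physical file servers. Paths are invented for
    # illustration only.
    NAMESPACE = {
        r"\\firm\projects\landscape": r"\\landserver\landscape",
        r"\\firm\projects\3dvis":     r"\\visserver\3dvis",
        r"\\firm\projects\planning":  r"\\pdc\planning",
    }

    def resolve(logical_path: str) -> str:
        """Map a logical namespace path to the physical share behind it."""
        for root, target in NAMESPACE.items():
            if logical_path.startswith(root):
                return target + logical_path[len(root):]
        raise KeyError(f"no target for {logical_path}")

    print(resolve(r"\\firm\projects\3dvis\model01.max"))
    # -> \\visserver\3dvis\model01.max
    # Users see one drive; the load stays split across servers.
    ```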

    Personally I think the SAN route is the way to go, providing you have the budget to do it right.

    Gigabit switches are obviously a minimum requirement as well, using link aggregation to load-balance server NICs and switch ports.


  • Registered Users Posts: 156 ✭✭gizzymo


    Hi,

    Thanks for the replies, guys. Currently the network is all gigabit, with fibre between the switches. We have about 40 users in the office, and performance is currently very good.
    The switches are in two locations, one per floor, with high-bandwidth fibre links between them.

    I like the SAN idea; however, we will have to upgrade our switches to managed ones, as that is going to be essential. For decent link aggregation, do I have to go with Cisco, or are there more affordable options?

    I have a contact for Dell SAN equipment that should get me a pretty decent discount...

    Thanks in advance

    Peter


  • Registered Users Posts: 1,698 ✭✭✭allybhoy


    40 users is not a lot to be honest, so personally I would say your current network infrastructure might support you. A few questions first though.

    Have you any idea what your network utilisation is currently like? If not, find out before you do anything. Make sure you capture utilisation at peak times, e.g. 9am and 5pm, as well as off-peak times. Once the upgrade is complete, you need to do this again. This is important because when you perform a large upgrade like this you need a baseline for comparison: if things start to slow down after the upgrade and your boss freaks out that it cost him X, Y and Z and his network is crawling, at least you can show him real figures that he can't dispute.

    Do you have any system performance data, over an extended period, for the server that currently hosts the Landscape Architecture team's data? If not, I would obtain this info as well; your server might only spike for half a second when a file is being accessed, but unless you have detailed info that you can analyse, you won't really know what to purchase.
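    As a sketch of the kind of sampling meant here (this assumes the third-party psutil package on a modern Python; on the Server 2003 boxes in question you'd collect the equivalent perfmon counters, but the idea is the same):

    ```python
    # Minimal baselining sketch: log network, disk and CPU load at a fixed
    # interval so before/after-upgrade figures can be compared.
    import time
    import psutil  # third-party: pip install psutil

    INTERVAL = 60  # seconds between samples; run it across the 9am/5pm peaks

    net_prev = psutil.net_io_counters()
    disk_prev = psutil.disk_io_counters()
    psutil.cpu_percent()  # prime the CPU counter; the first call returns 0.0

    while True:  # stop with Ctrl+C
        time.sleep(INTERVAL)
        net = psutil.net_io_counters()
        disk = psutil.disk_io_counters()
        net_mbit = (net.bytes_sent + net.bytes_recv
                    - net_prev.bytes_sent - net_prev.bytes_recv) * 8 / INTERVAL / 1e6
        disk_mb = (disk.read_bytes + disk.write_bytes
                   - disk_prev.read_bytes - disk_prev.write_bytes) / INTERVAL / 1e6
        print(f"{time.strftime('%H:%M')}  net {net_mbit:7.1f} Mbit/s  "
              f"disk {disk_mb:6.1f} MB/s  cpu {psutil.cpu_percent():5.1f}%")
        net_prev, disk_prev = net, disk
    ```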

    Are all clients on 1000BASE-T?

    Are all clients/servers on full duplex? Presumably they are if they're on 1 Gb.

    How are the two switches connected via fibre? Is it a point-to-point uplink/downlink cable, or a fibre switch?
    If it's a fibre switch, you could just install an HBA card into a new server with an external storage bay or two, without getting costly SAN equipment. A lot of our customers do that now: they might have 2 or 3 terabytes hanging off a server that has two HBAs installed, and these would be network-intensive jobs, like web or SQL servers.

    How much data do you need to store, and how much data will be accessed on a daily basis by the users?

    I know you may have a large graphics department, but are they accessing large templates every day and viewing them on the server, or are they pulling them down to their local machines, viewing/modifying them, then saving them back to the network share?

    This probably asks more questions than it answers, but I hope you find it of some help.


  • Closed Accounts Posts: 164 ✭✭ob


    gizzymo wrote: »
    I like the SAN idea; however, we will have to upgrade our switches to managed ones, as that is going to be essential. For decent link aggregation, do I have to go with Cisco, or are there more affordable options?

    3Com managed switches are fairly good. I installed a few of their 5500G-EI ones, and they've got a ton of features and excellent performance.


  • Registered Users Posts: 559 ✭✭✭ZygOte


    Do not confuse network switches that have a fibre interconnect with fibre-based SANs; they do not use the same switches, and they are totally different. Fibre Channel SANs use dedicated 2/4, and sometimes 8 Gb, fibre fabric switches, which operate on FCP, not TCP/IP like your servers. iSCSI and FCoE based SANs can be used with traditional gigabit networking switches.


  • Registered Users Posts: 156 ✭✭gizzymo


    Thanks all for the advice, got plenty of food for thought now.

    Although it has been pointed out to me that 40 users is not that much, just to highlight our issue: because of the super-large projects being worked on, the differential backups are about 280 gigs a day, sometimes a lot more, and that is excluding the Exchange server etc.
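    To put that backup figure in context (rough arithmetic; the link throughput is an assumed number, not measured):

    ```python
    # How long does one day's differential backup take over gigabit?
    DIFF_BACKUP_GB = 280      # from the post, per day
    LINK_MB_PER_S = 110       # assumed sustained gigabit throughput

    minutes = DIFF_BACKUP_GB * 1000 / LINK_MB_PER_S / 60
    print(f"~{minutes:.0f} minutes to move one day's differential backup")
    # ~42 minutes with the link saturated end to end -- the raw data volume,
    # not the headcount, is what sizes this network.
    ```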

    To answer some questions:

    The clients are all 1000BASE-T full duplex, and our current switches are connected via point-to-point fibre cables.


    I really appreciate all your help,

    Pete

