
Tales from the land of Gigabit Ethernet

Options
  • 15-06-2004 1:17am
    #1
    Registered Users Posts: 648 ✭✭✭


    Is anyone out there using Gigabit Ethernet on the desktop?

    After keeping an eye on prices for a while, I decided Gigabit technology was now cheap enough to get my feet wet. The year-old MSI motherboard on my main workstation (Athlon 2400) has Gigabit Ethernet built in; I picked up two PCI Gigabit Ethernet cards for about €30 each, along with a D-Link 5-port Gigabit switch for €80.

    I copy a lot of media files back and forth across the network (mainly video footage) and this was the primary motivation for the upgrade. An hour of digital video is about 13 GB, and on a 100BASE-T network, it takes a while to copy.

    My two workstations are running Windows XP SP1 (one is only a P700, which I haven't upgraded yet) while my main fileserver was recently upgraded to an Athlon 2800 running Windows 2003. I installed one of the PCI network cards in this.

    After doing some basic tests, I'm a bit underwhelmed. I have the two Athlon systems wired together through the switch, and copying data using standard Windows filesharing gets me a transfer rate of just 50 Mbps. I ran some custom TCP and UDP throughput test tools, and raised this to about 140 Mbps -- a bit better but hardly breaking new frontiers.

    One interesting thing I did notice: the Athlon 2400 running XP was flat out at 140 Mbps - the CPU was so busy it didn't even have time to update its performance graph, which was frozen at 100%. Meanwhile, the Win2003 server (admittedly a slightly faster machine) was only using about 20% of its CPU. This was true regardless of which direction the tests ran in, and it certainly bears out Microsoft's claim that Win2003 has been heavily optimised for performance.

    After a little research, I suspect I'm close to the performance limit of a 33 MHz PCI bus. I don't have any inclination to upgrade everything to 64-bit just yet, so I'll make do. It would be good to figure out how to get Windows to use more of the bandwidth for simple file copying, however. (This is more or less identical to the speeds I was getting with 100BASE-T, by the way, suggesting that other issues are at work here.)

    It looks like it might be worthwhile upgrading my main workstation to Windows 2003; I may install it on a spare disk and run some performance tests to see if the boost merits the effort of an upgrade - I've been pretty happy with XP so far, so I'm reluctant to disrupt it too much unless there's a good reason.

    Also worth mentioning that the PoS €30 Gigabit Ethernet card proved very flaky at 100BASE-T - copying large amounts of data caused it to lock up, even with the latest drivers. Switching back to the onboard 100BASE-T Ethernet made the problem go away. This card is based on the Realtek RTL8169S chip.

    The Athlon 2400's motherboard Ethernet uses the Broadcom NetXtreme chipset. Driver differences between the two chips may also account for the difference in CPU utilisation, of course, so maybe I'll try the second Gb Ethernet card in there instead.

    If anyone else has experience to share, please do.


Comments

  • Registered Users Posts: 18,484 ✭✭✭✭Stephen


    Are you sure the slow transfers you are experiencing aren't caused by your hard drives maybe not being able to keep up?


  • Registered Users Posts: 6,163 ✭✭✭ZENER


    What type of cables are you using? Cat5e (350 MHz) is the minimum for reliable gigabit operation; http://www.sql-server-performance.com/jc_gigabit.asp is a good place to start.

    Might not be relevant, but ya never know!

    ZEN


  • Registered Users Posts: 15,815 ✭✭✭✭po0k


    what "PoS" cards did you get?

    It really does pay to spend a little extra and get good kit.

    Have you got Jumbo packets enabled?

    As Stephen says, it could well be your drive performance that's holding you back, but you say you work primarily with media/video, which would imply that you've got either SCSI drive(s), an IDE RAID array or a SCSI RAID array (*fap*).

    The "custom benchmarking tools" should have created a RAM drive instead of using your physical media.

    PCI bus bandwidth is 132 MB/s.
    Theoretical "max" GigE bandwidth is 125 MB/s.
    That doesn't take into account header info etc., which should be stripped and processed by the card before it hits the bus.

    You really should look at getting a PCI-X or PCI-Express based motherboard for your machine.


  • Registered Users Posts: 648 ✭✭✭Tenshot


    Thanks for the comments everyone.

    Cables: I'm fairly confident these are okay. The D-Link switch does a pretty extensive 15-second line test whenever a cable is first plugged in, and only enables Gigabit if it is satisfied the cable can take it. (I should maybe plug in a crap cable and confirm that it actually does prevent it trying to run Gigabit.)

    Drives: Someone else mentioned the drive bottleneck as well, which is a good point. The drives are SATA drives, non-RAID, and I get about 50 MB/s copying large files from one drive to another on the same system (roughly 450 Mb/s of equivalent network traffic, once you include protocol overhead). I think the SATA controller is on a separate bus from the PCI cards, but I'm not sure.

    The Ethernet cards are literally no-name units from either China or Taiwan. I think I picked them up from Marx Computers, otherwise it was Komplett; I'd had them a couple of months, waiting to find a cheap switch before installing them. They use the Realtek RTL8169S Gigabit Ethernet chip. (As an aside, when I installed the new MSI motherboard, the MSI driver disk picked up the Gigabit Ethernet card automatically -- and proceeded to install an RTL8139S driver instead. I've since replaced it with the correct driver.)

    Theoretical throughput: Max throughput is indeed about 132 MB/s with 33 MHz PCI; if I achieved even half that, I'd be quite happy. Gigabit should actually be delivering 125 MB/s across the bus on a fully saturated link, since the full frame (Ethernet headers and all) is delivered to the PC's memory. Unless, of course, the Gigabit card is intelligent enough to do some of the initial frame processing itself, which would help a lot. I've heard there are network cards that implement a local TCP/IP stack for precisely this reason, but I bet they're expensive.
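    The arithmetic behind those two figures, for reference (nominal numbers that ignore PCI arbitration and other real-world losses):

```python
# Nominal peak rates for 32-bit/33 MHz PCI and Gigabit Ethernet.
pci_mb_s = 32 / 8 * 33    # 4 bytes per clock at 33 MHz = 132 MB/s
gige_mb_s = 1000 / 8      # 1000 Mb/s line rate = 125 MB/s

print(f"PCI peak:  {pci_mb_s:.0f} MB/s")
print(f"GigE peak: {gige_mb_s:.0f} MB/s")
print(f"A saturated Gigabit link uses {gige_mb_s / pci_mb_s:.0%} of the bus")
```

    So a fully loaded Gigabit link would take roughly 95% of the shared bus all by itself, before the disk controller or anything else on PCI gets a word in.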

    Benchmarking: The test tool I used was a small utility called NetSpd; it generates packets with a fill pattern on the send side and discards them on the receive side, so there is no real i/o activity other than the network itself. I've used it for ages - it's quite handy since it lets you easily check TCP vs UDP with different packet sizes etc.
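    For anyone who wants to try the same kind of test without NetSpd, the core idea fits in a short Python script. This is a toy sketch of the approach, not the actual tool - it runs over loopback here; point HOST at another machine's address (and run the receiver half there) for a real network test:

```python
# Toy NetSpd-style TCP throughput test: the sender pushes a fill pattern,
# the receiver discards everything and just counts bytes, so no disk I/O
# is involved on either side.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5001   # loopback for illustration
BLOCK = 64 * 1024                # send/receive block size
DURATION = 1.0                   # seconds to transmit

def receiver(result):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total = 0
    while True:
        data = conn.recv(BLOCK)
        if not data:
            break
        total += len(data)       # discard the payload, just count it
    conn.close()
    srv.close()
    result["bytes"] = total

result = {}
rx = threading.Thread(target=receiver, args=(result,))
rx.start()
time.sleep(0.2)                  # give the listener time to come up

pattern = b"\xAA" * BLOCK        # fill pattern generated in memory
tx = socket.create_connection((HOST, PORT))
start = time.monotonic()
while time.monotonic() < start + DURATION:
    tx.sendall(pattern)
tx.close()
rx.join()
elapsed = time.monotonic() - start
mbps = result["bytes"] * 8 / elapsed / 1e6
print(f"{mbps:.0f} Mb/s")
```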

    Jumbo Frames: I'd forgotten about these; I'll look into turning them on in the driver this evening and see if it helps.

    TBH, it sounds like I should just invest in some proper network cards and stop trying to do it on the cheap! I'll do a bit more testing and if I get any significant improvement, I'll add it to the thread.

    What I'd really like to hear though are stories from other people who are also using Gigabit, but getting much better performance. What hardware are you using, what special tricks did you need to do, etc?

    (On another board I read, a poster said their company network was routinely pulling 650 Mb/s across Gigabit Ethernet, peaking at 850 Mb/s. This was with IBM workstations, connected via a Cisco Layer 3 Gigabit switch -- all expensive equipment, but nice to know it works as expected when you get the right gear.)


  • Registered Users Posts: 18,484 ✭✭✭✭Stephen


    I assume that was the traffic between switches with a bunch of PC's connected to each?


  • Registered Users Posts: 648 ✭✭✭Tenshot


    Here's the exact quote from that poster (I'd remembered some of the details incorrectly).
    I have PCI-X on all of my new RS/6000 (pSeries) servers and I can sustain 600 Mbps across the net links with bursts to 850 mbps. This is with a Cisco 7500 switch frame in the middle.
    Anyway, I've been doing some more experimenting this evening, and had some encouraging results. I made the following changes:

    - Installed second Realtek Gigabit Ethernet card in my XP machine (replacing the Broadcom motherboard connection). No significant change on CPU usage or performance, but the driver configuration gave me a few more parameters to play with.

    - Disabled a bunch of protocol stacks I'd forgotten I had installed (network sniffer utilities, QoS, etc.) This made a BIG difference! My raw TCP throughput test went from 140 Mbps to around 220 Mbps.

    - Removed the D-Link switch from the equation and connected the two PCs directly together. (No crossover cable needed - the Gigabit cards auto-detect crossover as necessary; very convenient. Crossover is an irrelevant concept for Gigabit anyway, since all four pairs are used for both transmit and receive.)

    - Enabled jumbo frames. This gave another boost, up to 270 Mbps on the raw throughput test. Unfortunately, jumbo frames don't work well with my existing network equipment, but on a back-to-back connection, they're fine.

    - With all this, my NetBIOS transfers were still stuck at around 50 Mbps. So, I decided to try NetBIOS-over-IPX. After a few configuration snafus, I got it working and was very pleased to see speeds of around 140 Mbps. Unfortunately, it wouldn't sustain these for more than about 30 seconds before dropping back down to slow speeds again. This sounds suspiciously like a cable (or maybe a driver) issue; some more experimenting to do here.

    - Also useful to know: the Windows XP network performance monitor gets confused by non-IP traffic; its throughput graph was off by a factor of 100 compared to the same graph under Windows 2003, which was scaled correctly.
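    The jumbo-frame gain is mostly per-frame overhead disappearing. A quick calculation with the standard Ethernet framing figures (not measured on this setup) shows the effect:

```python
# Fraction of wire bits that end up as TCP payload at a given MTU.
# Per-frame cost on the wire: 14-byte Ethernet header + 4-byte FCS
# + 8-byte preamble + 12-byte inter-frame gap = 38 bytes, plus
# 40 bytes of TCP/IPv4 headers (no options) inside each frame.
WIRE_OVERHEAD = 14 + 4 + 8 + 12
TCPIP_HEADERS = 20 + 20

def tcp_efficiency(mtu):
    payload = mtu - TCPIP_HEADERS   # TCP payload per frame
    wire = mtu + WIRE_OVERHEAD      # total bytes on the wire per frame
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {tcp_efficiency(mtu):.1%} of wire bits are payload")
```

    At the standard 1500-byte MTU roughly 5% of the link goes on framing and headers; 9000-byte jumbo frames cut that to about 1%, before counting any per-packet CPU savings.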

    So, it looks like I have a possible solution for now. I'll leave my server and main workstation connected over their private Gigabit link. Meanwhile, the motherboard Ethernet links on each board can be used to connect to the rest of my network at 100BASE-T.

    I have yet to test performance between two Windows 2003 machines; since CPU usage on the XP machine remained close to 100% during all the changes above (except for the file copies), while Win2003 was hardly breaking a sweat, I expect this will give me another significant boost in raw performance. Whether that will translate into improved file copying speeds remains to be seen.


  • Closed Accounts Posts: 1,136 ✭✭✭Superman


    ah shure isn't not fast enough!


  • Moderators, Recreation & Hobbies Moderators, Science, Health & Environment Moderators, Technology & Internet Moderators Posts: 91,886 Mod ✭✭✭✭Capt'n Midnight


    Yeah cables are important.
    I've one here that will pull a Gigabit card back to 100 Mb.

    RE: NetBIOS - M$ used to have very high overhead on their network protocols; I never got more than 300 Kb/s on a 10 Mb BNC coax network.

    Makes you think: if a gigabit network card is cheaper than a HDD (and faster than some), and your OS can remote boot, you could have a silent PC.


  • Registered Users Posts: 648 ✭✭✭Tenshot


    I made a couple of final changes; one helped a bit, the other helped a lot.

    The small change was replacing the 7 m patch cable I'd been using to connect the PCs back to back with a 5 m solid-core cable (home-made, in fact). This seems to have smoothed out a number of the test results (i.e. they are more consistent); the 7 m cable may have been a bit noisy.

    The big change was putting Win2003 on the second system. With both systems running Win2003, I now get 640 Mbps (yes, almost three times the previous speed!) with normal frames and 680 Mbps with jumbo frames. Despite that, the sending machine is only using about 30% CPU while the receiver is at about 55%.

    The network performance graphs are absolutely flat running these tests (and in fact, the numbers I quoted are TCP data throughput, so add on another 4% or so for frame overhead - which probably accounts for the jumbo frames improvement; they have substantially less protocol overhead). I reckon this is about 70% of the theoretical maximum throughput of the 32-bit 33 MHz PCI bus, which is pretty good going.
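    The sums behind that estimate, taking the jumbo-frame figure as 680 Mb/s of TCP payload (rough arithmetic only):

```python
# Rough check of the "about 70% of the PCI bus" figure.
goodput_mbps = 680               # jumbo-frame TCP payload rate
wire_mbps = goodput_mbps * 1.04  # add roughly 4% for frame/header overhead
pci_peak_mbps = 32 * 33          # 32-bit bus * 33 MHz = 1056 Mb/s
print(f"{wire_mbps / pci_peak_mbps:.0%} of the PCI bus's nominal peak")
```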

    So I'm happy now; at least the hardware is making a decent effort.

    Unfortunately, NetBIOS continues to let me down - no significant improvement on previous results for straight file copies. Rather frustrating - perhaps the disk IS proving to be something of a bottleneck here after all. (I don't think so, though - transfer rates are actually higher at 100 Mbps than at 1000 Mbps, so I think NetBIOS is getting upset about timeouts or something.)

