
MySQL Performance Issue

  • 18-05-2021 9:27am
    #1
    Registered Users Posts: 7,265 ✭✭✭ RangeR


    So, a little backstory. We have two beefy servers [16 cores (2x Xeon E5-2450), 67GB RAM, SSD RAID] running CentOS under a hypervisor. It's the only VM on each box and consumes all the hardware. Well, each physical server has 96GB RAM, but apparently the hypervisor maxes out at 67GB per VM.

    The first server is a webserver running a standard LAMP stack. As it currently runs both apache/php and MySQL, I want to offload MySQL to the second server to free up resources and improve performance on the webserver.

    Both servers are in the same rack.

    The second server is [apparently] exactly the same config and specs. I copied the MySQL config from the webserver to the new DB server.
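    One detail worth noting for anyone doing the same split: depending on the distro's packaging defaults, MySQL may only listen on loopback, so the DB server's config needs to bind to the LAN address (and the web user needs a GRANT for the webserver's host). A rough sketch, with a placeholder address:

    ```
    [mysqld]
    # listen on the DB server's LAN address (placeholder) instead of 127.0.0.1
    bind-address = 192.168.1.20
    ```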

    The problem is, load testing the webserver is much worse when the website is pointing to the DB server as opposed to localhost.

    Webserver processors:
    [~]$ lscpu
    Architecture: x86_64
    CPU op-mode(s): 32-bit, 64-bit
    Byte Order: Little Endian
    CPU(s): 16
    On-line CPU(s) list: 0-15
    Thread(s) per core: 1
    Core(s) per socket: 8
    Socket(s): 2
    NUMA node(s): 2
    Vendor ID: GenuineIntel
    CPU family: 6
    Model: 45
    Model name: Intel(R) Xeon(R) CPU E5-2450 0 @ 2.10GHz
    Stepping: 7
    CPU MHz: 2099.998
    BogoMIPS: 4199.99
    Hypervisor vendor: Microsoft
    Virtualization type: full
    L1d cache: 32K
    L1i cache: 32K
    L2 cache: 256K
    L3 cache: 20480K
    NUMA node0 CPU(s): 0-7
    NUMA node1 CPU(s): 8-15
    Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 popcnt aes xsave avx hypervisor lahf_lm ssbd ibrs ibpb stibp xsaveopt md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities


    DB server processors:
    [~]# lscpu
    Architecture: x86_64
    CPU op-mode(s): 32-bit, 64-bit
    Byte Order: Little Endian
    CPU(s): 16
    On-line CPU(s) list: 0-15
    Thread(s) per core: 1
    Core(s) per socket: 8
    Socket(s): 2
    NUMA node(s): 2
    Vendor ID: GenuineIntel
    CPU family: 6
    Model: 45
    Model name: Intel(R) Xeon(R) CPU E5-2450 0 @ 2.10GHz
    Stepping: 7
    CPU MHz: 2099.999
    BogoMIPS: 4199.99
    Hypervisor vendor: Microsoft
    Virtualization type: full
    L1d cache: 32K
    L1i cache: 32K
    L2 cache: 256K
    L3 cache: 20480K
    NUMA node0 CPU(s): 0-7
    NUMA node1 CPU(s): 8-15
    Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 popcnt aes xsave avx hypervisor lahf_lm ssbd ibrs ibpb stibp xsaveopt md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities

    I use loader.io to stress test my configuration. I run three tests per machine, per configuration, and each test browses through three pages. Stress testing with both apache/php and MySQL on the webserver is about three times faster than testing with the website pointing its database connection at the DB server.

    Am I missing something? On the face of it, both servers are identical, but I'm getting better results when everything is on one server. It makes no sense. At this stage, the only thing I can think of is the network, but they're both in the same rack.


Comments

  • Registered Users Posts: 5,007 ✭✭✭opus


    What speed is the network link between the two boxes? That would be my first port of call. You could check the network performance easily enough using iperf.
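    For anyone who wants to reproduce that check, a minimal iperf run looks like this ("dbserver" is a placeholder hostname, and this assumes iperf is installed on both boxes):

    ```shell
    # On the DB server, start a listener:
    #   iperf -s
    # On the webserver, push TCP traffic at it for 10 seconds:
    #   iperf -c dbserver -t 10
    # Rough figures to compare the reported bandwidth against:
    echo "gigabit link: ~940 Mbit/s TCP; 100 Mbit link: ~94 Mbit/s"
    ```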


  • Registered Users Posts: 7,265 ✭✭✭RangeR


    Yeah, I did a quick test transferring a 1GB file between the two servers using scp. I'm only getting 9.6MB/s, so I think this is where my bottleneck is. It should be a gigabit link.
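    That scp number is a strong hint on its own. A quick back-of-the-envelope conversion:

    ```shell
    # 9.6 MB/s expressed in megabits per second (8 bits per byte):
    awk 'BEGIN { printf "%.1f Mbit/s\n", 9.6 * 8 }'
    ```

    Roughly 77 Mbit/s is about what a saturated 100 Mbit link delivers once you subtract ssh/scp overhead; a healthy gigabit link would move a 1GB file in around 9 seconds rather than the ~107 seconds that 9.6MB/s works out to.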


  • Registered Users Posts: 7,265 ✭✭✭RangeR


    It was, indeed, a network bottleneck. The vendor said that the DB server had a legacy NIC configured. That's now resolved, and we're getting comparable results on both servers, so we can run proper load tests.
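    In case it helps anyone else who hits this: from inside the guest you can usually tell the two Hyper-V adapter types apart, since the legacy adapter is an emulated 100 Mbit DEC 21140 ("tulip") device, while the synthetic adapter uses the hv_netvsc driver. These commands are read-only and safe to try:

    ```shell
    # Inside the VM:
    #   lspci | grep -i ethernet          # legacy adapter shows as "DEC ... 21140"
    #   lsmod | grep -e tulip -e hv_netvsc
    # The loaded driver tells you which adapter type you got:
    echo "tulip = legacy (100 Mbit emulated); hv_netvsc = synthetic"
    ```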



    Sorry for the noise.

