
40Gb/s IPoIB only gives 5Gb/s real throughput?!


I really need some expertise here:

For the current situation, see Update 2 below.

 

I have two Windows 10 machines, each with an MHQH19B-XTR 40 Gbit adapter, connected by a QSFP cable. The subnet manager is opensm.

 

The connection should deliver about 32 Gbit/s of usable bandwidth. In reality I only get about 5 Gbit/s, so clearly something is very wrong.
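For reference, my own back-of-the-envelope check of that 32 Gbit/s figure (assuming a QDR 4X link with 8b/10b encoding):

# Rough sanity check of the expected usable IB rate (my assumption: QDR 4X link, 8b/10b encoding)
lanes = 4
signal_rate_gbps = 10.0            # per-lane QDR signaling rate
encoding = 8.0 / 10.0              # 8b/10b line-encoding overhead
print(lanes * signal_rate_gbps * encoding)   # -> 32.0 Gbit/s usable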

C:\Program Files\Mellanox\MLNX_VPI\IB\Tools>iblinkinfo

CA: E8400:

      0x0002c903004cdfb1      2    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       1    1[  ] "IP35" ( )

CA: IP35:

      0x0002c903004ef325      1    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       2    1[  ] "E8400" ( )

 

I tested my IPoIB throughput with two programs, LANBench and nd_read_bw:

nd_read_bw -a -n 100 -C 169.254.195.189

#qp #bytes #iterations    MR [Mmps]     Gb/s     CPU Util.

0   512       100          0.843        3.45     0.00

0   1024      100          0.629        5.15     0.00

0   2048      100          0.313        5.13     0.00

0   4096      100          0.165        5.39     0.00

0   8192      100          0.083        5.44     0.00

0   16384     100          0.042        5.47     0.00

0   32768     100          0.021        5.47     100.00

It stays at 5.47 Gb/s after that, with CPU utilization at 100%.

The processor is an Intel Core i7-4790K, so it should not be at 100%. According to Task Manager, only one core is actively used.
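As a quick cross-check of the nd_read_bw output (my own arithmetic, not part of the tool), the Gb/s column matches message rate times message size:

# Cross-check: Gb/s = MR [Mmps] * #bytes * 8 bits, using the 8192-byte row above
mr_mmps = 0.083
msg_bytes = 8192
gbps = mr_mmps * 1e6 * msg_bytes * 8 / 1e9
print(round(gbps, 2))   # -> 5.44, same as the Gb/s column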

Firmware, drivers, and Windows 10 are all up to date.

 

My goal is to get the fastest possible file sharing between the two Windows 10 machines.

What could be the problem here and how do I fix it?

 

Update: To verify, I also tested with Windows Server 2012 clients and still get about 5.5 Gbit/s at most.

Does anyone else have 40 Gbit adapters? What speeds do you get?

Update 2: The mainboard slot was x16 physical but only x2 electrical. (Special thanks to Erez, the support admin, for a quick and helpful answer.)
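That would also explain the original ceiling: assuming the x2 slot was running at PCIe 2.0 speed, the same overhead formula Erez gives below works out to roughly 2 * 5 * 0.8 * (128/152) * 0.95 ≈ 6.4 Gbit/s of usable PCIe bandwidth, which fits the ~5.5 Gbit/s I was measuring.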

After moving the card to a PCIe 3.0 x8 slot I now get the following speed (it should still be about 3x faster):

 

[Screenshot: Speed.PNG]

 

After endless hours of searching, I found that vstat reports a 10 Gbit link speed:

 

C:\Users\Daniel>"C:\Program Files\Mellanox\MLNX_VPI\IB\Tools\vstat.exe"

 

        hca_idx=0

        uplink={BUS=PCI_E Gen2, SPEED=5.0 Gbps, WIDTH=x8, CAPS=5.0*x8} --> Looks good

        MSI-X={ENABLED=1, SUPPORTED=128, GRANTED=10, ALL_MASKED=N}

        vendor_id=0x02c9

        vendor_part_id=26428

        hw_ver=0xb0

        fw_ver=2.09.1000

        PSID=MT_0D90110009

        node_guid=0002:c903:004e:f324

        num_phys_ports=1

                port=1

                port_guid=0002:c903:004e:f325

                port_state=PORT_ACTIVE (4)

                link_speed=10.00 Gbps

                link_width=4x (2)

                rate=40.00 Gbps

                real_rate=32.00 Gbps (QDR)

                port_phys_state=LINK_UP (5)

                active_speed=10.00 Gbps --> WHY?

                sm_lid=0x0001

                port_lid=0x0001

                port_lmc=0x0

                transport=IB

                max_mtu=4096 (5)

                active_mtu=4096 (5)

                GID[0]=fe80:0000:0000:0000:0002:c903:004e:f325

 

What I should get (thanks to Erez) is:

PCI_LANES(8) * PCI_SPEED(5) * PCI_ENCODING(0.8) * PCI_HEADERS(128/152) * PCI_FLOW_CONT(0.95) = 25.6 Gbit/s
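For anyone else hitting this, here is that formula as a small Python sketch (my own transcription of Erez's numbers, not an official tool):

# Effective PCIe throughput estimate, following the formula above (transcribed, assumptions as noted)
def pcie_effective_gbps(lanes, lane_gbps):
    encoding = 0.8            # 8b/10b encoding used by PCIe Gen1/Gen2
    headers = 128.0 / 152.0   # payload vs. TLP header overhead
    flow_control = 0.95       # flow-control overhead
    return lanes * lane_gbps * encoding * headers * flow_control

print(round(pcie_effective_gbps(8, 5.0), 1))   # PCIe Gen2 x8 -> 25.6 Gbit/s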

 

Can anyone help me with this problem?

