
mlnx_tune does not detect the BIOS I/O non-posted prefetch settings?

I have four AIC SB122A-PH 1U all NVMe storage servers.

  • Two of them have Intel E5-2620v3 2.4GHz CPUs, 8 x 16GiB 1833MHz DDR4 DIMMs, 2 x DC S3510 SATA SSDs, 8 x DC P3700 NVMe SSDs
  • Two of them have Intel E5-2643v3 3.4GHz CPUs, 8 x 16GiB 2133MHz DDR4 DIMMs, 2 x DC S3510 SATA SSDs, 8 x DC P3700 NVMe SSDs
  • All four run CentOS 7.2

[root@fs00 ~]# uname -a

Linux fs00 3.10.0-327.28.2.el7.x86_64 #1 SMP Wed Aug 3 11:11:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

  • All four are installed with MLNX_OFED_LINUX-3.3-1.0.4.0-3.10.0-327.22.2.el7.x86_64
  • All four have a Mellanox EDR HCA
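
(For reference, the installed OFED version can be double-checked with ofed_info -s, which ships with MLNX_OFED:)

# prints the installed MLNX_OFED release string
ofed_info -s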

[root@fs00 tmp]# ibstat

CA 'mlx5_0'

  CA type: MT4115

  Number of ports: 1

  Firmware version: 12.16.1006

  Hardware version: 0

  Node GUID: 0x7cfe90030029288e

  System image GUID: 0x7cfe90030029288e

  Port 1:

  State: Active

  Physical state: LinkUp

  Rate: 100

  Base lid: 3

  LMC: 0

  SM lid: 1

  Capability mask: 0x2651e848

  Port GUID: 0x7cfe90030029288e

  Link layer: InfiniBand

 

In preparation for tuning, I ran mlnx_tune -r first to get some ideas. I got:

>>> PCI capabilities might not be fully utilized with Hasweel CPU. Make sure I/O non-posted prefetch is disabled in BIOS.

 

Fine. I updated the BIOS; it now looks like the following. The Non-posted prefetch settings are all Disabled, and I made sure all PCIe ports have the same settings.

[Screenshot 20160812_102448.jpg: BIOS PCIe port settings, Non-posted prefetch set to Disabled on all ports]
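
To rule out the possibility that the BIOS update simply did not take effect, the running BIOS version and release date can be cross-checked from the OS (a quick sanity check; dmidecode is standard on CentOS 7):

# show the BIOS version and release date reported by the running firmware
dmidecode -t bios | grep -E 'Version|Release Date'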

 

But upon re-running mlnx_tune -r, I still saw the same warning. So either the BIOS software or Mellanox mlnx_tune is incorrect; they can't both be right. Any hints as to where to dig further and determine the actual source of the warning?

 

[root@fs00 ~]# mlnx_tune -r

2016-08-12 15:21:22,928 INFO Collecting node information

2016-08-12 15:21:22,929 INFO Collecting OS information

2016-08-12 15:21:22,931 INFO Collecting CPU information

2016-08-12 15:21:22,987 INFO Collecting IRQ balancer information

2016-08-12 15:21:23,002 INFO Collecting firewall information

2016-08-12 15:21:23,787 INFO Collecting IP forwarding information

2016-08-12 15:21:23,791 INFO Collecting hyper threading information

2016-08-12 15:21:23,791 INFO Collecting IOMMU information

2016-08-12 15:21:23,793 INFO Collecting driver information

2016-08-12 15:21:27,185 INFO Collecting Mellanox devices information

 

Mellanox Technologies - System Report

 

Operation System Status

CENTOS

3.10.0-327.28.2.el7.x86_64

 

CPU Status

Intel Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz Haswell

OK: Frequency 2600.156MHz

 

Hyper Threading Status

ACTIVE

 

IRQ Balancer Status

ACTIVE

 

Driver Status

OK: MLNX_OFED_LINUX-3.3-1.0.4.0 (OFED-3.3-1.0.4)

 

ConnectX-4 Device Status on PCI 84:00.0

FW version 12.16.1006

OK: PCI Width x16

>>> PCI capabilities might not be fully utilized with Hasweel CPU. Make sure I/O non-posted prefetch is disabled in BIOS.

OK: PCI Speed 8GT/s

PCI Max Payload Size 256

PCI Max Read Request 512

Local CPUs list [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]

 

ib0 (Port 1) Status

Link Type ib

OK: Link status Up

Speed EDR

MTU 2044

 

2016-08-12 15:21:27,459 INFO System info file: /tmp/mlnx_tune_160812_152122.log
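
If it helps narrow this down, my plan is to cross-check the negotiated PCIe link directly and then measure actual RDMA bandwidth with the perftest tools, to see whether the warning has any practical effect. A rough sketch (fs01 stands in for whichever second server I use as the peer):

# negotiated PCIe link of the ConnectX-4 at 84:00.0 (should report 8GT/s, x16)
lspci -s 84:00.0 -vvv | grep -E 'LnkCap:|LnkSta:'

# RDMA write bandwidth across the EDR link, all message sizes, reported in Gbit/s
# server side (fs00):
ib_write_bw -d mlx5_0 -a --report_gbits
# client side (the peer, here assumed to be fs01):
ib_write_bw -d mlx5_0 -a --report_gbits fs00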

