ibtracert works
We actually have connectivity, but we are only able to ibping the GUID that is bound to OpenSM; we can't ibping the other GUIDs.
If your question is whether it is possible to use a different MOFED version for each kernel, then it is not. However, you should be able to install Mellanox OFED on one kernel with the --disable-kmp option, reboot into the other kernel, recompile the same Mellanox OFED using mlnx_add_kernel_support, and after that install only the kernel modules.
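For what it's worth, the sequence would look roughly like this. This is a sketch assuming an MLNX_OFED tarball on an RPM-based distribution; the path is a placeholder, and --kernel-only is an assumed flag name on my side, so check ./mlnxofedinstall --help on your version:
# 1) On the first kernel: install without KMP packages
./mlnxofedinstall --disable-kmp
# 2) Reboot into the other kernel, then rebuild the same MLNX_OFED
#    against the now-running kernel (path is a placeholder)
./mlnx_add_kernel_support.sh -m /path/to/MLNX_OFED_LINUX-<ver>-<distro>-x86_64 --make-tgz
# 3) Install only the kernel modules from the rebuilt tarball
#    (--kernel-only is an assumed flag name; verify before use)
./mlnxofedinstall --kernel-only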
Based on what you describe, the issue is somewhere in the OS (I/O, memory allocation, or similar) and not in the network. 20Gb/s on ConnectX-2 gives a theoretical maximum of 16Gb/s because of the 8b/10b encoding, so 15.6Gb/s is pretty close.
I would suggest using perf to analyze the ssh/rsync behaviour, or maybe 'strace -ttt -T' in order to see how much time is spent in system calls.
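For example, a starting point could be the following (the rsync invocation is a placeholder, substitute your own transfer):
strace -f -ttt -T -o /tmp/rsync.trace rsync -av /data/ remotehost:/backup/
# -f follows child processes, -ttt prints absolute timestamps,
# -T appends the time spent inside each system call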
I found a bug in the Mellanox drivers. With the RT kernel on a CentOS 7.5 system, the ptp4l and phc2sys processes will not start automatically as a service (systemctl). There is no problem with the drivers built into the kernel.
Hello Todd -
Please see: http://www.mellanox.com/related-docs/whitepapers/InfiniBandFAQ_FQ_100.pdf and let me know if it helps with your questions.
Specifically, see questions 16 & 17:
- InfiniBand supports QoS by creating Virtual Lanes (VL). These VLs are separate logical communication links that share a single physical link.
- InfiniBand, by contrast, uses link-level flow control to ensure that packets are not dropped in the fabric.
Many thanks -
~Steve
I've created a puppet resource for interfaces. Most of the interface names on my switch are lowercase with the exception of Ethernet interfaces, so I munged the interface name to hopefully reduce errors in the manifest; e.g.:
manifest:
cisco_interface { 'Ethernet1/1': description => 'foo' }
type/cisco_interface.rb:
newparam(:name) do
  munge { |value| value.downcase }
end
My provider code also downcases the interface names when I collect the list of interfaces with self.instances.
So this works great when I test with the manifest, but not so great with the puppet resource command which only works when I call it with the name already downcased:
switch# puppet resource cisco_interface 'Ethernet1/1'
cisco_interface { 'Ethernet1/1':
  ensure => 'absent',
}
switch# puppet resource cisco_interface 'ethernet1/1'
cisco_interface { 'ethernet1/1':
  ensure => 'present',
  description => 'foo',
}
The puppet resource command's name field seems to be just a simple filter, so I think I'm stuck, but I thought I'd seen other resource types munging title values like this.
Is it possible to munge the title values in a way that works for both scenarios? If not, then I'm not sure whether it would be better to leave it case-sensitive, since that is what users will see in the switch config, or to "help" them avoid errors in the manifest.
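One avenue that might work is overriding self.title_patterns on the type, so the title itself is normalized before it is matched against :name. A minimal sketch for type/cisco_interface.rb, assuming your Puppet version still accepts a post-match proc in the namevar tuple (support for procs here has varied between Puppet releases, so verify against yours):
def self.title_patterns
  [
    [
      /(.*)/m,                                       # capture the entire title
      [ [:name, lambda { |title| title.downcase }] ] # lowercase it into :name
    ]
  ]
end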
it still does not work
What drivers are you using?
Sorry, I am new to 10GBASE-SR and I can't seem to find a good resource that confirms whether an SFP+ supported in a Cisco Nexus 5548UP will be compatible with a new service. The new service is described as '10 Gigabit Ethernet LAN PHY, IEEE 10GBASE-LR, 10.3125 Gbps +/- 100 ppm, 1310nm'.
Ultimately I need to understand whether a Cisco SFP-10G-SR, whose transmitter wavelength is specified as 850nm, is usable.
Thanks for your patience. I plan to buy one; can anyone recommend a site? Has anyone bought from sfpcables? 10Gb/s SFP+ SR | SFP-10G-SR | J9150A | SFP+ 10GBase-SR - 10Gtek
They have 5% off all items there right now, so I want to save a lot there.
Hello-
I have a number of older servers with ConnectX-2 VPI IB cards running a very old OS (Ubuntu 10.04). We want to take them to Ubuntu 18.04 instead. As I understand it, Mellanox only provides Ubuntu 18.04 installers for OFED 4.x; however, support for ConnectX-2 cards was dropped in that version of OFED. So it seems I need to get OFED 3.x installed on Ubuntu 18.04. Has anyone had any success with that, or could you please offer some advice? Or is there any hope that an official installer could be provided? I see OFED 3.x is supported on Ubuntu 16.04, but it would be great if we could use the newer OS for two more years of longevity. Thanks for any advice!
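One thing I was considering is rebuilding the Ubuntu 16.04 OFED 3.x packages against the 18.04 kernel with the bundled mlnx_add_kernel_support.sh script, along these lines (the path is a placeholder, and I have no idea yet whether the 3.x scripts will accept an 18.04 kernel):
./mlnx_add_kernel_support.sh -m /path/to/MLNX_OFED_LINUX-3.x-ubuntu16.04-x86_64 --make-tgz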
-Lewis
Hello everybody,
I have an issue in my NetworkDirect RDMA application when loading the Mellanox NDv2 provider. It seems that the newer WinOF-2 driver for MLX-5 IB adapters (mlx5nd.dll) requires the connecting process to have administrator privileges.
When running my application with normal user privileges, I get error 0x80070005 (Access denied), whereas this was never an issue with the older WinOF driver for MLX-4 adapters (mlx4nd.dll).
Here the failing code sequence from my ndhelper.cpp:
static HMODULE g_hProvider = NULL;
static IND2Provider* g_pIProvider = NULL;   // pointer to the provider interface

static HRESULT LoadProvider( __in WSAPROTOCOL_INFOW* pProtocol )
{
    // Resolve and load the provider DLL, e.g. %SystemRoot%\System32\mlx5nd.dll
    WCHAR* pPath = ::GetProviderPath( pProtocol );
    g_hProvider = ::LoadLibraryW( pPath );
    ::HeapFree( ::GetProcessHeap(), 0, pPath );

    DLLGETCLASSOBJECT pfnDllGetClassObject = reinterpret_cast<DLLGETCLASSOBJECT>(
        ::GetProcAddress( g_hProvider, "DllGetClassObject" )
    );
    DLLCANUNLOADNOW pfnDllCanUnloadNow = reinterpret_cast<DLLCANUNLOADNOW>(
        ::GetProcAddress( g_hProvider, "DllCanUnloadNow" )
    );

    // Ask the DLL for its class factory, then create the NDv2 provider
    IClassFactory* pClassFactory;
    HRESULT hr = pfnDllGetClassObject(
        pProtocol->ProviderId,
        IID_IClassFactory,
        reinterpret_cast<void**>(&pClassFactory)
    );
    if (g_pIProvider == NULL) {
        hr = pClassFactory->CreateInstance(
            NULL,
            IID_IND2Provider,
            reinterpret_cast<void**>(&g_pIProvider)
        );
        if (FAILED(hr)) {
            // Without admin rights, we always end up here!
            TRACE("ClassFactory->CreateInstance(IID_IND2Provider) failed with error 0x%08X", hr);
            g_pIProvider = NULL;
        }
        pClassFactory->Release();
    }
    return hr;
}
Unfortunately, just giving my process admin privileges is not an option for me, so I would appreciate it if someone has an idea how to overcome this issue.
Perhaps some tuning of the security configuration via dcomcnfg or the like?
BTW: The Mellanox-provided tools nd_read_bw.exe and nd_write_bw.exe have the same behavior:
My system configuration:
Thanks and Regards
Environment:
Configured PFC with VLAN 100 and Priority 3. While trying to enable the interface, I get:
root@sm16:~# nmcli con up ens1f0.100
Error: Connection activation failed: Failed to find a compatible device for this connection
Any pointers?
This is what the configuration of the tagged and untagged interfaces looks like:
root@sm16:~# nmcli c s
NAME UUID TYPE DEVICE
ens1f0 e5251358-dffb-4e07-b72f-b9e93ca6eca8 802-3-ethernet --
ens1f0.100 10c8d003-164a-4a8d-a21f-ff6bb712a090 vlan --
# nmcli con s ens1f0.100 | less
connection.id: ens1f0.100
connection.uuid: 10c8d003-164a-4a8d-a21f-ff6bb712a090
connection.interface-name: ens1f0
connection.type: vlan
connection.autoconnect: yes
connection.autoconnect-priority: 0
connection.timestamp: 0
connection.read-only: no
connection.permissions:
connection.zone: --
connection.master: --
connection.slave-type: --
connection.autoconnect-slaves: -1 (default)
connection.secondaries:
connection.gateway-ping-timeout: 0
connection.metered: unknown
connection.lldp: -1 (default)
802-3-ethernet.port: --
802-3-ethernet.speed: 0
802-3-ethernet.duplex: --
802-3-ethernet.auto-negotiate: yes
802-3-ethernet.mac-address: --
802-3-ethernet.cloned-mac-address: --
802-3-ethernet.mac-address-blacklist:
802-3-ethernet.mtu: 4200
802-3-ethernet.s390-subchannels:
802-3-ethernet.s390-nettype: --
802-3-ethernet.s390-options:
802-3-ethernet.wake-on-lan: 1 (default)
802-3-ethernet.wake-on-lan-password: --
ipv4.method: manual
ipv4.dns:
ipv4.dns-search:
ipv4.dns-options: (default)
ipv4.dns-priority: 0
ipv4.addresses: 10.0.20.75/24
ipv4.gateway: --
ipv4.routes:
ipv4.route-metric: -1
ipv4.ignore-auto-routes: no
ipv4.ignore-auto-dns: no
ipv4.dhcp-client-id: --
ipv4.dhcp-timeout: 0
ipv4.dhcp-send-hostname: yes
ipv4.dhcp-hostname: --
ipv4.dhcp-fqdn: --
ipv4.never-default: no
ipv4.may-fail: yes
ipv4.dad-timeout: -1 (default)
ipv6.method: auto
ipv6.dns:
ipv6.dns-search:
ipv6.dns-options: (default)
ipv6.dns-priority: 0
ipv6.addresses:
ipv6.gateway: --
ipv6.routes:
ipv6.route-metric: -1
ipv6.ignore-auto-routes: no
ipv6.ignore-auto-dns: no
ipv6.never-default: no
ipv6.may-fail: yes
ipv6.ip6-privacy: -1 (unknown)
ipv6.addr-gen-mode: stable-privacy
ipv6.dhcp-send-hostname: yes
ipv6.dhcp-hostname: --
vlan.parent: ens1f0
vlan.id: 100
vlan.flags: 1 (REORDER_HEADERS)
vlan.ingress-priority-map:
vlan.egress-priority-map: 0:3,1:3,2:3,3:3,4:3,5:3,6:3,7:3
# nmcli con s ens1f0| less
connection.id: ens1f0
connection.uuid: e5251358-dffb-4e07-b72f-b9e93ca6eca8
connection.interface-name: ens1f0
connection.type: 802-3-ethernet
connection.autoconnect: yes
connection.autoconnect-priority: 0
connection.timestamp: 0
connection.read-only: no
connection.permissions:
connection.zone: --
connection.master: --
connection.slave-type: --
connection.autoconnect-slaves: -1 (default)
connection.secondaries:
connection.gateway-ping-timeout: 0
connection.metered: unknown
connection.lldp: -1 (default)
802-3-ethernet.port: --
802-3-ethernet.speed: 0
802-3-ethernet.duplex: --
802-3-ethernet.auto-negotiate: yes
802-3-ethernet.mac-address: --
802-3-ethernet.cloned-mac-address: --
802-3-ethernet.mac-address-blacklist:
802-3-ethernet.mtu: 4200
802-3-ethernet.s390-subchannels:
802-3-ethernet.s390-nettype: --
802-3-ethernet.s390-options:
802-3-ethernet.wake-on-lan: 1 (default)
802-3-ethernet.wake-on-lan-password: --
ipv4.method: auto
ipv4.dns:
ipv4.dns-search:
ipv4.dns-options: (default)
ipv4.dns-priority: 0
ipv4.addresses:
ipv4.gateway: --
ipv4.routes:
ipv4.route-metric: -1
ipv4.ignore-auto-routes: no
ipv4.ignore-auto-dns: no
ipv4.dhcp-client-id: --
ipv4.dhcp-timeout: 0
ipv4.dhcp-send-hostname: yes
ipv4.dhcp-hostname: --
ipv4.dhcp-fqdn: --
ipv4.never-default: no
ipv4.may-fail: yes
ipv4.dad-timeout: -1 (default)
ipv6.method: auto
ipv6.dns:
ipv6.dns-search:
ipv6.dns-options: (default)
ipv6.dns-priority: 0
ipv6.addresses:
ipv6.gateway: --
ipv6.routes:
ipv6.route-metric: -1
ipv6.ignore-auto-routes: no
ipv6.ignore-auto-dns: no
ipv6.never-default: no
ipv6.may-fail: yes
ipv6.ip6-privacy: -1 (unknown)
ipv6.addr-gen-mode: stable-privacy
ipv6.dhcp-send-hostname: yes
ipv6.dhcp-hostname: --
Hi Ryan,
I am getting the same end result with RHEL 7.5, kernel 4.9, and ethtool version 4.8 for ConnectX-4 and ConnectX-3.
I am, however, able to retrieve the information on a ConnectX-5.
You can download the MFT package and use the "mlxcables" utility.
http://www.mellanox.com/page/management_tools
Note: see the section on 'mst cable add' first.
Note: the latest FW for that HCA is 12.23.1020.
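For reference, the usual sequence is roughly the following (based on the MFT user manual sections mentioned above; verify the exact commands against your MFT version):
mst start      # start the Mellanox software tools service
mst cable add  # discover attached cables so mlxcables can see them
mlxcables      # print cable/module information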
Sophie.
Hi Suraj,
Thank you for posting your question on the Mellanox Community.
Based on the information you provided (we also noticed your posting on https://mailman.stanford.edu/pipermail/mininet-discuss/2018-August/008031.html), we currently do not have support for running Soft-RoCE on a Mininet topology. Currently we only provide a method of running NIC-to-NIC.
Also please note that Soft-RoCE is still in BETA.
We would recommend posting this question on the Mininet mailing list, as you already did. Maybe somebody from the mailing list has a solution, as they would need to implement this in the Mininet framework.
Thanks and regards,
~Mellanox Technical Support
Hi Kenneth,
Thank you for posting your question on the Mellanox Community.
We have noticed that you also opened a Mellanox Support case regarding this issue, and that we provided you access to the PRMs of the ConnectX-5.
If you need anything, please do not hesitate to open a new support case by sending an email to support@mellanox.com
Thanks and regards,
~Mellanox Technical Support
Are there any error messages? Does ibtracert work (# ibtracert <src lid> <dst lid>)?
Hi Brian,
When using virtualization, a GRH (Global Routing Header) must be present in the packet. For ibping, the --dgid <GID> parameter needs to be used (see man ibping).
To get the GIDs, run 'show_gids' on the server and use the output on the client side:
Server
#show_gids
DEV PORT INDEX GID IPv4 VER DEV
--- ---- ----- --- ------------ --- ---
mlx5_1 1 0 fe80:0000:0000:0000:248a:0703:009c:01a7 v1
Client
#ibping --dgid fe80:0000:0000:0000:248a:0703:009c:01a7 18
If you would like to check RDMA connectivity between VMs, use the utilities from the perftest package (ib_read_bw, ib_write_bw, etc.) with the -R parameter.
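For example, with ib_write_bw (-R tells perftest to establish the connection through rdma_cm):
Server: ib_write_bw -R
Client: ib_write_bw -R <server_ip>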
Could you tell me the details of your application? Why do you need such a feature?
Hello.
We have a plan to connect a Cisco Catalyst 3850 switch's 1Gbps SFP port to a Mellanox SN2100-BB2F QSFP port.
For this connection, we are going to use a NAM1Q00A-QSA and an MC3208011-SX (SFP transceiver).
I am not sure whether it will work fine or not.
I have read the QSA article, and it says the following, but I don't know the meaning of "to plug another SFP+ to SFP into the QSA":
" It is less common but possible to plug another SFP+ to SFP (10GbE to 1GbE) into the QSA and to reduce the port speed even to 1GbE.
Again, if it is a Mellanox Ethernet switch port, you must configure that manually.
For example, assuming the QSA was plugged into Ethernet switch port 1/1 and another SFP+ to SFP adapter was plugged into the QSA to reach 1GbE, run:
switch (config) # interface ethernet 1/1 speed 1000
switch (config) # configuration write
Question: If we plug the MC3208011-SX into the NAM1Q00A-QSA and configure "speed 1000" on that interface, will it work fine (i.e., do we not need to purchase anything more)?
Or do we need to purchase something like a speed converter adapter as well?
Thank you for your attention.
Hi,
I use ConnectX-3 Pro cards in HP machines to boot via iPXE but am getting the attached errors. Apparently, iPXE does not support ConnectX-3 Pro cards. I am just wondering whether there is any chance of adding iPXE support for these cards.
Thanks,
Bal