SANLink3 T1 max speed with MTU size 1514

  • Last Post 10 July 2019
Hanno Handler-Kunze posted this 09 July 2019

Hi everyone,

Can you please tell me if the SANLink3 T1 needs the maximum jumbo packet size of 16128 to reach its max speed of 10G?

What would be the expected speed with the frame size set to 1514 bytes?


Thanks, and kind regards,


PROMISE Technology Inc. posted this 09 July 2019

Hi Hanno,

The link speed (100Mb/s, 1Gb/s, 10Gb/s) is independent of the packet (or frame) size. And AFAIK, very few devices support 16k packets; the largest packet size most devices recognize is 9k.

Jumbo frames should not be enabled unless every device you plan to connect supports them. Most unmanaged Ethernet switches do not support jumbo frames, and frames larger than the standard 1.5k frame are simply dropped. Misconfigured jumbo frames can lead to unstable network connections and, in some cases, data loss.

It is not a good idea to enable jumbo frames on a home network unless you understand what you are doing and all your network devices support them. Most Ethernet devices for the home market do not support jumbo frames.

That being said, when properly configured, jumbo frames can result in faster transfers for some types of data, primarily streaming video or large file transfers. For small files (like HTML files) they can result in slower transfers, as many packets will be mostly empty (like putting 2 people in a 10-person train car: most of the capacity is wasted).
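To put rough numbers on the frame-size trade-off (these figures are a back-of-envelope sketch, not from this thread), here is what fraction of the raw link rate is left for application data at different MTUs, assuming typical per-frame overheads: a 14-byte Ethernet header, 4-byte FCS, 8-byte preamble, 12-byte inter-frame gap, and 40 bytes of TCP/IPv4 headers inside each payload.

```python
# Rough Ethernet goodput comparison: standard vs. jumbo frames.
# Assumed overheads (typical values, not specific to the SANLink3):
WIRE_OVERHEAD = 14 + 4 + 8 + 12   # per-frame wire bytes besides the payload
TCP_IP_HEADERS = 40               # bytes of each payload used by TCP/IPv4 headers

def goodput_fraction(mtu: int) -> float:
    """Fraction of raw link bandwidth left for application data."""
    return (mtu - TCP_IP_HEADERS) / (mtu + WIRE_OVERHEAD)

for mtu in (1500, 9000, 16128):
    print(f"MTU {mtu:5d}: {goodput_fraction(mtu):.1%} of link rate")
```

On paper the efficiency gain from jumbo frames is only a few percent (roughly 95% at MTU 1500 versus over 99% at MTU 9000); the bigger practical difference is the reduced per-packet CPU cost, which the later replies in this thread discuss.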

Hanno Handler-Kunze posted this 10 July 2019

Thank you for your explanation. My issue is that I have two SANLink3 T1 units, connected to each other and to two PCs via Thunderbolt 3.

Testing their performance with either NTttcp or iperf3, I am only able to see a 10G transfer rate if I choose the 16k packet size in the NIC configuration.

Both my SANLink3 units are configured as in:


This post suggests that 10G is only possible with jumbo packets:

Is there a configuration where I can reach 10G with NTttcp or iperf3?

PROMISE Technology Inc. posted this 10 July 2019

Hi Hanno,

iperf measures Ethernet channel bandwidth, and it is limited by the speed of your computer. It generates data in memory, sends it down the wire, and dumps it in the bit-bucket at the other end; the computer is not doing any actual work. That you can't get something close to channel speed with iperf says that your computer is not fast enough to move data at 10 GbE rates. This has always been an issue when new technology comes out: the 50 MHz computers in use when 1 GbE was introduced could not saturate a 1 GbE channel. The speed of your computer is a real factor with 10 GbE.

The real question is what kind of data you will be sending down the wire. If you are sending files from a computer's disk, I think you will find the actual limit is how fast you can read from one computer and/or write on the other. Even SSDs top out at about 500 MB/s (about 4.0 Gb/s), and current hard disks will give maybe 100 MB/s (about 0.8 Gb/s). Whether 16k packets will give you more bandwidth depends on what kind of data is going down the wire and where it comes from. If the data originates from a disk, you can't go faster than the disk regardless. So a good strategy is to do what you plan to do with the 10 GbE connection, try it with both 1.5k packets and 16k packets, see which works best, and then use that frame size.
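The MB/s-to-Gb/s comparison above is easy to sketch; the throughput figures below are the ballpark numbers from this reply, not measurements of any particular drive:

```python
# Compare storage throughput (MB/s) against a 10 Gb/s Ethernet link.
# 1 MB/s = 8 Mb/s = 0.008 Gb/s (using decimal megabytes here).
def mbps_to_gbps(megabytes_per_sec: float) -> float:
    return megabytes_per_sec * 8 / 1000

LINK_GBPS = 10.0
for name, mbs in [("SATA SSD", 500), ("Hard disk", 100)]:
    gbps = mbps_to_gbps(mbs)
    print(f"{name}: {mbs} MB/s = {gbps:.1f} Gb/s "
          f"({gbps / LINK_GBPS:.0%} of a 10 GbE link)")
```

At these rates a single SATA SSD fills less than half of a 10 GbE link, and a hard disk less than a tenth, so for disk-to-disk transfers the frame size will rarely be the limiting factor.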

Hanno Handler-Kunze posted this 10 July 2019

OK, my plan is to transfer data to another 10G+ capable network adapter (device A), which unfortunately is only able to handle 1.5k packets. At the moment I am testing the capabilities from SANLink to SANLink; after that, both devices should interface with device A.

Device A is tested with an Anritsu MT1100A, which can test up to 100 Gbit/s. Since I cannot do this test with the SANLink (due to the Thunderbolt connection), I am forced to use a PC.

Max speed of the SANLink is only 4.5 Gb/s with 1.5k packet size.

NTttcp sends from RAM to RAM, so no other bottleneck is involved.

Your help in understanding why that is would be greatly appreciated. Maybe you have a test setup which I can reproduce?


PROMISE Technology Inc. posted this 10 July 2019

Hi Hanno,

OK, my plan is to transfer data to another 10G+ capable network adapter (device A), which unfortunately is only able to handle 1.5k packets.

Then the jumbo frame question seems settled: they are not usable in this context.

At the moment I am testing the capabilities from SANLink to SANLink

Are you connecting between the two Ethernet ports on the same SANLink? If so, the same computer is acting as both sink and source, and it probably does not have enough CPU to do both at the same time.

But the bottom line is that few synthetic benchmarks mimic real-world use; it's best to test with your actual use case and optimize that as much as possible.


Hanno Handler-Kunze posted this 10 July 2019

Sorry for the confusion; my setup for the SANLink testing is:


And to clarify: my issue is not reaching 10G with 1.5k packet size.

I just happened to find that I get 10G performance with jumbo-sized packets and not with 1.5k packets.

PROMISE Technology Inc. posted this 10 July 2019

Ethernet drivers are interrupt driven; one of the methods TOE (TCP offload engine) and modern Ethernet adapters use to reduce CPU load is interrupt coalescing.

  • Interrupt coalescing, also known as interrupt moderation, is a technique in which events which would normally trigger a hardware interrupt are held back, either until a certain amount of work is pending, or a timeout timer triggers.

The problem is, what Ethernet interrupt coalescing saves in CPU is lost in channel bandwidth. So what your iperf testing is showing is the maximum rate at which your computer can service interrupts. With larger packets, more data is sent before the next interrupt needs to be serviced.

You may or may not see this difference in practice, as I suspect that, except in a few use cases, a large number of 16k packets will be mostly empty. As mentioned previously, larger packets can result in reduced bandwidth with real-world data. Also previously mentioned: nothing off disk is going to be faster than that disk's I/O speed. And not mentioned yet: when the computer is doing real work, it will have fewer resources to service Ethernet interrupts, so network bandwidth might be reduced. There are lots of factors at play.
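The interrupt-rate argument can be sketched with a back-of-envelope model. The service rate below is a hypothetical number chosen so that 1.5k frames land near the ~4.5 Gb/s reported earlier in this thread; it is illustrative only, and real coalescing batches many frames per interrupt.

```python
# Toy model: if the CPU services at most R interrupts/s and each
# interrupt completes one frame, throughput = R * frame_bytes * 8,
# capped at the 10 Gb/s link rate.
LINK_GBPS = 10.0
R = 375_000  # assumed interrupts serviced per second (illustrative only)

def throughput_gbps(frame_bytes: int, rate: int = R) -> float:
    return min(rate * frame_bytes * 8 / 1e9, LINK_GBPS)

for frame in (1514, 9014, 16128):
    print(f"{frame}-byte frames: {throughput_gbps(frame):.1f} Gb/s")
```

Under this assumption, 1514-byte frames cap out around 4.5 Gb/s while 9k and 16k frames already exceed the 10 Gb/s link rate, which matches the pattern of only seeing 10G with jumbo-sized packets.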

So the advice is to try a real-world test with whatever you plan to use the 10 GbE link for and see what is faster in practice. But given that the other side can't do jumbo frames, I don't understand what you expect to solve here.