Detailed information on messages


This document provides additional explanation for some of the messages in pathdiag reports. See the general documentation for a glossary and overview of the measurement technique.

Jump to the [Result Summary] [General documentation] [Server Form]


Tester: [hostname] ([address])

The tester is the computer system that is sending the data and measuring the network path. The tester analyzes the target and the path from the tester to the target.

Target: [hostname] ([address])

The target is the system under test. It receives data from the tester and discards it as fast as possible. The target and path are analyzed by the tester.

Logfile base name: [filename]

This is the base file name used to generate all of the intermediate files and reports, and to archive the results. If you have a question about a specific test, please include the full URL of the diagnostic report in your question, so we can see exactly what you are looking at.

Observed Maximum Segment Size for this path section: [mss] bytes

The maximum size of the TCP payload for this path section. This is generally the MTU of the path section minus the fixed part of the TCP/IP headers (40 bytes), since the space for TCP options is accounted as part of the data size.
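
A minimal sketch (in Python, with illustrative values) of the MTU-to-MSS arithmetic described above; it assumes the fixed 20-byte IPv4 and 20-byte TCP headers and leaves TCP options out, as the text explains.

    # Illustrative sketch: derive the expected MSS from a path MTU.
    # Assumes fixed IPv4 (20 bytes) + TCP (20 bytes) headers; TCP options
    # such as timestamps are charged against the payload, not added here.
    def mss_from_mtu(mtu: int) -> int:
        return mtu - 40

    print(mss_from_mtu(1500))  # 1460 for standard Ethernet
    print(mss_from_mtu(9000))  # 8960 for common jumbo frames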

This report is based on a [rate] Mb/s target application data rate

The measurements of this path section are evaluated to see if it can support an application over a full end-to-end path at this target data rate with this target end-to-end Round Trip Time (RTT). This must be a realistic goal for the actual application, and must include considerations for link overhead and other traffic on the link.

This report is based on a [rtt] ms Round-Trip-Time (RTT) to the target application

The target Round Trip Time (RTT) for the application. The measurements of this path section are extrapolated to the target Round Trip Time (RTT) to see if this section can support the application over the full end-to-end path at its target data rate.

User specified Maximum Segment Size: [mss] bytes

The (future) target MSS as specified by the user. This is useful if the end-to-end path has a smaller MSS (MTU) than the local section of the path.

Original target application data rate was: [rate] Mb/s

The original target data rate as specified by the user. It was overridden for this report.

Original target RTT was: [rtt] ms.

The original target RTT as specified by the user. It was overridden for this report (possibly because the section RTT was longer than the requested target RTT).

The Round Trip Time for this path section is [rtt] ms.
The Maximum Segment Size for this path section is [mss] Bytes.

This path section has the indicated minimum RTT and MSS, as reported by TCP itself.

Warning: The section RTT is greater than the requested target RTT ([segrtt] > [rtt])

The path section under test has a larger RTT than the RTT for the target end-to-end path. Since this would cause very confusing results, the target RTT has been adjusted to be larger than the section RTT.

Target host TCP configuration test: [status]!

This group of tests inspects the TCP option negotiations to determine if all of the features that are required for high performance operation were properly enabled. Some other peer problems, such as insufficient buffer space, might be reported here as well.

Warning: TCP connection is not using SACK.

The target (client) did not enable TCP Selective Acknowledgments, which would permit the receiver to indicate to the sender exactly which data is missing and needs to be retransmitted. Without SACK the sender has to estimate which data is missing. Under some conditions this can waste half of the available performance. SACK is specified in RFC2018.

Warning: TCP connection is not using RFC1323 timestamps.

The target (client) failed to negotiate RFC1323 timestamps. This will not slow the TCP connection at all, but without timestamps there is a risk of silent data corruption if segments (packets) are delivered out of order by more than 4 Gigabytes. This is not generally a problem below 100 Mb/s because the segments would have to be delayed more than their legal lifetime in the network. However, 4 Gigabytes (actually 2^32 bytes) only takes about 35 seconds at 1 Gigabit/second or 3.5 seconds at 10 Gigabit/second, both of which are within the worst case delays for the Internet.
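
A quick arithmetic check of the wrap times quoted above (an illustrative Python sketch; the figures depend only on the 2^32-byte sequence space and the link rate):

    # Illustrative: time for TCP's 32-bit sequence space to wrap at a given rate.
    def wrap_seconds(rate_bps: float) -> float:
        return (2**32 * 8) / rate_bps

    print(round(wrap_seconds(1e9), 1))   # ~34.4 s at 1 Gb/s
    print(round(wrap_seconds(10e9), 1))  # ~3.4 s at 10 Gb/s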

Critical Failure: Did not negotiate window scale, it should be [wscale].

The target (client) did not negotiate window scale, which is required for the target data rate and RTT. This configuration problem is an absolute show stopper for the target data rate and RTT, and should be the first problem fixed.

The method to enable window scale is not standardized, and varies from system to system. See the TCP tuning instructions for more advice.


Critical Failure: Received window scale is [wsrcvd], it should be [wscale].

The target (client) negotiated Window scale, but it is not large enough for the target data rate and RTT. This configuration problem is an absolute show stopper for the target data rate and RTT, and should be the first problem fixed.

The method used to pick window scale is not standardized, and varies from system to system. Often the window scale is selected on the basis of either the current or maximum TCP buffer size. See the TCP tuning instructions for more advice.
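
A minimal sketch (Python, not pathdiag's internal logic) of how large a window scale the target rate and RTT imply, assuming the scaled 16-bit window must cover the bandwidth-delay product of the target path:

    import math

    # Illustrative: smallest window scale whose scaled 64 kB window covers
    # the bandwidth-delay product (rate x RTT) of the target path.
    def required_wscale(rate_bps: float, rtt_s: float) -> int:
        bdp_bytes = rate_bps * rtt_s / 8
        return max(0, math.ceil(math.log2(bdp_bytes / 65535)))

    # Example: 100 Mb/s at 50 ms needs ~625 kB of window, so wscale >= 4.
    print(required_wscale(100e6, 0.050))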


TCP negotiated appropriate options: WSCALE=[wscale], SACKok, and Timestamps.

The target (client) passed all tests on the SYN and SYN-ACK exchange which negotiates the TCP options for the connection. Timestamps and SACK are both on, and the receiver window scale is large enough to support the application.

The maximum receiver window ([val]k) is too small for this application (and/or some tests).

The maximum receiver window, which is generally determined by the TCP buffer size, is smaller than required to meet the target data rate and RTT. It also prevented some of the tests from completing properly.
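
As a rough rule of thumb (an illustrative sketch, not part of the report itself), a window cap of W bytes limits TCP throughput to about W/RTT, so the receiver buffer must cover the full bandwidth-delay product of the target path:

    # Illustrative: throughput ceiling imposed by a capped receiver window.
    def max_rate_mbps(window_bytes: float, rtt_ms: float) -> float:
        return window_bytes * 8 / (rtt_ms / 1e3) / 1e6

    # Example: a 64 kB window over a 50 ms path caps throughput near 10 Mb/s.
    print(round(max_rate_mbps(65535, 50), 1))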

The maximum receiver window ([val]k) is too small for some tests, but sufficient for the target application.

The maximum receiver window, which is generally determined by the TCP buffer size, is smaller than required for some tests (e.g. estimating the bottleneck queue size), but is large enough to support the target data rate and RTT.

The target (client) closed the receiver window.

The target (client) is using TCP flow control to throttle the test traffic. This suggests that the receiver might not be fast enough, or might have too much other competing CPU load to sustain the target data rate.

The target (client) intermittently stalls the test traffic.

The target (client) is intermittently slowing or pausing the test data. This may be due to a momentary CPU load on the receiver, or some undiagnosed bottleneck in the target. Re-run the test, and if this error persists look for intermittent CPU load on the target (client).

Diagnosis: The target (client) is not properly configured.

Diagnosis: The target (client) is not properly configured for the specified target end-to-end path and application data rate. If you are using the NPAD/pathdiag diagnostic service, the problem lies in the end-system which is running the browser to request this test. Failed tests are absolute show stoppers and must be corrected before it is possible to attain the target data rate at the target round trip time.

See TCP tuning instructions at http://www.psc.edu/index.php/networking/641-tcp-tune

Look at the general procedural recommendations on the main pathdiag page on how to approach TCP tuning and other end-system flaws. A system administrator should follow the detailed (system specific) directions at http://www.psc.edu/index.php/networking/641-tcp-tune.

Warnings reflect problems that might not affect target end-to-end performance.

Flaws indicated by warnings might not affect end-to-end performance. Review all help ([?]) for more detailed information. You may be able to attain your target performance without correcting the warnings, however you should read the caveats on passing tests.

The target passed all tests! See tester caveats:

The target (client) passed all tests!

We believe that this diagnostic can detect all flaws in the test target with the following exceptions:

On the main pathdiag page there are general procedural recommendations for what to do next when both the target and path pass all diagnostics.

Data rate test: [status]!

This group of tests measures the maximum data rate of this path section. If this rate is less than the target data rate, the tests provide some indication of what the problem might be.

The maximum data rate was [rate] Mb/s.

This is the maximum data rate that the tester was able to attain on this path section.

This is below the target rate ([rate] Mb/s).

The target data rate for the application, as specified by the user.

Diagnosis: Excessive overall packet loss seems to have limited the data rate.

The data rate on this path section seems to have been limited by a high data loss rate. The loss rate was so high that TCP was unable to reach full speed for the path section.

Diagnosis: there seems to be a hard data rate limit.

Some device in the path seems to be reaching a hard data rate limit, without exhibiting other symptoms of congestion such as an elevated loss rate.

Check your expectations: did you deduct a reasonable overhead (10%)?

The measured maximum data rate is within 10% of the target rate, suggesting that the target rate may have been specified using the raw clock rate of the link, rather than the application (TCP payload) data rate. For example, you cannot reliably get more than 90 Mb/s through Fast Ethernet, even though Fast Ethernet is clocked at 100 Mb/s.
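
A minimal sketch of this sanity check (Python; the 10% deduction is the same rule of thumb used in the message above):

    # Illustrative: usable application rate after deducting protocol overhead.
    def usable_rate_mbps(link_clock_mbps: float, overhead_fraction: float = 0.10) -> float:
        return link_clock_mbps * (1.0 - overhead_fraction)

    print(usable_rate_mbps(100))   # ~90 Mb/s for Fast Ethernet
    print(usable_rate_mbps(1000))  # ~900 Mb/s for Gigabit Ethernet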

Check the path: is it via the route and equipment that you expect?

When you encounter a hard data limit that is much smaller than you expect, you have to consider the possibility that you are not using the equipment that you expect. The path might be I1 instead of I2, or somebody might have "repaired" a switch by replacing it with an older model, and so forth.

Pass data rate check: maximum data rate was [rate] Mb/s

The maximum data rate over this section of the path seems to be sufficient for the target application. Due to the effects of symptom scaling, this is not sufficient to guarantee that this section is not the bottleneck when part of a long path.

Loss rate test: [status]!

This group of tests measures the background loss rate of this path section. It does this by counting losses while sending a large quantity of data at the largest window (highest data rate) that does not create congestion.

Insufficient data to measure loss rate: Zero losses in [have] packets (need twice [need] packets or more to measure the target loss rate).

There were zero losses, but the data set is too small to claim that the path section reached the loss rate needed to attain the data rate required by the application. This typically happens if the data rate was too low to collect the required data in a reasonable amount of time.

Loss rate measurement based on insufficient data and may be inaccurate.

There were fewer than 4 losses, so there is significant measurement uncertainty in the calculated loss rate.

The data rate was not high enough to accumulate sufficient loss statistics in a reasonable amount of time.

The loss rate test was truncated without collecting sufficient data because it would have taken too long to collect sufficient statistics at the attained data rate.

Correct data rate problems and re-test the path.

After the data rate problems have been resolved, you should be able to re-run this test to get complete loss statistics.

Fail: loss event rate: [percent]% ([runlen] packets between loss events).

The loss rate on this path section is too high to support the target data rate and RTT. Normal (required) TCP congestion control will prevent the application from reaching the target data rate, even though this path section can support the application with a short RTT.

Diagnosis: there is too much background (non-congested) packet loss.

You need to find and correct the excessive background loss. It could be caused by defective hardware, cables, or connectors in any part of the path section.

The events averaged [count] losses each, for a total loss rate of [percent]%.

Most SACK TCP implementations only care about the number of round trips that contain losses, and do not care about the number of losses within each round trip that has losses. This is because repairing the second and later losses in one round trip adds only a small additional overhead (when SACK is in use). Other protocols and tools (such as UDP testers) do not care about round trips, and only report the total loss rate. This message provides the raw statistics to facilitate debugging the background loss rate with a non-TCP tool.

Locate the excess packet loss in this section of the path.

First, check cables and other easily replaced hardware. If you do not have access to the network cabling and detailed network maps, defer locating the excess packet loss to somebody who does.

Pass: measured loss rate [percent]% ([runlen] packets between loss events).

The measured background loss rate is not high enough to prevent TCP from reaching the target data rate at the target RTT.

Pass: zero losses in [runlen] packets, loss rate less than [percent]%.

There were no background (non-congested) packet losses.

FYI: To get [rate] Mb/s with a [mss] byte MSS on a [rtt] ms path the total end-to-end loss budget is [percent]% ([runlen] packets between losses).

The total amount of loss that can be tolerated (the loss budget) can be estimated from the target data rate, target RTT and Maximum Segment Size using the model in [MSMO97]. The sum of the loss rates for each path section must be less than the loss budget to meet the target data rate and target RTT.
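
A hedged sketch of the [MSMO97] model behind this budget (rate <= C * MSS / (RTT * sqrt(p)), with the constant C close to 1; the exact numbers depend on which value of C is used):

    # Illustrative: loss-event budget implied by the Mathis et al. [MSMO97]
    # model, rate <= C * MSS / (RTT * sqrt(p)).  C is taken as 1.0 here.
    def loss_budget(rate_bps: float, mss_bytes: int, rtt_s: float, C: float = 1.0) -> float:
        return (C * mss_bytes * 8 / (rate_bps * rtt_s)) ** 2

    # Example: 10 Mb/s with a 1460-byte MSS over a 100 ms RTT tolerates
    # roughly p ~ 1.4e-4, on the order of 7000 packets between loss events.
    p = loss_budget(10e6, 1460, 0.100)
    print(p, round(1 / p))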

Ethernet duplex miss-match test: [status]!

This runs an optional Ethernet duplex mismatch test. Some Ethernet links (10 and 100 Mb/s only) can use either half duplex (a shared channel with only one sender at a time) or full duplex (a point-to-point link with two completely independent one-way channels). Sometimes Ethernet devices fail to select the same mode (due to improper manual configuration or failed auto-negotiation), leading to very poor performance.

Pass extra check for Ethernet duplex mismatch.

No Ethernet duplex problems were detected.

Suspect Ethernet full/half duplex mismatch.

There seems to be an Ethernet duplex mismatch somewhere in the path! This is a critical problem that causes poor performance and obscures all other test results.

Check Network Interface Card (NIC) and Ethernet switch configurations.

Your network administrator needs to check the duplex options on your network interface card and the Ethernet switch(es) in the path.

Network buffering test: [status]!

This group of tests attempts to measure the queue space at the bottleneck in this path section. It will be unsuccessful if there is too much data loss in the path or other bottlenecks in either the tester or the target. It performs the test by creating a standing queue at the path bottleneck, and detecting the onset of loss as that queue grows.

Warning: could not measure queue length due to previously reported bottlenecks

Other, previously reported problems prevented the tester from being able to create a standing queue at the bottleneck.

Please report a possible tester problem measuring queue length(*)

The tester was unable to detect the onset of loss, probably because the maximum queue size was not large enough to overflow the queue.

Measured queue size, Pkts: [packets] Bytes: [bytes]

The bottleneck queue size is measured as the difference between the onset of queue delay (queue formation) and the onset of loss (queue overflow).
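
A hedged sketch of that arithmetic (Python; the variable names are illustrative, not pathdiag's internal ones):

    # Illustrative: queue size inferred from the window at the onset of loss
    # minus the window at the onset of queueing delay.
    def queue_estimate(window_at_loss_pkts: int, window_at_delay_pkts: int, mss_bytes: int):
        pkts = window_at_loss_pkts - window_at_delay_pkts
        return pkts, pkts * mss_bytes

    # Example: loss starts at a 120-packet window and queueing delay at 40
    # packets, so the queue holds ~80 packets (~116 kB with a 1460-byte MSS).
    print(queue_estimate(120, 40, 1460))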

Estimated queue size is at least: Pkts: [packets] Bytes: [bytes]
This is probably an underestimate of the actual queue size.

The queue size is supposed to be measured as the difference between the onset of queue delay (queue formation) and the onset of loss (queue overflow). However, in this case the queue size was limited by something other than loss, so the measured queue size might be smaller than the actual queue size.

This corresponds to a [time] ms drain time.

The drain time is the time it would take for a full queue to drain at the measured bottleneck data rate.
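
As a quick sanity check (illustrative only), the drain time is just the measured queue size divided by the measured bottleneck rate:

    # Illustrative: time for a full queue to drain at the bottleneck rate.
    def drain_time_ms(queue_bytes: float, bottleneck_rate_mbps: float) -> float:
        return queue_bytes * 8 / (bottleneck_rate_mbps * 1e6) * 1e3

    # Example: 128 kB of queue at a 100 Mb/s bottleneck drains in ~10.5 ms.
    print(round(drain_time_ms(128 * 1024, 100), 1))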

Losses start at a smaller window than the onset of queuing.

Losses are load sensitive, but start before TCP causes a queue to form in the network. This may indicate the presence of traffic policing or other load sensitive loss mechanisms that do not involve queuing.

Can not measure the bottleneck queue.

The tester and/or the target could not send data fast enough to cause network congestion, yet no bottlenecks were identified. This can be caused by three different things:

Tentative Diagnosis: Insufficient buffering (queue space) in routers or switches. (but note that this test may be overly conservative.)

Classical TCP theory [Villamizer] requires a full delay-bandwidth product of buffering at every possible bottleneck. Newer results require much smaller buffers for aggregate traffic. At this time it is unclear to what extent these conflicting results apply to any given application. If your application performance is unacceptable and this is the only non-pass result, please let us know about it.

Pass: The network bottleneck has sufficient buffering (queue space) in routers and switches.

Classical TCP theory [Villamizer] requires a full delay-bandwidth product of buffering at every possible bottleneck. Newer results require much smaller buffers for aggregate traffic. At this time it is unclear to what extent these conflicting results apply to any given application. If your application performance is unacceptable and this is the only non-pass result, please let us know about it.

To get [rate] Mb/s on a [rtt] ms path, you need [bytes] bytes of buffer space.

This is the required queue size, calculated using classical TCP theory [Villamizer].
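
A minimal sketch of that classical rule of thumb (Python; the example values are illustrative):

    # Illustrative: classical buffer sizing, one full bandwidth-delay product.
    def required_buffer_bytes(rate_mbps: float, rtt_ms: float) -> float:
        return rate_mbps * 1e6 / 8 * rtt_ms / 1e3

    # Example: 1 Gb/s over a 70 ms path calls for roughly 8.75 MB of queue space.
    print(required_buffer_bytes(1000, 70))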

Reconfigure routers/switches to increase the queue space or limit jitter from all sources (application, ACK compression etc), to less than [maxjitter] ms.

Short queues can only work if the worst case jitter (burst size) in the data stream is small enough to fit within the queue at the bottleneck. There are several mechanisms that naturally smooth the data (reducing jitter) and others that introduce more jitter.

Path measurements

The following groups of tests each measure some aspect of the path section from the tester to the target.

Test a shorter path section or reduce the target data rate and/or RTT.

These diagnostics require that TCP be responsive to changes in test parameters within the allotted sample intervals. This is not possible if the target data rate is too high or if the path section under test is too long (i.e. if the round trip time to the tester is too large).

Since the measurements in this run are done under less than ideal conditions, failed test results for the path and tester may be exaggerated. (But all end-system tests and passing test results are accurate.)

Either reduce the target rate and/or RTT, or choose a closer diagnostic server.


Localize all path problems by testing progressively smaller sections of the full path.

This path section has one or more problems as indicated by other messages in this report. Look at the general procedural recommendations on how to approach path flaws.

The network path passed all tests!

Congratulations! The path tests clean!

We believe that this diagnostic can detect all flaws in the path with the following exceptions:

On the main pathdiag page there are general procedural recommendations for what to do next when both the target and path pass all diagnostics.

Rerun the test such that the target delay bandwidth product fits within [size] Bytes.

The target delay bandwidth product (the target data rate times the target rtt) must be smaller than the indicated size, due to resource limits at the server. Rerun the test with smaller parameters.

FYI: This path may even pass with a more strenuous application:

This path section is better than required to support the target data rate and RTT. It might also pass a more strenuous test.

FYI: This path may pass with a less strenuous application:

This path section is worse than required to support the target data rate and RTT. But it might pass a less strenuous test.

Or if you can raise the MTU:

Raising the Maximum Transmission Unit (packet size) makes it much easier to attain high performance.

Tester validation: [status]!

This tester has several built-in consistency checks designed to detect if the tester itself is a bottleneck or otherwise fails to properly diagnose a network path.

The tester has a bottleneck.

It appears that the tester has some limit (a bottleneck) such that it cannot drive the path into hard congestion. This bottleneck might be insufficient sender buffer space, a CPU that is not fast enough, or other load on the sender. In the latter case you may get better results by re-running the test at a later time.

The tester NIC is not fast enough to be able to drive the network into congestion.

The tester was unable to fill the link because the Network Interface Card stalled data transmission. This suggests that the tester NIC is the true bottleneck, either due to its limitations or because there was other traffic through the NIC.

Insufficient data due to unknown tester limitation.

For some unknown reason the tester did not collect sufficient data for this test.

The web100 kernel instrument, [varname], unexpectedly reported a non-zero value ([value]).

A web100 internal kernel instrument that is normally zero, reported some events that are not diagnosed by the current tester. This is normally a rare situation.

The actual TCP window ([obswin]), is not an integral multiple of the MSS ([mss]).

For some reason one or more undersized segments were transmitted. This breaks some of the underlying assumptions in the analysis code, and may indicate that the tester is not properly controlling the TCP window.

The tester internal logic yields inconsistent test results ([reason]) (*)! [message]

The tester internal logic is performing the same check in two different ways, and for some reason the two methods do not yield the same results. This probably indicates a flaw in the test logic.

(*) Some events in this run were not completely diagnosed.

This test detected some events that were not properly diagnosed in this version of the tester. As we refine the tester we expect to eliminate all of these cases.

Correct other problems first, and then rerun this test.

There are other problems which prevented this test from attaining a conclusive measurement. Correct all other failed tests and then rerun this test to get additional results.