Issue: PACS is Slow via Citrix ADC (NetScaler) after 13.1 Upgrade

Introduction and Background

Over the last several months we have encountered a couple of instances of customers reporting poor performance for anything involving large file transfers through Citrix ADC after upgrading to 13.1 firmware. The most recent case involved a PACS application that uses Citrix ADC to load balance between servers and routinely transfers medical imaging files ranging from megabytes to gigabytes in size.

Post-upgrade, clinicians reported dramatic increases in transfer times, with large files taking upwards of 30 minutes when they had previously completed in a few minutes. When users pointed directly at a PACS server behind the VIP, the files transferred normally, so something on the ADC was bottlenecking the transfers.


In the recent cases observed, the VIPs and the back-end services or service groups defined on the Citrix ADCs had no TCP profiles bound to them and were therefore relying on the default global TCP parameters. Certain types of application traffic benefit from TCP optimization on both the front end and the back end to improve data-transfer performance for their particular traffic patterns.

In these cases, crafting a profile better attuned to the traffic brought performance back to the standards the clinicians expected. The following TCP profile is merely an example and may benefit from further tuning to suit your needs, but it did the trick. After creating the TCP profile, bind it to the VIP and to the back-end services/service groups on the Citrix ADCs handling the PACS or large-file-transfer traffic, and subsequent TCP sessions should see a marked improvement.

add ns tcpProfile nstcp_custom_PACS_profile -WS ENABLED -SACK ENABLED -WSVal 8 -maxBurst 10 -initialCwnd 10 -oooQSize 500 -pktPerRetx 3 -minRTO 300 -bufferSize 819000 -flavor BIC -sendBuffsize 819000 -rstMaxAck ENABLED -spoofSynDrop DISABLED -frto ENABLED -fack ENABLED
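For reference, binding the new profile on the front end and back end might look like the following (the vserver and service group names here are placeholders; substitute your own):

set lb vserver vip_PACS -tcpProfileName nstcp_custom_PACS_profile
set serviceGroup svcgrp_PACS -tcpProfileName nstcp_custom_PACS_profile

Remember to save the running configuration (save ns config) once transfers have been verified.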

As to why this suddenly occurred after the 13.1 upgrades, we are not entirely sure at present, but it is possible the default TCP parameters were modified or no longer behave optimally. As a rule of thumb, binding TCP profiles to VIPs and services is best practice: it allows tuning traffic not just for the type of data transmission, but for the network conditions (high-bandwidth, high-latency "long fat" networks, LAN, WAN, mobile, etc.).
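If you would rather start from a baseline than build a profile from scratch, the appliance ships with built-in TCP profiles tuned for common network conditions; you can list them and inspect their parameters (exact profile names vary by firmware version):

show ns tcpProfile
show ns tcpProfile nstcp_default_tcp_lfp

A built-in profile such as one tuned for long-fat-pipe conditions can serve as a template: copy its settings into a custom profile and adjust from there rather than modifying the built-in directly.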

Give this a shot before calling support as this may be a simple fix.
