I have noticed a strange situation. I am writing a paper for class about the Service Bus Relay component, and I decided to test what impact it has on performance (the connection is "client <--> SB Relay <--> service" instead of "client <--> service", so there has to be some added delay).

Surprisingly, the test with large input data (~1.5 MB per request as input, the service returning a single integer as output) showed the application using SB Relay to be about 4 times faster (500 ms vs 2000 ms) with 5 concurrent threads, and about 2 times faster (200+ ms vs 500 ms) with a single-threaded client. I honestly have no idea where these results come from. I ran the client app from another machine (the service is still hosted on an Azure VM), but got the same results. I also checked whether the requests are actually coming through (in case some cache-like mechanism was involved), and everything seems fine. Does anyone have an idea why the relayed connection is faster than the direct one?
Relayed configuration details:
WCF service hosted on an Azure VM running Windows Server 2012 (2 cores, 3.5 GB RAM). Not hosted in IIS (self-hosted WCF).
Using netTcpRelayBinding, all settings default (a sketch of the host setup follows this list).
The client app is a simple console app that opens a connection and sequentially sends requests containing ~1 MB of random byte data; the service receives the data and returns a single integer, the total byte count of the request (a client sketch follows the configuration details below).
Client app hosted on my computer running Windows 8 (also tested on Win7 machines in my university's computer laboratory).
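To make the setup concrete, here is a minimal sketch of the self-hosted service. The namespace ("mynamespace"), the SAS key placeholder, and the IByteCounter contract name are just illustrative; the only binding settings touched are the message-size quotas, which have to be raised above the WCF defaults (64 KB messages / 16 K array elements) for the ~1.5 MB requests to fit:

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IByteCounter
{
    [OperationContract]
    int CountBytes(byte[] data);
}

public class ByteCounterService : IByteCounter
{
    // Returns the total number of bytes in the request, as described above.
    public int CountBytes(byte[] data)
    {
        return data.Length;
    }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(ByteCounterService));

        // Relay address in the Service Bus namespace ("mynamespace" is a placeholder).
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "ByteCounter");

        var binding = new NetTcpRelayBinding();
        // Raised so the ~1.5 MB requests fit; everything else is left at default.
        binding.MaxReceivedMessageSize = 2 * 1024 * 1024;
        binding.ReaderQuotas.MaxArrayLength = 2 * 1024 * 1024;

        var endpoint = host.AddServiceEndpoint(typeof(IByteCounter), binding, address);
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
                "RootManageSharedAccessKey", "<SAS key>")
        });

        host.Open();
        Console.WriteLine("Listening on " + address);
        Console.ReadLine();
        host.Close();
    }
}
```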
Non-relayed configuration details:
Same as relayed, but using netTcpBinding instead of netTcpRelayBinding
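And a sketch of the test client (again with placeholder names, reusing the same contract). The non-relayed run is identical except that it uses NetTcpBinding and the VM's direct net.tcp address instead of the relay address; the 5-thread test simply runs this loop on several threads in parallel:

```csharp
using System;
using System.Diagnostics;
using System.ServiceModel;
using Microsoft.ServiceBus;

// Same contract as on the service side.
[ServiceContract]
public interface IByteCounter
{
    [OperationContract]
    int CountBytes(byte[] data);
}

class Client
{
    static void Main()
    {
        // Relayed endpoint; for the non-relayed test this becomes
        // new NetTcpBinding() and the VM's direct net.tcp://... address.
        var binding = new NetTcpRelayBinding();
        var address = new EndpointAddress(
            ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "ByteCounter"));

        var factory = new ChannelFactory<IByteCounter>(binding, address);
        factory.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
                "RootManageSharedAccessKey", "<SAS key>")
        });
        var channel = factory.CreateChannel();

        // ~1 MB of random bytes per request, sent sequentially.
        var payload = new byte[1024 * 1024];
        new Random().NextBytes(payload);

        var sw = new Stopwatch();
        for (int i = 0; i < 10; i++)
        {
            sw.Restart();
            int count = channel.CountBytes(payload);
            sw.Stop();
            Console.WriteLine("Request {0}: {1} bytes counted in {2} ms",
                i, count, sw.ElapsedMilliseconds);
        }

        ((IClientChannel)channel).Close();
        factory.Close();
    }
}
```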
Thanks for any help.