This is an FYI and a call for more help gathering real-life performance data. I'm not a test professional, but as a software engineer/architect I often use a couple of simple tests during development to spot-check performance and help optimize a bit.
In that spirit I've created an Apache JMeter test plan that runs a simple query against ChatScript with a randomized username, measures response time under load, and contributed it to ChatScript on GitHub.
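For anyone who wants to poke at the server without JMeter, a single query can also be issued directly over TCP. The sketch below assumes ChatScript's plain socket protocol (username, bot name, and message separated by null bytes, default port 1024); the `ask` helper and its defaults are my own illustration, not part of the contributed test plan.

```python
import socket

def build_payload(username: str, bot: str, message: str) -> bytes:
    # ChatScript frames one request as user\0bot\0message\0 over plain TCP.
    return f"{username}\x00{bot}\x00{message}\x00".encode("utf-8")

def ask(username: str, message: str, host: str = "localhost",
        port: int = 1024, bot: str = "") -> str:
    # Hypothetical helper: open a connection, send one query, read until
    # the server closes the socket, and return the decoded reply.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(build_payload(username, bot, message))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8")
```

A randomized-username run would just call `ask(f"user{n}", "hello")` in a loop, which is essentially what the JMeter plan does at scale.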
You can find the instructions and workbench file in the chatscript directory:
NON-WINDOWS NON-C/LOADTEST/LoadTesting.md
I'm hoping others will pick this up when they have the time and interest, and contribute more detailed results and better test plans.
The initial results are very promising for the EVSERVER build in single-server mode. I didn't have time to extend the test plan to ramp up load over time and find the breaking point, but it does pretty well (sub-3ms responses) under 1000 threads constantly requesting, which probably represents at least 10k real users (they need time to type, read the response, and reply; in the best case that cycle takes about 10 seconds IMO).
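The threads-to-users extrapolation can be sanity-checked with a Little's-law-style estimate. This is my own back-of-envelope framing, not part of the test plan: sustained throughput is roughly threads divided by latency, and each real user contributes one request per think-time interval.

```python
def users_supported(concurrent_threads: int, latency_s: float,
                    think_time_s: float) -> int:
    # Rough capacity estimate (my assumption, not a measured result):
    # throughput = threads / latency requests per second; each real user
    # issues one request every think_time_s seconds, so the server can
    # in principle keep up with throughput * think_time_s users.
    throughput = concurrent_threads / latency_s
    return int(throughput * think_time_s)

# 1000 threads at 3 ms latency with a 10 s think time comes out well
# above 10k, so the 10k-user figure in the post is quite conservative.
print(users_supported(1000, 0.003, 10))
```

By this estimate the measured numbers leave a lot of headroom, which is consistent with not having hit a breaking point yet.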
Caveat: I didn't test to the breaking point, so it is entirely possible that it starts to break much further down the path than I went. Forking would bring those numbers up too, I think, and load balancing across multiple forked instances should scale even wider.
Note: The non-EVSERVER builds have poor performance in comparison; don't even bother with them in high-load situations IMO.
PS: This test fills up your USERS directory very quickly (10k log files per run), so remember to clean it out between runs.
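A minimal cleanup sketch, assuming the USERS directory sits next to the ChatScript binary and that everything in it is disposable test output (adjust the path for your setup before running anything destructive):

```shell
# Hypothetical cleanup: delete the per-user files a load-test run leaves
# behind. USERS_DIR is an assumption; point it at your actual directory.
USERS_DIR="USERS"
if [ -d "$USERS_DIR" ]; then
    # -maxdepth 1 keeps this from descending into anything unexpected.
    find "$USERS_DIR" -maxdepth 1 -type f -delete
fi
```

Running this between test iterations keeps each run's timings from being skewed by a directory full of stale files.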