Saturday, January 14, 2012

ADF Performance Marathon - 22 Hours Stress Test

My goal was to test how scalable the classical ADF framework stack is and whether it can run for longer periods of time under constant runtime access. Experiment result - yes, ADF is a scalable framework. There are people who complain about ADF performance; please leave a comment if you are not happy with ADF - this post is dedicated to you. But before leaving your comment, please think about the house construction process. Even when using good quality tools and materials, there is still no guarantee that the materials will be assembled correctly and the house will turn out as expected. What I mean is that ADF application performance depends a lot on how you build your application and whether you follow ADF best practices. Would you hire builders without previous house construction experience to build your house? Think the same about ADF - would you hire developers without ADF experience to build an ADF application? Yes, it happens quite often - people build ADF applications without ADF experience. However, they tend to forget this fact, because it is easier to blame the framework in the end.

I ran the performance test with the standard Oracle ADF sample application - Oracle Fusion Order Demo Application for JDeveloper 11.1.1.5. The performance test was executed with JMeter; download the test script - AMTest_Long.jmx.

ADF BC was configured to support 50 concurrent users (Referenced Pool Size = 50), but the stress test was executed with 200 users to show ADF scalability with a larger number of users.

Performance test details:

- Duration: 22 hours
- Online concurrent users: 200
- Action frequency per user: ~20 requests, followed by a 1 minute break after every 20 requests
- ADF Framework: 11g PS4, ADF BC, ADF Task Flows, ADF Faces
- ADF BC Tuning: Referenced Pool Size = 50, Database Pooling enabled, DB Passivation disabled (see the configuration sketch after this list)
- Hardware: 7 GB RAM, 4 Processors
- JVM tuning: Sun JVM defaults
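
For reference, the Referenced Pool Size setting lives in the application module configuration file (bc4j.xcfg). Below is a minimal sketch of how such an entry could look; the application module and package names are illustrative rather than copied from the FOD sources, and a real file contains additional attributes generated by JDeveloper.

<!-- Sketch of an application module configuration in bc4j.xcfg (illustrative names only) -->
<AppModuleConfig name="StoreServiceAMLocal"
                 ApplicationName="oracle.fodemo.storefront.StoreServiceAM"
                 DeployPlatform="LOCAL">
  <!-- Referenced Pool Size = 50 maps to jbo.recyclethreshold: up to 50 application
       module instances stay referenced by user sessions before recycling starts -->
  <AM-Pooling jbo.recyclethreshold="50"/>
</AppModuleConfig>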

During this stress test each user selects different items and then browses the shopping cart details:


In order to run a long stress test, start JMeter in command-line mode (otherwise JMeter will hit an out of memory exception after an hour or so), for example jmeter -n -t AMTest_Long.jmx:


The stress test was started around 4 PM, January 13th:

- Active sessions: 200
- Request processing time: 70 ms (0.07 second)
- Requests per minute: ~600
- AM active instances: 50
- AM passivations per minute: ~200 (this allows supporting a larger number of users than configured by Referenced Pool Size)


There are 50 active AM instances, but a maximum of only 4 DB connections is used:


In order to minimize DB connection usage, I have enabled DB connection pooling and configured passivation data to be stored in memory instead of the database PS_TXN table, based on my previous tests - Stress Testing Oracle ADF BC Applications - Do Connection Pooling and TXN Disconnect Level.
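
The properties behind this setup can be set in the same bc4j.xcfg file (or through the Pooling and Scalability tab in JDeveloper). Here is a hedged sketch, continuing the illustrative fragment shown earlier; the grouping of attributes is simplified.

<!-- Sketch only: connection pooling related properties (illustrative fragment) -->
<AM-Pooling jbo.recyclethreshold="50"
            jbo.doconnectionpooling="true"
            jbo.txn.disconnect_level="1"/>
<!-- jbo.doconnectionpooling="true": the JDBC connection is released back to the pool
     at the end of each request instead of staying pinned to the AM instance, which is
     why only a few DB connections are used for 50 active AM instances.
     jbo.txn.disconnect_level="1": per this post, pending state is kept in memory
     across disconnects instead of being written to the PS_TXN table. -->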

The stress test was finished around 2 PM, January 14th:

- Active sessions: 200
- Request processing time: 100 ms (0.1 second; increased because a different set of requests was applied)
- Requests per minute: ~600
- AM active instances: 50
- AM passivations per minute: ~200




This shows almost no change in ADF application runtime performance, even after 22 hours of continuous runtime access - good news.

There was no change in DB connection usage - it stayed low:


There are no warnings in WebLogic status:


The FOD application module settings were tuned to support only 50 concurrent users, but the application worked well with 200:


DB connection pooling was enabled along with virtual memory for passivation:


JMeter script was configured with 200 online users:


Two main loop controllers were defined. The first loop controller triggers 50 loops of ~20 requests, with a wait time of 1 minute after all requests from the current iteration are executed:


The second loop controller runs forever; this allows executing a really long ADF application stress test:
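
For completeness, here is a schematic .jmx-style fragment showing how such a thread group and loop controllers are typically expressed. This is not a copy of AMTest_Long.jmx: element attributes, testclass/guiclass declarations and the surrounding hashTree wrappers are omitted, the ramp-up value is an assumption, and the nesting of the controllers is not shown.

<!-- Schematic JMeter (.jmx) fragment, heavily trimmed; not the actual AMTest_Long.jmx -->
<ThreadGroup testname="ADF online users">
  <stringProp name="ThreadGroup.num_threads">200</stringProp>  <!-- 200 concurrent users -->
  <stringProp name="ThreadGroup.ramp_time">300</stringProp>    <!-- assumed ramp-up, in seconds -->
</ThreadGroup>

<!-- First loop controller: 50 iterations of the recorded ~20 HTTP requests,
     with a 1 minute pause (for example a Test Action sampler) after each iteration -->
<LoopController testname="User iteration">
  <stringProp name="LoopController.loops">50</stringProp>
</LoopController>

<!-- Second loop controller: loops forever, so the test keeps running until stopped
     (22 hours in this case); -1 corresponds to "Forever" in the JMeter GUI,
     the exact serialized properties may differ -->
<LoopController testname="Run forever">
  <stringProp name="LoopController.loops">-1</stringProp>
</LoopController>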


19 comments:

  1. Amazing results, Good job Andrejus as usual :)

  2. Thank you for posting this! I was going to do exactly the same (FOD + JMeter) to test our Exalogic machine and to compare the benefits of Exalogic optimizations with a non-Exalogic optimized scenario on the same servers. So if you don't mind I'll use your scripts as a starting point :)

  3. Sure, please let us know about the results. It would be interesting to see the impact of Exalogic on ADF performance.

    Regards,
    Andrejus

  4. Andrejus, I really appreciate your results. Can you share the FOD AM level changes you made?

  5. Hi,

    The AM changes are described in the blog post, check screenshots #8 and #9.

    Also please refer to this post - http://andrejusb.blogspot.com/2011/11/stress-testing-oracle-adf-bc_16.html

    Thanks,
    Andrejus

  6. This comment has been removed by the author.

  7. Great results of an interesting research. Thank you for sharing!

  8. In the BC configuration we have jbo.doconnectionpooling="true" & jbo.txn.disconnect_level="1" & jbo.ampool.initpoolsize="50" & jbo.recyclethreshold="50".

    During load testing with 30 users we see that the "Active connections High Count" is 30, but the average is only 7. This is because after every "request" of the user the connection is returned to the pool.

    We have a customer whose setup is the following: 2 application servers in a cluster. A data source that is targeted to the cluster. The data source has 15 connections.

    Is it feasible to serve 300 concurrent users (total = 1000 licensed users) with only 15 connections in the pool?

    I have tried the following:
    1) do the same load test as above but lower the number of connections to 15 => error: the 16th AM cannot get a connection from the pool; pool in "overload" state
    2) lower the number of modules to max 15 with 15 connections => error: for the 16th user a message is thrown to the server log: timeout while waiting for a resource to be freed from the resource group

  9. Hi Tom,

    If it is a cluster, I guess it will use 30 connections in total.

    But I hardly believe that only 30 connections would be enough to serve 300 concurrent users (even with DB pooling enabled as per this blog).

    You should check live DB connection usage (for example through WebLogic console dashboard or Enterprise Manager).

    Keep in mind that if requests are long, the AM will keep the connection longer; that is why there will be peak times with higher connection usage.

    For 300 concurrent users, I would set the Max DB connections size to around 150 and monitor the usage with the WebLogic console to reduce this number if possible.

    Andrejus

  10. Hi Andrejus, thanks for the detailed report on ADF performance tests. This report primarily focuses on response time.
    How about statistics for memory usage? One of the key problems we are facing in our app is that of memory usage per user session, sometimes exceeding ~1GB.

    So on a WLS with a 120GB heap size, when we have about 100 users we start hitting out of memory issues. Is there a way to control memory consumption in the ADF BC tier?

  11. Hi,

    I think that something is wrong with your application design and technical implementation - having 120 GB for 100 users does not sound good.

    On most production apps ADF handles at least 200 concurrent users with a 4GB heap size.

    Andrejus

  12. Thanks for your response.

    Are there any guidelines available for checking what is wrong with the application? One thing I can understand clearly is that something is fundamentally wrong in our app, considering 4GB is sufficient to support 200 users on a production app!

    What could be a starting point for analyzing the huge memory requirement?

    In our app we have screens with Table/ Tree tables, bringing in 1000 rows or more during a typical user session.

    Thanks

  13. Hi,

    I would start with a JMeter stress test and increase the load gradually, monitoring memory usage. Check the number of SQL statement executions.

    We are using our own performance monitoring tool for ADF, but we install it only for our customers, because it requires some special configuration.

    Andrejus

  14. Hi, Andrejus.

    I think in-memory passivation is not supported in clustered environments. Right?

    Thanks,
    Barbara

  15. Hi,

    DB connection pooling is a bit of a different thing - there is no passivation, it is just an optimization of DB connection usage. Passivation happens when there are too many concurrent users and ADF BC reassigns AM instances between users.

    By default passivation/activation is disabled for a cluster. You can enable it in the AM config; then it will write all temporary data to the PS_TXN table on every request (this is how it ensures passivated data is available across the cluster). This will slow down system performance - more DB writes and reads.

    So, if you want to run ADF BC in a cluster, make sure the DB performs really fast.

    Andrejus

  16. Do you have any tips for getting JMeter past a login page? I see extra redirect variables in the recorded output.

  17. Based on my experience, it goes through the login screen fine, without substituting extra variables.

    Andrejus

  18. Hi there Andrejus, in each of the performance testing tools I have tried so far (JMeter, Oracle and others), the loopback script from ADF seems to interfere with the testing tool retrieving the HTML.

    Have you bumped into this before?

  19. You need to correlate the redirect cookie. Oracle Application Testing Suite 12.3 will do this for you automatically; if you are using a lower version of OATS or JMeter, you need to correlate the redirect cookie manually.
