As we saw in the 2nd part of this series, the real problem was fetching a large number of records, which in turn forced the JVM garbage collector to kick in to free up heap memory.
In order to fix this problem, we decided to implement pagination in the search methods of the system. Until that point, pagination existed only on the UI side; the database was returning all the matching records!
Pagination using iBatis
So we specifically used the queryForList overload that accepts skip and max,

public java.util.List queryForList(java.lang.String id, java.lang.Object parameterObject, int skip, int max)

to pass the pagination information to the database while selecting records for a given search criteria.
The two important pagination parameters are calculated as:

skip : (pageNo - 1) * records_per_page
max : records_per_page
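The calculation above can be sketched as follows. The skip/max overload of queryForList is part of the iBatis SqlMapClient API; the statement id "Order.searchOrders" and the criteria object are hypothetical names used only for illustration.

```java
import java.util.List;

public class PaginationParams {

    // skip = number of records to jump over before the requested page
    // (pageNo is 1-based, as in the formulas above)
    static int skip(int pageNo, int recordsPerPage) {
        return (pageNo - 1) * recordsPerPage;
    }

    public static void main(String[] args) {
        int pageNo = 3;
        int recordsPerPage = 16;

        int skip = skip(pageNo, recordsPerPage); // records to skip
        int max = recordsPerPage;                // records to fetch

        // With a configured iBatis SqlMapClient instance (hypothetical ids):
        // List rows = sqlMap.queryForList("Order.searchOrders", criteria, skip, max);

        System.out.println("skip=" + skip + " max=" + max);
    }
}
```

With this overload, iBatis asks the driver to position the result set past the skipped rows and reads at most max rows, so only one page's worth of objects is materialized on the heap.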
This change drastically reduced the number of records fetched per query from around 4000 to 16 (the number of records per page). However, this solution came with a minor overhead.
The overhead was firing one extra query, with the same search criteria, to compute the total number of records that criteria could return. This count is required to calculate the total number of pages, which is later shown in the data grid. It is not a great overhead, though; databases return such counts quite fast.
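The page-count side of this can be sketched as below. Fetching the count via queryForObject mirrors the iBatis SqlMapClient API, but the statement id "Order.countOrders" is a hypothetical name for a COUNT(*) mapping over the same criteria.

```java
public class PageCount {

    // Ceiling division: a partial last page still counts as a page.
    static int totalPages(int totalRecords, int recordsPerPage) {
        return (totalRecords + recordsPerPage - 1) / recordsPerPage;
    }

    public static void main(String[] args) {
        // With an iBatis SqlMapClient instance (hypothetical statement id):
        // Integer total = (Integer) sqlMap.queryForObject("Order.countOrders", criteria);
        int total = 4000;           // e.g. the count returned by the extra query
        int recordsPerPage = 16;

        System.out.println(totalPages(total, recordsPerPage)); // 250 pages
    }
}
```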
This optimization greatly reduced the memory requirement, and the system now has ample heap memory for new objects.
This debugging exercise was a great learning experience for me. It took around 3 days to find the problem, understand the cause, and fix it.
Technically, I learnt quite a few new things:
- Remote debugging of a web application, using JPDA and Eclipse
- The jmap and jhat profiling tools
- Pagination with iBatis
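For reference, the debugging setup behind the first two points looks roughly like this; the port numbers and the process id are placeholders, and the jdwp agent syntax shown is the one for Java 5 and later:

```shell
# Start the JVM (e.g. via Tomcat's JAVA_OPTS) with JPDA enabled, so that
# Eclipse can attach a remote debugger on port 8000:
#   -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000

# Take a binary heap dump of a running JVM (replace <pid> with the
# actual process id, e.g. from jps):
jmap -dump:format=b,file=heap.bin <pid>

# Serve the dump for browsing at http://localhost:7000 :
jhat heap.bin
```

The jhat object histogram is what makes the "too many fetched records" pattern visible: one class suddenly dominating the heap by instance count.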
This case study will always encourage me to test whatever application we develop in future against large data sets. I now believe that a large number of data records may reveal hidden bugs, and such testing ensures their removal, ultimately lowering maintenance hassles and, in turn, cost.