Searchable index gets locked on manual update (LockObtainFailedException)
We have a Grails project that runs behind a load balancer. There are three instances of the Grails application running on the server (in separate Tomcat instances), and each instance has its own searchable index. Because the indexes are separate, the automatic update is not enough to keep them consistent between the application instances. For this reason we have disabled searchable index mirroring, and updates to the index are done manually in a scheduled Quartz job (see the configuration sketch at the end of this post). As far as we understand, no other part of the application should modify the index.

The Quartz job runs once a minute. It checks the database for rows that have been updated by the application and re-indexes those objects. The job also checks whether the same job is already running, so no concurrent indexing is done. The application runs fine for a few hours after startup, and then suddenly, when the job starts, a LockObtainFailedException is thrown:

> 22.10.2012 11:20:40 [xxxx.ReindexJob] ERROR Could not update searchable index, class org.compass.core.engine.SearchEngineException: Failed to open writer for sub index [product]; nested exception is org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/home/xxx/tomcat/searchable-index/index/product/lucene-a7bbc72a49512284f5ac54f5d7d32849-write.lock

According to the log, the last time the job was executed, re-indexing completed without errors and the job finished successfully. Still, this time the re-index operation throws the locking exception, as if the previous operation were unfinished and the lock had not been released. The lock is not released until the application is restarted.

We tried to solve the problem by manually opening the locked index, which causes the following error to be printed to the log:

> 22.10.2012 11:21:30 [manager.IndexWritersManager ] ERROR Illegal state, marking an index writer as open, while another is marked as open for sub index [product]

After this the job seems to work correctly and does not get stuck in a locked state again. However, it causes the application to constantly use 100% of the CPU. Below is a shortened version of the Quartz job code.

Any help in solving the problem would be appreciated, thanks in advance.

```groovy
class ReindexJob {

    def compass
    ...

    static Calendar lastIndexed

    static triggers = {
        // Every minute of every day (at xx:xx:30), start delay 2 min
        // cronExpression: "s m h D M W [Y]"
        cron name: "ReindexTrigger", cronExpression: "30 * * * * ?", startDelay: 120000
    }

    def execute() {
        if (ConcurrencyHelper.isLocked(ConcurrencyHelper.Locks.LUCENE_INDEX)) {
            log.error("Search index has been locked, not doing anything.")
            return
        }

        try {
            boolean acquiredLock = ConcurrencyHelper.lock(ConcurrencyHelper.Locks.LUCENE_INDEX, "ReindexJob")
            if (!acquiredLock) {
                log.warn("Could not lock search index, not doing anything.")
                return
            }

            Calendar reindexDate = lastIndexed
            Calendar newReindexDate = Calendar.instance
            if (!reindexDate) {
                reindexDate = Calendar.instance
                reindexDate.add(Calendar.MINUTE, -3)
                lastIndexed = reindexDate
            }

            log.debug("+++ Starting ReindexJob, last indexed ${TextHelper.formatDate("yyyy-MM-dd HH:mm:ss", reindexDate.time)} +++")
            Long start = System.currentTimeMillis()

            String reindexMessage = ""

            // Retrieve the ids of products that have been modified since the job last ran
            String productQuery = "select p.id from Product ..."
            List<Long> productIds = Product.executeQuery(productQuery, ["lastIndexedDate": reindexDate.time, "lastIndexedCalendar": reindexDate])

            if (productIds) {
                reindexMessage += "Found ${productIds.size()} product(s) to reindex. "

                final int BATCH_SIZE = 10
                Long time = TimeHelper.timer {
                    for (int inserted = 0; inserted < productIds.size(); inserted += BATCH_SIZE) {
                        log.debug("Indexing from ${inserted + 1} to ${Math.min(inserted + BATCH_SIZE, productIds.size())}: ${productIds.subList(inserted, Math.min(inserted + BATCH_SIZE, productIds.size()))}")
                        Product.reindex(productIds.subList(inserted, Math.min(inserted + BATCH_SIZE, productIds.size())))
                        Thread.sleep(250)
                    }
                }

                reindexMessage += " (${time / 1000} s). "
            } else {
                reindexMessage += "No products to reindex. "
            }

            log.debug(reindexMessage)

            // Re-index brands
            Brand.reindex()

            lastIndexed = newReindexDate

            log.debug("+++ Finished ReindexJob (${(System.currentTimeMillis() - start) / 1000} s) +++")
        } catch (Exception e) {
            log.error("Could not update searchable index, ${e.class}: ${e.message}")
            if (e instanceof org.apache.lucene.store.LockObtainFailedException ||
                    e instanceof org.compass.core.engine.SearchEngineException) {
                log.info("This is a Lucene index locking exception.")
                for (String subIndex in compass.searchEngineIndexManager.getSubIndexes()) {
                    if (compass.searchEngineIndexManager.isLocked(subIndex)) {
                        log.info("Releasing Lucene index lock for sub index ${subIndex}")
                        compass.searchEngineIndexManager.releaseLock(subIndex)
                    }
                }
            }
        } finally {
            ConcurrencyHelper.unlock(ConcurrencyHelper.Locks.LUCENE_INDEX, "ReindexJob")
        }
    }
}
```

Based on JMX CPU samples, it seems that Compass is doing some scheduling behind the scenes. Comparing one-minute CPU samples, a few things differ between a normal instance and one stuck at 100% CPU:

- org.apache.lucene.index.IndexWriter.doWait() is using most of the CPU time.
- A Compass Scheduled Executor Thread appears in the thread list; it was not seen in the normal situation.
- One Compass Executor Thread is doing commitMerge; in the normal situation none of these threads was doing commitMerge.
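
For context, this is roughly how the automatic mirroring is switched off (a minimal sketch in `grails-app/conf/Config.groovy`; the option names assume the standard Grails Searchable plugin, and the connection path shown here is only inferred from the lock file path in the log above):

```groovy
// grails-app/conf/Config.groovy -- simplified sketch, standard Searchable plugin options assumed
searchable {
    // Do not push domain class changes to the Compass index automatically;
    // the scheduled ReindexJob is intended to be the only writer.
    mirrorChanges = false

    // Skip the full bulk index on Tomcat startup, since each instance
    // maintains its own index directory.
    bulkIndexOnStartup = false

    // Per-instance index location (assumed here to match the lock path in the log).
    compassConnection = "/home/xxx/tomcat/searchable-index/index"
}
```

With mirroring and startup bulk indexing disabled like this, the ReindexJob should be the only code path that opens an IndexWriter, which is why the write.lock surviving between job runs is so surprising.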