Reading through the Lucene wiki, I came across a nice list of things to try for improving indexing performance. Here are some of the most striking ones from the page:

  • Flush by RAM usage instead of document count.
    Call writer.ramSizeInBytes() after every added doc, then call flush() when it’s using too much RAM. This is especially good if you have small docs or highly variable doc sizes. You need to first set maxBufferedDocs large enough to prevent the writer from flushing based on document count. However, don’t set it too large, otherwise you may run out of memory. Somewhere around 2-3X your “typical” flush count should be OK.
  • Turn off compound file format.
    Call setUseCompoundFile(false). Building the compound file format takes time during indexing (7-33% in testing). However, note that doing this will greatly increase the number of file descriptors used by indexing and by searching, so you could run out of file descriptors if mergeFactor is also large.
  • Re-use Document and Field instances
    As of Lucene 2.3 (not yet released) there are new setValue(…) methods that allow you to change the value of a Field. This allows you to re-use a single Field instance across many added documents, which can save substantial GC cost.

    It’s best to create a single Document instance, then add multiple Field instances to it, but hold onto these Field instances and re-use them by changing their values for each added document. For example you might have an idField, bodyField, nameField, storedField1, etc. After the document is added, you then directly change the Field values (idField.setValue(…), etc), and then re-add your Document instance.

    Note that you cannot re-use a single Field instance within a Document, and, you should not change a Field’s value until the Document containing that Field has been added to the index. See Field for details.

  • Re-use a single Token instance in your analyzer
    Analyzers often create a new Token for each term in sequence that needs to be indexed from a Field. You can save substantial GC cost by re-using a single Token instance instead.
  • Use the char[] API in Token instead of the String API to represent token Text
    As of Lucene 2.3 (not yet released), a Token can represent its text as a slice into a char array, which saves the GC cost of new’ing and then reclaiming String instances. By re-using a single Token instance and using the char[] API you can avoid new’ing any objects for each term. See Token for details.
  • Shamelessly plugged from here

Time is a son of a bitch. The more you think, the more you realize time is a constraint. Ever so true for search engines: time is used to restrict query bounds, it is used often, and yet frequently the way time is stored in indices is botched up.

Frequently used way of storing date and time
Date: 12-03-2007
Time: 12:40:10
It’s great from a viewing point of view, but from a search engine’s perspective it’s plain old stupid. The search engine would need to do a full identifier match throughout the index to find a particular date and time. Let’s assume a case of three dates.

  • 12-04-2007 22:00
  • 12-03-2006 10:00
  • 12-03-2007 22:00

Now if a search query is looking for 12-03-2007 22:00, it will walk through all the fields to reach the last row. Something along the lines of:

  • 12-04-2007 22:00 not a match
  • 12-03-2006 10:00 not a match
  • 12-03-2007 22:00 a match

The search engine walked about 33 characters of the index to conclude that the third row is a match.

Magic of morphological ordering
By changing the date and time a little, to something like YYYYMMDDHHMMSS (so that text order matches chronological order), we can get a fair bit of speed advantage. So the above dates and times would look like (seconds omitted):

  • 200704122200
  • 200603121000
  • 200703122200

Looking at the number of operations for the same query:

  • 200704122200 not a match
  • 200603121000 not a match
  • 200703122200 a match

The search engine walked about 24 characters of the index to conclude that the third row is a match. Notice that in the case of the second row it took only 4 characters for the search engine to conclude a mismatch.
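The character walk described above can be sketched in a few lines of Python (a toy illustration of prefix comparison, not how Lucene actually matches terms):

```python
def chars_examined(candidate, query):
    """Compare character by character; count how many characters are
    examined before a match or mismatch is decided."""
    count = 0
    for a, b in zip(candidate, query):
        count += 1
        if a != b:
            return count, False  # mismatch found
    return count, True  # every character matched

query = "200703122200"
total = 0
for row in ["200704122200", "200603121000", "200703122200"]:
    n, matched = chars_examined(row, query)
    total += n
    print(row, "a match" if matched else "not a match", f"({n} chars examined)")
print("characters walked:", total)
```

The per-row counts line up with the walk-through above: the second row is rejected after only 4 characters.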

Range Query
A range query is a search query with constrained value bounds. Let’s assume we need something between 12-03-2007 and 12-04-2007. With morphologically ordered date/time we can convert the values in the index into integers and check whether a row falls between 20070312000000 and 20070412240000. This operation is far simpler than doing a string match.
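That check can be sketched directly (a toy illustration; the bounds come from the example above, with seconds appended as 00 to the row values):

```python
# Dates stored in YYYYMMDDHHMMSS form sort the same way as their
# integer values, so a range check is just a pair of comparisons.
rows = [20070412220000, 20060312100000, 20070312220000]
lo, hi = 20070312000000, 20070412240000  # bounds from the example above

results = [lo <= value <= hi for value in rows]
for value, in_range in zip(rows, results):
    print(value, "in range" if in_range else "out of range")
```

Two integer comparisons per row, instead of walking characters of every stored string.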

Most of you are probably familiar with the 80/20 rule: 80% of results come from 20% of causes. In job search this rule is even more extreme. A great search engine can quickly become addictive for a head-hunter.

A smashing search engine can help the portal grow rapidly, so it’s important to do everything to take search from good enough to great. If you are starting out, you will need to do even more to make an impact.

What makes a good job search engine
Job search engines come in all shapes and sizes, but they share important qualities.

  • Simple The search engine needs to be simple to use. Complex forms are off-putting. The level of complexity can be revealed as required: instead of bringing up 40 inputs in one go, logical sets of related fields can be hidden or made visible according to user input.
  • Fast The search data can become large, yet the engine must sail through it to provide relevant results. Faster search allows the user to run more searches and refine them better.
  • Saved Search Being able to define a query and run it frequently is a great option. Many individuals look for the same kind of profile over and over, looking up the most relevant resumes.
  • Sub-query Being able to refine a query and search through the set produced by a previous search. For example, an individual searches for Java, and from the result set of that query finds people who also happen to be well versed in C++.

Using Lucene
Lucene is an open-source search engine backend library that can be used to index gigabytes of data.

Lucene Indexes
Lucene stores data in a search index. A Lucene index is very similar to the ‘Index’ section of a book. Let’s assume 4 documents containing various sets of words.

Normal index
Doc1 – Software Engineer, Java, C++
Doc2 – Sales, Tele-Sales
Doc3 – HR, Headhunting
Doc4 – Sales, Manager

Inverted Index
C++ – Doc1
Headhunting – Doc3
HR – Doc3
Java – Doc1
Manager – Doc4
Sales – Doc2, Doc4
Software Engineer – Doc1
Tele-Sales – Doc2

Lucene uses an inverted index which, as you can see, makes it easy to look up a word like ‘Tele-Sales’: we can quickly work out that Doc2 contains it. With a normal index, all documents would need to be read to reach the same conclusion. Lucene indexes are FAST.
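The lookup above can be sketched with a plain dictionary (a toy model of an inverted index; Lucene’s actual data structures are far more sophisticated):

```python
from collections import defaultdict

# The four example documents from above.
docs = {
    "Doc1": ["Software Engineer", "Java", "C++"],
    "Doc2": ["Sales", "Tele-Sales"],
    "Doc3": ["HR", "Headhunting"],
    "Doc4": ["Sales", "Manager"],
}

# Build the inverted index: term -> list of documents containing it.
inverted = defaultdict(list)
for doc_id, terms in docs.items():
    for term in terms:
        inverted[term].append(doc_id)

# A lookup is now a single dictionary access, not a scan of every document.
print(inverted["Tele-Sales"])
print(inverted["Sales"])
```

Note how the term ‘Sales’ maps straight to both Doc2 and Doc4 without touching Doc1 or Doc3.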

Storing data in indexes
While fast, indexes can get bogged down if they are not used correctly. Lucene gives five options of field type for storing the search data:

  • String field type is used for keyword identifiers. Its most pertinent usage is for proper nouns which independently identify a context: someone’s name, location, or job profile.
  • Numeric field covers a bunch of field types. One could store numbers as plain text, but the best option is to convert them into the string numeric type. That way Lucene changes the number into morphologically ordered text, making querying fast.
  • Date field should be stored with the DateField class, which converts date/time into the YYYYMMDDHHMMSS form, speeding up morphological search and range queries.
  • SortField is a tricky business. A good example is to use it when a search requires sorting by something other than relevance, such as the date a resume was posted.
  • Text field is where the heart and soul of Lucene rests. Text fields are just large chunks of unstructured text which can be analyzed using Lucene’s various analysis sequences and then indexed. This allows you to run full-text queries on these fields. What is of vital importance is to find the analysis sequence which best suits your domain. If minimal analysis is used, the index can become large and irrelevant; if it is made too aggressive, it can leave blind spots on important search terms.
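The point about numeric fields can be illustrated with zero-padding, the simplest way to make text order agree with numeric order (a sketch of the idea, not Lucene’s actual encoding):

```python
nums = [9, 80, 700]

# Sorting raw digit strings gives the wrong numeric order,
# because '7' < '8' < '9' character by character.
raw = sorted(str(n) for n in nums)
print(raw)

# Zero-padding to a fixed width makes the text order match the
# numeric order, so lexicographic index lookups stay correct.
padded = sorted(str(n).zfill(6) for n in nums)
print(padded)
```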

You can also set flags on fields which tell Lucene how to treat the field.

  • Stored should be set to true if a field needs to be displayed.
  • Indexed should be set to true if a field needs to be searchable.
  • Tokenized should be set to true if a field needs to go through the analysis process before indexing.
  • Compressed should be set to true if the field needs to be compressed on disk. Lucene can search through compressed fields.

Although it does not cover every area, Lucene provides a great starting point for a smashingly great search engine component for job search.