search


There are a hundred different ways of distributing search traffic over a farm of search engines, from hard-coded configuration to a multicast message bus. Even when they are not hard to implement, they are hard to understand for someone starting out with search engines. Not so with Bonjour, which is peer-to-peer service announcement of sorts. Assuming the Bonjour server/client you are using is Avahi, just run avahi-browse to find what services are running on the current network. Of course, this is not great if the farm spans more than one network, but it is so easy. I am surprised how easy it is to announce and discover services.
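As a sketch of how little code discovery takes, avahi-browse can be run in parsable mode and its records filtered. The _search._tcp service type below is a made-up name for illustration, and the record layout (event;interface;protocol;name;type;domain) is an assumption about the -p output format:

```python
import subprocess

def parse_browse_output(out):
    """Pick (interface, name, domain) out of avahi-browse -p records.
    Assumed line shape: +;eth0;IPv4;<name>;<type>;<domain>"""
    services = []
    for line in out.splitlines():
        fields = line.split(";")
        # '+' records announce a newly seen service
        if len(fields) >= 6 and fields[0] == "+":
            services.append((fields[1], fields[3], fields[5]))
    return services

def browse_services(service_type="_search._tcp"):
    """Run avahi-browse in parsable (-p), terminating (-t) mode
    and return the services of the given type seen on the network."""
    out = subprocess.run(
        ["avahi-browse", "-pt", service_type],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_browse_output(out)
```

This is only a sketch under the stated format assumption; resolved records (lines starting with `=`) carry extra host/address/port fields and are ignored here.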

It's easy to think something up and keep stroking the thought to a conclusion that looks aesthetically proper to you. It's an entirely different game to actually convince someone else of your argument. Public speaking is a barometer of what you think and believe. These thoughts relate to the fact that I gave a talk at OSIW about search engines.

The talk was controversially called 'Who is scared of Google?', with a sincere belief that search engines are going to keep evolving rather than stop where they are right now, and that one could come up with specialized search engines using FOSS tools. The presentation is available here

The verdict is in: people want a 'normal'-looking search engine. A search engine invokes a mental map that keeps getting reinforced in our minds. Even separating out certain search results into a box carries the risk of users overlooking those links, assuming them to be ads.

In his article, Jakob Nielsen warns against trying to change the search user interface, arguing that search engines should not try to distinguish themselves with fancy front ends.
The article is available here

Grapeshot blitz
Grapeshot is an SDK providing advanced concept-based Bayesian search methods for developers to embed "implicit search" capabilities inside applications. In plain English, a promising search-engine library for developers.

The technology section summarizes various aspects of the library that set it apart from other similar projects. Some interesting features are:

  • Document clustering
  • Sentences or paragraphs can be used as queries
  • Word ranking

One feature that has been highlighted is its small footprint: Grapeshot claims to be a 300K binary.
small footprint
The bar graph shows what Grapeshot claims are the binary sizes of various similar software libraries. The footprint of Lucene is specifically of interest: contrary to the 11+MB claimed by the site, the Lucene core jar file as of version 2.2.0 is only about 526K, and it can be reduced further depending on the user's requirements.

Reducing the binary footprint of Lucene
Although 526K doesn't seem like a large footprint, as an exercise one can reduce it for an embedded or mobile device, the territory Grapeshot claims. To reduce the binary size:

  • Run the Java application of interest with the -verbose:class flag. This prints verbose class-loading details on stdout
  • Run the output through the
    cat * | grep lucene-core | cut -f2 -d' ' | sort -u | tr '.' '/' | awk '{printf "%s.class\n", $1}'
    command (sort -u, rather than bare uniq, also drops duplicates that are not adjacent). This will filter out all the classes from the Lucene library that were loaded at runtime
  • Create a custom jar file by deleting all .class files which are not in the list.

Following this procedure for the demo application bundled with the Lucene core binary, the custom jar was reduced by half, to 262K: less than the Grapeshot binary.

As a side note, this Python script can be used to delete files from the extracted jar.
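A minimal sketch of such a script, assuming the '[Loaded <class> from <source>]' line format that -verbose:class produces (the function names here are my own):

```python
import os
import re

def loaded_classes(verbose_log, jar_name="lucene-core"):
    """Collect the .class paths loaded from the given jar, parsing
    lines shaped like '[Loaded org.foo.Bar from file:/...jar]'."""
    keep = set()
    for line in verbose_log.splitlines():
        m = re.match(r"\[Loaded (\S+) from (\S+)\]", line.strip())
        if m and jar_name in m.group(2):
            # org.foo.Bar -> org/foo/Bar.class, the path inside the jar
            keep.add(m.group(1).replace(".", "/") + ".class")
    return keep

def prune_extracted_jar(root, keep):
    """Delete every .class file under the extracted-jar directory
    `root` that was never loaded at runtime."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            if rel.endswith(".class") and rel not in keep:
                os.remove(path)
```

After pruning, re-jar the directory to get the trimmed custom jar.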

Reading through the Lucene wiki, I came across a nice list of things to try for improving indexing performance. I am listing some of the most striking ones from the page:

  • Flush by RAM usage instead of document count.
    Call writer.ramSizeInBytes() after every added doc, then call flush() when it is using too much RAM. This is especially good if you have small docs or highly variable doc sizes. You need to first set maxBufferedDocs large enough to prevent the writer from flushing based on document count, but don't set it too large either, otherwise you may hit problems. Somewhere around 2-3X your "typical" flush count should be OK.
  • Turn off compound file format.
    Call setUseCompoundFile(false). Building the compound file format takes time during indexing (7-33% in testing). However, note that doing this will greatly increase the number of file descriptors used by indexing and by searching, so you could run out of file descriptors if mergeFactor is also large.
  • Re-use Document and Field instances
    As of Lucene 2.3 (not yet released) there are new setValue(…) methods that allow you to change the value of a Field. This allows you to re-use a single Field instance across many added documents, which can save substantial GC cost.

    It’s best to create a single Document instance, then add multiple Field instances to it, but hold onto these Field instances and re-use them by changing their values for each added document. For example you might have an idField, bodyField, nameField, storedField1, etc. After the document is added, you then directly change the Field values (idField.setValue(…), etc), and then re-add your Document instance.

    Note that you cannot re-use a single Field instance within a Document, and, you should not change a Field’s value until the Document containing that Field has been added to the index. See Field for details.

  • Re-use a single Token instance in your analyzer
    Analyzers often create a new Token for each term in sequence that needs to be indexed from a Field. You can save substantial GC cost by re-using a single Token instance instead.
  • Use the char[] API in Token instead of the String API to represent token Text
    As of Lucene 2.3 (not yet released), a Token can represent its text as a slice into a char array, which saves the GC cost of new’ing and then reclaiming String instances. By re-using a single Token instance and using the char[] API you can avoid new’ing any objects for each term. See Token for details.
  • Shamelessly lifted from here

Time is a son of a bitch. The more you think, the more you realize time is a constraint. Ever so true for search engines: time is used to restrict query bounds. It is used often, yet frequently the way time is stored in indices is botched up.

A frequently used way of storing date and time:
Date: 12-03-2007
Time: 12:40:10
This is great from a viewing point of view, but from a search-engine perspective it is plain old stupid. The search engine would need to do a full identifier match throughout the index to find a particular date and time. Let's assume a case of three dates:

  • 12-04-2007 22:00
  • 12-03-2006 10:00
  • 12-03-2007 22:00

Now, if the search query is looking for 12-03-2007 22:00, it will walk through all the fields to reach the last row. Something along the lines of:

  • 12-04-2007 22:00 not a match
  • 12-03-2006 10:00 not a match
  • 12-03-2007 22:00 a match

The search engine walked about 31 characters of index to conclude that the third row is a match.

Magic of morphological ordering
By changing the date and time a little, to something like YYYYMMDDHHMMSS (the examples below drop the seconds), we get lexicographic ordering and with it a fair bit of speed advantage. So the above dates and times would look like:

  • 200704122200
  • 200603121000
  • 200703122200

Looking at the number of operations for the same query:

  • 200704122200 not a match
  • 200603121000 not a match
  • 200703122200 a match

The search engine walked about 22 characters of index to conclude that the third row is a match. Notice that for the second row it took only 4 characters to conclude a mismatch.
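The walk above can be made concrete with a few lines of code; the counting convention here includes the first mismatching character, which is what makes the second row cost 4 characters:

```python
def chars_examined(candidate, query):
    """Characters a naive left-to-right comparison looks at before
    deciding match or mismatch (the first mismatch is counted too)."""
    n = 0
    for a, b in zip(candidate, query):
        n += 1
        if a != b:
            break
    return n

plain = ["12-04-2007 22:00", "12-03-2006 10:00", "12-03-2007 22:00"]
packed = ["200704122200", "200603121000", "200703122200"]

cost_plain = sum(chars_examined(r, "12-03-2007 22:00") for r in plain)    # 31
cost_packed = sum(chars_examined(r, "200703122200") for r in packed)      # 22
```

The packed form wins because the most significant parts of the timestamp come first, so mismatches are detected early.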

Range Query
A range query is a search query with constrained value bounds. Let's assume we need everything between 12-03-2007 and 12-04-2007. With morphologically ordered date/time we convert the values in the index into integers and check whether a row falls between 20070312000000 and 20070412240000. This operation is far simpler than doing a string match.
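A sketch of the range check under this scheme, converting the post's DD-MM-YYYY dates into YYYYMMDDHHMMSS integer keys (23:59:59 is used here as an inclusive upper bound instead of the 240000 above):

```python
def to_key(date_str, time_str="00:00:00"):
    """Turn 'DD-MM-YYYY' plus 'HH:MM:SS' into a YYYYMMDDHHMMSS integer."""
    d, m, y = date_str.split("-")
    hh, mm, ss = time_str.split(":")
    return int(y + m + d + hh + mm + ss)

lower = to_key("12-03-2007")               # 20070312000000
upper = to_key("12-04-2007", "23:59:59")   # 20070412235959
row = to_key("12-03-2007", "22:00:00")     # 20070312220000

in_range = lower <= row <= upper           # True: two integer comparisons
```

Two integer comparisons per row replace a character-by-character match against every date pattern in the range.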

Index
A design problem many sites deploying a search engine face is choosing between IndexSearcher and MultiSearcher. Lucene exposes its search capability through the Searcher class, which accepts a query and returns a list of Hits, sorted by relevance by default. Searcher is an abstract class, so it is possible to wrangle up a customized concrete Searcher. Two already available Searcher classes are IndexSearcher, which loads a Lucene index from disk, and MultiSearcher, which loads a list of Lucene indices. MultiSearcher does the additional step of running a merge sort after the indices return their results.
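That merge step can be sketched in a few lines; the per-index hit lists and scores below are made up, and each is assumed already sorted by descending relevance, as a Searcher would return it:

```python
import heapq

# Hypothetical per-index hit lists, each sorted by descending score,
# as two IndexSearcher instances might return them.
hits_a = [("doc3", 0.9), ("doc1", 0.5)]
hits_b = [("doc7", 0.8), ("doc2", 0.4)]

# MultiSearcher-style step: merge the already-sorted lists into one
# combined list, still in descending-score order. heapq.merge never
# re-sorts; it only interleaves, which is the cheap part of the job.
merged = list(heapq.merge(hits_a, hits_b, key=lambda hit: -hit[1]))
# [('doc3', 0.9), ('doc7', 0.8), ('doc1', 0.5), ('doc2', 0.4)]
```

The interleaving itself is linear in the number of hits; the real MultiSearcher cost discussed below comes from doing this on every query.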

Why the question of IndexSearcher vs. MultiSearcher
While pondering in a meeting room with nothing but an empty drawing board, it wouldn't take a design team much time to conclude that certain search criteria will be used more than others. The simple thing, then, would be to make small, manageable indices for those specific criteria and a separate index for general search.

Why not to take this decision at the outset

  • Lucene in its default configuration is fast enough for most search requirements. Don't split indices as a premature optimization
  • It is not a good option for distributing indices over many disks; it is easier to put the disks in a RAID 0 configuration
  • It is simpler to maintain a single-index configuration
  • It involves the extra cost of running a merge sort

In some situations it makes sense to distribute indices, because the load on a particular search criterion is too skewed. Even then, using many indices behind a load balancer would be better. MultiSearcher does fulfill a certain niche, but for most it is a premature optimization.
