Goals

  1. Accurate text search
  2. Reuse code
  3. Scalability
  4. Performance, to be compared against Lucene's RAMDirectory

User Input

  1. A region and a list of to-be-indexed fields (or text-searchable fields)
  2. [ Optional ] The Standard Analyzer, or a custom Analyzer implementation, to be used with all the fields in an index
  3. [ Optional ] Field types. A string can be Text or String in Lucene; the two behave differently (Text is tokenized by the analyzer, String is matched as a single exact token), as illustrated in the sketch below.
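
A minimal Lucene snippet illustrating the Text vs. String distinction (the field names and values here are hypothetical):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;

class FieldTypeExample {
  static Document build() {
    Document doc = new Document();
    // TextField is analyzed: the value is tokenized into individual searchable terms.
    doc.add(new TextField("description", "Apache Geode in-memory data grid", Store.NO));
    // StringField is indexed as a single token: only exact matches will hit it.
    doc.add(new StringField("status", "ACTIVE", Store.YES));
    return doc;
  }
}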

Index Persistence

Lucene context

  1. An index batch write (IndexWriter.close()) results in the creation of a new set of segment files. This can trigger a segment merge operation, which can be resource intensive (think compaction in an LSM tree).
  2. A large number of segments increases search latency.
  3. Lucene buffers documents in memory (writer.setMaxBufferedDocs and writer.setRAMBufferSizeMB). A larger RAM buffer produces larger segments, which means less merging later (see the sketch after this list).
  4. Searchers will not see any changes until the IndexWriter is closed.
  5. Optimizations
    1. If a large amount of data is to be indexed, it is better to build N smaller indexes and combine them using writer.addIndexesNoOptimize.
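
A minimal sketch of buffered indexing, assuming a recent Lucene release where the buffer settings moved from IndexWriter onto IndexWriterConfig; RAMDirectory stands in for a GeodeFSDirectory here:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

class BufferedIndexing {
  public static void main(String[] args) throws Exception {
    Directory dir = new RAMDirectory(); // stand-in for a GeodeFSDirectory
    IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
    cfg.setRAMBufferSizeMB(64.0); // larger buffer -> larger segments -> less merging later
    try (IndexWriter writer = new IndexWriter(dir, cfg)) {
      Document doc = new Document();
      doc.add(new TextField("body", "some searchable text", Store.NO));
      writer.addDocument(doc); // buffered in memory; searchers do not see it yet
      writer.commit();         // flushes a new set of segment files
    }                          // close() also commits, making the changes visible
  }
}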

Approach

[Figure: write path. PUTs -> Cache -> Async Queue -> Lucene Indexer -> GeodeFSDirectory; Search reads the resulting index]

[Figure: cluster view. User Cache PUTs are routed to PR 1 and PR 2, each hosting its own FSDirectory (FSDirectoryPR1, FSDirectoryPR2) and a co-located index (indexPR1, indexPR2)]

Limitations

Text Search

Option - 1: Custom Parser Aggregator

A search request is intercepted by a custom ParserAggregator, which distributes the query to all PRs. Each PR routes the request to its local Lucene index and returns its results to the ParserAggregator, which reorders and trims the aggregated result set before returning it to the user (a scatter-gather sketch follows the figure below).
[Figure: the ParserAggregator fans the search out to LucenePR1/FSDirectoryPR1 and LucenePR2/FSDirectoryPR2, each over its co-located index (indexPR1, indexPR2)]
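
A minimal scatter-gather sketch of the ParserAggregator idea. Hit, PrSearcher, and searchLocal are hypothetical shapes for illustration, not Geode or Lucene APIs:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class Hit {
  final String docId;
  final float score;
  Hit(String docId, float score) { this.docId = docId; this.score = score; }
}

interface PrSearcher {
  List<Hit> searchLocal(String query, int limit); // runs against one PR's local index
}

class ParserAggregator {
  private final List<PrSearcher> prs;
  private final ExecutorService pool = Executors.newCachedThreadPool();

  ParserAggregator(List<PrSearcher> prs) { this.prs = prs; }

  // Scatter the query to every PR, then reorder and trim to the top k hits.
  List<Hit> search(String query, int k) throws Exception {
    List<Future<List<Hit>>> futures = new ArrayList<>();
    for (PrSearcher pr : prs) {
      futures.add(pool.submit(() -> pr.searchLocal(query, k)));
    }
    List<Hit> merged = new ArrayList<>();
    for (Future<List<Hit>> f : futures) {
      merged.addAll(f.get()); // gather each PR's partial result
    }
    merged.sort(Comparator.comparingDouble((Hit h) -> h.score).reversed());
    return merged.subList(0, Math.min(k, merged.size()));
  }
}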

Advantages

  1. Scalability
  2. Performance

Limitations

  1. High maintenance
  2. Complexity

Option - 2: Distributed FS Directory implementation

Here the search request is handled by Lucene itself, so Lucene's own parser and aggregator are reused. A DistributedFSDirectory provides a unified view to Lucene: Lucene asks the DistributedFSDirectory for index chunks, and the DistributedFSDirectory fetches and aggregates those chunks from whichever PR hosts them. This is similar in behavior to a Cache Client, which reaches different PRs and presents a unified data view to the user (a chunk-reading sketch follows the figure below).
[Figure: LucenePR1 reads through a DistributedFSDirectory that aggregates chunks from FSDirectoryPR1 and FSDirectoryPR2 over the co-located indexes (indexPR1, indexPR2)]
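
A minimal sketch of the chunk-reading side, assuming index files are stored as fixed-size byte[] chunks in the PRs. ChunkFetcher and the 1 MB chunk size are illustrative assumptions, not Geode APIs:

import java.io.IOException;
import org.apache.lucene.store.IndexInput;

interface ChunkFetcher {
  byte[] fetch(String fileName, int chunkId); // may be a remote call to the hosting PR
}

class ChunkedIndexInput extends IndexInput {
  private static final int CHUNK_SIZE = 1 << 20; // assumed 1 MB chunks
  private final ChunkFetcher fetcher;
  private final String fileName;
  private final long length;
  private long pos;
  private byte[] chunk;     // the chunk currently cached locally
  private int chunkId = -1;

  ChunkedIndexInput(String fileName, long length, ChunkFetcher fetcher) {
    super("ChunkedIndexInput(" + fileName + ")");
    this.fileName = fileName;
    this.length = length;
    this.fetcher = fetcher;
  }

  @Override
  public byte readByte() throws IOException {
    int id = (int) (pos / CHUNK_SIZE);
    if (id != chunkId) {          // fault in the chunk holding the next byte
      chunk = fetcher.fetch(fileName, id);
      chunkId = id;
    }
    return chunk[(int) (pos++ % CHUNK_SIZE)];
  }

  @Override
  public void readBytes(byte[] b, int offset, int len) throws IOException {
    for (int i = 0; i < len; i++) b[offset + i] = readByte(); // simple, unoptimized
  }

  @Override public long getFilePointer() { return pos; }
  @Override public void seek(long p) { pos = p; }
  @Override public long length() { return length; }
  @Override public void close() {}
  @Override public IndexInput slice(String desc, long off, long len) {
    throw new UnsupportedOperationException("not needed for this sketch");
  }
}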

Advantages

  1. Low maintenance
  2. Full API compliance
  3. Accurate results

Limitations

  1. Performance
  2. Memory requirement
  3. Network overhead

Option - 3: Embedded Solr

Here the search request is handled by Solr, which distributes queries to Solr agents and uses its own aggregator. SolrCloud solves some issues related to index distribution, but those issues are not relevant if the index is managed in the Cache. So Solr's *Distributed Search* seems like a promising solution.

Before SolrCloud, Solr supported Distributed Search, which allowed one query to be executed across multiple shards, so the query was executed against the entire Solr index and no documents were missed from the search results. Splitting the core across shards is therefore not exclusively a SolrCloud concept (a SolrJ sketch of this shard-addressed style follows the list below). There were, however, several problems with the distributed approach that necessitated improvement with SolrCloud:

  1. Splitting of the core into shards was somewhat manual.
  2. There was no support for distributed indexing, which meant that you needed to explicitly send documents to a specific shard; Solr couldn't figure out on its own what shards to send documents to.
  3. There was no load balancing or failover, so if you got a high number of queries, you needed to figure out where to send them, and if one shard died it was just gone.
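
A minimal SolrJ sketch of pre-SolrCloud Distributed Search, where the client names the shards explicitly via the shards parameter; the hosts and core name here are hypothetical:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

class DistributedSearchExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient solr =
        new HttpSolrClient.Builder("http://host1:8983/solr/geode").build()) {
      SolrQuery q = new SolrQuery("body:geode");
      // Pre-SolrCloud: the caller lists every shard the query must cover.
      q.set("shards", "host1:8983/solr/geode,host2:8983/solr/geode");
      QueryResponse rsp = solr.query(q);
      rsp.getResults().forEach(d -> System.out.println(d.getFieldValue("id")));
    }
  }
}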

 

[Figure: a SolrServer coordinator fans the search out to SolrPR1/FSDirectoryPR1 and SolrPR2/FSDirectoryPR2 over the co-located indexes (indexPR1, indexPR2)]


Advantages

  1. Performance
  2. Full API compliance
  3. Accurate results

Limitations

  1. Solr instance management complexity
  2. Additional points of failure

Option - 4: IndexWriter and MultiReader implementation

A custom implementation of IndexWriter and IndexReader could be provided as an alternative to the FSDirectory implementation. FSDirectory is a file-like interface: Lucene constructs a file and hands it to FSDirectory for writes and reads, and Lucene manages file merges. The directory implementation has no visibility into the contents of the file. The IndexWriter approach sits one layer above FSDirectory: Lucene interacts with the IndexReader/IndexWriter layer at document and term granularity. The following are the important classes and methods to look at:

  1. org.apache.lucene.index.MultiReader: An IndexReader which reads multiple indexes, appending their content.

    1. termDocs(Term term): Returns an enumeration of all the documents which contain term.

    2. termPositions: Returns an enumeration of all the documents which contain term. For each document, in addition to the document number and frequency of the term in that document, a list of all of the ordinal positions of the term in the document is available.

  2. org.apache.lucene.index.IndexWriter

    1. updateDocument, addDocument

An IndexWriter implementation can control how terms are distributed and persisted. For a distributed search, a MultiReader can distribute the query to shard-based sub-readers, and each sub-reader streams filtered results from its shard to the query coordinator.
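
A minimal sketch of opening shard-based sub-readers behind a single MultiReader; the shard Directory array is assumed to come from the per-PR stores:

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;

class ShardedSearcher {
  // One sub-reader per shard; MultiReader appends their content into one logical index.
  static IndexSearcher open(Directory[] shards) throws Exception {
    IndexReader[] subs = new IndexReader[shards.length];
    for (int i = 0; i < shards.length; i++) {
      subs[i] = DirectoryReader.open(shards[i]);
    }
    return new IndexSearcher(new MultiReader(subs));
  }
}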

A map of the form <term, map<docId, list<position>>> is needed to support various Lucene functions.
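
A minimal in-memory sketch of that structure, assuming String terms, int doc ids, and int positions:

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

class Postings {
  // term -> (docId -> ordinal positions of the term in that document)
  private final Map<String, Map<Integer, List<Integer>>> terms = new HashMap<>();

  // Record one occurrence of a term at the given position in a document.
  void add(String term, int docId, int position) {
    terms.computeIfAbsent(term, t -> new HashMap<>())
         .computeIfAbsent(docId, d -> new ArrayList<>())
         .add(position);
  }

  // All documents containing the term (cf. MultiReader.termDocs).
  Set<Integer> termDocs(String term) {
    return terms.getOrDefault(term, Collections.emptyMap()).keySet();
  }

  // Positions of the term within one document (cf. termPositions).
  List<Integer> termPositions(String term, int docId) {
    return terms.getOrDefault(term, Collections.emptyMap())
                .getOrDefault(docId, Collections.emptyList());
  }
}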

Limitations

  1. A popular term will have a large value (a map of documents and the positions of the term in each document). Managing such a large value needs to be efficient.

Work In Progress 

  1. How many active segment files are maintained per index? It seems one large file remains after a merge. If so, how can a segment be chunked and co-located with the region?

Faceting

Lucene / Solr support flat, JSON, and API-based interfaces for faceting:

  • API

// Create Readers
DirectoryReader indexReader = DirectoryReader.open(indexDir);
IndexSearcher searcher = new IndexSearcher(indexReader);
TaxonomyReader taxoReader = new DirectoryTaxonomyReader(taxoDir);

// Create counters along dimensions
FacetSearchParams fsp = new FacetSearchParams(new CountFacetRequest(new CategoryPath("Author"), 10));

// Aggregates the facet counts
FacetsCollector fc = FacetsCollector.create(fsp, searcher.getIndexReader(), taxoReader);

// Search, passing the FacetsCollector as the Collector so facet counts are aggregated
searcher.search(...);

// Retrieve results
List<FacetResult> facetResults = fc.getFacetResults();

  • Solr JSON query
{
  high_popularity : {
    type : query,
    q : "popularity:[8 TO 10]",
    facet : { average_price : "avg(price)" }
  }
}
 
Example response

"high_popularity": {
  "count": 147,
  "average_price": 74.25
}

A range facet query:
{
  prices : {
    type : range,
    field : price,
    start : 0,
    end : 40,
    gap : 20
  }
}
"prices":{
  "buckets":[
    {
      "val":0.0,  // the bucket value represents the start of each range.  This bucket covers 0-20
      "count":5},
    {
      "val":20.0,
      "count":1}
  ]
}

 
