Apache Solr Documentation

6.5 Ref Guide (PDF Download)


*** As of June 2017, the latest Solr Ref Guide is located at https://lucene.apache.org/solr/guide ***


In addition to the main query parsers discussed earlier, there are several other query parsers that can be used instead of or in conjunction with the main parsers for specific purposes. This section details the other parsers, and gives examples for how they might be used.

Many of these parsers are expressed the same way as Local Parameters in Queries.

Query parsers discussed in this section:

Block Join Query Parsers

There are two query parsers that support block joins. These parsers allow indexing and searching for relational content that has been indexed as nested documents.

The example usage of the query parsers below assumes these two documents and each of their child documents have been indexed:
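For reference, the nested example documents look roughly like the following (a reconstruction; the exact ids and field values are illustrative, but the content_type field marking parent documents is the one used in the queries below):

```
<add>
  <doc>
    <field name="id">1</field>
    <field name="title">Solr has block join support</field>
    <field name="content_type">parentDocument</field>
    <doc>
      <field name="id">2</field>
      <field name="comments">SolrCloud supports it!</field>
    </doc>
  </doc>
  <doc>
    <field name="id">3</field>
    <field name="title">New Lucene and Solr release</field>
    <field name="content_type">parentDocument</field>
    <doc>
      <field name="id">4</field>
      <field name="comments">Lots of new features</field>
    </doc>
  </doc>
</add>
```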

Block Join Children Query Parser

This parser takes a query that matches some parent documents and returns their children. The syntax for this parser is: q={!child of=<allParents>}<someParents>. The parameter allParents is a filter that matches only parent documents; here you would define the field and value that you used to identify all parent documents. The parameter someParents identifies a query that will match some of the parent documents. The output is the children.

Using the example documents above, we can construct a query such as q={!child of="content_type:parentDocument"}title:lucene. We only get one document in response:
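Assuming the Ref Guide's canonical example data (where the parent document whose title contains "Lucene" has a single child comment document), the response would look something like:

```
{"response":{"numFound":1,"start":0,"docs":[
    {"id":"4", "comments":"Lots of new features"}
]}}
```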

Note that the query for someParents should match only parent documents passed by allParents, or you may get this exception: Parent query must not match any docs besides parent filter. Combine them as must (+) and must-not (-) clauses to find a problem doc. In older versions the message is: Parent query yields document which is not matched by parents filter. As the message suggests, you can search for q=+(someParents) -(allParents) to find the cause.

Block Join Parent Query Parser

This parser takes a query that matches child documents and returns their parents. The syntax for this parser is similar: q={!parent which=<allParents>}<someChildren>. The parameter allParents is a filter that matches only parent documents; here you would define the field and value that you used to identify all parent documents. The parameter someChildren is a query that matches some or all of the child documents. Note that the query for someChildren should match only child documents, or you may get this exception: Child query must not match same docs with parent filter. Combine them as must clauses (+) to find a problem doc. In older versions the message is: child query must only match non-parent docs. As the message suggests, you can search for q=+(allParents) +(someChildren) to find the cause.

Again using the example documents above, we can construct a query such as q={!parent which="content_type:parentDocument"}comments:SolrCloud. We get this document in response:
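Assuming the Ref Guide's canonical example data, the parent of the child document whose comment mentions SolrCloud is returned, along the lines of:

```
{"response":{"numFound":1,"start":0,"docs":[
    {"id":"1",
      "title":"Solr has block join support",
      "content_type":"parentDocument"}
]}}
```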

Using which

 A common mistake is to try to filter parents with a which filter, as in this bad example:

q={!parent which="title:join"}comments:SolrCloud 

Instead, you should use a sibling mandatory clause as a filter:

q= +title:join +{!parent which="content_type:parentDocument"}comments:SolrCloud



You can optionally use the score local parameter to return scores of the subordinate query. The values for this parameter define the type of aggregation: avg (average), max (maximum), min (minimum), or total (sum). The implicit default is none, which returns 0.0.
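For example (a sketch), to score each parent by the maximum score of its matching children:

```
q={!parent which="content_type:parentDocument" score=max}comments:SolrCloud
```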

Boost Query Parser

BoostQParser extends the QParserPlugin and creates a boosted query from the input value. The main value is the query to be boosted. Parameter b is the function query to use as the boost. The query to be boosted may be of any type.


Creates a query "foo" which is boosted (scores are multiplied) by the function query log(popularity):

Creates a query "foo" which is boosted by the date boosting function referenced in ReciprocalFloatFunction:

Collapsing Query Parser

The CollapsingQParser is really a post filter that provides more performant field collapsing than Solr's standard approach when the number of distinct groups in the result set is high. This parser collapses the result set to a single document per group before it forwards the result set to the rest of the search components. So all downstream components (faceting, highlighting, etc...) will work with the collapsed result set.

Details about using the CollapsingQParser can be found in the section Collapse and Expand Results.

Complex Phrase Query Parser

The ComplexPhraseQParser provides support for wildcards, ORs, etc., inside phrase queries using Lucene's ComplexPhraseQueryParser. Under the covers, this query parser makes use of the Span group of queries, e.g., spanNear, spanOr, etc., and is subject to the same limitations as that family of parsers.




inOrder: Set to true to force phrase queries to match terms in the order specified. Default: true

df: The default search field.



A mix of ordered and unordered complex phrase queries:
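For instance (a sketch, assuming a name field):

```
q={!complexphrase inOrder=true}name:"Jo* Smyth*"

q={!complexphrase inOrder=false df=name}"(john jon jonathan~) peters*"
```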


Performance is sensitive to the number of unique terms that are associated with a pattern. For instance, searching for "a*" will form a large OR clause (technically a SpanOr with many terms) for all of the terms in your index for the indicated field that start with the single letter 'a'. It may be prudent to restrict wildcards to at least two or preferably three letters as a prefix. Allowing very short prefixes may result in too many low-quality documents being returned.

Notice that it also supports leading wildcards such as "*a", with corresponding performance implications. Applying ReversedWildcardFilterFactory in index-time analysis is usually a good idea in that case.


You may need to increase maxBooleanClauses in solrconfig.xml as a result of the term expansion described above:
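For example (the value 4096 is illustrative; the default is 1024):

```
<maxBooleanClauses>4096</maxBooleanClauses>
```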

This property is described in more detail in the section Query Sizing and Warming.


It is recommended not to use stopword elimination with this query parser. Let's say we add the, up, and to to stopwords.txt for your collection, and index a document containing the text "Stores up to 15,000 songs, 25,000 photos, or 150 hours of video" in a field named "features".

While the query below does not use this parser:

the document is returned. However, a query that does use the Complex Phrase Query Parser, as in this query:

does not return that document because SpanNearQuery has no good way to handle stopwords in a way analogous to PhraseQuery. If you must remove stopwords for your use case, use a custom filter factory or perhaps a customized synonyms filter that reduces given stopwords to some impossible token.
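For concreteness, the two queries discussed above might look like this (a sketch):

```
q=features:"Stores up to 15,000 songs, 25,000 photos, or 150 hours of video"

q={!complexphrase}features:"Stores up to 15,000 songs, 25,000 photos, or 150 hours of video"
```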


Special care has to be taken when escaping: clauses between double quotes (usually the whole query) are parsed twice, so these parts have to be escaped twice, e.g., "foo\\: bar\\^".

Field Query Parser

The FieldQParser extends the QParserPlugin and creates a field query from the input value, applying text analysis and constructing a phrase query if appropriate. The parameter f is the field to be queried.
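A sketch of such a query:

```
{!field f=myfield}Foo Bar
```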


This example creates a phrase query with "foo" followed by "bar" (assuming myfield is a text field with an analyzer that splits on whitespace and lowercases terms). This is generally equivalent to the Lucene query parser expression myfield:"Foo Bar".

Function Query Parser

The FunctionQParser extends the QParserPlugin and creates a function query from the input value. This is only one way to use function queries in Solr; for another, more integrated, approach, see the section on Function Queries.
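For example (a sketch):

```
{!func}log(foo)
```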


Function Range Query Parser

The FunctionRangeQParser extends the QParserPlugin and creates a range query over a function. This is also referred to as frange, as seen in the examples below.

Other parameters:




l: The lower bound. Optional.

u: The upper bound. Optional.

incl: Include the lower bound. true/false. Optional, default=true.

incu: Include the upper bound. true/false. Optional, default=true.
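Two illustrative frange queries (sketches; the field and function arguments are hypothetical):

```
fq={!frange l=1000 u=50000}myfield

fq={!frange l=0 u=2.2}sum(user_ranking,editor_ranking)
```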


Both of these examples restrict the results by a range of values found in a declared field or a function query. In the second example, we're doing a sum calculation, and then specifying that only values between 0 and 2.2 should be returned to the user.

For more information about range queries over functions, see Yonik Seeley's introductory blog post Ranges over Functions in Solr 1.4, hosted at SearchHub.org.

Graph Query Parser

The graph query parser does a breadth-first, cyclic-aware graph traversal of all documents that are "reachable" from a starting set of root documents identified by a wrapped query. The graph is built according to linkages between documents based on the terms found in the "from" and "to" fields that you specify as part of the query.





to: The field name of matching documents to inspect to identify outgoing edges for graph traversal. Defaults to edge_ids.

from: The field name of candidate documents to inspect to identify incoming graph edges. Defaults to node_id.

traversalFilter: An optional query that can be supplied to limit the scope of documents that are traversed.

maxDepth: Integer specifying how deep the breadth-first search of the graph should go, beginning with the initial query. Defaults to -1 (unlimited).

returnRoot: Boolean to indicate if the documents that matched the original query (to define the starting points for the graph) should be included in the final results. Defaults to true.

returnOnlyLeaf: Boolean that indicates if the results of the query should be filtered so that only documents with no outgoing edges are returned. Defaults to false.

useAutn: Boolean that indicates if an Automaton should be compiled for each iteration of the breadth-first search, which may be faster for some graphs. Defaults to false.


The graph parser only works in single node Solr installations, or with SolrCloud collections that use exactly 1 shard.


To understand how the graph parser works, consider the following Directed Cyclic Graph, containing 8 nodes (A to H) and 9 edges (1 to 9):



One way to model this graph as Solr documents would be to create one document per node, with multivalued fields identifying the incoming and outgoing edges for each node:

With the model shown above, the following query demonstrates a simple traversal of all nodes reachable from node A:
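A sketch of such a request (my_graph is a hypothetical collection using the in_edge/out_edge model above):

```
http://localhost:8983/solr/my_graph/query?fl=id&q={!graph from=in_edge to=out_edge}id:A
```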

We can also use the traversalFilter to limit the graph traversal to only nodes with a maximum value of 15 in the foo field. In this case that means D, E, and F are excluded – F has a value of foo=11, but it is unreachable because the traversal skipped D:
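For example (a sketch):

```
http://localhost:8983/solr/my_graph/query?fl=id&q={!graph from=in_edge to=out_edge traversalFilter='foo:[* TO 15]'}id:A
```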

The examples shown so far have all used a query for a single document ("id:A") as the root node for the graph traversal, but any query can be used to identify multiple documents to use as root nodes. The next example demonstrates using the maxDepth param to find all nodes that are at most one edge away from a root node with a value in the foo field less than or equal to 10:
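A sketch of such a query:

```
http://localhost:8983/solr/my_graph/query?fl=id&q={!graph from=in_edge to=out_edge maxDepth=1}foo:[* TO 10]
```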


Simplified Models

The document and field modelling used in the above examples enumerated all of the outgoing and incoming edges for each node explicitly, to help demonstrate exactly how the "from" and "to" params work, and to give you an idea of what is possible. With multiple sets of fields like these for identifying incoming and outgoing edges, it's possible to model many independent Directed Graphs that contain some or all of the documents in your collection.

But in many cases it can also be possible to drastically simplify the model used.  

For example: the same graph shown in the diagram above can be modelled by Solr documents that represent each node and know only the ids of the nodes they link to, without knowing anything about the incoming links:

With this alternative document model, all of the same queries demonstrated above can still be executed, simply by changing the "from" param to replace the "in_edge" field with the "id" field:
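For example (a sketch):

```
http://localhost:8983/solr/my_graph/query?fl=id&q={!graph from=id to=out_edge}id:A
```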

Join Query Parser

JoinQParser extends the QParserPlugin. It allows normalizing relationships between documents with a join operation. This is different from the concept of a join in a relational database because no information is being truly joined. An appropriate SQL analogy would be an "inner query".


Find all products containing the word "ipod", join them against manufacturer docs and return the list of manufacturers:
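A sketch of such a query (manu_id and id follow the Ref Guide's example schema):

```
q={!join from=manu_id to=id}ipod
```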

Find all manufacturer docs named "belkin", join them against product docs, and filter the list to only products with a price less than $12:
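For example (a sketch; compName_s is the example manufacturer-name field):

```
q={!join from=id to=manu_id}compName_s:Belkin&fq=price:[* TO 12]
```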

The join operation is done on a term basis, so the "from" and "to" fields must use compatible field types.  For example: joining between a StrField and a TrieIntField will not work, likewise joining between a StrField and a TextField that uses LowerCaseFilterFactory will only work for values that are already lower cased in the string field.


You can optionally use the score parameter to return scores of the subordinate query. The values for this parameter define the type of aggregation: avg (average), max (maximum), min (minimum), total (sum), or none.

Score parameter and single value numerics

Specifying the score local parameter switches the join algorithm. This may have performance implications on large indices, but more importantly, this algorithm will not work for single-valued numeric fields starting from Solr 7.0. Users are encouraged to change such field types to string and rebuild their indexes during migration.

Joining Across Collections

You can also specify a fromIndex parameter to join with a field from another core or collection. If running in SolrCloud mode, then the collection specified in the fromIndex parameter must have a single shard and a replica on all Solr nodes where the collection you're joining to has a replica.

Let's consider an example where you want to use a Solr join query to filter movies by directors that have won an Oscar. Specifically, imagine we have two collections with the following fields:

movies: id, title, director_id, ...

movie_directors: id, name, has_oscar, ...

To filter movies by directors that have won an Oscar using a Solr join on the movie_directors collection, you can send the following filter query to the movies collection:
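Such a filter might look like this (a sketch using the fields listed above):

```
fq={!join from=id fromIndex=movie_directors to=director_id}has_oscar:true
```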

Notice that the query criteria of the filter (has_oscar:true) is based on a field in the collection specified using fromIndex. Keep in mind that you cannot return fields from the fromIndex collection using join queries, you can only use the fields for filtering results in the "to" collection (movies).

Next, let's understand how these collections need to be deployed in your cluster. Imagine the movies collection is deployed to a four node SolrCloud cluster and has two shards with a replication factor of two. Specifically, the movies collection has replicas on the following four nodes:

node 1: movies_shard1_replica1

node 2: movies_shard1_replica2

node 3: movies_shard2_replica1

node 4: movies_shard2_replica2

To use the movie_directors collection in Solr join queries with the movies collection, it needs to have a replica on each of the four nodes. In other words, movie_directors must have one shard and replication factor of four:

node 1: movie_directors_shard1_replica1

node 2: movie_directors_shard1_replica2

node 3: movie_directors_shard1_replica3

node 4: movie_directors_shard1_replica4

At query time, the JoinQParser will access the local replica of the movie_directors collection to perform the join. If a local replica is not available or active, then the query will fail. At this point, it should be clear that since you're limited to a single shard and the data must be replicated across all nodes where it is needed, this approach works better with smaller data sets where there is a one-to-many relationship between the from collection and the to collection. Moreover, if you add a replica to the to collection, then you also need to add a replica for the from collection.

For more information about join queries, see the Solr Wiki page on Joins. Erick Erickson has also written a blog post about join performance called Solr and Joins, hosted by SearchHub.org.

Lucene Query Parser

The LuceneQParser extends the QParserPlugin by parsing Solr's variant on the Lucene QueryParser syntax. This is effectively the same query parser that is used in Lucene. It supports the parameters q.op, the default operator ("OR" or "AND"), and df, the default field name.
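Example (a sketch):

```
{!lucene q.op=AND df=text}myfield:foo +bar -baz
```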


For more information about the syntax for the Lucene Query Parser, see the Classic QueryParser javadocs.

Learning To Rank Query Parser

The LTRQParserPlugin is a special purpose parser for reranking the top results of a simple query using a more complex ranking query which is based on a machine learnt model.


Details about using the LTRQParserPlugin can be found in the Learning To Rank section.

Max Score Query Parser

The MaxScoreQParser extends the LuceneQParser but returns the Max score from the clauses. It does this by wrapping all SHOULD clauses in a DisjunctionMaxQuery with tie=1.0. Any MUST or PROHIBITED clauses are passed through as-is. Non-boolean queries, e.g., NumericRange, fall through to the LuceneQParser parser behavior.
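For example (a sketch):

```
{!maxscore tie=0.01}C OR (D AND E)
```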


More Like This Query Parser

MLTQParser enables retrieving documents that are similar to a given document. It uses Lucene's existing MoreLikeThis logic and also works in SolrCloud mode. The document identifier used here is the unique id value and not the Lucene internal document id. The list of returned documents excludes the queried document.

This query parser takes the following parameters:




qf: Specifies the fields to use for similarity.

mintf: Specifies the Minimum Term Frequency, the frequency below which terms will be ignored in the source document.

mindf: Specifies the Minimum Document Frequency, the frequency at which words will be ignored when they do not occur in at least this many documents.

maxdf: Specifies the Maximum Document Frequency, the frequency at which words will be ignored when they occur in more than this many documents.

minwl: Sets the minimum word length below which words will be ignored.

maxwl: Sets the maximum word length above which words will be ignored.

maxqt: Sets the maximum number of query terms that will be included in any generated query.

maxntp: Sets the maximum number of tokens to parse in each example document field that is not stored with TermVector support.

boost: Specifies if the query will be boosted by the interesting term relevance. It can be either "true" or "false".


Find documents like the document with id=1, using the name field for similarity.
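A sketch of such a query:

```
{!mlt qf=name}1
```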

Adding more constraints to what qualifies as similar using mintf and mindf.
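For example (a sketch):

```
{!mlt qf=name mintf=2 mindf=3}1
```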

Nested Query Parser

The NestedParser extends the QParserPlugin and creates a nested query, with the ability for that query to redefine its type via local parameters. This is useful in specifying defaults in configuration and letting clients indirectly reference them.


If the q1 parameter is price, then the query would be a function query on the price field. If the q1 parameter is {!lucene}inStock:true then a term query is created from the Lucene syntax string that matches documents with inStock=true. These parameters would be defined in solrconfig.xml, in the defaults section:
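For instance (a sketch): with a main query of q={!query defType=func v=$q1}, the indirection parameter could be supplied in solrconfig.xml as:

```
<lst name="defaults">
  <str name="q1">{!lucene}inStock:true</str>
</lst>
```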

For more information about the possibilities of nested queries, see Yonik Seeley's blog post Nested Queries in Solr, hosted by SearchHub.org.

Old Lucene Query Parser

OldLuceneQParser extends the QParserPlugin by parsing Solr's variant of Lucene's QueryParser syntax, including the deprecated sort specification after the query.
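Example (a sketch, using the deprecated trailing sort syntax):

```
{!lucenePlusSort}myfield:foo +bar -baz;price asc
```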


Payload Query Parsers

These query parsers utilize payloads encoded on terms during indexing. The main query, for both of these parsers, is parsed straightforwardly from the field type's query analysis into a SpanQuery. The generated SpanQuery will be either a SpanTermQuery or an ordered, zero-slop SpanNearQuery, depending on how many tokens are emitted. Payloads can be encoded on terms using either the DelimitedPayloadTokenFilter or the NumericPayloadTokenFilter. The payload-using parsers are:

  • PayloadScoreQParser
  • PayloadCheckQParser

Payload Score Parser

PayloadScoreQParser incorporates each matching term's numeric (integer or float) payloads into the scores.  

This parser accepts the following parameters:




f: Field to use. Required.

func: Payload function: min, max, or average. Required.

includeSpanScore: If true, multiplies the computed payload factor by the score of the original query. If false, the computed payload factor is the score. Default: false.
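Example (a sketch; my_field_dpf stands in for a field indexed with delimited float payloads):

```
{!payload_score f=my_field_dpf v=some_term func=max}
```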


Payload Check Parser

PayloadCheckQParser only matches when the matching terms also have the specified payloads.

This parser accepts the following parameters:




f: Field to use. Required.

payloads: A space-separated list of payloads that must match the query terms. Required.

Each specified payload will be encoded using the encoder determined from the field type and encoded accordingly for matching.

DelimitedPayloadTokenFilter 'identity' encoded payloads also work here, as well as float and integer encoded ones.
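Example (a sketch, assuming a field indexed with part-of-speech payloads):

```
{!payload_check f=words_dps payloads="VERB NOUN"}searching stemming
```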

Prefix Query Parser

PrefixQParser extends the QParserPlugin by creating a prefix query from the input value. Currently no analysis or value transformation is done to create this prefix query. The parameter is f, the field. The string after the prefix declaration is treated as a wildcard query.
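For example (a sketch):

```
{!prefix f=myfield}foo
```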


This would be generally equivalent to the Lucene query parser expression myfield:foo*.

Raw Query Parser

RawQParser extends the QParserPlugin by creating a term query from the input value without any text analysis or transformation. This is useful in debugging, or when raw terms are returned from the terms component (this is not the default). The only parameter is f, which defines the field to search.
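For example (a sketch):

```
{!raw f=myfield}Foo Bar
```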


This example constructs the query: TermQuery(Term("myfield","Foo Bar")).

For easy filter construction to drill down in faceting, the TermQParserPlugin is recommended. For full analysis on all fields, including text fields, you may want to use the FieldQParserPlugin.

Re-Ranking Query Parser

The ReRankQParserPlugin is a special purpose parser for Re-Ranking the top results of a simple query using a more complex ranking query.

Details about using the ReRankQParserPlugin can be found in the Query Re-Ranking section.

Simple Query Parser

The Simple query parser in Solr is based on Lucene's SimpleQueryParser. This query parser is designed to allow users to enter queries however they want, and it will do its best to interpret the query and return results.

This parser takes the following parameters:


q.operators: Comma-separated list of names of parsing operators to enable. By default, all operations are enabled, and this parameter can be used to effectively disable specific operators as needed, by excluding them from the list. Passing an empty string with this parameter disables all operators.

AND (+): Specifies AND. Example query: token1+token2

OR (|): Specifies OR. Example query: token1|token2

NOT (-): Specifies NOT. Example query: -token3

PREFIX (*): Specifies a prefix query. Example query: term*

PHRASE ("): Creates a phrase. Example query: "term1 term2"

PRECEDENCE ( ( ) ): Specifies precedence; tokens inside the parentheses will be analyzed first. Otherwise, normal order is left to right. Example query: token1 + (token2 | token3)

ESCAPE (\): Put it in front of operators to match them literally. Example query: C\+\+

WHITESPACE (space or [\r\t\n]): Delimits tokens on whitespace. If not enabled, whitespace splitting will not be performed prior to analysis – usually most desirable. Not splitting whitespace is a unique feature of this parser that enables multi-word synonyms to work. However, it probably actually won't unless synonyms are configured to normalize instead of expand to all that match a given synonym. Such a configuration requires normalizing synonyms at both index time and query time. Solr's analysis screen can help here. Example query: term1 term2



FUZZY (~N): At the end of terms, specifies a fuzzy query. "N" is optional and may be either "1" or "2" (the default).

NEAR (~N): At the end of phrases, specifies a NEAR query. Example query: "term1 term2"~5

q.op: Defines the default operator to use if none is defined by the user. Allowed values are AND and OR. OR is used if none is specified.

qf: A list of query fields and boosts to use when building the query.

df: Defines the default field if none is defined in the Schema, or overrides the default field if it is already defined.

Any errors in syntax are ignored and the query parser will interpret queries as best it can. However, this can lead to odd results in some cases.

Spatial Query Parsers

There are two spatial QParsers in Solr: geofilt and bbox.  But there are other ways to query spatially: using the frange parser with a distance function, using the standard (lucene) query parser with the range syntax to pick the corners of a rectangle, or with RPT and BBoxField you can use the standard query parser but use a special syntax within quotes that allows you to pick the spatial predicate.

All these things are documented further in the section Spatial Search.

Surround Query Parser

The SurroundQParser enables the Surround query syntax, which provides proximity search functionality. There are two positional operators: w creates an ordered span query and n creates an unordered one. Both operators take a numeric value to indicate distance between two terms. The default is 1, and the maximum is 99.

Note that the query string is not analyzed in any way.
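For example (a sketch):

```
{!surround}3w(foo, bar)
```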


This example would find documents where the terms "foo" and "bar" were no more than 3 terms away from each other (i.e., no more than 2 terms between them).

This query parser will also accept boolean operators (AND, OR, and NOT, in either upper- or lowercase), wildcards, quoting for phrase searches, and boosting. The w and n operators can also be expressed in upper- or lowercase.

The non-unary operators (everything but NOT) support both infix (a AND b AND c) and prefix AND(a, b, c) notation.

More information about Surround queries can be found at http://wiki.apache.org/solr/SurroundQueryParser.

Switch Query Parser

SwitchQParser is a QParserPlugin that acts like a "switch" or "case" statement.

The primary input string is trimmed and then prefixed with case. for use as a key to lookup a "switch case" in the parser's local params. If a matching local param is found the resulting param value will then be parsed as a subquery, and returned as the parse result.

The case local param can optionally be specified as a switch case to match missing (or blank) input strings. The default local param can optionally be specified as a default case to use if the input string does not match any other switch case local params. If default is not specified, then any input which does not match a switch case local param will result in a syntax error.

In the examples below, the result of each query is "XXX":
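Sketches reconstructing the canonical examples (in each case, the value XXX is what gets parsed as the subquery):

```
q={!switch case.foo=XXX case.bar=zzz case.yak=qqq}foo

q={!switch case.foo=qqq case.bar=XXX case.yak=zzz} bar

q={!switch case.foo=qqq case.bar=zzz default=XXX}asdf

q={!switch case=XXX case.bar=zzz case.yak=qqq}
```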

A practical usage of this QParserPlugin is in specifying appends fq params in the configuration of a SearchHandler, to provide a fixed set of filter options for clients using custom parameter names. Using the example configuration below, clients can optionally specify the custom parameters in_stock and shipping to override the default filtering behavior, but are limited to the specific set of legal values (shipping=any|free, in_stock=yes|no|all).
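A sketch of such a SearchHandler configuration (the handler name and the inStock/shipping_cost fields are illustrative):

```
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="in_stock">yes</str>
    <str name="shipping">any</str>
  </lst>
  <lst name="appends">
    <str name="fq">{!switch case.all='*:*'
                            case.yes='inStock:true'
                            case.no='inStock:false'
                            v=$in_stock}</str>
    <str name="fq">{!switch case.any='*:*'
                            case.free='shipping_cost:0.0'
                            v=$shipping}</str>
  </lst>
</requestHandler>
```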

Term Query Parser

TermQParser extends the QParserPlugin by creating a single term query from the input value equivalent to readableToIndexed(). This is useful for generating filter queries from the external human readable terms returned by the faceting or terms components. The only parameter is f, for the field.
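For example (a sketch, filtering on a numeric weight field):

```
fq={!term f=weight}1.5
```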


For text fields, no analysis is done since raw terms are already returned from the faceting and terms components. To apply analysis to text fields as well, see the Field Query Parser, above.

If no analysis or transformation is desired for any type of field, see the Raw Query Parser, above.

Terms Query Parser

TermsQParser functions similarly to the Term Query Parser but takes in multiple values separated by commas and returns documents matching any of the specified values. This can be useful for generating filter queries from the external human-readable terms returned by the faceting or terms components, and may be more efficient in some cases than using the Standard Query Parser to generate a boolean query, since the default implementation method avoids scoring.

This query parser takes the following parameters:




f: The field on which to search. Required.

separator: Separator to use when parsing the input. If set to " " (a single blank space), will trim additional white space from the input terms. Defaults to ",".

method: The internal query-building implementation: termsFilter, booleanQuery, automaton, or docValuesTermsFilter. Defaults to "termsFilter".
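For example (a sketch; tags is a hypothetical multivalued field):

```
fq={!terms f=tags}software,apache
```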


XML Query Parser

The XmlQParserPlugin extends the QParserPlugin and supports the creation of queries from XML. Example:






<BooleanQuery fieldName="description">
  <Clause occurs="must"> <TermQuery>shirt</TermQuery> </Clause>
  <Clause occurs="mustnot"> <TermQuery>plain</TermQuery> </Clause>
  <Clause occurs="should"> <TermQuery>cotton</TermQuery> </Clause>
  <Clause occurs="must">
    <BooleanQuery fieldName="size">
      <Clause occurs="should"> <TermsQuery>S M L</TermsQuery> </Clause>
    </BooleanQuery>
  </Clause>
</BooleanQuery>

The XmlQParser implementation uses the SolrCoreParser class which extends Lucene's CoreParser class. XML elements are mapped to QueryBuilder classes as follows:

XML element

QueryBuilder class

LegacyNumericRangeQuery(Builder) is deprecated

Customizing XML Query Parser

You can configure your own custom query builders for additional XML elements. The custom builders need to extend the SolrQueryBuilder or the SolrSpanQueryBuilder class. Example solrconfig.xml snippet:
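A sketch of such a snippet (the element and class names are hypothetical placeholders):

```
<queryParser name="xmlparser" class="solr.XmlQParserPlugin">
  <str name="MyCustomQuery">com.example.solr.search.MyCustomQueryBuilder</str>
</queryParser>
```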


