


Solr is an open-source search engine built on Lucene.  It is extremely fast and scales to very large document sets.

A SolrInputDocument is how documents are added to the index.  It is effectively a map from field names to field values.  Every document has a field called "id" that contains a unique identifier for that document in the index.  A typical document will also have a "title" field and a "content" field.  These fields are defined in the Solr schema.

When documents are added to the index, they need to be committed before they become searchable.  
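The add-and-commit sequence might look like this in a script.  This is a minimal sketch; `addDocument` and `commit` are assumed to be the service methods that wrap the corresponding SolrJ calls:

```python
from org.apache.solr.common import SolrInputDocument

solr = Runtime.createAndStart("solr", "Solr")

# build a document (the "doc42" id and field values are just illustrative)
doc = SolrInputDocument()
doc.setField("id", "doc42")
doc.setField("title", "commit example")

# the document is not searchable until a commit happens
solr.addDocument(doc)
solr.commit()
```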
The search method on the service accepts either a SolrQuery object or a plain query string.
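For more control than a plain query string allows, a SolrJ SolrQuery can be built and passed to search.  A sketch, assuming the service's search method accepts a SolrQuery as described above:

```python
from org.apache.solr.client.solrj import SolrQuery

solr = Runtime.createAndStart("solr", "Solr")

query = SolrQuery("myrobotlab")
query.setRows(5)                    # limit the number of results returned
query.addFilterQuery("title:*")     # only match documents that have a title

response = solr.search(query)
print(response.getResults().getNumFound())
```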
The only configuration to be aware of is "solrUrl", the URL that Solr is running on.  By default it is assumed to be http://localhost:8983/solr
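If Solr is running somewhere other than the default, point the service at it before indexing or searching.  A sketch; `setSolrUrl` is an assumed setter name for the "solrUrl" configuration, and the host below is hypothetical:

```python
solr = Runtime.createAndStart("solr", "Solr")
# point the service at a non-default Solr instance (hypothetical host)
solr.setSolrUrl("http://solrhost:8983/solr")
```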
Solr version 4.10.2 is the latest and greatest, so that's what is currently integrated.  This is just an integration of the SolrJ client APIs.



Example for develop:

#file : Solr.py

from org.apache.solr.common import SolrInputDocument
from org.apache.solr.common import SolrDocument

solr = Runtime.createAndStart("solr", "Solr")

# build a document to index
doc = SolrInputDocument()
doc.setField("id", "doc1")
doc.setField("title", "This is the title of the document.")
doc.setField("content", "This is the body or main content of the document. myrobotlab rocks.")

# add the document to the index and commit so it becomes searchable
solr.addDocument(doc)
solr.commit()

# a word to search for
q = "myrobotlab"
response = solr.search(q)

# iterate the results
for i in range(0, response.getResults().size()):
  # grab the doc and print out its fields and values
  doc = response.getResults().get(i)
  for fieldname in doc.getFieldNames():
    print(fieldname + ":" + str(doc.getFieldValue(fieldname)))