A Simple Introduction to Lucene
Source: https://segmentfault.com/a/1190000004422101
Preface
Lucene is deservedly called the king of full-text search in the Java world. The recently popular Elasticsearch, as well as Solr and SolrCloud before it, are all built on Lucene under the hood, so a basic understanding of Lucene helps when working with Elasticsearch. This article walks through its basic API usage.
Adding the dependencies
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>4.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers-common</artifactId>
    <version>4.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queryparser</artifactId>
    <version>4.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-codecs</artifactId>
    <version>4.6.1</version>
</dependency>
Indexing and searching
Creating an index
File indexDir = new File(this.getClass().getClassLoader().getResource("").getFile());

@Test
public void createIndex() throws IOException {
    // Directory index = new RAMDirectory();
    Directory index = FSDirectory.open(indexDir);
    // 0. Specify the analyzer for tokenizing text.
    //    The same analyzer should be used for indexing and searching.
    StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_46);
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_46, analyzer);
    // 1. Create the index.
    IndexWriter w = new IndexWriter(index, config);
    addDoc(w, "Lucene in Action", "193398817");
    addDoc(w, "Lucene for Dummies", "55320055Z");
    addDoc(w, "Managing Gigabytes", "55063554A");
    addDoc(w, "The Art of Computer Science", "9900333X");
    w.close();
}

private void addDoc(IndexWriter w, String title, String isbn) throws IOException {
    Document doc = new Document();
    doc.add(new TextField("title", title, Field.Store.YES));
    // Use a StringField for isbn because we don't want it tokenized.
    doc.add(new StringField("isbn", isbn, Field.Store.YES));
    w.addDocument(doc);
}
Searching
@Test
public void search() throws IOException {
    // 2. Build the query.
    String querystr = "lucene";
    // The "title" arg specifies the default field to use
    // when no field is explicitly specified in the query.
    Query q = null;
    try {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_46);
        q = new QueryParser(Version.LUCENE_46, "title", analyzer).parse(querystr);
    } catch (Exception e) {
        e.printStackTrace();
    }
    // 3. Search.
    int hitsPerPage = 10;
    Directory index = FSDirectory.open(indexDir);
    IndexReader reader = DirectoryReader.open(index);
    IndexSearcher searcher = new IndexSearcher(reader);
    TopScoreDocCollector collector = TopScoreDocCollector.create(hitsPerPage, true);
    searcher.search(q, collector);
    ScoreDoc[] hits = collector.topDocs().scoreDocs;
    // 4. Display the results.
    System.out.println("Found " + hits.length + " hits.");
    for (int i = 0; i < hits.length; ++i) {
        int docId = hits[i].doc;
        Document d = searcher.doc(docId);
        System.out.println((i + 1) + ". " + d.get("isbn") + "\t" + d.get("title"));
    }
    // The reader can only be closed when there is
    // no further need to access the documents.
    reader.close();
}
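`TopScoreDocCollector` returns the hits ranked by relevance score. The classic weighting idea behind Lucene's default similarity in this era is TF-IDF: term frequency rewards documents that mention the term often, inverse document frequency rewards terms that are rare across the collection. The sketch below is my own minimal illustration of those two factors, not Lucene's actual scoring code (which also applies length norms, boosts, and a coordination factor):

```java
import java.util.List;

// Toy sketch of the two TF-IDF factors behind classic Lucene scoring.
public class ToyTfIdf {
    // tf: how many times the term occurs in one document's token list.
    static int tf(List<String> doc, String term) {
        int n = 0;
        for (String t : doc) if (t.equals(term)) n++;
        return n;
    }

    // idf: log(N / df), where df is the number of documents containing
    // the term. Rare terms score higher than common ones.
    static double idf(List<List<String>> docs, String term) {
        long df = docs.stream().filter(d -> d.contains(term)).count();
        return Math.log((double) docs.size() / df);
    }

    public static void main(String[] args) {
        List<List<String>> docs = List.of(
            List.of("lucene", "in", "action"),
            List.of("lucene", "for", "dummies"),
            List.of("managing", "gigabytes"));
        // "lucene" occurs in 2 of 3 docs, "gigabytes" in only 1 of 3,
        // so "gigabytes" gets the higher idf weight.
        System.out.printf("idf(lucene)=%.3f idf(gigabytes)=%.3f%n",
            idf(docs, "lucene"), idf(docs, "gigabytes"));
    }
}
```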
Tokenization
For search, tokenization happens in two places: on the keywords the user types in a query, and on document content at indexing time. The two should use the same analyzer so that query terms actually match the indexed terms.
@Test
public void cutWords() throws IOException {
    // StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_46);
    // CJKAnalyzer analyzer = new CJKAnalyzer(Version.LUCENE_46);
    SimpleAnalyzer analyzer = new SimpleAnalyzer(Version.LUCENE_46);
    String text = "Spark是當前最流行的開源大數據內存計算框架,采用Scala語言實現,由UC伯克利大學AMPLab實驗室開發并于2010年開源。";
    TokenStream tokenStream = analyzer.tokenStream("content", new StringReader(text));
    CharTermAttribute charTermAttribute = tokenStream.addAttribute(CharTermAttribute.class);
    try {
        tokenStream.reset();
        while (tokenStream.incrementToken()) {
            System.out.println(charTermAttribute.toString());
        }
        tokenStream.end();
    } finally {
        tokenStream.close();
        analyzer.close();
    }
}
Output
spark
是
當前
最
流行
的
開源
大數
據
內存
計算
框架
采用
scala
語言
實現
由
uc
伯克利
大學
amplab
實驗室
開發
并于
2010
年
開源
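To make concrete what the simplest analyzers do, here is a toy sketch in plain Java of a LetterTokenizer-style pass (the building block of `SimpleAnalyzer`): it emits maximal runs of letters, lowercased, and drops everything else. The class and method names are my own, and this is an illustration, not Lucene's implementation. Note that `Character.isLetter` is true for CJK characters too, so a pure letter tokenizer keeps whole Chinese runs together; word-level Chinese segmentation like the output above requires a Chinese-aware analyzer.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of LetterTokenizer-style analysis: emit maximal runs of
// letters, lowercased; every non-letter character is a token boundary.
public class ToyLetterTokenizer {
    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (Character.isLetter(c)) {
                cur.append(Character.toLowerCase(c));
            } else if (cur.length() > 0) {
                tokens.add(cur.toString()); // boundary hit: flush the run
                cur.setLength(0);
            }
        }
        if (cur.length() > 0) tokens.add(cur.toString());
        return tokens;
    }

    public static void main(String[] args) {
        // Digits and punctuation are dropped, letters are lowercased.
        System.out.println(tokenize("Lucene in Action, 2nd Edition"));
    }
}
```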