After using it for a while, my feeling is that [[hyperestraier]], as used in the [[plugins/search]] plugin, is not robust enough for ikiwiki. It doesn't upgrade well, and it has a habit of segfaulting (sig-11) on certain input from time to time.

So some other engine should be found and used instead.

Enrico had one that he was using for debtags stuff that looked pretty good: Xapian, which has perl bindings in libsearch-xapian-perl. The nice thing about xapian is that it does ranked searches, so it understands which words are most important in a search. (So does Lucene.) Another nice thing is that it supports "more documents like this one" searches. --[[Joey]]

## xapian

I've investigated xapian briefly. I think a custom xapian indexer and use of omega for cgi searches could work well for ikiwiki. --[[Joey]]

### indexer

A custom indexer is needed because omindex doesn't meet ikiwiki's needs for incremental rendering. (And because, since ikiwiki already has the page info in memory, it's silly to write it to disk and have omindex read it back.)

The indexer would run as an ikiwiki hook. It needs to be passed the page name and the content. Which hook to use is an open question. Possibilities:

* filter - Since this runs before preprocess, only the actual text written on the page would be indexed, not text generated by directives, pulled in by inlining, etc. There's something to be said for that, and something to be said against it. It would also get mostly markdown-formatted content, though it would still need to strip html.
* sanitize - Would get the htmlized content, so it would need to strip html. Preprocessor directive output would be indexed.
* format - Would get the entire html page, including the page template. Probably not a good choice, since indexing the same template for every page is unnecessary.

Currently, a filter hook seems the best option.
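
Roughly, such a filter hook might look like this (an untested sketch; the plugin name, the db location under wikistatedir, and the per-call database open are all placeholder choices, and a real plugin would cache the handle):

    package IkiWiki::Plugin::xapiansearch; # hypothetical plugin name

    use warnings;
    use strict;
    use IkiWiki;
    use Search::Xapian qw(:standard);

    sub import {
        hook(type => "filter", id => "xapiansearch", call => \&filter);
    }

    sub filter (@) {
        my %params = @_;

        # Strip html tags; the filter hook mostly sees markdown source,
        # but raw html written into a page still needs removing.
        (my $text = $params{content}) =~ s/<[^>]+>/ /gs;

        my $db = Search::Xapian::WritableDatabase->new(
            $config{wikistatedir}."/xapian/default",
            Search::Xapian::DB_CREATE_OR_OPEN);

        my $doc = Search::Xapian::Document->new();
        $doc->set_data($text);
        my $tg = Search::Xapian::TermGenerator->new();
        $tg->set_document($doc);
        $tg->index_text($text);
        $db->add_document($doc);

        # A filter hook must pass the content through unchanged.
        return $params{content};
    }

    1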

The hook would remove any html from the content and index it. It would need to add the same document data that omindex would, as well as the same special terms (see the "Boolean terms" section of http://xapian.org/docs/omega/overview.html).

(Note that the U term is a bit tricky, because I'll have to replicate omindex's hash_string() to hash terms > 240 chars.)
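
For illustration, assuming the scheme is simply "truncate and append an MD5 of the whole term" (the real hash_string() would need checking against omindex's source), a replacement could be as small as:

    use Digest::MD5 qw(md5_hex);

    # Assumed equivalent of omindex's hash_string(): terms longer than
    # 240 chars are truncated, with an MD5 hex digest of the full term
    # appended, keeping the term unique and under the length limit.
    sub hash_string {
        my $term = shift;
        return $term if length($term) <= 240;
        return substr($term, 0, 240 - 32).md5_hex($term);
    }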

The indexer (and deleter) will need a way to figure out the xapian ids of the documents to delete. One way is storing the id of each page in the ikiwiki index.

The other way would be adding a special term to the xapian db that can be used with replace_document_by_term/delete_document_by_term. omindex uses U as such a term, and I guess I could just use that and map page names to urls when deleting a page... the only real problem being the hashing; a collision would be bad.

At the moment, storing xapian ids in the ikiwiki index file seems like the best approach.
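
For comparison, here's what the term-based alternative would look like with Search::Xapian (using the raw page name as the U term and glossing over the hashing issue above; the db path and helper names are made up):

    use Search::Xapian qw(:standard);

    my $db = Search::Xapian::WritableDatabase->new(
        ".ikiwiki/xapian/default", Search::Xapian::DB_CREATE_OR_OPEN);

    sub index_page {
        my ($page, $text) = @_;
        my $doc = Search::Xapian::Document->new();
        $doc->set_data($text);
        $doc->add_term("U".$page); # unique id term, as omindex does with urls
        my $tg = Search::Xapian::TermGenerator->new();
        $tg->set_document($doc);
        $tg->index_text($text);
        # Replaces any existing document carrying the same U term, so
        # re-indexing a page never leaves stale copies in the db.
        $db->replace_document_by_term("U".$page, $doc);
    }

    sub delete_page {
        $db->delete_document_by_term("U".shift);
    }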

The hook should try to avoid re-indexing pages that have not changed since they were last indexed. One problem is that, when a page with an inline is built, every inlined item will have each hook run on it, so a naive hook would re-index each of those items even though none of them have necessarily changed. Date stamps are one possibility. Another would be to have the hook skip indexing when %preprocessing is set (IkiWiki.pm would need to expose that variable). A third approach would be to use a needsbuild hook and only index the pages that are actually being built, as sketched below.
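
The needsbuild variant might look like this (untested; it remembers which pages' source files actually changed, and skips indexing everything else):

    package IkiWiki::Plugin::xapiansearch; # hypothetical, as above

    use warnings;
    use strict;
    use IkiWiki;

    my %toindex;

    sub import {
        hook(type => "needsbuild", id => "xapiansearch", call => \&needsbuild);
        hook(type => "filter", id => "xapiansearch", call => \&filter);
    }

    # Record which pages are really being rebuilt ...
    sub needsbuild ($) {
        my $files = shift;
        %toindex = map { pagename($_) => 1 } @$files;
        return $files;
    }

    # ... so that filter can skip items that are merely being pulled in
    # by an inline on some other page.
    sub filter (@) {
        my %params = @_;
        if ($toindex{$params{page}}) {
            # index $params{content} as in the sketches above
        }
        return $params{content};
    }

    1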

### cgi

The cgi hook would exec omega to handle the searching, much as is done with estseek in the current search plugin.

It would first set OMEGA_CONFIG_FILE=.ikiwiki/omega.conf; that omega.conf would set database_dir=.ikiwiki/xapian and probably also a custom template_dir, which would have modified templates branded for ikiwiki. So the actual xapian db would be in .ikiwiki/xapian/default/.
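
A sketch of that cgi hook (assuming omega's P query parameter; the omega.conf contents shown in the comment are from memory and should be checked against omega's docs):

    package IkiWiki::Plugin::xapiansearch; # hypothetical, as above

    use warnings;
    use strict;
    use IkiWiki;

    sub import {
        hook(type => "cgi", id => "xapiansearch", call => \&cgi);
    }

    sub cgi ($) {
        my $cgi = shift;
        return unless defined $cgi->param('P'); # omega's query parameter

        # Point omega at a config file kept alongside the db. The file
        # would hold something like:
        #   database_dir .ikiwiki/xapian
        #   template_dir .ikiwiki/omega-templates
        $ENV{OMEGA_CONFIG_FILE} = $config{wikistatedir}."/omega.conf";
        exec("omega") or error("failed to run omega");
    }

    1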

## lucene

I've done a bit of prototyping on this. The current hip search library is Lucene. There's a Perl port called Plucene. Given that it's already packaged as libplucene-perl, I assumed it would be a good starting point. I've written a very rough patch against IkiWiki/Plugin/search.pm to handle the indexing side (there's no facility to view the results yet, although I have a command-line interface working). That's below, and should apply to SVN trunk.
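
(Not the patch itself, but for flavour, the core of the Plucene indexing side looks roughly like this; the index location and field names are made up:)

    use warnings;
    use strict;
    use Plucene::Analysis::SimpleAnalyzer;
    use Plucene::Document;
    use Plucene::Document::Field;
    use Plucene::Index::Writer;

    # Index one page into an on-disk Plucene index.
    my $writer = Plucene::Index::Writer->new(
        ".ikiwiki/plucene",
        Plucene::Analysis::SimpleAnalyzer->new(),
        1); # 1 = create the index from scratch

    my $doc = Plucene::Document->new();
    # Keyword fields are stored and indexed verbatim; Text fields are
    # tokenised by the analyzer, so they can be searched word by word.
    $doc->add(Plucene::Document::Field->Keyword(page => "index"));
    $doc->add(Plucene::Document::Field->Text(content => "welcome to the wiki"));
    $writer->add_document($doc);
    undef $writer; # destroying the writer flushes and unlocks the index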

Of course, there are problems. ;-)

* Plucene throws up a warning when running under taint mode. There's a patch on the mailing list, but I haven't tried applying it yet. So for now you'll have to build IkiWiki with `NOTAINT=1 make install`.
* If I kill ikiwiki while it's indexing, I can screw up Plucene's locks. I suspect that this will be an easy fix.

There is a C++ port of Lucene, which is packaged as libclucene0; the Perl interface to it is called Lucene. It's supposed to be significantly faster, and presumably won't have the taint bug. The API is virtually the same, so it will be easy to switch over. I'd use it now, were it not for the lack of a package. (I assume you won't want to make core functionality depend on installing a module from CPAN.) I've never built a Debian package before, so I can either learn how and try building this, or somebody else could do the honours. ;-)

If this seems a sensible approach, I'll write the CGI interface, and clean up the plugin. -- Ben

The weird thing about lucene is that these are all reimplementations of it. Thank you, java. The C++ version seems like a better choice to me (packages are trivial). --[[Joey]]

Might I suggest renaming the "search" plugin to "hyperestraier", and then creating new search plugins for different engines? No reason to pick a single replacement. --[[JoshTriplett]]