Diffstat (limited to 'doc/todo')
-rw-r--r--  doc/todo/ACL.mdwn | 27
-rw-r--r--  doc/todo/Add_HTML_support_to_po_plugin.mdwn | 7
-rw-r--r--  doc/todo/Add_label_to_search_form_input_field.mdwn | 8
-rw-r--r--  doc/todo/Add_nicer_math_formatting.mdwn | 5
-rw-r--r--  doc/todo/CSS_classes_for_links.mdwn | 26
-rw-r--r--  doc/todo/CVS_backend.mdwn | 4
-rw-r--r--  doc/todo/Configurable_minimum_length_of_log_message_for_web_edits.mdwn | 2
-rw-r--r--  doc/todo/Fix_selflink_in_po_plugin.mdwn | 21
-rw-r--r--  doc/todo/Google_Sitemap_protocol.mdwn | 13
-rw-r--r--  doc/todo/Mailing_list.mdwn | 16
-rw-r--r--  doc/todo/More_flexible_po-plugin_for_translation.mdwn | 5
-rw-r--r--  doc/todo/Multiple_categorization_namespaces.mdwn | 103
-rw-r--r--  doc/todo/OpenSearch.mdwn | 20
-rw-r--r--  doc/todo/Option_to_make_title_an_h1__63__.mdwn | 2
-rw-r--r--  doc/todo/Raw_view_link.mdwn | 4
-rw-r--r--  doc/todo/Resolve_native_reStructuredText_links_to_ikiwiki_pages.mdwn | 333
-rw-r--r--  doc/todo/Restrict_page_viewing.mdwn | 39
-rw-r--r--  doc/todo/Separate_OpenIDs_and_usernames.mdwn | 44
-rw-r--r--  doc/todo/Set_templates_for_whole_sections_of_the_site.mdwn | 33
-rw-r--r--  doc/todo/Suggested_location_should_be_subpage_if_siblings_exist.mdwn | 2
-rw-r--r--  doc/todo/Support_XML-RPC-based_blogging.mdwn | 3
-rw-r--r--  doc/todo/a_navbar_based_on_page_properties.mdwn | 7
-rw-r--r--  doc/todo/abbreviation.mdwn | 2
-rw-r--r--  doc/todo/adjust_commit_message_for_rename__44___remove.mdwn | 5
-rw-r--r--  doc/todo/allow_displaying_number_of_comments.mdwn | 30
-rw-r--r--  doc/todo/allow_plugins_to_add_sorting_methods.mdwn | 304
-rw-r--r--  doc/todo/allow_site-wide_meta_definitions.mdwn | 208
-rw-r--r--  doc/todo/anon_push_of_comments.mdwn | 14
-rw-r--r--  doc/todo/auto-create_tag_pages_according_to_a_template.mdwn | 290
-rw-r--r--  doc/todo/auto_getctime_on_fresh_build.mdwn | 13
-rw-r--r--  doc/todo/auto_publish_expire.mdwn | 33
-rw-r--r--  doc/todo/auto_rebuild_on_template_change.mdwn | 78
-rw-r--r--  doc/todo/avatar.mdwn | 70
-rw-r--r--  doc/todo/avoid_attachement_ui_if_upload_not_allowed.mdwn | 25
-rw-r--r--  doc/todo/backlinks_result_is_lossy.mdwn | 1
-rw-r--r--  doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn | 112
-rw-r--r--  doc/todo/blocking_ip_ranges.mdwn | 3
-rw-r--r--  doc/todo/cache_backlinks.mdwn | 25
-rw-r--r--  doc/todo/cas_authentication.mdwn | 13
-rw-r--r--  doc/todo/cdate_and_mdate_available_for_templates.mdwn | 15
-rw-r--r--  doc/todo/comment_moderation_feed.mdwn | 9
-rw-r--r--  doc/todo/conflict_free_comment_merges.mdwn | 23
-rw-r--r--  doc/todo/dependency_types.mdwn | 579
-rw-r--r--  doc/todo/description_meta_param_passed_to_templates.mdwn | 10
-rw-r--r--  doc/todo/double-click_protection_for_form_buttons.mdwn | 5
-rw-r--r--  doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn | 20
-rw-r--r--  doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn | 8
-rw-r--r--  doc/todo/enable-htaccess-files.mdwn | 19
-rw-r--r--  doc/todo/enable_arbitrary_markup_for_directives.mdwn | 47
-rw-r--r--  doc/todo/finer_control_over___60__object___47____62__s.mdwn | 98
-rw-r--r--  doc/todo/generated_po_stuff_not_ignored_by_git.mdwn | 1
-rw-r--r--  doc/todo/generic_insert_links | 24
-rw-r--r--  doc/todo/git_attribution/discussion.mdwn | 6
-rw-r--r--  doc/todo/html.mdwn | 2
-rw-r--r--  doc/todo/htpasswd_mirror_of_the_userdb.mdwn | 29
-rw-r--r--  doc/todo/http_bl_support.mdwn | 67
-rw-r--r--  doc/todo/inline_plugin:_specifying_ordered_page_names.mdwn | 2
-rw-r--r--  doc/todo/l10n.mdwn | 71
-rw-r--r--  doc/todo/language_definition_for_the_meta_plugin.mdwn | 21
-rw-r--r--  doc/todo/link_plugin_perhaps_too_general__63__.mdwn | 25
-rw-r--r--  doc/todo/make_link_target_search_all_paths_as_fallback.mdwn | 27
-rw-r--r--  doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn | 11
-rw-r--r--  doc/todo/matching_different_kinds_of_links.mdwn | 149
-rw-r--r--  doc/todo/mercurial.mdwn | 8
-rw-r--r--  doc/todo/mirrorlist_with_per-mirror_usedirs_settings.mdwn | 24
-rw-r--r--  doc/todo/more_flexible_inline_postform.mdwn | 5
-rw-r--r--  doc/todo/multiple_template_directories.mdwn | 60
-rw-r--r--  doc/todo/multiple_templates.mdwn | 2
-rw-r--r--  doc/todo/openid_user_filtering.mdwn | 4
-rw-r--r--  doc/todo/optimize_simple_dependencies.mdwn | 95
-rw-r--r--  doc/todo/optional_underlaydir_prefix.mdwn | 46
-rw-r--r--  doc/todo/pagespec_to_disable_ikiwiki_directives.mdwn | 5
-rw-r--r--  doc/todo/pagestats_among_a_subset_of_pages.mdwn | 1
-rw-r--r--  doc/todo/paste_plugin.mdwn | 36
-rw-r--r--  doc/todo/po:_avoid_rebuilding_to_fix_meta_titles.mdwn | 45
-rw-r--r--  doc/todo/po:_better_documentation.mdwn | 3
-rw-r--r--  doc/todo/po:_better_links.mdwn | 12
-rw-r--r--  doc/todo/po:_better_translation_interface.mdwn | 2
-rw-r--r--  doc/todo/po:_remove_po_files_when_disabling_plugin.mdwn | 11
-rw-r--r--  doc/todo/po:_rethink_pagespecs.mdwn | 11
-rw-r--r--  doc/todo/po:_translation_of_directives.mdwn | 8
-rw-r--r--  doc/todo/po_needstranslation_pagespec.mdwn | 12
-rw-r--r--  doc/todo/preview_changes_before_git_commit.mdwn | 17
-rw-r--r--  doc/todo/rewrite_ikiwiki_in_haskell.mdwn | 1
-rw-r--r--  doc/todo/salmon_protocol_for_comment_sharing.mdwn | 21
-rw-r--r--  doc/todo/should_optimise_pagespecs.mdwn | 213
-rw-r--r--  doc/todo/smarter_sorting.mdwn | 141
-rw-r--r--  doc/todo/sort_parameter_for_map_plugin_and_directive.mdwn | 7
-rw-r--r--  doc/todo/source_link.mdwn | 5
-rw-r--r--  doc/todo/structured_page_data.mdwn | 5
-rw-r--r--  doc/todo/support_link__40__.__41___in_pagespec.mdwn | 13
-rw-r--r--  doc/todo/svg.mdwn | 19
-rw-r--r--  doc/todo/tagging_with_a_publication_date.mdwn | 31
-rw-r--r--  doc/todo/tmplvars_plugin/discussion.mdwn | 1
-rw-r--r--  doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn | 45
-rw-r--r--  doc/todo/tracking_bugs_with_dependencies.mdwn | 98
-rw-r--r--  doc/todo/two-way_convert_of_wikis.mdwn | 15
-rw-r--r--  doc/todo/user-defined_templates_outside_the_wiki.mdwn | 10
-rw-r--r--  doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn | 2
-rw-r--r--  doc/todo/wrapperuser.mdwn | 7
100 files changed, 3977 insertions, 282 deletions
diff --git a/doc/todo/ACL.mdwn b/doc/todo/ACL.mdwn
index e9fb2717f..dd9793233 100644
--- a/doc/todo/ACL.mdwn
+++ b/doc/todo/ACL.mdwn
@@ -21,6 +21,11 @@ something, that I think is very valuable.
>>>> Which would rule out openid, or other fun forms of auth. And routing all access
>>>> through the CGI sort of defeats the purpose of ikiwiki. --[[Ethan]]
+>>>>> I think what Joey is suggesting is to use apache ACLs in conjunction
+>>>>> with basic HTTP auth to control read access, and ikiwiki can use the
+>>>>> information via the httpauth plugin for other ACLs (write, admin). But
+>>>>> yes, that would rule out non-httpauth mechanisms. -- [[Jon]]
+
Also see [[!debbug 443346]].
> Just a few quick thoughts about this:
@@ -69,3 +74,25 @@ Here is how I see it:
<pre>
\[[!acl user=* page=/subsite/* acl=/subsite/acl.mdwn]]
</pre>
+
+Any idea when this is going to be finished? If you want, I am happy to beta test.
+
+> It's already done, though that is sorta hidden in the above. :-)
+> Example of use to only allow two users to edit the tipjar page:
+>
+>     locked_pages => 'tipjar and !(user(joey) or user(bob))',
+>
+> --[[Joey]]
+
+> > Thank you for the hint, but I am still confused (read: dense)... What I am trying to do is this:
+
+> > * No anonymous access.
+> > * Logged in users can edit and create pages.
+> > * Users can set who can edit their pages.
+> > * Some pages are only viewable by admins.
+
+> > Is it possible? If so, how?...
+
+>>> I don't believe this is currently possible. What is missing is the concept
+>>> of page 'ownership'. -- [[Jon]]
+
+>>>> GAH! That is really a shame... Any chance of adding that? No, I do not really expect it to be added; after all, my requirements are pushing the boundary of what a wikiwiki
+>>>> should be. Nonetheless, thanks for your help!
diff --git a/doc/todo/Add_HTML_support_to_po_plugin.mdwn b/doc/todo/Add_HTML_support_to_po_plugin.mdwn
new file mode 100644
index 000000000..ec29e4f61
--- /dev/null
+++ b/doc/todo/Add_HTML_support_to_po_plugin.mdwn
@@ -0,0 +1,7 @@
+The HTML page type should be fully supported by the PO plugin: po4a's
+HTML support is able to extract translatable strings and to disregard
+the rest.
+
+This is implemented in my po branch; please review. --[[intrigeri]]
+
+[[!tag patch]]
diff --git a/doc/todo/Add_label_to_search_form_input_field.mdwn b/doc/todo/Add_label_to_search_form_input_field.mdwn
index e4e83428c..514108fba 100644
--- a/doc/todo/Add_label_to_search_form_input_field.mdwn
+++ b/doc/todo/Add_label_to_search_form_input_field.mdwn
@@ -47,4 +47,10 @@ The patch below adds a label for the field to improve usability:
> to get it to appear higher up is to put it first, or to use Evil absolute
> positioning. (CSS sucks.) --[[Joey]]
-[[!tag done wishlist]]
+> Update: html5 allows just adding `placeholder="Search"` to the input
+> element. It already works in e.g. chromium. However, ikiwiki does not use
+> html5 yet. --[[Joey]]
+
+>> [[Done]], placeholder added, in html5 mode only.
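+>>
+>> For reference, the resulting form field would look something like this
+>> (a sketch; `name="P"` is the field name ikiwiki's search form uses for
+>> the xapian-omega query, the rest is illustrative):
+>>
+>>     <input type="text" name="P" placeholder="Search" />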
+
+[[!tag wishlist bugs/html5_support]]
diff --git a/doc/todo/Add_nicer_math_formatting.mdwn b/doc/todo/Add_nicer_math_formatting.mdwn
new file mode 100644
index 000000000..041eaee11
--- /dev/null
+++ b/doc/todo/Add_nicer_math_formatting.mdwn
@@ -0,0 +1,5 @@
+It would be nice to have nicer math formatting. I currently use the [[plugins/teximg]] plugin, but I wonder if [jsMath](http://www.math.union.edu/~dpvc/jsMath/) wouldn't be a better option.
+
+[[Will]]
+
+[[!tag wishlist]]
diff --git a/doc/todo/CSS_classes_for_links.mdwn b/doc/todo/CSS_classes_for_links.mdwn
index 5013a9d12..38db87724 100644
--- a/doc/todo/CSS_classes_for_links.mdwn
+++ b/doc/todo/CSS_classes_for_links.mdwn
@@ -75,3 +75,29 @@ I find CSS3 support still spotty... Here are some notes on how to do this in Ik
> [[rel_attribute|rel_attribute_for_links]] now that CSS can use.)
>
> --[[Joey]]
+
+>> I had a little look at this, last weekend. I added a class definition to
+>> the `htmllink` call in `linkify` in `link.pm`. It works pretty well, but
+>> I'd also need to adjust other `htmllink` calls (map, inline, etc.). I found
+>> other methods (CSS3 selectors, etc.) to be unreliable.
+>>
+>> Would you potentially accept a patch that added `class="internal"` to
+>> various `htmllink` calls in ikiwiki?
+>>
+>> How configurable do you think this behaviour should be? I'm considering a
+>> config switch to enable or disable this behaviour, or possibly a
+>> configurable list of class names to append for internal links (defaulting
+>> to an empty list for backwards compatibility).
+>>
+>> As an alternative to patching the uses of `htmllink`, what do you think
+>> about patching `htmllink` itself? Are there circumstances where it might be
+>> used to generate a non-internal link? -- [[Jon]]
+
+>>> I think that the minimum configurability to get something that
+>>> can be used by CSS to style the links however the end user wants
+>>> is the best thing to shoot for. Ideally, no configurability. And
+>>> a tip or something documenting how to use the classes in your CSS
+>>> to style links so that eg, external links have a warning icon.
+>>>
+>>> `htmllink` can never be used to generate an external link. So,
+>>> patching it seems the best approach. --[[Joey]]
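+>>>
+>>> A minimal sketch of the kind of change being discussed, assuming
+>>> `htmllink` collects its attributes in a hash before emitting the
+>>> `<a>` tag (the variable names are illustrative, not the actual
+>>> internals):
+>>>
+>>>     # every link htmllink generates is internal, so tag it with a
+>>>     # class the site's CSS can style
+>>>     $attrs{class} = defined $attrs{class}
+>>>         ? $attrs{class}." internal"
+>>>         : "internal";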
diff --git a/doc/todo/CVS_backend.mdwn b/doc/todo/CVS_backend.mdwn
index 3c6527290..c450542e2 100644
--- a/doc/todo/CVS_backend.mdwn
+++ b/doc/todo/CVS_backend.mdwn
@@ -9,6 +9,8 @@ Original discussion:
>> No, although the existing svn backend could fairly easily be modified into
>> a CVS backend, by someone who doesn't mind working with CVS. --[[Joey]]
>>
->>> Wouldn't say I don't mind, but I needed it. See [[plugins/contrib/cvs]]. --[[Schmonz]]
+>>> Wouldn't say I don't mind, but I needed it. See [[rcs/cvs]]. --[[Schmonz]]
+
+[[done]]
[[!tag wishlist]]
diff --git a/doc/todo/Configurable_minimum_length_of_log_message_for_web_edits.mdwn b/doc/todo/Configurable_minimum_length_of_log_message_for_web_edits.mdwn
index 7870281ae..74a4fb1b1 100644
--- a/doc/todo/Configurable_minimum_length_of_log_message_for_web_edits.mdwn
+++ b/doc/todo/Configurable_minimum_length_of_log_message_for_web_edits.mdwn
@@ -1,3 +1,5 @@
It would be nice to be able to specify a minimum length for the log message for web edits, and whether a message is required at all. I realise this is not going to solve the problem of crap log messages, but it helps guard against accidental submissions which one would otherwise have logged. Mediawiki/wikipedia has that option, and I find it a useful reminder. --[[madduck]]
+> +1 --[[lnussel]]
+
[[!tag wishlist]]
diff --git a/doc/todo/Fix_selflink_in_po_plugin.mdwn b/doc/todo/Fix_selflink_in_po_plugin.mdwn
new file mode 100644
index 000000000..b276c075d
--- /dev/null
+++ b/doc/todo/Fix_selflink_in_po_plugin.mdwn
@@ -0,0 +1,21 @@
+Using the po plugin, a link to /bla is present in the sidebar.
+When viewing /bla in the default language, this link is detected as
+a selflink. When viewing a translation of /bla, it
+isn't. --[[intrigeri]]
+
+Fixed in my po branch. --[[intrigeri]]
+
+[[!tag patch done]]
+
+> bump?
+
+>> I know I've looked at 88c6e2891593fd508701d728602515e47284180c
+>> before, and something about it just seemed wrong. Maybe it's
+>> the triviality of the sub, which it would seem to be easy to
+>> decide to refactor back into its one caller (which would reintroduce the
+>> bug). --[[Joey]]
+
+>>> Well, I can hear and understand this. Apart from adding a comment to
+>>> the sub, explaining the rationale (which is now done in my po
+>>> branch), I don't know what I can do to make it not seem wrong.
+>>> --[[intrigeri]]
diff --git a/doc/todo/Google_Sitemap_protocol.mdwn b/doc/todo/Google_Sitemap_protocol.mdwn
index 057a88b72..ea8ee7f03 100644
--- a/doc/todo/Google_Sitemap_protocol.mdwn
+++ b/doc/todo/Google_Sitemap_protocol.mdwn
@@ -34,6 +34,9 @@ for an example. You will probably need to strip out the metadata variables I
>>> [xtermin.us rather than localhost](http://xtermin.us/git/?p=website.git;a=blob;f=plugins/googlesitemap.pm) is 404 now.
>>> -- weakish
+
+Although it is not able to read the meta-data from files, google-sitemapgen [works well for me](http://bzed.de/posts/2010/06/creating_a_google_sitemap_for_ikiwiki/) to create a sitemap for my ikiwiki installation. -- [[bzed|BerndZeimetz]]
+
There is a [sitemap XML standard](http://www.sitemaps.org/protocol.php) that ikiwiki needs to generate for.
# Google Webmaster tools and RSS
@@ -45,3 +48,13 @@ On [Google Webmaster tools](https://www.google.com/webmasters/tools) you can sub
[Google should grok feeds as sitemaps.](http://www.google.com/support/webmasters/bin/answer.py?answer=34654) Or rather [[plugins/inline]] should be improved to support the [sitemap protocol](http://sitemaps.org/protocol.php) natively.
-- [[Hendry]]
+
+
+Took me a minute to figure this out so I figured I'd share the steps I took:
+
+* Added rss=>1 and allowrss=>1 to my setup file
+* Created a new page where the RSS would be created with this content, replacing "first_page" with the page in my wiki with the earliest date:
+
+<pre>
+\[[!inline pages="* and !*/Discussion and created_after(first_page)" archive="yes" rss="yes" ]]
+</pre>
diff --git a/doc/todo/Mailing_list.mdwn b/doc/todo/Mailing_list.mdwn
index b6a207420..67cbbb00b 100644
--- a/doc/todo/Mailing_list.mdwn
+++ b/doc/todo/Mailing_list.mdwn
@@ -18,3 +18,19 @@ Does this sound okay?
> todo/bugs/forum feeds, or to some other feed they create on their user page.
> And there's work on making the discussion pages more structured, on
> accepting comments sent via mail, etc. --[[Joey]]
+
+>>I was going to make the very same request, so I'm glad to know I'm not the only one who felt the need for it.
+
+>>I can see your reasoning, though I don't think ikiwiki has reached the level (yet) of facilitating discussion as well as a mailing list does.
+>>You've already pointed out the need for (a) more structured discussion pages, (b) comments sent via mail, but I'm not sure whether that will be enough. This is because the nature of a wiki means that discussions are scattered all over the site, as people discuss in discussion pages about the given topic - and so they should. The consequence of this, however, is that one has a choice (in regard to RSS feeds) of having too much or too little. Too little, if one only feeds on news/todo/bugs/forum, since one misses out on discussions elsewhere. Too much, because the only other option appears to be subscribing to recentchanges, which will give one *everything*, whether it is relevant or not.
+>>Unfortunately, I'm not really sure what the best solution is for this problem.
+
+>> For those who might be interested, I've added the following RSS feeds to <http://www.dreamwidth.org>:
+>> *ikiwiki_bugs_feed,
+>> ikiwiki_forum_feed,
+>> ikiwiki_news_feed,
+>> ikiwiki_recent_feed,
+>> ikiwiki_todo_feed,
+>> ikiwiki_wishlist_feed*
+>>
+>> --[[KathrynAndersen]]
diff --git a/doc/todo/More_flexible_po-plugin_for_translation.mdwn b/doc/todo/More_flexible_po-plugin_for_translation.mdwn
new file mode 100644
index 000000000..3399f7834
--- /dev/null
+++ b/doc/todo/More_flexible_po-plugin_for_translation.mdwn
@@ -0,0 +1,5 @@
+I have a website with multi-language content, where some content is only in English, some in German, and some is available in both languages.
+
+The po-module currently has only one master language, with slave languages, and a single PageSpec to select the pages to translate.
+
+It would be nice to flag, on a file-by-file basis, the content which should have a translation (with some inline directive?); the directive could carry the master language for that file and the desired target languages, as sketched below.
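+
+A purely illustrative sketch of what such a directive might look like (no
+such directive exists; the name and parameters are made up):
+
+    \[[!translatable master="en" targets="de fr"]]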
diff --git a/doc/todo/Multiple_categorization_namespaces.mdwn b/doc/todo/Multiple_categorization_namespaces.mdwn
new file mode 100644
index 000000000..3e9f8feaa
--- /dev/null
+++ b/doc/todo/Multiple_categorization_namespaces.mdwn
@@ -0,0 +1,103 @@
+I came across this when working on converting my old blog into an ikiwiki, but I think it could be of more general use.
+
+The background: I have a (currently suspended, waiting to be converted) blog on the [il Cannocchiale](http://www.ilcannocchiale.it) hosting platform. Aside from the usual metadata (title, author), il Cannocchiale also provides tags and two additional categorization namespaces: a blog-specific user-defined "column" (Rubrica) and a platform-wide "category" (Categoria). The latter is used to group and label a couple of platform-wide lists of latest posts, while the former may be used in many different ways (e.g. multi-author blogs could have one column per author or so, or as a form of 'macro-tagging'). Columns are also a little more sophisticated than classical tags because you can assign them a subtitle too.
+
+When I started working on the conversion, my first idea was to convert Rubriche to subdirectories of an ikiwiki blog. However, this left me with a few annoying things: when rebuilding links from the import, I had to (programmatically) dive into each subdirectory to see where each post was; it would also be problematic for future posting. It also meant that moving a post from one Rubrica to another would break all links (unless ikiwiki has a way to fix this automagically). And I wasn't too keen on the fact that the Rubrica would come up in the URL of the post. And finally, of course, I couldn't use this to preserve the Categoria metadata.
+
+Another solution I thought about was to use special deeper tags for the Rubrica and Categoria (like: `\[[!tag "Rubrica/Some name"]]`), but this is horrible, clumsy, and makes special treatment of these tags a PITN (for example you wouldn't want the Rubrica to be displayed together with the other tags, and you would want it displayed somewhere else like next to the title of the post). This solution, however, looks to me like the proper path, as long as tags could support totally separate namespaces. I have a tentative implementation of this `tagtype` feature at [my git clone of ikiwiki](http://git.oblomov.eu/ikiwiki).
+
+The feature is currently implemented as follows: a `tagtypes` config option takes an array of strings: the tag types to be defined _aside from the usual tags_. Each tag type automatically provides a new directive which sets up tags that differ from standard tags by having a different tagbase (the same as the tagtype) and link type (again, the same as the tagtype) (a TODO item for this would be to make the directive, tagbase and link type customizable). For example, for my imported blog I would define
+
+ tagtypes => [qw{Categoria Rubrica}]
+
+and then in the blog posts I would have stuff like
+
+ \[[!Categoria "LAVORO/Vita da impiegato"]]
+ \[[!Rubrica "Il mio mondo"]]
+ \[[!meta title="Blah blah"]]
+ \[[!meta author="oblomov"]]
+
+ The body of the article
+
+ \[[!tag a bunch of tags]]
+
+and the tags would appear at the bottom of the post, the Rubrica next to the title, etc. All of this information would end up as categories in the feeds (although I would like to rework that code to make use of namespaces, terms and labels in a different way).
+
+> Note [[plugins/contrib/report/discussion]]. To quote myself from the latter page:
+> *I find tags as they currently exist to be too limiting. I prefer something that can be used for Faceted Tagging http://en.wikipedia.org/wiki/Faceted_classification; that is, things like Author:Fred Nurk, Genre:Historical, Rating:Good, and so on. Of course, that doesn't mean that each tag is limited to only one value, either; just to take the above examples, something might have more than one author, or have multiple genres (such as Historical + Romance).*
+
+> So you aren't the only one who wants to do more with tags, but I don't think that adding a new directive for each tag type is the way to go; I think it would be simpler to just have one directive, and take advantage of the new [[matching different kinds of links]] functionality, and enhance the tag directive.
+> Perhaps something like this:
+
+ \[[!tag categorica="LAVORO/Vita da impiegato" rubrica="Il mio mondo"]]
+
+> Part of my thinking in this is to also combine tags with [[plugins/contrib/field]], so that the tags for a page could be queried and displayed; that way, one could put them wherever you wanted on the page, using any of [[plugins/contrib/getfield]], [[plugins/contrib/ftemplate]], or [[plugins/contrib/report]].
+> --[[KathrynAndersen]]
+
+>> A very generic metadata framework could cover all possible usages of fields, tags, and related metadata, but keeping its _user interface_ generic would only make it hard to use. Note that this is not an objection to the idea of collapsing the fields and tags functionality (at quick glance, I cannot see a real difference between single-valued custom tagtypes and fields, but see below), but more about the syntax.
+
+>> I had thought about the `\[[!tag type1=value1 type2=value2]]` syntax myself, but ultimately decided against it for a number of reasons, most importantly the fact that (1) it's harder to type, (2) it's harder to spot errors in the tag types (so for example if one misspelled `categoria` as `categorica`, he might not notice it as quickly as seeing the un-parsed `\[[!categorica ]]` directive in the output html) and (3) it encourages collapsing possibly unrelated metadata together (for example, I would never consider putting the categoria information together with the rubrica one; of course with your syntax it's perfectly possible to keep them separate as well).
+
+>> Point (2) may be considered a downside as well as an upside, depending on perspective, of course. And it would be possible to have a set of predefined tag types to match against, like in my tagtype directive approach but with your syntax.
+
+>>> You seem to have answered your own objections already. -- K.A.
+
+>>Point (3) is of course entirely in the hands of the user, but that's exactly what syntax should be about. There is nothing functionally wrong with e.g. `\[[!meta tag=sometag author=someauthor title=sometitle rubrica=somecolumn]]`, but I honestly find it horrible.
+
+>>> So, really, point 3 comes down to differing aesthetics. -- K.A.
+
+>> A solution could be to allow both syntaxes, getting to have for example `\[[!sometagtype "blah"]]` as a shortcut for `\[[!tag sometagtype="blah"]]` (or, in the more general case, `\[[!somefieldname "blah"]]` as a shortcut for `\[[!meta fieldname="blah"]]`).
+
+>> I would like to point out, however, that there are some functional differences between categorization metadata and other metadata that might suggest keeping fields and (my extended) tags separate. For example, in feeds you'd want all categorization metadata to fall in one place, with some appropriate manipulation (which I still have to implement, by the way), while things like author or title would go to the corresponding feed item properties. Although it all would be possible with appropriate report or template juggling, having such default metadata handled natively looks like a bonus to me.
+
+>>> Whereas I prefer being able to control such things with templates, because it gives more flexibility AND control. - K.A.
+
+>>>> Flexibility and control is good for tuning and power-usage, but sensible defaults are a must for a platform to be usable out of the box without much intervention. Moreover, there's a possible problem with what kind of data must be passed over to templates.
+
+Aside from the name of the plugin (and thus of the main directive), which could be `tag`, `meta`, `field` or whatever (maybe extending `meta` would be the most sensible choice), the features we want are
+
+1. allow multiple values per type/attribute/field/whatever (the field plugin currently only allows one)
+ * Agreed about multiple values; I've been considering whether I should add that to `field`. -- K.A.
+2. allow both hidden and visible references (a la tag vs taglink)
+ * Hidden and visible references; that's fair enough too. My approach with `ymlfront` and `getfield` is that the YAML code is hidden, and the display is done with `getfield`, but there's no reason not to use additional approaches. -- K.A.
+3. allow each type/attribute/field to be exposed under multiple queries (e.g. tags and categories; this is mostly important for backwards compatibility, not sure if it might have other uses too)
+ * I'm not sure what you mean here. -- K.A.
+ * Typical example is tags: they are accessible both as `tags` and as `categories`, although the way they are presented changes a little -- G.B.
+4. allow arbitrary types/attributes/fields/whatever (even 'undefined' ones)
+ * Are you saying that these must be typed, or are you saying that they can be user-defined? -- K.A.
+ * I am saying that the user should be able to define (e.g. in the config) some set of types/fields/attributes/whatever, following the specification illustrated below, but also be able to use something like `\[[!meta somefield="somevalue"]]` where `somefield` was never defined before. In this case `somefield` will have some default values for the properties described in the spec below. -- G.B.
+
+Each type/attribute/field/whatever (predefined, user-defined, arbitrary) would thus have the following parameters (a setup-file sketch follows the list):
+
+* `directive` : the name of the directive that can be used to set the value as a hidden reference; we can discuss whether, for pre- or user-defined types, it being undef means no directive or a default directive matching the attribute name would be defined.
+ * I still want there to be able to be enough flexibility in the concept to enable plugins such as `yamlfront`, which sets the data using YAML format, rather than using directives. -- K.A.
+ * The possibility to use a directive does not preclude other ways of defining the field values. IOW, even if the directive `somefield` is defined, the user would still be able to use the syntax `\[[!meta somefield="somevalue"]]`, or any other syntax (such as YAML). -- G.B.
+* `linkdirective` : the name of the directive that can be used for a visible reference; no such directive would be defined by default
+* `linktype` : link type for (hidden and visible) references
+ * Is this the equivalent to "field name"? -- K.A.
+ * This would be such by default, but it could be set to something different. [[Typed links|matching_different_kinds_of_links]] is a very recent ikiwiki feature. -- G.B.
+* `linkbase` : akin to the tagbase parameter
+ * Is this a field-name -> directory mapping? -- K.A.
+ * yes, with each directory having one page per value. It might not make sense for all fields, of course -- G.B.
+ * (nods) I've been working on something similar with my unreleased `tagger` module. In that, by default, the field-name maps to the closest wiki-page of the same name. Thus, if one had the field "genre=poetry" on the page fiction/stories/mary/lamb, then that would map to fiction/genre/poetry if fiction/genre existed. --K.A.
+ * that's the idea. In your case you could have the linkbase of genre be fiction/genre, and it would be created if it was missing. -- G.B.
+* `queries` : list of template queries this type/attribute/field/whatever is exposed to
+ * I'm not sure what you mean here. -- K.A.
+ * as mentioned before, some fields may be made accessible through different template queries, in different form. This is the case already for tags, that also come up in the `categories` query (used by Atom and RSS feeds). -- G.B.
+ * Ah, do you mean that the input value is the same, but the output format is different? Like the difference between TMPL_VAR NAME="FOO" and TMPL_VAR NAME="raw_FOO"; one is htmlized, and the other is not. -- K.A.
+ * Actually this is about the same information appearing in different queries (e.g. NAME="FOO" and NAME="BAR"). Example: say that I defined a "Rubrica" field. I would want both tags and categories to appear in `categories` template query, but only tags would appear in the `tags` query, and only Rubrica values to appear in `rubrica` queries. The issue of different output formats was presented in the next paragraph instead. -- G.B.
+
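+A setup-file sketch of such a specification (every name here is
+illustrative; none of this is an agreed interface):
+
+    fieldspec => {
+        Rubrica => {
+            directive     => 'Rubrica',
+            linkdirective => 'Rubricalink',
+            linktype      => 'rubrica',
+            linkbase      => 'rubriche',
+            queries       => [qw{rubrica categories}],
+        },
+    },
+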
+Where this approach is limiting is in the kind of data that is passed to (template) queries. The value of the metadata fields might need some massaging (e.g. compare how tags are passed to tags queries vs categories queries, or also see what is done with the fields in the current `meta` plugin). I have trouble picturing an easy way to make this possible user-side (i.e. via templates and not in Perl modules). Suggestions welcome.
+
+One possibility could be to have the `queries` configuration allow a hash mapping query names to functions that would transform the data. Lacking that possibility, we might have to leave some predefined fields to have custom Perl-side treatment and leave custom fields to be untransformable.
+
+-----
+
+I've now updated the [[plugins/contrib/field]] plugin to have:
+
+* arrays (multi-valued fields)
+* the "linkbase" option as mentioned above (called field_tags), where the linktype is the field name.
+
+I've also updated [[plugins/contrib/ftemplate]] and [[plugins/contrib/report]] to be able to use multi-valued fields, and [[plugins/contrib/ymlfront]] to correctly return multi-valued fields when they are requested.
+
+--[[KathrynAndersen]]
diff --git a/doc/todo/OpenSearch.mdwn b/doc/todo/OpenSearch.mdwn
index e63ded688..c35da54e1 100644
--- a/doc/todo/OpenSearch.mdwn
+++ b/doc/todo/OpenSearch.mdwn
@@ -15,4 +15,24 @@ contain the wiki title from `ikiwiki.setup`.
--[[JoshTriplett]]
+> I support adding this. I think all that is needed, beyond the simple task
+> of adding the link header, is to make the search plugin write out
+> the xml file, probably based on a template.
+>
+> One problem is that the
+> [specification](http://www.opensearch.org/Specifications/OpenSearch/1.1#OpenSearch_description_document)
+> for the XML file contains a number of silly limits to field lengths.
+> For example, it wants a "ShortName" that identifies the search engine,
+> to be 16 characters or less. The Description is limited to 1024,
+> the LongName to 48. This limits what existing config settings can be
+> reused for those.
+>
+> Another semi-problem is that the specification says:
+>
+>> OpenSearch description documents should include at least one Query element of role="example" that is expected to return search results. Search clients may use this example query to validate that the search engine is working properly.
+>
+> How should ikiwiki know what example query will return actual results?
+> (How would a client know if an HTML page contains results or not, anyway?)
+> Silliness. Ignore this? --[[Joey]]
+
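+A minimal description document, going by the OpenSearch 1.1 specification
+(the template URL is illustrative; a real one would point at the wiki's
+search CGI), might look like:
+
+    <?xml version="1.0" encoding="UTF-8"?>
+    <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
+      <ShortName>MyWiki</ShortName>
+      <Description>Search the MyWiki wiki</Description>
+      <Url type="text/html" template="http://example.com/ikiwiki.cgi?P={searchTerms}"/>
+    </OpenSearchDescription>
+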
[[wishlist]]
diff --git a/doc/todo/Option_to_make_title_an_h1__63__.mdwn b/doc/todo/Option_to_make_title_an_h1__63__.mdwn
index f4023d6dd..8345cd010 100644
--- a/doc/todo/Option_to_make_title_an_h1__63__.mdwn
+++ b/doc/todo/Option_to_make_title_an_h1__63__.mdwn
@@ -11,4 +11,4 @@ Currently, the page title (either the name of the page or the title specified wi
> latter, making `#` (only when on the first line) set the page title, removing it from
> the page body. --[[JasonBlevins]], October 22, 2008
- [h1title]: http://code.jblevins.org/ikiwiki/plugins.git/plain/h1title.pm
+ [h1title]: http://jblevins.org/git/ikiwiki/plugins.git/plain/h1title.pm
diff --git a/doc/todo/Raw_view_link.mdwn b/doc/todo/Raw_view_link.mdwn
index fd64074c2..b62a9022b 100644
--- a/doc/todo/Raw_view_link.mdwn
+++ b/doc/todo/Raw_view_link.mdwn
@@ -4,7 +4,7 @@ The configuration setting for Mercurial could be something like this:
rawurl => "http://localhost:8000//raw-file/tip/\[[file]]",
-> What I want to do when I want to see if the raw source is either
+> What I do when I want to see the raw source is either
> click on the edit link, or click on history and navigate to it in the
> history browser (easier to do in viewvc than in gitweb, IIRC).
> Not that I'm opposed to the idea of a plugin that adds a Raw link
@@ -14,4 +14,6 @@ The configuration setting for Mercurial could be something like this:
>> to gitweb/etc. I think Will's patch is a good approach, and have improved
>> on it a bit in a git branch.
+>>> Since that is merged in now, I'm marking this [[done]] --[[Joey]]
+
[[!tag wishlist]]
diff --git a/doc/todo/Resolve_native_reStructuredText_links_to_ikiwiki_pages.mdwn b/doc/todo/Resolve_native_reStructuredText_links_to_ikiwiki_pages.mdwn
new file mode 100644
index 000000000..6e0f32fd5
--- /dev/null
+++ b/doc/todo/Resolve_native_reStructuredText_links_to_ikiwiki_pages.mdwn
@@ -0,0 +1,333 @@
+_NB! this page has been refactored, hopefully it is clearer now._
+_I propose putting discussion posts somewhere in the vicinity of
+the section Individual reStructuredText Issues._
+
+## Design ##
+
+**Goal**
+
+To be able to use rst as a first-class markup language in ikiwiki. I think
+most believe this is almost impossible (ikiwiki is built around markdown).
+
+## Wikilinks ##
+
+**WikiLinks**, first and foremost, are needed for a wiki. rST already allows
+specifying absolute and relative URL links, and relative links can be used to
+tie together a wiki of rst documents.
+
+1. Below are links to a small, working implementation for resolving
+ undefined rST references using ikiwiki's mechanism. This is **Proposal 1**
+ for rst WikiLinks.
+
+2. Looking over at rST-using systems such as trac and MoinMoin, I think it
+ would be wiser to implement wikilinks by the `:role:` mechanism, together
+ with allowing a custom URL scheme to point to wiki links. This is
+ **Proposal 2**.
+
+ This is a simple wiki page, with :wiki:`WikiLinks` and other_ links
+
+ .. _other: wiki:wikilink
+
+ We can get rid of the role part as well for WikiLinks::
+
+ .. default-role:: wiki
+
+ Enables `WikiLinks` but does not impact references such as ``other``
+ This can be made the default for ikiwiki.
+
+Benefits of using a `:role:` and a `wiki: page/subpage` URL scheme are
+the following:
+
+1. rST documents taken out of the context (the wiki) will not fail as badly as
+ if they have lots of Proposal-1 links: They look just the same as valid
+ references, and you have to edit them all.
+ In contrast, should the `:wiki:` role disappear, one line is enough
+   to redefine it and silence all the warnings for the document:
+
+ .. role:: wiki (title)
+
+### Implementation ###
+
+Implementation of Proposal-2 wikilinks is in the branch
+[rst-wikilinks][rst-wl]
+
+
+ This is a simple wiki page, with :wiki:`WikiLinks` and |named| links
+
+ .. |named| wiki:: Some Page
+
+ We can get rid of the role part as well for WikiLinks::
+
+ .. default-role:: wiki
+
+ Enables `WikiLinks` but does not impact references such as ``named``
+ This can be made the default for ikiwiki.
+
+[rst-wl]: http://github.com/engla/ikiwiki/commits/rst-wikilinks
+
+The **rst-wikilinks** patch series includes changes at the end to use ikiwiki's
+'htmllink' for the links (which is the only sane way to make it work in all configurations).
+This means a :wiki:`Link` should render exactly like [[Link]] whether
+the target exists or not.
+
+On top of **rst-wikilinks** is [rst-customize][rst-custom] which adds two
+power user features: Global (python) file to read in custom directives
+(unsafe), and a wikifile as "header" file for all parsed .rst files (safe,
+but disruptive since all .rst depend on it). Well, the customizations have
+to be picked and chosen from this, but at least the global python file can
+be very convenient.
+
+> Did you consider just including the global rst header text into an item
+> in the setup file? --[[Joey]]
+>
+>> Then `rst_header` would not be much different from the python script
+>> `rst_customize`. rst_header is as safe as other files (though disruptive
+>> as noted), so it should/could be an editable file in the Wiki. A Python
+>> script of course can not be. There is nothing you can do in the
+>> rst_header (that you sensibly would do, I think) that couldn't be done in
+>> the Python script. `rst_header` has very limited use, but it is another
+>> possibility, mainly for the user-editable aspect. --[[ulrik]]
+>>
+>> (I foresaw only two things to be added to the rst_header: the default
+>> role could be configured there (as with rst_wikirole), and if you have a
+>> meta-role like :shortcut:, shortcuts could be defined there.)
+>
+> I have some discussion on the [docutils mailing list][dml]; the developers
+> of docutils seem to favor "Proposal 1", while I defend my ideas. They
+> want all users of ReST to use only the basic featureset to remain
+> compatible, of course. -- [[ulrik]]
+
+[dml]: http://thread.gmane.org/gmane.text.docutils.user/5376
+
+Some rst-custom [examples are here](http://kaizer.se/wiki/rst_examples/)
+
+[rst-custom]: http://github.com/engla/ikiwiki/commits/rst-customize
+
+## Directives ##
+
+Now **Directives**: As it is now, ikiwiki goes through (roughly)
+filter, preprocess, htmlize, format as the major stages of content
+transformation. rST has major problems working with any HTML that enters the
+picture before it.
+
+1. Formatting rST in `htmlize` (as is done now): Raw html can be escaped by
+ raw blocks:
+
+ .. raw:: html
+
+ \[[!inline and do stuff]]
+
+ (This can be simplified to alias the above as `.. ikiwiki::`)
+   This escape method works if ikiwiki can be persuaded to maintain the
+ indent when inserting html, so that it stays inside the raw block.
+
+2. Formatting rST in `filter` (idea)
+ 1. rST does not have to see any HTML (raw not needed)
+ 2. rST directives can alias ikiwiki syntax:
+
+        .. ikiwiki:: inline pages= ...
+
+ 3. Using rST directives as ikiwiki directives can be complicated;
+ but rST directives allow a direct line (after :: on first line),
+ an option list, and a content block.
+
+> You've done a lot of work already, but ...
+>
+> The filter approach seems much simpler than the other approaches
+> for users to understand, since they can just use identical ikiwiki
+> markup on rst pages as they would use anywhere else. This is very desirable
+> if the wiki allows rst in addition to mdwn, since then users don't have
+> to learn two completely different ways of doing wikilinks and directives.
+> I also wonder if even those familiar with rst would find entirely natural
+> the ways you've found to shoehorn in wikilinks, named wikilinks, and ikiwiki
+> directives?
+>
+> Htmlize in filter avoids these problems. It also leaves open the possibility
+> that ikiwiki could become smarter about the rendering chain later, and learn
+> to use a better order for rst (ie, htmlize first). If that later happened,
+> the htmlize in filter hack could go away. --[[Joey]]
+
+> (BTW, the [[plugins/txt]] plugin already does html formatting
+> in filter, for similar reasons.) --[[Joey]]
+
+>> Thank you for the comments! Forget the work, it's not so much.
+>> I'd rank the :wiki: link addition pretty high, and the other changes way
+>> behind that:
+>>
+>> The :wiki:`Wiki Link` syntax is *very* appropriate as rst syntax
+>> since it fits well with other uses of roles (notice that :RFC:`822`
+>> inserts a link to RFC822 etc, and that the default role is a *title* role
+>> (title of some work); thus very appropriate for medium-specific links like
+>> wiki links. So I'd rank :wiki: links a worthwhile addition regardless of
+>> outcome here, since it's a very rst-like alternative for those who wish to
+>> use more rst-like syntax (and documents degrade better outside the wiki as
+>> noted).
+>>
+>>> Unsure about the degradation argument. It will work some of
+>>> the time, but ikiwiki's [[ikiwiki/subpage/linkingrules]]
+>>> are sufficiently different from normal html relative link
+>>> rules that it often won't work. --[[Joey]]
+>>>
+>>>> With degradation I mean that if you take a file out of the wiki, the
+>>>> links degrade to stylized text. If using default role, they degrade to
+>>>> :title: which renders italicized text (which I find is exactly
+>>>> appropriate). There is no way for them to degrade into links, except of
+>>>> course if you reimplement the :wiki: role. You can also respecify
+>>>> either the default role (the `wikilink` syntax) or the :wiki: role (the
+>>>> :wiki:`wikilink` syntax) to any other markup, for example None.
+>>>> --[[ulrik]]
+>>
+>> The named link syntax (just like the :wiki: role) are inspired from
+>> [trac][tracrst] and a good fit, but only if the wiki is committed to
+>> using only rst, which I don't think is the case.
+>>
+>> The rst-customize changes are very useful for custom directive
+>> installations (like the sourcecode directive, or shortcut roles I show
+>> in the examples page), but there might be a way for the user to inject
+>> docutils addons that I'm missing (one very ugly way would be to stick
+>> them in sitecustomize.py which affects all Python programs).
+>>
+>> With the presented changes, I already have a working RestructuredText
+>> wiki, though I admit that using `.. raw:: html` around all directives is
+>> very ugly (I use few directives: inline, toggle, meta, tag, map)
+>>
+>> On filter/htmlize: Well **rst** is clearly antisocial: It can't see HTML,
+>> and ikiwiki directives are wrapped in paragraph tags. (For wikilinks
+>> this is probably no problem). So the suggestion about `.. ikiwiki:` is
+>> partly because it looks good in rst syntax, but also since it would emit
+>> a div to wrap around the element instead of a paragraph.
+>>
+>> I don't know if you mean that rst could be reordered to do htmlize before
+>> other phases? rst must be before any preprocess hook to avoid seeing any
+>> HTML.
+>>
+>>> One of my long term goals is to refactor all the code in ikiwiki
+>>> that manually runs the various stages of the render pipeline,
+>>> into one centralized place. Once that's done, that place can get
+>>> smart about what order to run the stages, and use a different
+>>> order for rst. --[[Joey]]
+>>
+>> If I'm thinking right, processing to HTML already in filter means any
+>> processing in scan can be reused directly (or skipped if it's legal to
+>> emit 'add_link' in filter.)
+>>
+>> -- [[ulrik]]
+
+>>> Seems it could be, yes. --[[Joey]]
+>>>
+>>>> It is not clear how we can work around reST wrapping directives with
+>>>> paragraph tags. Also, some escaping of xml characters & <> might
+>>>> happen, but I can't imagine right now what breakage can come from that.
+>>>> -- [[ulrik]]
+
+[tracrst]: http://trac.edgewall.org/wiki/WikiRestructuredText
+
+### Implementation ###
+
+Preserving indents in the preprocessor is in branch [pproc-indent][ppi]
+
+(These simple patches come with a warning: _Those are the first lines of
+Perl I've ever written!_)
+
+> This seems like a good idea, since it solves issues for eg, indented
+> directives in mdwn as well. But, looking at the diff, I see a clear bug:
+>
+> - return "[[!$command <span class=\"error\">".
+> + $result = "[[!$command <span class=\"error\">".
+>
+> That makes it go on and parse an infinitely nested directive chain, instead
+> of immediately throwing an error.
+>
+> Also, it seems that the "indent" matching in the regexps may be too broad;
+> wouldn't it also match whitespace before a directive that was not at the beginning
+> of a line, and treat it as an indent? With some bad luck, that could cause mdwn
+> to put the indented output in a pre block. --[[Joey]]
+>
+>> You are probably right about the bug. I'm not quite sure what the nested
+>> directives example looks like, but I must have overlooked how the
+>> recursion counter works; I thought simply changing if to elif on the next
+>> few lines would solve that. I'm sorry for that!
+>>
+>> We wouldn't have to change the `$handle` function at all if it were possible
+>> to do the indent substitution all in one line instead of passing it to
+>> handle. I don't know if it is possible to turn:
+>>
+>> $content =~ s{$regex}{$handle->($1, $2, $3, $4, $5)}eg;
+>>
+>> into
+>>
+>> $content =~ s{$regex}{s/^/$1/gm{$handle->($2, $3, $4, $5)}}eg;
+>>
+>> Well, no idea how that would be expressed, but I mean, replace the indent
+>> directly in $handle's return value.
+>>
+>>> Yes, in effect just `indent($1, $handle->($2, $3, $4, $5))` --[[Joey]]
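+>>>
+>>> A minimal sketch of such an `indent` helper, assuming all it has to
+>>> do is re-apply the captured indent to every line of the expanded
+>>> directive output:
+>>>
+>>>     sub indent {
+>>>         my ($prefix, $text) = @_;
+>>>         $text =~ s/^/$prefix/mg;  # prefix each line with the indent
+>>>         return $text;
+>>>     }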
+>>
+>> The indent-catching regex is wrong in the way you mention; it has been
+>> nagging my mind a bit as well. I think matching start of line + spaces
+>> and tabs is the only thing we want.
+>> -- [[ulrik]]
+>>
+>>> Well, seems you want to match the indent at the start of the line containing
+>>> the directive, even if the directive does not start the line. That would
+>>> be quite hard to make a regexp do, though. --[[Joey]]
+>>
+>> I wasted a long time getting the simpler `indent($1, $handle->($2, $3, $4, $5))` to
+>> work (remember, I don't know perl at all). Somehow `$1` does not arrive; I
+>> made a simple testcase that worked, and I conclude something inside `$handle`
+>> results in the value of `$1` not arriving as it should!
+>>
+>> Anyway, instead a very simple incremental patch is in [pproc-indent][ppi]
+>> where the indentation regex is `(^[ \t]+|)` instead, which seems to work
+>> very well (and the regex is multiline now as well). I'm happy to rebase the
+>> changes if you want or you can just squash the four patches 1+3 => 1+1
+>> -- [[ulrik]]
+
+[ppi]: http://github.com/engla/ikiwiki/commits/pproc-indent
+
+## Discussion ##
+
+I guess you (or someone) have been through this before and know why it
+simply won't work. But I hoped there was something original in the above;
+and I know there are wiki installations where rST works. --ulrik
+
+**Individual reStructuredText Issues**
+
+* We resolve rST links without definition; we don't help resolve defined
+  relative links, so we don't support specifying link name and target
+ separately.
+
+ * Resolved by |replacement| links with the wiki:: directive.
+
+**A first implementation: Resolving unmatched links**
+
+I have a working minimal implementation letting the rst renderer resolve
+undefined native rST links to ikiwiki pages. I have posted it as one patch at:
+
+Preview commit: http://github.com/engla/ikiwiki/commit/486fd79e520da1d462f00f40e7a90ab07e9c6fdf
+Repository: git://github.com/engla/ikiwiki.git
+
+Design issues of the patch:
+
+The page is rST-parsed once in 'scan' and once in 'htmlize' (the first to generate backlinks). Can the parse output be safely reused?
+
+> The page content fed to htmlize may be different than that fed to scan,
+> as directives can change the content. If you cached the input and output
+> at scan time, you could reuse the cached data at htmlize time for inputs
+> that are the same -- but that could be a very big cache! --[[Joey]]
+
+>> I would propose using a simple heuristic: If you see \[[ anywhere on the
+>> page, don't cache it. It would be an effective cache for pure-rst wikis
+>> (without any ikiwiki directives or wikilinks).
+>> However, I think that if the cache does not work for a big load, it should
+>> not work at all; small loads are small so they don't matter. --ulrik
+
+-----
+
+Another possibility is using an empty url for wikilinks (gitit uses this approach), for example:
+
+ `SomePage <>`_
+
+Since it uses *empty* url, I would like to call it *proposal 0* :-) --[weakish]
+
+[weakish]: http://weakish.pigro.net
diff --git a/doc/todo/Restrict_page_viewing.mdwn b/doc/todo/Restrict_page_viewing.mdwn
new file mode 100644
index 000000000..9c1889d63
--- /dev/null
+++ b/doc/todo/Restrict_page_viewing.mdwn
@@ -0,0 +1,39 @@
+I'd like to have some pages of my wiki to be only viewable by some users.
+
+I could use htaccess for that, but it would force the users to have
+2 authentication mecanisms, so I'd prefer to use openID for that too.
+
+* I'm thinking of adding a "show" parameter to the cgi script, thanks
+ to a plugin similar to goto.
+* When called, it would check the credentials using the session stuff
+  (that I don't understand yet).
+* If not enough, it would serve a 403 error, of course.
+* If enough, it would read the file locally on the server side and
+  return it as the content, as sketched below.
+
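+A rough sketch of such a cgi hook follows (`viewable_pages` is a
+hypothetical setup option naming who may view what; the whole thing is
+illustrative, not working code):
+
+    sub cgi ($) {
+        my $cgi=shift;
+        my $page=$cgi->param('show');
+        return unless defined $page;
+
+        my $session=IkiWiki::cgi_getsession($cgi);
+        # match the restricted-viewing pagespec against the logged-in user
+        unless (pagespec_match($page, $config{viewable_pages},
+                user => $session->param("name"))) {
+            error("403 Forbidden");
+        }
+
+        # serve the already-compiled page from the destdir
+        print $cgi->header;
+        print readfile($config{destdir}."/".htmlpage($page));
+    }
+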
+Then, I'd have to generate the private pages the regular way with ikiwiki,
+and prevent apache from serving them with an appropriate and
+much more maintainable htaccess file.
+
+-- [[users/emptty]]
+
+> While I'm sure a plugin could do this, it adds so much scalability cost
+> and is so counter to ikiwiki's design.. Have you considered using the
+> [[plugins/httpauth]] plugin to unify around htaccess auth? --[[Joey]]
+
+>> I'm not speaking of rendering the pages on demand, but of serving them on demand.
+>> They would still be compiled the regular way;
+>> I'll have another look at [[plugins/httpauth]], but I really like the whole openID idea.
+>> --[[emptty]]
+
+>>> How about
+>>> [mod_auth_openid](http://trac.butterfat.net/public/mod_auth_openid), then?
+>>> A plugin for ikiwiki to serve its own pages is far afield from ikiwiki's roots,
+>>> as Joey pointed out, but might be a neat option to have anyway -- for unifying
+>>> authentication across views and edits, for systems not otherwise running
+>>> web servers, for systems with web servers you don't have access to, and
+>>> doubtless for other purposes. Such a plugin would add quite a bit of flexibility,
+>>> and in that sense (IMO, of course) it'd be in the spirit of ikiwiki. --[[schmonz]]
+
+>>>> Yes, I think this could probably be used in combination with ikiwiki's
+>>>> httpauth and openid plugins. --[[Joey]]
diff --git a/doc/todo/Separate_OpenIDs_and_usernames.mdwn b/doc/todo/Separate_OpenIDs_and_usernames.mdwn
index 2cd52e8c4..a4940220a 100644
--- a/doc/todo/Separate_OpenIDs_and_usernames.mdwn
+++ b/doc/todo/Separate_OpenIDs_and_usernames.mdwn
@@ -6,6 +6,48 @@ I see this being implemented in one of two possible ways. The easiest seems like
A slightly more complex next step would be to request sreg from the provider and, if provided, automatically set the identity's username and email address from the provided persona. If username login to accounts with blank passwords is disabled, then you have the best of both worlds. Passwordless signin, human-friendly attribution, automatic setting of preferences.
+> Given that openids are a global user identifier, that can look as pretty
+> as the user cares to make it look via delegation, I am not a fan of
+> having a site-local identifier layered on top of that. Perhaps
+> partly because every site that I have seen that does that has openid
+> implemented as a badly-done wart on the side of their regular login
+> system.
+>
+> The openid plugin now attempts to get an email and a username, and stores
+> them in the session database for later use (ie, when the user edits a
+> page).
+>
+> I am considering displaying the userid or fullname, if available,
+> instead of the munged openid url in recentchanges and comments.
+> It would be nice for those nasty [[google_openids|forum/google_openid_broken?]].
+> But, I first have to find a way to encode the name in the VCS commit log,
+> while still keeping the openid of the committer in there too.
+> Perhaps something like this (for git): --[[Joey]]
+>
+> Author: Joey Hess &lt;http://joey.kitenet.net/@web&gt;
+>
+> Only problem with the above is that the openid will still be displayed
+> by CIA. Other option is this, which solves that, but at the expense of
+> having to munge the username to fit inside the email address,
+> and generally seems backwards: --[[Joey]]
+>
+> Author: http://joey.kitenet.net/ &lt;Joey_Hess@web&gt;
+>
+> So, what needs to be done:
+>
+> * Change `rcs_commit` and `rcs_commit_staged` to take a session object,
+> instead of just a userid. (For back-compat, if the parameter is
+> not an object, it's a userid.) Bump ikiwiki plugin interface version.
+> (done)
+> * Modify all RCS plugins to include the session username somewhere
+> in the commit, and parse it back out in `rcs_recentchanges`.
+> (done for git only so far)
+> * Modify recentchanges plugin to display the username instead of the
+> `openiduser`.
+> (done)
+> * Modify comment plugin to put the session username in the comment
+> template instead of the `openiduser`. (done)
+
Unfortunately I don't speak Perl, so hopefully someone thinks these suggestions are good enough to code up. I've hacked on openid code in Ruby before, so hopefully these changes aren't all that difficult to implement. Even if you don't get any data via sreg, you're no worse off than where you are now, so I don't think there'd need to be much in the way of error/sanity-checking of returned data. If it's null or not available then no big deal, typing in a username is no sweat.
-[[!tag wishlist]]
+[[!tag wishlist done]]
diff --git a/doc/todo/Set_templates_for_whole_sections_of_the_site.mdwn b/doc/todo/Set_templates_for_whole_sections_of_the_site.mdwn
new file mode 100644
index 000000000..b130f4ec5
--- /dev/null
+++ b/doc/todo/Set_templates_for_whole_sections_of_the_site.mdwn
@@ -0,0 +1,33 @@
+At the moment, IkiWiki allows you to set the template for individual pages using the [[plugins/pagetemplate]] directive/plugin, but not for whole sections of the wiki.
+
+I've written a new plugin, sectiontemplate, available in the `page_tmpl` branch of my git repository, which allows setting the template by pagespec in the config file.
+
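+A hypothetical setup-file stanza for it (the option name and format are
+illustrative; the real interface is whatever the branch implements):
+
+    template_pagespec => {
+        'blog/* and !blog/*/*' => 'blogpage.tmpl',
+    },
+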
+-- [[Will]]
+
+> This is an excellent idea and looks ready for merging.
+> Before I do though, would it perhaps make sense to merge
+> this in with the pagetemplate plugin? They do such a similar thing,
+> and unless letting users control the template by editing a page is not
+> wanted, I can't see any reason not to combine them. --[[Joey]]
+
+> One other idea is moving the pagespec from setup file to a directive.
+> The edittemplate plugin does that, and it might be nice
+> to be consistent with that. To implement it, you'd have to
+> make the scan hook store the template pagespecs in `%pagestate`,
+> so a bit more complicated. --[[Joey]]
+
+>> I started with the pagetemplate plugin, which is why they're so similar. I guess they could be combined.
+>> I had a brief think about having the specs and templates defined in a directive rather than the config file, but it got tricky.
+>> How do you know what needs to be rebuilt when? There is probably a solution, maybe even an obvious one, but I thought that getting something done was more important than polishing it.
+>> In the worst case, admins can always use the web interface to change the setup :).
+
+>> I wanted this to put comments on my new blog, and was more interested in that goal than this subgoal. I've moved most of my web pages to IkiWiki and there is only one small part that is the blog.
+>> I wanted to use [[Disqus comments|tips/Adding_Disqus_to_your_wiki/]], but only on the blog pages. (I'm using Disqus rather than IkiWiki comments because I don't want to have to deal with spam, security, etc. I'll happily just let someone else host the comments.) -- [[Will]]
+
+>>> Yes, handling the rebuild is a good reason not to use directives for
+>>> this.
+>>>
+>>> I do still think combining this with pagetemplate would be good.
+>>> --[[Joey]]
+
+>>>> This is exactly what I was looking for, and it took me a while to find it. I very much support the idea of providing this as a regular plugin, be it merged with pagetemplate or stand-alone. Thank you for your work and code! --BenTo
diff --git a/doc/todo/Suggested_location_should_be_subpage_if_siblings_exist.mdwn b/doc/todo/Suggested_location_should_be_subpage_if_siblings_exist.mdwn
index 0fb14bafe..4489dd5d2 100644
--- a/doc/todo/Suggested_location_should_be_subpage_if_siblings_exist.mdwn
+++ b/doc/todo/Suggested_location_should_be_subpage_if_siblings_exist.mdwn
@@ -23,4 +23,4 @@ that we're at the root of a (sub-)hierarchy.
>> This sounds like WONTFIX to me? --[[smcv]]
-[[!tag wishlist]]
+[[!tag wishlist done]]
diff --git a/doc/todo/Support_XML-RPC-based_blogging.mdwn b/doc/todo/Support_XML-RPC-based_blogging.mdwn
index f9685be73..6a0593b17 100644
--- a/doc/todo/Support_XML-RPC-based_blogging.mdwn
+++ b/doc/todo/Support_XML-RPC-based_blogging.mdwn
@@ -9,6 +9,9 @@ blog names would work. --[[JoshTriplett]]
>> I'd love to see support for this and would be happy to contribute towards a bounty (say US$100) :-). [PmWiki](http://www.pmwiki.org/) has a plugin which [implements this](http://www.pmwiki.org/wiki/Cookbook/XMLRPC) in a way which seems fairly sensible as an end user. --[[AdamShand]]
+>>> Bump. This would be a nice feature, and with the talent on this project I'm sure it could be done safely, too.
+
+
[[!tag soc]]
[[!tag wishlist]]
diff --git a/doc/todo/a_navbar_based_on_page_properties.mdwn b/doc/todo/a_navbar_based_on_page_properties.mdwn
index 09be409ac..091e0a39b 100644
--- a/doc/todo/a_navbar_based_on_page_properties.mdwn
+++ b/doc/todo/a_navbar_based_on_page_properties.mdwn
@@ -38,4 +38,11 @@ well.
There is a problem though if this navbar were included in a sidebar (the logical place): if a page is updated, the navbar needs to be rebuilt which causes the sidebar to be rebuilt, which causes the whole site to be rebuilt. Unless we can subscribe only to title changes, this will be pretty bad...
--[[madduck]]
+
+> I've just written a plugin for an automatically created menu, for use
+> [here](http://www.ff-egersdorf-wachendorf.de/). The source is at
+> [gitorious](http://gitorious.org/ikiwiki-plugins/automenu). The problem with
+> rebuilding remains unsolved but doesn't matter that much for me, as I always
+> generate the web site myself, i.e. it's not really a wiki. --[[lnussel]]
+
[[!tag wishlist]]
diff --git a/doc/todo/abbreviation.mdwn b/doc/todo/abbreviation.mdwn
index d24166710..f2880091c 100644
--- a/doc/todo/abbreviation.mdwn
+++ b/doc/todo/abbreviation.mdwn
@@ -2,4 +2,6 @@ We might want some kind of abbreviation and acronym plugin. --[[JoshTriplett]]
* Not sure if this is what you mean, but I'd love a way to make words which match existing page names automatically become links (e.g. if there is a page called "MySQL" then any time the word MySQL is mentioned it should become a link to that page). -- [[AdamShand]]
+ * The python-markdown-extras package has support for [abbreviations](http://www.freewisdom.org/projects/python-markdown/Abbreviations), with the syntax that you just use the abbreviation in text (e.g. HTML) and then define the abbreviations at the end (like "footnote-style" links). For consistency, it might be good to use the same syntax, which apparently derives from [PHP-markdown-extra](http://michelf.com/projects/php-markdown/extra/#abbr).
+
[[wishlist]]
diff --git a/doc/todo/adjust_commit_message_for_rename__44___remove.mdwn b/doc/todo/adjust_commit_message_for_rename__44___remove.mdwn
new file mode 100644
index 000000000..3d0d1aff4
--- /dev/null
+++ b/doc/todo/adjust_commit_message_for_rename__44___remove.mdwn
@@ -0,0 +1,5 @@
+When you rename or remove pages using the relevant plugins, a commit message is generated automatically by the plugin.
+
+It would be nice to provide a text field in the remove/rename form, pre-populated with the automatic message, so that a user may customize or append to the message (modulo VCS support).
+
+-- [[Jon]]
diff --git a/doc/todo/allow_displaying_number_of_comments.mdwn b/doc/todo/allow_displaying_number_of_comments.mdwn
new file mode 100644
index 000000000..02d55fc9b
--- /dev/null
+++ b/doc/todo/allow_displaying_number_of_comments.mdwn
@@ -0,0 +1,30 @@
+My `numcomments` Git branch adds a `NUMCOMMENTS` `TMPL_VAR`, which is
+useful to add to the `forumpage.tmpl` template to emulate (the nice
+bits of) a more usual webforum.
+
+Please review... and pull :)
+
+-- [[intrigeri]]
+
+> How is having this variable for showing a count of the comments
+> better (or more forum-ish) than the COMMENTSLINK variable which
+> includes a count and a link to the comments, and is already displayed
+> in inlinepage.tmpl?
+>
+> `num_comments` will never return undef.
+>
+> I see no need to add a second pagetemplate hook.
+> The existing one can be added to. Probably inside its `if ($shown)`
+> block.
+>
+> It may also be a good idea to either combine the calls to `num_comments`
+> used for this and for the commentslink,
+> or to memoize it. I'm thinking generally memoizing it may be a good idea
+> since the comments for a page will typically be counted twice when it's
+> inlined.
+> --[[Joey]]
+
+[[patch]]
+
+>> Well, the COMMENTSLINK variable fits my needs. Sorry for
+>> the disturbance. [[done]] --[[intrigeri]]
diff --git a/doc/todo/allow_plugins_to_add_sorting_methods.mdwn b/doc/todo/allow_plugins_to_add_sorting_methods.mdwn
new file mode 100644
index 000000000..b523cd19f
--- /dev/null
+++ b/doc/todo/allow_plugins_to_add_sorting_methods.mdwn
@@ -0,0 +1,304 @@
+[[!tag patch]]
+
+The available [[ikiwiki/pagespec/sorting]] methods are currently hard-coded in
+IkiWiki.pm, making it difficult to add any extra sorting mechanisms. I've
+prepared a branch which adds 'sort' as a hook type and uses it to implement a
+new `meta_title` sort type.
+
+Someone could use this hook to make `\[[!inline sort=title]]` prefer the meta
+title over the page name, but for compatibility, I'm not going to (I do wonder
+whether it would be worth making sort=name an alias for the current sort=title,
+and changing the meaning of sort=title in 4.0, though).
+
+> What compatibility concerns, exactly, are there that prevent making that
+> change now? --[[Joey]]
+
+*[sort-hooks branch now withdrawn in favour of sort-package --s]*
+
+I briefly tried to turn *all* the current sort types into hook functions, and
+have some of them pre-registered, but decided that probably wasn't a good idea.
+That earlier version of the branch is also available for comparison:
+
+*[also withdrawn in favour of sort-package --s]*
+
+>> I wonder if IkiWiki would benefit from the concept of a "sortspec", like a [[ikiwiki/PageSpec]] but dedicated to sorting lists of pages rather than defining lists of pages? Rather than defining a sort-hook, define a SortSpec class, and enable people to add their own sort methods as functions defined inside that class, similarly to the way they can add their own pagespec definitions. --[[KathrynAndersen]]
+
+>>> [[!template id=gitbranch branch=smcv/ready/sort-package author="[[Simon_McVittie|smcv]]"]]
+>>> I'd be inclined to think that's overkill, but it wasn't very hard to
+>>> implement, and in a way is more elegant. I set it up so sort mechanisms
+>>> share the `IkiWiki::PageSpec` package, but with a `cmp_` prefix. Gitweb:
+>>> <http://git.pseudorandom.co.uk/smcv/ikiwiki.git?a=shortlog;h=refs/heads/sort-package>
+
+>>>> I agree it seems more elegant, so I have focused on it.
+>>>>
+>>>> I don't know about reusing `IkiWiki::PageSpec` for this.
+>>>> --[[Joey]]
+
+>>>>> Fair enough, `IkiWiki::SortSpec::cmp_foo` would be just
+>>>>> as easy, or `IkiWiki::Sorting::cmp_foo` if you don't like
+>>>>> introducing "sort spec" in the API. I took a cue from
+>>>>> [[ikiwiki/pagespec/sorting]] being a subpage of
+>>>>> [[ikiwiki/pagespec]], and decided that yes, sorting is
+>>>>> a bit like a pagespec :-) Which name would you prefer? --s
+
+>>>>>> `SortSpec` --[[Joey]]
+
+>>>>>>> [[Done]]. --s
+
+>>>> I would be inclined to drop the `check_` stuff. --[[Joey]]
+
+>>>>> It basically exists to support `title_natural`, to avoid
+>>>>> firing up the whole import mechanism on every `cmp`
+>>>>> (although I suppose that could just be a call to a
+>>>>> memoized helper function). It also lets sort specs that
+>>>>> *must* have a parameter, like
+>>>>> [[field|plugins/contrib/field/discussion]], fail early
+>>>>> (again, not so valuable).
+>>>>>
+>>>>>> AFAIK, `use foo` has very low overhead when the module is already
+>>>>>> loaded. There could be some evaluation overhead in `eval q{use foo}`,
+>>>>>> if so it would be worth addressing across the whole codebase.
+>>>>>> --[[Joey]]
+>>>>>>
+>>>>>>> check_cmp_foo now dropped. --s
+>>>>>
+>>>>> The former function could be achieved at a small
+>>>>> compatibility cost by putting `title_natural` in a new
+>>>>> `sortnatural` plugin (that fails to load if you don't
+>>>>> have `title_natural`), if you'd prefer - that's what would
+>>>>> have happened if `title_natural` was written after this
+>>>>> code had been merged, I suspect. Would you prefer this? --s
+
+>>>>>> Yes! (Assuming it does not make sense to support
+>>>>>> natural order sort of other keys than the title, at least..)
+>>>>>> --[[Joey]]
+
+>>>>>>> Done. I added some NEWS.Debian for it, too. --s
+
+>>>> Wouldn't it make sense to have `meta(title)` instead
+>>>> of `meta_title`? --[[Joey]]
+
+>>>>> Yes, you're right. I added parameters to support `field`,
+>>>>> and didn't think about making `meta` use them too.
+>>>>> However, `title` does need a special case to make it
+>>>>> default to the basename instead of the empty string.
+>>>>>
+>>>>> Another special case for `title` is to use `titlesort`
+>>>>> first (the name `titlesort` is derived from Ogg/FLAC
+>>>>> tags, which can have `titlesort` and `artistsort`).
+>>>>> I could easily extend that to other metas, though;
+>>>>> in fact, for e.g. book lists it would be nice for
+>>>>> `field(bookauthor)` to behave similarly, so you can
+>>>>> display "Douglas Adams" but sort by "Adams, Douglas".
+>>>>>
+>>>>> `meta_title` is also meant to be a prototype of how
+>>>>> `sort=title` could behave in 4.0 or something - sorting
+>>>>> by page name (which usually sorts in approximately the
+>>>>> same place as the meta-title, but occasionally not), while
+>>>>> displaying meta-titles, does look quite odd. --s
+
+>>>>>> Agreed. --[[Joey]]
+
+>>>>>>> I've implemented meta(title). meta(author) also has the
+>>>>>>> `sortas` special case; meta(updated) and meta(date)
+>>>>>>> should also work how you'd expect them to (but they're
+>>>>>>> earliest-first, unlike age). --s
+
+>>>> As I read the regexp in `cmpspec_translate`, the "command"
+>>>> is required to have params. They should be optional,
+>>>> to match the documentation and because most sort methods
+>>>> do not need parameters. --[[Joey]]
+
+>>>>> No, `$2` is either `\w+\([^\)]*\)` or `[^\s]+` (with the
+>>>>> latter causing an error later if it doesn't also match `\w+`).
+>>>>> This branch doesn't add any parameterized sort methods,
+>>>>> in fact, although I did provide one on
+>>>>> [[field's_discussion_page|plugins/contrib/report/discussion]]. --s
+
+>>>> I wonder if it would make sense to add some combining keywords, so
+>>>> a sortspec reads like `sort="age then ascending title"`
+>>>> In a way, this reduces the amount of syntax that needs to be learned.
+>>>> I like the "then" (and it could allow other operations than
+>>>> simple combination, if any others make sense). Not so sure about the
+>>>> "ascending", which could be "reverse" instead, but "descending age" and
+>>>> "ascending age" both seem useful to be able to explicitly specify.
+>>>> --[[Joey]]
+
+>>>>> Perhaps. I do like the simplicity of [[KathrynAndersen]]'s syntax
+>>>>> from [[plugins/contrib/report]] (which I copied verbatim, except for
+>>>>> turning sort-by-`field` into a parameterized spec).
+>>>>>
+>>>>> If we're getting into English-like (or at least SQL-like) queries,
+>>>>> it might make sense to change the signature of the hook function
+>>>>> so it's a function to return a key, e.g.
+>>>>> `sub key_age { return -$pagemtime{$_[0]} }`. Then we could sort like
+>>>>> this:
+>>>>>
+>>>>> field(artistsort) or field(artist) or constant(Various Artists) then meta(titlesort) or meta(title) or title
+>>>>>
+>>>>> with "or" binding more closely than "then". Does this seem valuable?
+>>>>> I think the implementation would be somewhat more difficult, and
+>>>>> it's probably getting too complicated to be worthwhile, though?
+>>>>> (The keys that actually benefit from this could just
+>>>>> have smarter cmp functions, I think.)
+>>>>>
+>>>>> If the hooks return keys rather than cmp results, then we could even
+>>>>> have "lowercase" as an adjective used like "ascending"... maybe.
+>>>>> However, there are two types of adjective here: "lowercase"
+>>>>> really applies to the keys, whereas "ascending" applies to the "cmp"
+>>>>> result. Again, I think this is getting too complex, and could just
+>>>>> be solved with smarter cmp functions.
+>>>>>
+>>>>>> I agree. (Also, I think returning keys may make it harder to write
+>>>>>> smarter cmp functions.) --[[Joey]]
+>>>>>
+>>>>> Unfortunately, `sort="ascending mtime"` actually sorts by *descending*
+>>>>> timestamp (but `sort=age` is fine, because `age` could be defined as
+>>>>> now minus `ctime`). `sort=freshness` isn't right either, because
+>>>>> "sort by freshness" seems as though it ought to mean freshest first,
+>>>>> but "sort by ascending freshness" means put the least fresh first. If
+>>>>> we have ascending and descending keywords which are optional, I don't
+>>>>> think we really want different sort types to have different default
+>>>>> directions - it seems clearer to have `ascending` always be a no-op,
+>>>>> and `descending` always negate.
+>>>>>
+>>>>>> I think you've convinced me that ascending/descending impose too
+>>>>>> much semantics on it, so "-" is better. --[[Joey]]
+
+>>>>>>> I've kept the semantics from `report` as-is, then:
+>>>>>>> e.g. `sort="age -title"`. --s
+
+>>>>> Perhaps we could borrow from `meta updated` and use `update_age`?
+>>>>> `updateage` would perhaps be a more normal IkiWiki style - but that
+>>>>> makes me think that updateage is a quantity analogous to tonnage or
+>>>>> voltage, with more or less recently updated pages being said to have
+>>>>> more or less updateage. I don't know whether that's good or bad :-)
+>>>>>
+>>>>> I'm sure there's a much better word, but I can't see it. Do you have
+>>>>> a better idea? --s
+
+[Regarding the `meta title=foo sort=bar` special case]
+
+> I feel it should be clearer to call that "sortas", since "sort=" is used
+> to specify a sort method in other directives. --[[Joey]]
+>> Done. --[[smcv]]
+
+## speed
+
+I notice the implementation does not use the magic `$a` and `$b` globals.
+That nasty perl optimisation is still worthwhile:
+
+    perl -e 'use warnings; use strict; use Benchmark; sub a { $a <=> $b } sub b ($$) { $_[0] <=> $_[1] }; my @list=reverse(1..9999); timethese(10000, {a => sub {my @f=sort a @list}, b => sub {my @f=sort b @list}, c => sub {my @f=sort { b($a,$b) } @list}})'
+    Benchmark: timing 10000 iterations of a, b, c...
+    a: 80 wallclock secs (76.74 usr +  0.05 sys = 76.79 CPU) @ 130.23/s (n=10000)
+    b: 112 wallclock secs (106.14 usr +  0.20 sys = 106.34 CPU) @ 94.04/s (n=10000)
+    c: 330 wallclock secs (320.25 usr +  0.17 sys = 320.42 CPU) @ 31.21/s (n=10000)
+
+Unfortunately, I think that c is closest to the new implementation.
+--[[Joey]]
+
+> Unfortunately, `$a` isn't always `$main::a` - it's `$Package::a` where
+> `Package` is the call site of the sort call. This was a showstopper when
+> `sort` was a hook implemented in many packages, but now that it's a
+> `SortSpec`, I may be able to fix this by putting a `sort` wrapper in the
+> `SortSpec` namespace, so it's like this:
+>
+> sub sort ($@)
+> {
+> my $cmp = shift;
+> return sort $cmp @_;
+> }
+>
+> which would mean that the comparison used `$IkiWiki::SortSpec::a`.
+> --s
+
+>> I've now done this. On a wiki with many [[plugins/contrib/album]]s
+>> (a full rebuild takes half an hour!), I tested a refresh after
+>> `touch tags/*.mdwn` (my tag pages contain inlines of the form
+>> `tagged(foo)` sorted by date, so they exercise sorting).
+>> I also tried removing sorting from `pagespec_match_list`
+>> altogether, as an upper bound for how fast we can possibly make it.
+>>
+>> * `master` at branch point: 63.72user 0.29system
+>> * `master` at branch point: 63.91user 0.37system
+>> * my branch, with `@_`: 65.28user 0.29system
+>> * my branch, with `@_`: 65.21user 0.28system
+>> * my branch, with `$a`: 64.09user 0.28system
+>> * my branch, with `$a`: 63.83user 0.36system
+>> * not sorted at all: 58.99user 0.29system
+>> * not sorted at all: 58.92user 0.29system
+>>
+>> --s
+
+> I do notice that `pagespec_match_list` performs the sort before the
+> filter by pagespec. Is this a deliberate design choice, or
+> coincidence? I can see that when `limit` is used, this could be
+> used to only run the pagespec match function until `limit` pages
+> have been selected, but the cost is that every page in the wiki
+> is sorted. Or, it might be useful to do the filtering first, then
+> sort the sub-list thus produced, then finally apply the limit? --s
+
+>> Yes, it was deliberate, pagespec matching can be expensive enough that
+>> needing to sort a lot of pages seems likely to be less work. (I don't
+>> remember what benchmarking was done though.) --[[Joey]]
+
+>>> We discussed this on IRC and Joey pointed out that this also affects
+>>> dependency calculation, so I'm not going to get into this now... --s
+
+Joey pointed out on IRC that the `titlesort` feature duplicates all the
+meta titles. I did that in order to sort by the unescaped version, but
+I've now changed the branch to only store that if it makes a difference.
+--s
+
+## Documentation from sort-package branch
+
+### advanced sort orders (conditionally added to [[ikiwiki/pagespec/sorting]])
+
+* `title_natural` - Orders by title, but numbers in the title are treated
+  as such ("1 2 9 10 20" instead of "1 10 2 20 9").
+* `meta(title)` - Order according to the `\[[!meta title="foo" sortas="bar"]]`
+ or `\[[!meta title="foo"]]` [[ikiwiki/directive]], or the page name if no
+ full title was set. `meta(author)`, `meta(date)`, `meta(updated)`, etc.
+ also work.
+
+### Multiple sort orders (added to [[ikiwiki/pagespec/sorting]])
+
+In addition, you can combine several sort orders and/or reverse the order of
+sorting, with a string like `age -title` (which would sort by age, then by
+title in reverse order if two pages have the same age).
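+
+For example, a usage sketch:
+
+    \[[!inline pages="blog/*" sort="age -title" show=10]]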
+
+### meta sortas parameter (added to [[ikiwiki/directive/meta]])
+
+[in title]
+
+An optional `sortas` parameter will be used preferentially when
+[[ikiwiki/pagespec/sorting]] by `meta(title)`:
+
+    \[[!meta title="The Beatles" sortas="Beatles, The"]]
+
+    \[[!meta title="David Bowie" sortas="Bowie, David"]]
+
+[in author]
+
+An optional `sortas` parameter will be used preferentially when
+[[ikiwiki/pagespec/sorting]] by `meta(author)`:
+
+    \[[!meta author="Joey Hess" sortas="Hess, Joey"]]
+
+### Sorting plugins (added to [[plugins/write]])
+
+Similarly, it's possible to write plugins that add new functions as
+[[ikiwiki/pagespec/sorting]] methods. To achieve this, add a function to
+the IkiWiki::SortSpec package named `cmp_foo`, which will be used when sorting
+by `foo` or `foo(...)` is requested.
+
+The names of pages to be compared are in the global variables `$a` and `$b`
+in the IkiWiki::SortSpec package. The function should return the same thing
+as Perl's `cmp` and `<=>` operators: negative if `$a` is less than `$b`,
+positive if `$a` is greater, or zero if they are considered equal. It may
+also raise an error using `error`, for instance if it needs a parameter but
+one isn't provided.
+
+The function will also be passed one or more parameters. The first is
+`undef` if invoked as `foo`, or the parameter `"bar"` if invoked as `foo(bar)`;
+it may also be passed additional, named parameters.
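+
+For illustration, a minimal sketch of such a plugin. The `depth` sort
+order is invented for this example and is not part of the branch; the
+structure follows the same pattern as the `sortnatural` plugin discussed
+above:
+
+    package IkiWiki::Plugin::sortdepth;
+
+    use warnings;
+    use strict;
+    use IkiWiki 3.00;
+
+    package IkiWiki::SortSpec;
+
+    # Sort by how deep a page is in the hierarchy, counting slashes
+    # in the page name; usable as sort="depth" or sort="-depth".
+    sub cmp_depth {
+        my $da = () = $a =~ m{/}g;
+        my $db = () = $b =~ m{/}g;
+        return $da <=> $db;
+    }
+
+    1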
diff --git a/doc/todo/allow_site-wide_meta_definitions.mdwn b/doc/todo/allow_site-wide_meta_definitions.mdwn
index 70ccc2b68..82670250e 100644
--- a/doc/todo/allow_site-wide_meta_definitions.mdwn
+++ b/doc/todo/allow_site-wide_meta_definitions.mdwn
@@ -5,8 +5,159 @@ I'd like to define [[plugins/meta]] values to apply across all pages
site-wide unless the pages define their own: default values for meta
definitions essentially.
-Here's a patch to achieve this (also in the "defaultmeta" branch of
-my github ikiwiki fork):
+ <snip old patch, see below for latest>
+
+-- [[Jon]]
+
+> This doesn't support multiple-argument meta directives like
+> `link=x rel=y`, or meta directives with special side-effects like
+> `updated`.
+>
+> The first could be solved (if you care) by a syntax like this:
+>
+> meta_defaults => [
+> { copyright => "© me" },
+> { link => "about:blank", rel => "silly", },
+> ]
+>
+> The second could perhaps be solved by invoking `meta::preprocess` from within
+> `scan` (which might be a simplification anyway), although this is complicated
+> by the fact that some (but not all!) meta headers are idempotent.
+>
+> --[[smcv]]
+
+>> Thanks for your comment. I've revised the patch to use the config syntax
+>> you suggest. I need to perform some more testing to make sure I've
+>> addressed the issues you highlight.
+>>
+>> I had to patch part of IkiWiki core, the merge routine in Setup, because
+>> the use of `possibly_foolish_untaint` was causing the hashrefs at the deep
+>> end of the data structure to be converted into strings. The specific change
+>> I've made may not be acceptable, though -- I'd appreciate someone providing
+>> some feedback on that hunk!
+
+>>> Well, re that hunk, taint checking is currently disabled, but
+>>> if the perl bug that disallows it is fixed and it is turned back on,
+>>> the hash values will remain tainted, which will probably lead to
+>>> problems.
+>>>
+>>> I'm also leery of using such a complex data structure in config.
+>>> The websetup plugin would be hard pressed to provide a UI for such a
+>>> data structure. (It lacks even UI for a single hash ref yet, let alone
+>>> a list.)
+>>>
+>>> Also, it seems sorta wrong to have two so very different syntaxes to
+>>> represent the same meta data. A user without a lot of experience will
+>>> be hard pressed to map from a directive to this in the setup file.
+>>>
+>>> All of which leads me to think the setup file could just contain
+>>> a text that could hold meta directives. Which generalizes really to
+>>> a text that contains any directives, and is, perhaps appended to the
+>>> top of every page. Which nearly generalizes to the sidebar plugin,
+>>> or perhaps something more general than that...
+>>>
+>>> However, excessive generalization is the root of all evil, so
+>>> I'm not necessarily saying that's a good idea. Indeed, my memory
+>>> concerns below invalidate this idea pretty well. --[[Joey]]
+
+ diff --git a/IkiWiki/Plugin/meta.pm b/IkiWiki/Plugin/meta.pm
+ index 6fe9cda..2f8c098 100644
+ --- a/IkiWiki/Plugin/meta.pm
+ +++ b/IkiWiki/Plugin/meta.pm
+ @@ -13,6 +13,8 @@ sub import {
+ hook(type => "needsbuild", id => "meta", call => \&needsbuild);
+ hook(type => "preprocess", id => "meta", call => \&preprocess, scan => 1);
+ hook(type => "pagetemplate", id => "meta", call => \&pagetemplate);
+ + hook(type => "scan", id => "meta", call => \&scan)
+ + if $config{"meta_defaults"};
+ }
+
+ sub getsetup () {
+ @@ -305,6 +307,15 @@ sub match {
+ }
+ }
+
+ +sub scan() {
+ + my %params = @_;
+ + my $page = $params{page};
+ + foreach my $default (@{$config{"meta_defaults"}}) {
+ + preprocess(%$default, page => $page,
+ + destpage => $page, preview => 0);
+ + }
+ +}
+ +
+ package IkiWiki::PageSpec;
+
+ sub match_title ($$;@) {
+ diff --git a/IkiWiki/Setup.pm b/IkiWiki/Setup.pm
+ index 8a25ecc..e4d50c9 100644
+ --- a/IkiWiki/Setup.pm
+ +++ b/IkiWiki/Setup.pm
+ @@ -51,7 +51,13 @@ sub merge ($) {
+ $config{$c}=$setup{$c};
+ }
+ else {
+ - $config{$c}=[map { IkiWiki::possibly_foolish_untaint($_) } @{$setup{$c}}]
+ + $config{$c}=[map {
+ + if(ref $_ eq 'HASH') {
+ + $_
+ + } else {
+ + IkiWiki::possibly_foolish_untaint($_)
+ + }
+ + } @{$setup{$c}}];
+ }
+ }
+ elsif (ref $setup{$c} eq 'HASH') {
+ diff --git a/doc/ikiwiki/directive/meta.mdwn b/doc/ikiwiki/directive/meta.mdwn
+ index 000f461..8d34ee4 100644
+ --- a/doc/ikiwiki/directive/meta.mdwn
+ +++ b/doc/ikiwiki/directive/meta.mdwn
+ @@ -12,6 +12,16 @@ also specifies some additional sub-parameters.
+ The field values are treated as HTML entity-escaped text, so you can include
+ a quote in the text by writing `&quot;` and so on.
+
+ +You can also define site-wide defaults for meta values by including them
+ +in your setup file. The key used is `meta_defaults` and the value is a list
+ +of hashes, one per meta directive. e.g.:
+ +
+ + meta_defaults => [
+ + { copyright => "Copyright 2007 by Joey Hess" },
+ + { license => "GPL v2+" },
+ + { link => "somepage", rel => "site entrypoint", },
+ + ],
+ +
+ Supported fields:
+
+ * title
+
+-- [[Jon]]
+
+>> Ok, I've had a bit of a think about this. There are currently 15 supported
+>> meta fields. Of these: title, licence, copyright, author, authorurl,
+>> and robots might make sense to define globally and override on a per-page
+>> basis.
+>>
+>> Less so, description (due to its impact on map); openid (why would
+>> someone want more than one URI to act as an openid endpoint to the same
+>> place?); updated. I can almost see why someone might want to set a global
+>> updated value. Almost.
+>>
+>> Not useful are permalink, date, stylesheet (you already have a global
+>> stylesheet), link, redir, and guid.
+>>
+>> In other words, the limitations of my first patch that [[smcv]] outlined
+>> are only relevant to defined fields that you wouldn't want to specify a
+>> global default for anyway.
+>>
+>>> I generally agree with this. It is *possible* that meta would have a new
+>>> field added, that takes parameters and make sense to use globally.
+>>> --[[Joey]]
+>>
+>> Due to this, and the added complexity of the second patch (having to adjust
+>> `IkiWiki/Setup.pm`), I think the first patch makes more sense. I've thus
+>> reverted to it here.
+>>
+>> Is this merge-worthy?
diff --git a/IkiWiki/Plugin/meta.pm b/IkiWiki/Plugin/meta.pm
index b229592..3132257 100644
@@ -56,19 +207,40 @@ my github ikiwiki fork):
-- [[Jon]]
-> This doesn't support multiple-argument meta directives like
-> `link=x rel=y`, or meta directives with special side-effects like
-> `updated`.
->
-> The first could be solved (if you care) by a syntax like this:
->
-> meta_defaults => [
-> { copyright => "© me" },
-> { link => "about:blank", rel => "silly", },
-> ]
->
-> The second could perhaps be solved by invoking `meta::preprocess` from within
-> `scan` (which might be a simplification anyway), although this is complicated
-> by the fact that some (but not all!) meta headers are idempotent.
->
-> --[[smcv]]
+>>> Merry Christmas/festive season/happy new year folks. I've been away from
+>>> ikiwiki for the break, and now I've returned to watching recentchanges.
+>>> Hopefully I'll be back in the mix soon, too. In the meantime, Joey, have
+>>> you had a chance to look at this yet? -- [[Jon]]
+
+>>>> Ping :) Hi. [[Joey]], would you consider this patch for the next
+>>>> ikiwiki release? -- [[Jon]]
+
+>>> For this to work with websetup and --dumpsetup, it needs to define the
+>>> `meta_*` settings in the getsetup function.
+>>>>
+>>>> I think this will be problematic with the current implementation of this
+>>>> patch. The datatype here is an array of hash references, with each hash
+>>>> having a variable (and arbitrary) number of key/value pairs. I can't
+>>>> think of an intuitive way of editing such a
+>>>> datatype in the web interface, let alone registering the option in
+>>>> getsetup.
+>>>>
+>>>> Perhaps a limited set of defined meta values could be exposed via
+>>>> websetup (the obvious ones: author, copyright, license, etc.) -- [[Jon]]
+>>>
+>>> I also have some concerns about both these patches, since both throw
+>>> a lot of redundant data at meta, which then stores it in a very redundant
+>>> way. Specifically, meta populates a per-page `%metaheaders` hash
+>>> as well as storing per-page metadata in `%pagestate`. So, if you have
+>>> a wiki with 10 thousand pages, and you add a 1k site-wide license text,
+>>> that will bloat the memory usage of ikiwiki by in excess of 2
+>>> megabytes. It will also cause ikiwiki to write a similar amount more data
+>>> to its state file which has to be loaded back in each
+>>> run.
+>>>
+>>> Seems that this could be managed much more efficiently by having
+>>> meta special-case the site-wide settings, not store them in these
+>>> per-page data structures, and just make them be used if no per-page
+>>> metadata of the given type is present. --[[Joey]]
+>>>>
+>>>> that should be easy enough to do. I will work on a patch. -- [[Jon]]
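+
+A rough sketch of that special-casing (hypothetical and untested;
+assumes a flat `meta_defaults` hash of field => value rather than the
+list-of-hashes syntax above):
+
+    # look up a meta field for a page, falling back to the site-wide
+    # default only if the page itself set nothing
+    sub field_value ($$) {
+        my ($page, $field)=@_;
+        return $pagestate{$page}{meta}{$field}
+            if exists $pagestate{$page}{meta}{$field};
+        return $config{meta_defaults}{$field};
+    }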
diff --git a/doc/todo/anon_push_of_comments.mdwn b/doc/todo/anon_push_of_comments.mdwn
new file mode 100644
index 000000000..b472ea13f
--- /dev/null
+++ b/doc/todo/anon_push_of_comments.mdwn
@@ -0,0 +1,14 @@
+It should be possible to use anonymous git push to post comments
+(created, say, by an ikiwiki-comment program). Currently, that is not
+allowed, because users cannot edit or create internal page files.
+But, comments in allowed locations are an exception to that rule, and
+that exception should be communicated somehow to `IkiWiki::Receive`.
+--[[Joey]]
+
+> Complications include:
+>
+> * Hard to see a way to prevent users from committing a comment that
+> claims to be written by someone else.
+> * `checkcontent` hooks need to be run, but can't accept a comment
+> for later moderation, since it's coming in as part of a commit.
+> Best they could do is reject the commit.
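+
+Roughly the workflow being imagined (the `ikiwiki-comment` program is
+hypothetical, as above; the `comment_1._comment` name follows the layout
+the comments plugin already uses):
+
+    ikiwiki-comment posts/hello.mdwn < comment.txt
+    git add posts/hello/comment_1._comment
+    git commit -m "add comment" && git push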
diff --git a/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn b/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn
index f1d33114f..7eb404910 100644
--- a/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn
+++ b/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn
@@ -4,7 +4,7 @@ Tags are mainly specific to the object to which they’re stuck. However, I ofte
Also see: <http://madduck.net/blog/2008.01.06:new-blog/> and <http://users.itk.ppke.hu/~cstamas/code/ikiwiki/autocreatetagpage/>
-[[!tag wishlist plugins/tag patch]]
+[[!tag wishlist plugins/tag patch patch/core]]
I would love to see this as well. -- dato
@@ -15,88 +15,9 @@ A new setting is used to enable or disable auto-create tag pages, `tag_autocreat
The new tag file is created during the preprocess phase.
The new tag file is then compiled during the change phase.
-_tag.pm from version 3.01_
-
-
- --- tag.pm 2009-02-06 10:26:03.000000000 -0700
- +++ tag_new.pm 2009-02-06 12:17:19.000000000 -0700
- @@ -14,6 +14,7 @@
- hook(type => "preprocess", id => "tag", call => \&preprocess_tag, scan => 1);
- hook(type => "preprocess", id => "taglink", call => \&preprocess_taglink, scan => 1);
- hook(type => "pagetemplate", id => "tag", call => \&pagetemplate);
- + hook(type => "change", id => "tag", call => \&change);
- }
-
- sub getopt () {
- @@ -36,6 +37,36 @@
- safe => 1,
- rebuild => 1,
- },
- + tag_autocreate => {
- + type => "boolean",
- + example => 0,
- + description => "Auto-create the new tag pages, uses autotagpage.tmpl ",
- + safe => 1,
- + rebulid => 1,
- + },
- +}
- +
- +my $autocreated_page = 0;
- +
- +sub gen_tag_page($) {
- + my $tag=shift;
- +
- + my $tag_file=$tag.'.'.$config{default_pageext};
- + return if (-f $config{srcdir}.$tag_file);
- +
- + my $template=template("autotagpage.tmpl");
- + $template->param(tag => $tag);
- + writefile($tag_file, $config{srcdir}, $template->output);
- + $autocreated_page = 1;
- +
- + if ($config{rcs}) {
- + IkiWiki::disable_commit_hook();
- + IkiWiki::rcs_add($tag_file);
- + IkiWiki::rcs_commit_staged(
- + gettext("Automatic tag page generation"),
- + undef, undef);
- + IkiWiki::enable_commit_hook();
- + }
- }
-
- sub tagpage ($) {
- @@ -47,6 +78,10 @@
- $tag=~y#/#/#s; # squash dups
- }
-
- + if (defined $config{tag_autocreate} && $config{tag_autocreate} ) {
- + gen_tag_page($tag);
- + }
- +
- return $tag;
- }
-
- @@ -125,4 +160,18 @@
- }
- }
-
- +sub change(@) {
- + return unless($autocreated_page);
- + $autocreated_page = 0;
- +
- + # This refresh/saveindex is to complie the autocreated tag pages
- + IkiWiki::refresh();
- + IkiWiki::saveindex();
- +
- + # This refresh/saveindex is to fix the Tags link
- + # With out this additional refresh/saveindex the tag link displays ?tag
- + IkiWiki::refresh();
- + IkiWiki::saveindex();
- +}
- +
+*see git history of this page if you want the patch --[[smcv]]*
-
-This uses a [[template|wikitemplates]] called `autotagpage.tmpl`, here is my template file:
+This uses a [[template|templates]] called `autotagpage.tmpl`, here is my template file:
\[[!inline pages="link(<TMPL_VAR TAG>)" archive="yes"]]
@@ -123,3 +44,208 @@ On the second extra pass, it doesn't notice that it has to update the "?"-link.
}
is not satisfied for the newly created tag page. I shall put debug msgs into Render.pm to find out better how it works. --Ivan Z.
+
+---
+
+I've made another attempt at fixing this
+
+The current progress can be found at my [git repository][gitweb] on branch
+`autotag`:
+
+ git://git.liegesta.at/git/ikiwiki
+
+[gitweb]: http://git.liegesta.at/?p=ikiwiki.git;a=shortlog;h=refs/heads/autotag (gitweb for branch autotag)
+
+It's not entirely finished yet, but already quite usable. Testing and comments
+on code quality, implementation details, as well as other patches would be
+appreciated.
+
+Here's what it does right now:
+
+* Enabled by setting `tag_autocreate=1` in the configuration.
+* Tag pages will be created in `tagbase` from the template `autotag.tmpl`.
+* Will correctly render all links, and dependencies. Well, AFAIK.
+* When a tag page is deleted it will automatically be recreated from the
+template. (I consider this a feature, not a bug.)
+* Requires a rebuild on first use.
+* Adds a function `add_autofile()` to the plugin API, to do all this.
+
+Todo/Bugs:
+
+* Will still create a page even if there's a page other than `$tag` under
+`tagbase` satisfying the tag link. (details? --[[Joey]])
+* Call from `IkiWiki.pm` to `Render.pm`, which adds a module dependency in the
+wrong direction. (fixed --[[Joey]] )
+* Add files to RCS.
+* Unit tests.
+* Proper documentation. (fixed (mostly) --[[Joey]])
+
+--[[David_Riebenbauer]]
+
+> Starting review of this. Some of your commits are to very delicate,
+> optimised, and security-sensitive ground, so I have to look at them very
+> carefully. --[[Joey]]
+
+>> First off, sorry that it took me so damn long to answer. I didn't lose
+>> interest, but it took a while for me to find the time and motivation
+>> to address your suggestions. --[[David_Riebenbauer]]
+
+> * In the refactoring in [f3abeac919c4736429bd3362af6edf51ede8e7fe][],
+> you introduced at least 2 bugs, one a possible security hole.
+> Now one part of the code tests `if ($file)` and the other
+> caller tests `if ($f)`. These two tests both tested `if (! defined $f)`
+> before. Notice that the variable needs to be the untainted variable
+> for both. Also notice that `if ($f)` fails if `$f` contains `0`,
+> which is a very common perl gotcha.
+> * Your refactored code changes `-l $_ || -d _` to `-l $file || -d $file`.
+> The latter makes one more stat system call; note the use of a
+> bare `_` in the first to make perl reuse the stat buffer.
+> * (As a matter of style, could you put a space after the commas in your
+> perl?)
+
+>> The first two points should be addressed in
+>> [da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0][]. And sure, I can add the
+>> spaces. --[[David_Riebenbauer]]
+
+> I'd like to cherry-pick the above commit, once it's in shape, before
+> looking at the rest in detail. So just a few other things that stood out.
+>
+> * Commit [4af4d26582f0c2b915d7102fb4a604b176385748][] seems unnecessary.
+> `srcfile($file, 1)` already is documented to return undef if the
+> file does not exist. (But without the second parameter, it throws
+> an error.)
+
+>> You're right. I must have been confused by some other problem I
+>> introduced back then. Reverted. --[[David_Riebenbauer]]
+
+> * Commit [f58f3e1bec41ccf9316f37b014ce0b373c8e49e1][] adds a line
+> that is intented by a space, not a tab.
+
+>> Sorry, that one was reverted anyway. --[[David_Riebenbauer]]
+
+> * Commit [f58f3e1bec41ccf9316f37b014ce0b373c8e49e1][] says that auto-added
+> files will be recreated if the user deletes them. That seems bad.
+> `autoindex` goes to some trouble to not recreate deleted files.
+
+>> I reverted the commit and addressed the issue in
+>> [a358d74bef51dae31332ff27e897fe04834571e6][] and
+>> [981400177d68a279f485727be3f013e68f0bf691][].
+>> --[[David_Riebenbauer]]
+
+>>> This doesn't seem to have all the refinements that autoindex has:
+>>>
+>>> * `autoindex` attaches the record of deletions to the `index` page, which
+>>> is (nearly) guaranteed to exist; this one attaches the record of
+>>> deletions to the deleted page's page state. Won't that tend to result
+>>> in losing the record along with the deleted page?
+
+>>>> This is probably one of the harder things to do, 'cause there are (most of the
+>>>> time) several pages that are responsible for the creation of a single tag page.
+>>>> Of course I could attach the info to all of them.
+
+>>>> With current behaviour I think the information in `%pagestate` is kept around
+>>>> regardless whether the corresponding page exists or not.
+>>>> --[[David_Riebenbauer]]
+
+>>>>> Sorry, I'll try to be clearer: `autoindex` hard-codes that the [[index]] page
+>>>>> of the entire wiki is the one responsible for storing the page state. That
+>>>>> page isn't responsible for the creation of the tag page, it's just an
+>>>>> arbitrary page that's (more or less) guaranteed to exist. --[[smcv]]
+
+>>>>> I don't like that [[plugins/autoindex]] has to do that,
+>>>>> but `%pagestate` values are only stored for pages that exist,
+>>>>> so it was necessary. (Another way to look at this is that
+>>>>> `%pagestate` is not the ideal data structure.) --[[Joey]]
+
+>>>>>> Aha! Having looked at [[plugins/write]] again, it turns out that what this
+>>>>>> feature should really use is `%wikistate`, I think? :-) --[[smcv]]
+
+>>>>>>> Ah, indeed, that came after I wrote autoindex. I've fixed autoindex to
+>>>>>>> use it. --[[Joey]]
+
+>>>>> Ok, now I know what you mean. --[[David_Riebenbauer]]
+
+>>> * `autoindex` forgets that a page was deleted when that page is
+>>> re-created
+
+>>>> Yes, I forgot about that and that is a bug. I'll fix that.
+>>>> --[[David_Riebenbauer]]
+
+>>>>> In my branch, it keeps a list of autofiles that were created,
+>>>>> not deleted. And I think that turns out to be necessary, really.
+>>>>> However, I see no way to clean out that list on deletion and
+>>>>> manual recreation -- it still needs to remember it was once an autofile,
+>>>>> in order to avoid recreating it if it's deleted yet again. --[[Joey]]
+
+>>>>>> Are these really the semantics we want? It seems strange to me
+>>>>>> that this:
+>>>>>>
+>>>>>> * tag a page as foo
+>>>>>> * tags/foo automatically appears
+>>>>>> * delete tags/foo
+>>>>>> * create tags/foo manually
+>>>>>> * delete tags/foo again
+>>>>>> * tags/foo isn't automatically created
+>>>>>>
+>>>>>> isn't the same as this:
+>>>>>>
+>>>>>> * create tags/foo
+>>>>>> * delete tags/foo
+>>>>>> * tag a page as foo
+>>>>>> * tags/foo automatically appears
+>>>>>>
+>>>>>> or even this:
+>>>>>>
+>>>>>> * create tags/foo
+>>>>>> * tag a page as foo
+>>>>>> * delete tags/foo
+>>>>>> * tags/foo automatically appears (?)
+>>>>>>
+>>>>>> --[[smcv]]
+
+>>>>>>> I agree that the last of these is not desired. It could be avoided
+>>>>>>> by extending the list of autofiles to include those that were not
+>>>>>>> created due to the file/page already existing.
+>>>>>>>
+>>>>>>> Hmm, that would fix the previous scenario too. --[[Joey]]
+
+>>> * `autoindex` forgets that a page was deleted when it's no longer needed
+>>> anyway (this may be harder for `autotag`?)
+
+>>>> I don't think so. AFAIK ikiwiki can detect whether there are taglinks to a page
+>>>> anyway, so it should be quite easy. I'll try to implement that too.
+>>>> --[[David_Riebenbauer]]
+
+>>> It'd probably be an interesting test of the core change to port
+>>> `autoindex` to use it? (Adding the file to the RCS would be
+>>> necessary to get parity with `autoindex`.) --[[smcv]]
+
+>>>> Good suggestion. Adding the files to RCS is on my todo list anyway.
+>>>> --[[David_Riebenbauer]]
+
+>>>>> I think it may be better to allow the `add_autofile` caller
+>>>>> to specify if it is added to RCS. In my branch, it can do
+>>>>> so by just making the callback it registers call `rcs_add`;
+>>>>> and I have tag do this. Other plugins might want autofiles
+>>>>> that do not get checked in, conceivably.
+>>>>> --[[Joey]]
+
+> Regarding the call from `IkiWiki.pm` to `Render.pm`, wouldn't this be
+> quite easy to solve by moving `verify_src_file` to IkiWiki.pm? --[[smcv]]
+
+>> True. I'll do that. --[[David_Riebenbauer]]
+>> Fixed in my branch --[[Joey]]
+
+[[!template id=gitbranch branch=origin/autotag author="[[Joey]]"]]
+I've pushed an autotag branch of my own, which refactors
+things a bit and fixes bugs around deletion/recreation.
+I've tested it fairly thoroughly. --[[Joey]]
+
+[f3abeac919c4736429bd3362af6edf51ede8e7fe]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=f3abeac919c4736429bd3362af6edf51ede8e7fe (commitdiff for f3abeac919c4736429bd3362af6edf51ede8e7fe)
+[4af4d26582f0c2b915d7102fb4a604b176385748]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=4af4d26582f0c2b915d7102fb4a604b176385748 (commitdiff for 4af4d26582f0c2b915d7102fb4a604b176385748)
+[f58f3e1bec41ccf9316f37b014ce0b373c8e49e1]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=f58f3e1bec41ccf9316f37b014ce0b373c8e49e1 (commitdiff for f58f3e1bec41ccf9316f37b014ce0b373c8e49e1)
+[da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0 (commitdiff for da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0)
+[a358d74bef51dae31332ff27e897fe04834571e6]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=a358d74bef51dae31332ff27e897fe04834571e6 (commitdiff for a358d74bef51dae31332ff27e897fe04834571e6)
+[981400177d68a279f485727be3f013e68f0bf691]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=981400177d68a279f485727be3f013e68f0bf691 (commitdiff for 981400177d68a279f485727be3f013e68f0bf691)
+
+[[!tag done]]
diff --git a/doc/todo/auto_getctime_on_fresh_build.mdwn b/doc/todo/auto_getctime_on_fresh_build.mdwn
new file mode 100644
index 000000000..760c56fa1
--- /dev/null
+++ b/doc/todo/auto_getctime_on_fresh_build.mdwn
@@ -0,0 +1,13 @@
+[[!tag wishlist]]
+
+It might be a good idea to enable --gettime when `.ikiwiki` does not
+exist. This way a new checkout of a `srcdir` would automatically get
+ctimes right. (Running --gettime whenever a rebuild is done would be too
+slow.) --[[Joey]]
+
+Could this be too annoying in some cases, eg, checking out a large wiki
+that needs to get set up right away? --[[Joey]]
+
+> Not for git with the new, optimised --getctime. For other VCS.. well,
+> pity they're not as fast as git ;), but it is a one-time expense...
+> [[done]] --[[Joey]]
diff --git a/doc/todo/auto_publish_expire.mdwn b/doc/todo/auto_publish_expire.mdwn
new file mode 100644
index 000000000..7a5a17517
--- /dev/null
+++ b/doc/todo/auto_publish_expire.mdwn
@@ -0,0 +1,33 @@
+It could be nice to mark some page such that:
+
+* the page is automatically published on some date (i.e. built, linked, syndicated, inlined/mapped, etc.)
+* the page is automatically unpublished at some other date (i.e. removed)
+
+I know that ikiwiki is a wiki compiler, so something has to refresh the wiki periodically to enforce the rules (a cronjob, for instance). It seems to me that the calendar plugin relies on something similar.
+
+The date for publishing and expiring could be set by using some new directives; an alternative could be to expand the [[plugin/meta]] plugin with [<span/>[!meta date="auto publish date"]] and [<span/>[!meta expires="auto expire date"]].
+
+--[[JeanPrivat]]
+
+> This is a duplicate, and expansion, of
+> [[todo/tagging_with_a_publication_date]].
+> There, I suggest using a branch to develop
+> prepublication versions of a site, and merge from it
+> when the thing is published.
+>
+> Another approach I've seen used is to keep such pages in a pending/
+> directory, and move them via cron job when their publication time comes.
+> But that requires some familiarity with, and access to, cron.
+>
+> On [[todo/tagging_with_a_publication_date]], I also suggested using meta
+> date to set a page's date into the future,
+> and adding a pagespec that matches only pages with dates in the past,
+> which would allow filtering out the unpublished ones.
+> Sounds like you are thinking along these lines, but possibly using
+> something other than the page's creation or modification date to do it.
+>
+> I do think the general problem with that approach is that you have to be
+> careful to prevent the unpublished pages from leaking out in any
+> inlines, maps, etc. --[[Joey]]
+
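+For reference, setting a page's date into the future is already possible
+with meta; it is the past-dates-only pagespec that is missing:
+
+    \[[!meta date="2030-01-01"]]
+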
+[[!tag wishlist]]
diff --git a/doc/todo/auto_rebuild_on_template_change.mdwn b/doc/todo/auto_rebuild_on_template_change.mdwn
new file mode 100644
index 000000000..ea990b877
--- /dev/null
+++ b/doc/todo/auto_rebuild_on_template_change.mdwn
@@ -0,0 +1,78 @@
+If `page.tmpl` is changed, it would be nice if ikiwiki automatically
+noticed, and rebuilt all pages. If `inlinepage.tmpl` is changed, a rebuild
+of all pages using it in an inline would be stellar.
+
+This would allow setting:
+
+ templatedir => "$srcdir/templates",
+
+.. and then the [[templates]] are managed like other wiki files; and
+like other wiki files, a change to them automatically updates dependent
+pages.
+
+Originally, it made good sense not to have the templatedir inside the wiki.
+Those templates can be used to bypass the htmlscrubber, and you don't want
+just anyone to edit them. But the same can be said of `style.css` and
+`ikiwiki.js`, which *are* in the wiki. We rely on `allowed_attachments`
+being set to secure those to prevent users uploading replacements. And we
+assume that users who can directly (non-anon) commit *can* edit them, and
+that's ok.
+
+So, perhaps the easiest way to solve this [[wishlist]] would be to
+make templatedir *default* to "$srcdir/templates/", and make ikiwiki
+register dependencies on `page.tmpl`, `inlinepage.tmpl`, etc, as they're
+used. Although, having every page declare an explicit dep on `page.tmpl`
+is perhaps a bit much; might be better to implement a special case for that
+one. Also, having the templates be copied to `destdir` is not desirable.
+In a sense, these template would be like internal pages, except not wiki
+pages, but raw files.
+
+The risk is that a site might have `allowed_attachments` set to
+`templates/*` or `*.tmpl`, or something like that. I think such a configuration
+is the *only* risk, and it's unlikely enough that a NEWS warning should
+suffice.
+
+(This would also help to clear up the tricky distinction between
+wikitemplates and in-wiki templates.)
+
+Note also that when using templates from "$srcdir/templates/", `no_includes`
+needs to be set. Currently this is done by the two plugins that use
+such templates, while includes are allowed in `templatedir`.
+
+Have started working on this.
+[[!template id=gitbranch branch=origin/templatemove author="[[Joey]]"]]
+
+> But would this require that templates be parseable as wiki pages? Because that would be a nuisance. --[[KathrynAndersen]]
+
+>> It would be better for them not to be rendered separately at all.
+>> --[[Joey]]
+
+>>> I don't follow you. --[[KathrynAndersen]]
+
+>>>> If they don't render to output files, they clearly don't
+>>>> need to be treated as wiki pages. (They need to be treated
+>>>> as raw files anyway, because you don't want random users editing them
+>>>> in the online editor.) --[[Joey]]
+
+>>>>> Just to be clear, the raw files would not be copied across to the output
+>>>>> directory? -- [[Jon]]
+
+>>>>>> Without modifying ikiwiki, they'd be copied to the output directory as
+>>>>>> (e.g.) http://ikiwiki.info/templates/inlinepage.tmpl; to not copy them,
+>>>>>> it'd either be necessary to make them be internal pages
+>>>>>> (templates/inlinepage._tmpl) or special-case them in some other way.
+>>>>>> --[[smcv]]
+
+>>>>>>> In my branch, I left in support for the templatedir, and also
+>>>>>>> /usr/share/ikiwiki/templates. So, users do not have to put their
+>>>>>>> custom templates in templates/ in the wiki. If they do,
+>>>>>>> the templates are copied to the destdir like other non-wiki page files
+>>>>>>> are. The templates are not wiki pages, except those used by a few
+>>>>>>> things like the [[plugins/template]] plugin.
+>>>>>>>
+>>>>>>> That seems acceptable, since users probably don't need to modify
+>>>>>>> many templates, so the clutter is small. (Especially when
+>>>>>>> compared to the other clutter the basewiki always puts in destdir.)
+>>>>>>> This could be revisted later. --[[Joey]]
+
+[[done]]
diff --git a/doc/todo/avatar.mdwn b/doc/todo/avatar.mdwn
index b8aa2327f..91f924fa1 100644
--- a/doc/todo/avatar.mdwn
+++ b/doc/todo/avatar.mdwn
@@ -1,38 +1,19 @@
[[!tag wishlist]]
It would be nice if ikiwiki, particularly [[plugins/comments]]
-supported user avatar icons. I was considering adding a directive for this,
-as designed below.
+(but also, ideally, recentchanges) supported user avatar icons.
-However, there is no *good* service for mapping openids to avatars --
-openavatar has many issues, including not supporting delegated openids, and
-after trying it, I don't trust it to push users toward.
-Perhaps instead ikiwiki could get the email address from the openid
-provider, though I think the perl openid modules don't support the openid
-2.x feature that allows that.
-
-At the moment, working on this doesn't feel like a good use of my time.
---[[Joey]]
-
-Hmm.. unless is just always used a single provider (gravatar) and hashed
-the openid. Then wavatars could be used to get a unique avatar per openid
-at least. --[[Joey]]
-
-----
-
-The directive displays a small avatar image for a user. Pass it the
-email address, openid, or wiki username of the user.
+Idea is to add a directive that displays a small avatar image for a user.
+Pass it a user's email address, openid, username, or the md5 hash
+of their email address:
\[[!avatar user@example.com]]
\[[!avatar http://joey.kitenet.net/]]
\[[!avatar user]]
+ \[[!avatar hash]]
-The avatars are provided by various sites. For email addresses, it uses a
-[gravatar](http://gravatar.com/). For openid,
-[openavatar](http://www.openvatar.com/) is used. For a wiki username, the
-user's email address is looked up and the gravatar for that user is
-displayed. (Of course, the user has to have filled in their email address
-on their Preferences page for that to work.)
+These directives can then be hand-inserted onto pages, or more likely,
+included in eg, a comment post via a template.
An optional second parameter can be included, containing additional
options to pass in the
@@ -45,3 +26,40 @@ not have a gravatar, uses a cute auto-generated "wavatar" avatar.
The `gravitar_options` setting in the setup file can be used to
specify additional options to pass. So for example if you want
to use wavatars everywhere, set it to "default=wavatar".
+
+The avatars are provided by various sites. For email addresses, it uses a
+[gravatar](http://gravatar.com/). For a wiki username, the
+user's email address is looked up and the gravatar for that user is
+displayed. (Of course, the user has to have filled in their email address
+on their Preferences page for that to work. Also, when the user changes
+their email address in Preferences, the gravatar won't change until the
+wiki is rebuilt.)
+
+For openid, openavatar sucked and is now dead. So we need to use an email
+address instead, I guess. Problem is that the email address of a given
+openid is only known when that user is logged in and making a change.
+And we don't want to leak an openid user's email into a page either.
+Hmm. Suppose the gravatar hash could be calculated from the email address
+and embedded instead of the openid? That would work for comments,
+but not if the directive were used elsewhere.
+
+Or, for openid, could use <http://paulisageek.com/openidavatar>. Which
+works fine, but users are not likely to figure out what they need to do to
+get an avatar associated with their openid.
+
+---
+
+Alternative, not overdesigned approach:
+
+Modify comments plugin to have an option to display avatars.
+
+When posting a comment, fill in the avatarhash field in the template.
+The hash is calculated from the user's email address. If the user's email
+is not known, skip it.
+
+End. :P
+
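+A sketch of the hash step (the gravatar recipe of md5-hashing the
+trimmed, lowercased address is standard; `$email` and the comment
+`$template` are assumed to be in scope):
+
+    use Digest::MD5 qw(md5_hex);
+
+    # gravatar identifies avatars by the md5 of the normalized address
+    my $addr=lc $email;
+    $addr =~ s/^\s+|\s+$//g;
+    $template->param(avatarhash => md5_hex($addr));
+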
+---
+
+[libravatar](https://launchpad.net/libravatar) is a federated avatar
+system. Young but might be the right way to get avatars eventually.
diff --git a/doc/todo/avoid_attachement_ui_if_upload_not_allowed.mdwn b/doc/todo/avoid_attachement_ui_if_upload_not_allowed.mdwn
new file mode 100644
index 000000000..487915850
--- /dev/null
+++ b/doc/todo/avoid_attachement_ui_if_upload_not_allowed.mdwn
@@ -0,0 +1,25 @@
+Any way to make it so an edit page doesn't offer the attachment capability
+unless it matches a specific user, is an admin, and/or is an allowed page?
+(For now, I have it on all pages, and then it prohibits after I submit
+based on the allowed_attachments.)
+
+> To do that, ikiwiki would have to try to match the `allowed_attachments`
+> pagespec against a sort of dummy upload to the current page. Then if it
+> failed, assume all real uploads would fail. Now consider a pagespec like
+> "user(joey) and mimetype(audio/mpeg)" -- it'd be hard to make a dummy
+> upload to test this pagespec against.
+>
+> So, there would need to be some sort of test mode, where terms like
+> `mimetype()` always succeed. But then consider a pagespec like
+> "user(joey) and !mimetype(video/mpeg)" -- if mimetype succeeds, this
+> fails.
+>
+> So, maybe we can instead just filter out all the pagespec terms aside
+> from `user()`, `ip()`, and `admin()`, transforming the example above into
+> just "user(joey)", which would succeed in the test.
+>
+> That'd work, I guess. Pulling a pagespec apart, filtering out terms, and
+> putting it back together is nontrivial, but doable.
+>
+> Another approach would be to have a separate pagespec that explicitly
+> controls what pages to show the attachment UI on. --[[Joey]]
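+
+That last idea might look like this in a setup file (the option name is
+invented for illustration):
+
+    attachment_ui_pagespec => "user(joey) or admin()",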
diff --git a/doc/todo/backlinks_result_is_lossy.mdwn b/doc/todo/backlinks_result_is_lossy.mdwn
index 11b5fbcae..2a9fc4a0a 100644
--- a/doc/todo/backlinks_result_is_lossy.mdwn
+++ b/doc/todo/backlinks_result_is_lossy.mdwn
@@ -1,5 +1,4 @@
[[!tag patch patch/core]]
-[[!template id=gitbranch branch=smcv/ready/among author="[[smcv]]"]]
IkiWiki::backlinks returns a form of $backlinks{$page} that has undergone a
lossy transformation (to get it in the form that page templates want), making
diff --git a/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn b/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn
index 8a4b852b7..fdaa09f26 100644
--- a/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn
+++ b/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn
@@ -10,4 +10,116 @@ those contents instead.
--[[madduck]]
+
+> In mine I just copied sidebar out and made some extra "sidebars", but they go elsewhere. Ugly hack, but it works. --[[simonraven]]
+
+>> Here a simple [[patch]] for multiple sidebars. Not too fancy but better than having multiple copies of the sidebar plugin. --[[jeanprivat]]
+
+>>> I made a [[git]] branch for it [[!template id=gitbranch branch="privat/multiple_sidebars" author="[[jeanprivat]]"]] --[[jeanprivat]]
+
+>>>> Ping for [[Joey]]. Do you have any comment? I could improve it if there is things you do not like. I prefer to have such a feature integrated upstream. --[[JeanPrivat]]
+
+>>>>> The code is fine.
+>>>>>
+>>>>> I did think about having it examine
+>>>>> the `page.tmpl` for parameters with names like `FOO_SIDEBAR`
+>>>>> and automatically enable page `foo` as a sidebar in that case,
+>>>>> instead of using the setup file to enable. But I'm not sure about
+>>>>> that idea..
+>>>>>
+>>>>> The full complement of sidebars would be a header, a footer,
+>>>>> a left, and a right sidebar. It would make sense to go ahead
+>>>>> and add the parameters to `page.tmpl` so enabling each just works,
+>>>>> and add whatever basic CSS makes sense. Although I don't know
+>>>>> if I want to try to get a 3 column CSS going, so perhaps leave the
+>>>>> left sidebar out of that.
+
+-------------------
+
+<pre>
+--- /usr/share/perl5/IkiWiki/Plugin/sidebar.pm 2010-02-11 22:53:17.000000000 -0500
++++ plugins/IkiWiki/Plugin/sidebar.pm 2010-02-27 09:54:12.524412391 -0500
+@@ -19,12 +19,20 @@
+ safe => 1,
+ rebuild => 1,
+ },
++ active_sidebars => {
++ type => "string",
++ example => [qw(sidebar banner footer)],
++ description => "Which sidebars must be activated and processed.",
++ safe => 1,
++ rebuild => 1
++ },
+ }
+
+-sub sidebar_content ($) {
++sub sidebar_content ($$) {
+ my $page=shift;
++ my $sidebar=shift;
+
+- my $sidebar_page=bestlink($page, "sidebar") || return;
++ my $sidebar_page=bestlink($page, $sidebar) || return;
+ my $sidebar_file=$pagesources{$sidebar_page} || return;
+ my $sidebar_type=pagetype($sidebar_file);
+
+@@ -49,11 +57,17 @@
+
+ my $page=$params{page};
+ my $template=$params{template};
+-
+- if ($template->query(name => "sidebar")) {
+- my $content=sidebar_content($page);
+- if (defined $content && length $content) {
+- $template->param(sidebar => $content);
++
++ my @sidebars;
++ if (defined $config{active_sidebars} && length $config{active_sidebars}) { @sidebars = @{$config{active_sidebars}}; }
++ else { @sidebars = qw(sidebar); }
++
++ foreach my $sidebar (@sidebars) {
++ if ($template->query(name => $sidebar)) {
++ my $content=sidebar_content($page, $sidebar);
++ if (defined $content && length $content) {
++ $template->param($sidebar => $content);
++ }
+ }
+ }
+ }
+</pre>
+
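+With this patch, the setup file chooses which sidebars get processed. Since
+the code reads `active_sidebars` as a list reference, the setting would look
+something like this (illustrative):
+
+    active_sidebars => [qw(sidebar banner footer)],
+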
+----------------------------------------
+## Further thoughts about this
+
+(since the indentation level was getting rather high.)
+
+What about using pagespecs in the config to map pages and sidebar pages together? Something like this:
+
+<pre>
+ sidebar_pagespec => {
+ "foo/*" => 'sidebars/foo_sidebar',
+ "bar/* and !bar/*/*' => 'bar/bar_top_sidebar',
+ "* and !foo/* and !bar/*" => 'sidebars/general_sidebar',
+ },
+</pre>
+
+One could do something similar for *pageheader*, *pagefooter* and *rightbar* if desired.
+
+Another thing which I find compelling - but probably because I am using [[plugins/contrib/field]] - is to be able to treat the included page as if it were *part* of the page it was included into, rather than as an included page. I mean things like \[[!if ...]] would test against the page name of the page it's included into rather than the name of the sidebar/header/footer page. It's even more powerful if one combines this with field/getfield/ftemplate/report, since one could make "generic" headers and footers that could apply to a whole set of pages.
+
+Header example:
+<pre>
+#{{$title}}
+\[[!ftemplate id="nice_data_table"]]
+</pre>
+
+Footer example:
+<pre>
+------------
+\[[!report template="footer_trail" trail="trailpage" here_only=1]]
+</pre>
+
+(Yes, I am already doing something like this on my own site. It's like the PmWiki concept of GroupHeader/GroupFooter)
+
+-- [[KathrynAndersen]]
+
[[!tag wishlist]]
diff --git a/doc/todo/blocking_ip_ranges.mdwn b/doc/todo/blocking_ip_ranges.mdwn
index 95026eef1..ac2344ece 100644
--- a/doc/todo/blocking_ip_ranges.mdwn
+++ b/doc/todo/blocking_ip_ranges.mdwn
@@ -2,3 +2,6 @@ Admins need the ability to block IP ranges. They can already ban users.
See [[fileupload]] for a proposal that grew to encompass the potential to do
this.
+
+[[done]] (well, there is no pagespec for IP ranges yet, but we can block
+individual IPs)
diff --git a/doc/todo/cache_backlinks.mdwn b/doc/todo/cache_backlinks.mdwn
new file mode 100644
index 000000000..dc13d464e
--- /dev/null
+++ b/doc/todo/cache_backlinks.mdwn
@@ -0,0 +1,25 @@
+I'm thinking about caching the backlinks between runs. --[[Joey]]
+
+* It would save some time (spent resolving every single link
+ on every page, every run). The cached backlinks could be
+ updated by only updating backlinks from changed pages.
+ (Saved time is less than 1/10th of a second for docwiki.)
+
+* It may allow attacking [[bugs/bestlink_change_update_issue]],
+ since that seems to need a copy of the old backlinks.
+ Actually, just the next change will probably solve that:
+
+* It should allow removing the `%oldlink_targets`, `%backlinkchanged`,
+ and `%linkchangers` calculation code. Instead, just generate
+ a record of which pages' backlinks have changed when updating
+ the backlinks, and then rebuild those pages.
+
+Proposal:
+
+* Store a page's backlinks in the index, same as everything else.
+
+* Do *something* to generate or store the `%brokenlinks` data.
+ This is currently generated when calculating backlinks, and
+ is only used by the brokenlinks plugin. It's not the right
+ "shape" to be stored in the index, but could be changed around
+ to fit.
diff --git a/doc/todo/cas_authentication.mdwn b/doc/todo/cas_authentication.mdwn
index 8bf7042df..ed8010518 100644
--- a/doc/todo/cas_authentication.mdwn
+++ b/doc/todo/cas_authentication.mdwn
@@ -21,6 +21,19 @@ follows) ?
> license statement at the top. I have a few questions that I'll insert
> inline with the patch below. --[[Joey]]
+>> I have made some corrections to this patch (my cas plugin) in order to use
+>> IkiWiki 3.00 interface and take your comments into account. It should work
+>> fine now.
+>>
+>> You can pull it from my git repo at
+>> http://git.boulgour.com/bbb/ikiwiki.git/ and maybe add it to your main
+>> repo.
+>>
+>> I will add GNU GPL copyright license statement as soon as I get some free
+>> time.
+>>
+>> --[[/users/bbb]]
+
------------------------------------------------------------------------------
diff --git a/IkiWiki/Plugin/cas.pm b/IkiWiki/Plugin/cas.pm
new file mode 100644
diff --git a/doc/todo/cdate_and_mdate_available_for_templates.mdwn b/doc/todo/cdate_and_mdate_available_for_templates.mdwn
new file mode 100644
index 000000000..70d8fc8c9
--- /dev/null
+++ b/doc/todo/cdate_and_mdate_available_for_templates.mdwn
@@ -0,0 +1,15 @@
+[[!tag wishlist]]
+
+`CDATE_3339`, `CDATE_822`, `MDATE_3339` and `MDATE_822` template variables would be useful for every page, at least for my templates with Dublin Core metadata.
+
+I tried to pick the relevant lines out of the [[inline|plugins/inline]] plugin and hack them into a custom plugin, but it failed miserably because of my obvious lack of perl literacy...
+
+Anyway, I'm sure this is almost nothing...
+
+* `sub date_822 ($) {}`
+* `sub date_3339 ($) {}`
+* and something like `$template->param('cdate_822' => date_822($IkiWiki::pagectime{$page}));`
+
+Can anyone fill in the missing lines?
+
+-- [[nil]]
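+
+A rough, untested sketch of such a plugin (`pagedates` is a made-up name;
+the formats follow RFC 822 and RFC 3339, and a real version would also want
+to force the C locale for the RFC 822 dates, the way inline does):
+
+<pre>
+package IkiWiki::Plugin::pagedates;
+
+use warnings;
+use strict;
+use IkiWiki 3.00;
+use POSIX ();
+
+sub import {
+	hook(type => "pagetemplate", id => "pagedates", call => \&pagetemplate);
+}
+
+sub date_822 ($) {
+	return POSIX::strftime("%a, %d %b %Y %H:%M:%S %z", localtime(shift));
+}
+
+sub date_3339 ($) {
+	return POSIX::strftime("%Y-%m-%dT%H:%M:%SZ", gmtime(shift));
+}
+
+sub pagetemplate (@) {
+	my %params=@_;
+	my $page=$params{page};
+	my $template=$params{template};
+
+	if (exists $IkiWiki::pagectime{$page}) {
+		$template->param(cdate_822 => date_822($IkiWiki::pagectime{$page}));
+		$template->param(cdate_3339 => date_3339($IkiWiki::pagectime{$page}));
+	}
+	if (exists $IkiWiki::pagemtime{$page}) {
+		$template->param(mdate_822 => date_822($IkiWiki::pagemtime{$page}));
+		$template->param(mdate_3339 => date_3339($IkiWiki::pagemtime{$page}));
+	}
+}
+
+1
+</pre>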
diff --git a/doc/todo/comment_moderation_feed.mdwn b/doc/todo/comment_moderation_feed.mdwn
new file mode 100644
index 000000000..267706b1b
--- /dev/null
+++ b/doc/todo/comment_moderation_feed.mdwn
@@ -0,0 +1,9 @@
+There should be a way to generate a feed that is updated whenever a new
+comment needs moderation. Otherwise, it can be hard to remember to check
+sites, which may rarely get comments.
+
+The feed should not include the comment subject or body, but could mention
+the author. It would be especially handy if it were generated statically.
+One way would be to generate internal pages corresponding to each comment
+that needs moderation; then the feed could be constructed via a usual
+inline.
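+
+For instance, if each pending comment were stored as an internal page with
+(hypothetically) a `_comment_pending` suffix, the moderation feed could be
+an ordinary inline:
+
+    \[[!inline pages="internal(*/_comment_pending)" feedonly=yes]]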
diff --git a/doc/todo/conflict_free_comment_merges.mdwn b/doc/todo/conflict_free_comment_merges.mdwn
new file mode 100644
index 000000000..e84400c17
--- /dev/null
+++ b/doc/todo/conflict_free_comment_merges.mdwn
@@ -0,0 +1,23 @@
+Currently, new comments are named with an incrementing ID (comment_N). So
+if a wiki has multiple disconnected servers, and comments are made to the
+same page on both, merging is guaranteed to result in conflicts.
+
+I propose avoiding such merge problems by naming a comment with a sha1sum
+of its (full) content. Keep the incrementing ID too, so there is an
+ordering (and so duplicate comments are allowed).
+So, "comment_N_SHA1".
+
+Note: The comment body will need to use meta title in the case where no
+title is specified, to retain the current behavior of the default title
+being "comment N".
+
+What do you think [[smcv]]? --[[Joey]]
+
+> I had to use md5sums, as the sha1sum perl module may not be available and I
+> didn't want to drag it in. But I think that's ok; this doesn't need to be
+> cryptographically secure and even the chances of being able to
+> purposefully cause an md5 collision and thus an undesired merge conflict
+> are quite low since it modifies the input text and adds a date stamp to
+> it.
+>
+> Anyway, I think it's good, [[done]] --[[Joey]]
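+
+A minimal illustration of the naming scheme as settled on above (variable
+names are illustrative):
+
+    use Digest::MD5 qw(md5_hex);
+
+    # comment_N_MD5: N keeps an ordering; the digest of the
+    # (datestamped) content keeps disconnected servers from
+    # colliding on the same filename.
+    my $location = "comment_" . $i . "_" . md5_hex($content);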
diff --git a/doc/todo/dependency_types.mdwn b/doc/todo/dependency_types.mdwn
new file mode 100644
index 000000000..4db633ead
--- /dev/null
+++ b/doc/todo/dependency_types.mdwn
@@ -0,0 +1,579 @@
+Ikiwiki currently only has one type of dependency between pages
+(plus wikilinks special cased in on the side). This has resulted in various
+problems, and it's seemed for a long time to me that ikiwiki needs to get
+smarter about what types of dependencies are supported.
+
+### unnecessary work
+
+The current single dependency type causes the depending page to be rebuilt
+whenever a matching dependency is added, removed, or *modified*. But a
+great many things don't care about the modification case, and often cause
+unnecessary page rebuilds:
+
+* map only cares if the pages are added or removed. Content change does
+ not matter (unless show=title is used).
+* brokenlinks, orphans, pagecount, ditto (generally)
+* inline in archive mode cares about the page title and author changing, but
+ not content. (Ditto for meta with show=title.)
+* Causes extra work when solving the [[bugs/transitive_dependencies]]
+ problem.
+
+### two types of dependencies needed for [[tracking_bugs_with_dependencies]]
+
+>> it seems that there are two types of dependency, and ikiwiki
+>> currently only handles one of them. The first type is "Rebuild this
+>> page when any of these other pages changes" - ikiwiki handles this.
+>> The second type is "rebuild this page when set of pages referred to by
+>> this pagespec changes" - ikiwiki doesn't seem to handle this. I
+>> suspect that named pagespecs would make that second type of dependency
+>> more important. I'll try to come up with a good example. -- [[Will]]
+
+>>> Hrm, I was going to build an example of this with backlinks, but it
+>>> looks like that is handled as a special case at the moment (line 458 of
+>>> render.pm). I'll see if I can break
+>>> things another way. Fixing this properly would allow removal of that special case. -- [[Will]]
+
+>>>> I can't quite understand the distinction you're trying to draw
+>>>> between the two types of dependencies. Backlinks are a very special
+>>>> case though and I'll be surprised if they fit well into pagespecs.
+>>>> --[[Joey]]
+
+>>>>> The issue is that the existential pagespec matching allows you to build things that have similar
+>>>>> problems to backlinks.
+>>>>> e.g. the following inline:
+
+ \[[!inline pages="define(~done, link(done)) and link(~done)" archive=yes]]
+
+>>>>> includes any page that links to a page that links to done. Now imagine I add a new link to 'done' on
+>>>>> some random page somewhere - a page which some other page links to which didn't previously get included - the set of pages accepted by the pagespec, and hence the set of
+>>>>> pages inlined, will change. But, there is no dependency anywhere on the page that I altered, so
+>>>>> ikiwiki will not rebuild the page with the inline in it. What is happening is that the page that I altered affects
+>>>>> the set of pages matched by the pagespec without itself being matched by the pagespec, and hence included in the dependency list.
+
+>>>>> To make this work well, I think you need to recognise two types of dependencies for each page (and no
+>>>>> special cases for particular types of links, eg backlinks). The first type of dependency says, "The content of
+>>>>> this page depends upon the content of these other pages". The `add_depends()` in the shortcuts
+>>>>> plugin is of this form: any time the shortcuts page is edited, any page with a shortcut on it
+>>>>> is rebuilt. The inline plugin also needs to add dependencies of this form to detect when the inlined
+>>>>> content changes. By contrast, the map plugin does not need a dependency of this form, because it
+>>>>> doesn't actually care about the content of any pages, just which pages it needs to include (which we'll handle next).
+
+>>>>> The second type of dependency says, "The content of this page depends upon the exact set of pages matched
+>>>>> by this pagespec". The first type of dependency was about the content of some pages, the second type is about
+>>>>> which pages get matched by a pagespec. This is the type of dependency tracking that the map plugin needs.
+>>>>> If the set of pages matched by map pagespec changes, then the page with the map on it needs to be rebuilt to show a different list of pages.
+>>>>> Inline needs this type of dependency as well as the previous type - This type handles a change in which pages
+>>>>> are inlined, the previous type handles a change in the content of any of those pages. Shortcut does not need this type of
+>>>>> dependency. Most of the places that use `add_depends()` seem to need this type of dependency rather than the first type.
+
+>>>>>> Note that inline and map currently achieve the second type of dependency by
+>>>>>> explicitly calling `add_depends` for each page they displayed.
+>>>>>> If any of those pages are removed, the regular pagespec would not
+>>>>>> match them -- since they're gone. However, the explicit dependency
+>>>>>> on them does cause them to match. It's an ugly corner I'd like to
+>>>>>> get rid of. --[[Joey]]
+
+>>>>> Implementation Details: The first type of dependency can be handled very similarly to the current
+>>>>> dependency system. You just need to keep a list of pages that the content depends upon. You could
+>>>>> keep that list as a pagespec, but if you do this you might want to check that the pagespec doesn't change,
+>>>>> possibly by adding a dependency of the second type along with the dependency of the first type.
+
+>>>>>> An example of the current system not tracking enough data is
+>>>>>> described in [[bugs/transitive_dependencies]].
+>>>>>> --[[Joey]]
+
+>>>>> The second type of dependency is a little more tricky. For each page, we'd need a list of pagespecs that
+>>>>> the page depended on, and for each pagespec you'd want to store the list of pages that currently match it.
+>>>>> On refresh, you'd need to check each pagespec to see if the set of pages that match it has changed, and if
+>>>>> that set has changed, then rebuild the dependent page(s). Oh, and for this second type of dependency, I
+>>>>> don't think you can merge pagespecs. If I wanted to know if either "\*" or "link(done)" changes, then just checking
+>>>>> to see if the set of pages matched by "\* or link(done)" changes doesn't work.
+
+>>>>> The current system works because even though you usually want dependencies of the second type, the set of pages
+>>>>> referred to by a pagespec can only change if one of those pages itself changes. i.e. A dependency check of the
+>>>>> first type will catch a dependency change of the second type with current pagespecs.
+>>>>> This doesn't work with backlinks, and it doesn't work with existential matching. Backlinks are currently special-cased. I don't know
+>>>>> how to special-case existential matching - I suspect you're better off just getting the dependency tracking right.
+
+>>>>> I also tried to come up with other possible solutions: e.g. can we find the dependencies for a pagespec? That
+>>>>> would be the set of pages where a change on one of those pages could lead to a change in the set of pages matched by the pagespec.
+>>>>> For old-style pagespecs without backlinks, the dependency set for a pagespec is the same as the set of pages the pagespec matches.
+>>>>> Unfortunately, with existential matching, the set of pages that each
+>>>>> pagespec depends upon can quickly become "*", which is not very useful. -- [[Will]]
+
+### proposal
+
+I propose the following. --[[Joey]]
+
+* Add a second type of dependency, call it a "presence dependency".
+* `add_depends` defaults to adding a regular ("full") dependency, as
+ before. (So nothing breaks.)
+* `add_depends($page, $spec, presence => 1)` adds a presence dependency.
+* `refresh` only looks at added/removed pages when resolving presence
+ dependencies.
+
+This seems straightforwardly doable. I'd like [[Will]]'s feedback on it, if
+possible. The two types of dependencies I am proposing are not identical
+to the two types he talks about above, but I hope are close enough that
+they can be used.
+
+This doesn't deal with things that depend only on the metadata of a
+page, as collected in the scan pass, changing. But it does leave a window
+open for adding such a dependency type later.
+
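+For example, under this API map could register its main dependency as
+(illustrative):
+
+    add_depends($params{page}, $params{pages}, presence => 1);
+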
+----
+
+I implemented the above in a branch.
+[[!template id=gitbranch branch=origin/dependency-types author="[[joey]]"]]
+
+Then I found some problems:
+
+* Something simple like pagecount, that seems like it could use a
+ presence dependency, can have a pagespec that uses metadata, like
+ `author()` or `copyright()`.
+* pagestats, orphans and brokenlinks cannot use presence dependencies
+ because they need to update when links change.
+
+Now I'm thinking about having a special dependency look at page
+metadata, and fire if the metadata changes. And it seems links should
+either be included in that, or there should be a way to make a dependency
+that fires when a page's links change. (And what about backlinks?)
+
+It's easy to see when a page's links change, since there is `%oldlinks`.
+To see when metadata is changed is harder, since it's stored in the
+pagestate by the meta plugin. Also, there are many different types of
+metadata, that would need to be matched with the pagespecs somehow.
+
+Quick alternative: Make add_depends look at the pagespec. Ie, if it
+is a simple page name, or a glob, we know a presence dependency
+can be valid. If's more complex, convert the dependency from
+presence to full.
+
+There is a lot to dislike about this method. Its parsing of the pagespec,
+as currently implemented, does not let plugins add new types of pagespecs
+that only care about presence. Its pagespec parsing is also subject to
+false negatives (though these should be somewhat rare, and no false
+positives). Still, it does work, and it makes things like simple maps and
+pagecounts much more efficient.
+
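+A rough, untested sketch of that heuristic (not how ikiwiki ended up
+implementing it): a pagespec qualifies for a presence dependency only if
+every term is a plain page name or glob.
+
+<pre>
+sub presence_ok ($) {
+	my $spec=shift;
+	foreach my $word (split(/\s+/, $spec)) {
+		$word =~ s/^[!(]+//;	# strip negation and grouping
+		$word =~ s/\)+$//;
+		next if $word =~ /^(?:and|or)$/i || ! length $word;
+		return 0 if $word =~ /[()]/;	# eg link(done), author(joey)
+	}
+	return 1;
+}
+</pre>
+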
+----
+
+#### Will's first pass feedback.
+
+If the API is going to be updated, then it would be good to make it forward compatible.
+I'd like the API to be extensible to what is useful for complex pagespecs,
+even if that is a little redundant at the moment.
+
+My attempt to play with this is in my git repo. [[!template id=gitbranch branch=origin/depends-spec author="[[will]]"]]
+That branch is a little out of date, but if you just look at the changes in IkiWiki.pm you'll see the concept I was looking at.
+I added an "add_depends_spec()" function that adds a dependency on the pagespec passed to it. If the set of matched pages
+changes, then the dependent page is rebuilt. At the moment the implementation uses the same hack used by map and inline -
+just add all the pages that currently exist as traditional content dependencies.
+
+> As I note below, a problem with this approach is that it has to try
+> matching the pagespec against every page, redundantly with the work done
+> by the plugin. (But there are ways to avoid that redundant matching.)
+> --[[Joey]]
+
+Getting back to commenting on your proposal:
+
+Just talking about the definition of a "presence dependency" for the moment, and ignoring implementation. Is a
+"presence dependency" supposed to cause an update when a page disappears? I assume so. Is a presence dependency
+supposed to cause an update when a page's existence hasn't changed, but it no longer matches the pagespec?
+(e.g. you use `created_before(test_page)` in a pagespec, and there was a page, `new_page`, that was created
+after `test_page`. `new_page` will not match the spec. Now we'll delete and then re-create `test_page`. Now
+`new_page` will match the spec, and yet `new_page` itself hasn't changed. Nor has its 'presence' - it was present
+before and it is present now. Should this cause a re-build of any page that has a 'presence' dependency on the spec?)
+
+> Yes, a presence dep will trigger when a page is added, or removed.
+
+> Your example is valid.. but it's also not handled right by normal,
+> (content) dependencies, for the same reasons. Still, I think I've
+> addressed it with the pagespec influence stuff below. --[[Joey]]
+
+I think that is another version of the problem you encountered with meta-data.
+
+In the longer term I was thinking we'd have to introduce a concept of 'internal pagespec dependencies'. Note that I'm
+defining 'internal' pagespec dependencies differently to the pagespec dependencies I defined above. Perhaps an example:
+If you had a pagespec that was `created_before(test_page)`, then you could list all pages created before `test_page`
+with a `map` directive. The map directive would add a pagespec dependency on `created_before(test_page)`.
+Internally, there would be a second page-spec parsing function that discovers which pages a given pagespec
+depends on. As well as the function `match_created_before()`, we'd have to add a new function `depend_created_before()`.
+This new function would return a list of pages, which when any of them change, the output of `match_created_before()`
+would change. In this example, it would just return `test_page`.
+
+These lists of dependent pages could just be concatenated for every `match_...()` function in a pagespec - you can ignore
+the boolean formula aspects of the pagespec for this. If a content dependency were added on these pages, then I think
+the correct rebuilds would occur.
+
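+A sketch of what such a parallel function might look like (illustrative
+names; not implemented anywhere):
+
+<pre>
+# companion to match_created_before(): returns the pages whose
+# change could alter match_created_before()'s result -- here,
+# only the reference page itself.
+sub depend_created_before ($$;@) {
+	my $page=shift;
+	my $testpage=shift;
+	return ($testpage);
+}
+</pre>
+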
+In all, this is a surprisingly difficult problem to solve perfectly. Consider the following case:
+
+PageA.mdwn:
+
+> [ShavesSelf]
+
+PageB.mdwn:
+
+> Doesn't shave self.
+
+ShavedByBob.mdwn:
+
+> [!include pages="!link(ShavesSelf)"]
+
+Does ShavedByBob.mdwn include itself?
+
+(Yeah - in IkiWiki currently links are *not* included by include, but the idea holds. I had a good example a while back, but I can't think of it right now.)
+
+sigh.
+
+-- [[Will]]
+
+> I have also been thinking about some sort of analysis pass over pagespecs
+> to determine what metadata, pages, etc they depend on. It is indeed
+> tricky to do. More thoughts on influence lists a bit below. --[[Joey]]
+
+>> The big part of what makes this tricky is that there may be cycles in the
+>> dependency graph. This can lead to situations where the result is just not
+>> well defined. This is what I was trying to get at above. -- [[Will]]
+
+>>> Hmm, I'm not seeing cycles be a problem, at least with the current
+>>> pagespec terms. --[[Joey]]
+
+>>>> Oh, they're not with current pagespec terms. But this is really close to extending to handle
+>>>> functional pagespecs, etc. And I think I'd like to think about that now.
+>>>>
+>>>> Having said that, I don't want to hold you up - you seem to be making progress. The best is
+>>>> the enemy of the good, etc. etc.
+>>>>
+>>>> For my part, I'm imagining we have two more constructs in IkiWiki:
+>>>>
+>>>> * A map directive that actually wikilinks to the pages it links to, and
+>>>> * A `match_sharedLink(pageX)` matching function that matches pageY if pageX and pageY both link to some common third page, pageZ.
+>>>>
+>>>> With those two constructs, one page changing might change the set of pages included in a map somewhere, which might then change the set of pages matched by some other pagespec, which might then...
+>>>>
+>>>> --[[Will]]
+
+>>>>> I think that should be supported by [[bugs/transitive_dependencies]].
+>>>>> At least in the current implementation, which considers each page
+>>>>> that is rendered to be changed, and rebuilds pages that are dependent
+>>>>> on it, in a loop. An alternate implementation, which could be faster,
+>>>>> is to construct a directed graph and traverse it just once. Sounds
+>>>>> like that would probably not support what you want to do.
+>>>>> --[[Joey]]
+
+>>>>>> Yes - that's what I'm talking about - I'll add some comments there. -- [[Will]]
+
+----
+
+### Link dependencies
+
+* `add_depends($page, $spec, links => 1, presence => 1)`
+ adds a links + presence dependency.
+* Use backlinks change code to detect changes to link dependencies too.
+* So, brokenlinks can fire whenever any links in any of the
+ pages it's tracking change, or when pages are added or
+ removed.
+* To determine if a pagespec is valid to be used with a links dependency,
+ use the same set that are valid for presence dependencies. But also
+ allow `backlinks()` to be used in it, since that matches pages
+ that the page links to, which is just what link dependencies are
+ triggered on.
+
+[[done]]
+----
+
+### the removal problem
+
+So far I have not addressed fixing the removal problem (which Will
+discusses above).
+
+Summary of problem: A has a dependency on a pagespec such as
+"bugs/* and !link(done)". B currently matches. Then B is updated,
+in a way that makes A's dependency not match it (ie, it links to done).
+Now A is not updated, because ikiwiki does not realize that it
+depended on B before.
+
+This was worked around to fix [[bugs/inline_page_not_updated_on_removal]]
+by inline and map adding explicit dependencies on each page that appears
+on them. Then a change to B triggers the explicit dep. While this works,
+it's 1) ugly 2) probably not implemented by all plugins that could
+be affected by this problem (ie, linkmap) and 3) is most of the reason why
+we grew the complication of `depends_simple`.
+
+One way to fix this is to include with each dependency, a list of pages
+that currently match it. If the list changes, the dependency is triggered.
+
+Should be doable, but may involve more work than
+currently. Consider that a dependency on `bugs/*` currently
+is triggered by just checking until *one* page is found to match it.
+But to store the list, *every* page would have to be tried against it.
+Unless the list can somehow be intelligently updated, looking at only the
+changed pages.
+
+----
+
+Found a further complication in presence dependencies. Map now uses
+presence dependencies when adding its explicit dependencies on pages. But
+this defeats the purpose of the explicit dependencies! Because, now,
+when B is changed to not match a pagespec, A's presence dep does
+not fire.
+
+I didn't think things through when switching it to use presence
+dependencies there. But, if I change it to use full dependencies, then all
+the work that was done to allow map to use presence dependencies for its
+main pagespec is for naught. The map will once again have to update
+whenever *any* content of the page changes.
+
+This points toward the conclusion that explicit dependencies, however they
+are added, are not the right solution at all. Some other approach, such as
+maintaining the list of pages that match a dependency, and noticing when it
+changes, is needed.
+
+----
+
+### pagespec influence lists
+
+I'm using this term for the concept of a list of pages whose modification
+can indirectly influence what pages a pagespec matches.
+
+> Trying to make a formal definition of this: (Note, I'm using the term sets rather than lists, but they're roughly equivalent)
+>
+> * Let the *matching set* for a pagespec be the set of existing pages that the pagespec matches.
+> * Let the *missing document matching set* be the set of pages that would match the spec if they didn't exist. These pages may or may not currently exist. Note that membership of this set depends upon how the `match_()` functions react to non-existent pages.
+> * Let the *indirect influence set* for a pagespec be the set of all pages, *p*, whose alteration might:
+> * cause the pagespec to include or exclude a page other than *p*, or
+> * cause the pagespec to exclude *p*, unless the alteration is the removal of *p* and *p* is in the missing document matching set.
+>
+> Justification: The 'base dependency mechanism' is to compare changed pages against each pagespec. If the page matches, then rebuild the spec. For this comparison, creation and removal
+> of pages are both considered changes. This base mechanism will catch:
+>
+> * The addition of any page to the matching set through its own modification/creation
+> * The removal of any page *that would still match while non-existent* from the matching set through its own removal. (Note: The base mechanism cannot remove a page from the matching set because of that page's own modification (not deletion). If the page should be removed from the matching set, then it obviously cannot match the spec after the change.)
+> * The modification (not deletion) of any page that still matches after the modification.
+>
+> The base mechanism may therefore not catch:
+>
+> * The addition or removal of any page from the matching set through the modification/addition/removal of any other page.
+> * The removal of any page from the matching set through its own modification/removal if it does not still match after the change.
+>
+> The indirect influence set then should handle anything that the base mechanism will not catch.
+>
+> --[[Will]]
+
+>> I appreciate the formalism!
+>>
+>> Only existing pages need to be in these sets, because if a page is added
+>> in the future, the existing dependency code will always test to see
+>> if it matches. So it will be in the matching set (or not) at that point.
+>>
+>>> Hrm, I agree with you in general, but I think I can come up with nasty counter-examples. What about a pagespec
+>>> of "!backlink(bogus)" where the page bogus doesn't exist? In this case, the page 'bogus' needs to be in the influence
+>>> set even though it doesn't exist.
+>>>
+>>>> I think you're right, this is a case that the current code is not
+>>>> handling. Actually, I made all the pagespecs return influences
+>>>> even if the influence was not present or did not match. But, it
+>>>> currently only records influences as dependencies when a pagespec
+>>>> successfully matches. Now I'm sure that is wrong, and I've removed
+>>>> that false optimisation. I've updated some of the below. --[[Joey]]
+>>>
+>>> Also, I would really like the formalism to include the whole dependency system, not just any additions to it. That will make
+>>> the whole thing much easier to reason about.
+>>
+>> The problem with your definition of direct influence set seems to be
+>> that it doesn't allow `link()` and `title()` to have as an indirect
+>> influence, the page that matches. But I'm quite sure we need those.
+>> --[[Joey]]
+
+>>> I see what you mean. Does the revised definition capture this effectively?
+>>> The problem with this revised definition is that it still doesn't match your examples below.
+>>> My revised definition will include pretty much all currently matching pages in the influence list
+>>> because deletion of any of them would cause a change in which pages are matched - the removal problem.
+>>> -- [[Will]]
+
+#### Examples
+
+* The pagespec "created_before(foo)" has an indirect influence list that contains foo.
+ The removal or (re)creation of foo changes what pages match it. Note that
+ this is true even if the pagespec currently fails to match.
+
+>>> This is an annoying example (hence well worth having :) ). I think the
+>>> indirect influence list must contain 'foo' and all currently matching
+>>> pages. `created_before(foo)` will not match
+>>> a deleted page, and so the base mechanism would not cause a rebuild. The
+>>> removal problem strikes. -- [[Will]]
+
+>>>> But `created_before` can in fact match a deleted page. Because the mtime
+>>>> of a deleted page is temporarily set to 0 while the base mechanism runs to
+>>>> find changes in deleted pages. (I verified this works by experiment,
+>>>> also that `created_after` is triggered by a deleted page.) --[[Joey]]
+
+>>>>> Oh, okie. I looked at the source, saw the `if (exists $IkiWiki::pagectime{$testpage})` and assumed it would fail.
+>>>>> Of course, having it succeed doesn't cure all the issues -- just moves them. With `created_before()` succeeding
+>>>>> for deleted files, this pagespec will match any removal in the entire wiki with the base mechanism. Whether this is
+>>>>> better or worse than the longer indirect influence list is an empirical question. -- [[Will]]
+
+* The pagespec "foo" has an empty influence list. This is because a
+ modification/creation/removal of foo directly changes what the pagespec
+ matches.
+
+* The pagespec "*" has an empty influence list, for the same reason.
+ Avoiding including every page in the wiki into its influence list is
+ very important!
+
+>>> So, why don't the above influence lists contain the currently matched pages?
+>>> Don't you need this to handle the removal problem? -- [[Will]]
+
+>>>> The removal problem is slightly confusingly named, since it does not
+>>>> affect pages that were matched by a glob and have been removed. Such
+>>>> pages can be handled without being influences, because ikiwiki knows
+>>>> they have been removed, and so can still match them against the
+>>>> pagespec, and see they used to match; and thus knows that the
+>>>> dependency has triggered.
+>>>>
+>>>>> IkiWiki can only see that they used to match if they're in the glob matching set. -- [[Will]]
+>>>>
+>>>> Maybe the thing to do is consider this an optimisation, where such
+>>>> pages are influences, but ikiwiki is able to implicitly find them,
+>>>> so they do not need to be explicitly stored. --[[Joey]]
+
+* The pagespec "title(foo)" has an influence list that contains every page
+ that currently matches it. A change to any matching page can change its
+ title, making it not match any more, and so the list is needed due to the
+ removal problem. A page that does not have a matching title is not an
+ influence, because modifying the page to change its title directly
+ changes what the pagespec matches.
+
+* The pagespec "backlink(index)" has an influence list
+ that contains index (because a change to index changes the backlinks).
+ Note that this is true even if the backlink currently fails.
+
+>>> This is another interesting example. The missing document matching set contains all links on the page index, and so
+>>> the influence list only needs to contain 'index' itself. -- [[Will]]
+
+* The pagespec "link(done)" has an influence list that
+ contains every page that it matches. A change to any matching page can
+ remove a link and make it not match any more, and so the list is needed
+ due to the removal problem.
+
+>> Why doesn't this include every page? If I change a page that doesn't have a link to
+>> 'done' to include a link to 'done', then it will now match... or is that considered a
+>> 'direct match'? -- [[Will]]
+
+>>> The regular dependency calculation code will check if every changed
+>>> page matches every dependency. So it will notice the link was added.
+>>> --[[Joey]]
+
+#### Low-level Calculation
+
+One way to calculate a pagespec's influence would be to
+expand the SuccessReason and FailReason objects used and returned
+by `pagespec_match`. Make the objects be created with an
+influence list included, and when the objects are ANDed or ORed
+together, combine the influence lists.
+
+That would have the benefit of allowing just using the existing `match_*`
+functions, with minor changes to a few of them to gather influence info.
+
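+A rough, untested sketch of such expanded reason objects (ikiwiki's
+SuccessReason and FailReason are separate classes; this collapses them into
+one, with illustrative names):
+
+<pre>
+package Reason;
+
+use overload
+	'&'  => sub { combine($_[0], $_[1], $_[0]->{match} && $_[1]->{match}) },
+	'|'  => sub { combine($_[0], $_[1], $_[0]->{match} || $_[1]->{match}) },
+	bool => sub { $_[0]->{match} },
+	fallback => 1;
+
+sub new {
+	my ($class, $match, @influences)=@_;
+	return bless {
+		match => $match,
+		influences => { map { $_ => 1 } @influences },
+	}, $class;
+}
+
+# boolean combination merges the influence sets of both operands
+sub combine {
+	my ($x, $y, $match)=@_;
+	my $ret=Reason->new($match);
+	$ret->{influences}={ %{$x->{influences}}, %{$y->{influences}} };
+	return $ret;
+}
+</pre>
+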
+But does it work? Let's try some examples:
+
+Consider "bugs/* and link(done) and backlink(index)".
+
+Its influence list contains index, and it contains all pages that the whole
+pagespec matches. It should, ideally, not contain all pages that link
+to done. There are a lot of such pages, and only a subset influence this
+pagespec.
+
+When matching this pagespec against a page, the `link` will put the page
+on the list. The `backlink` will put index on the list, and they will be
+anded together and combined. If we combine the influences from each
+successful match, we get the right result.
+
+Now consider "bugs/* and link(done) and !backlink(index)".
+
+Its influence list is the same as the previous one, even though a term has
+been negated. Because a change to index still influences it, though in a
+different way.
+
+If negation of a SuccessReason preserves the influence list, the right
+influence list will be calculated.
+
+Consider "bugs/* and (link(done) or backlink(index))"
+and "bugs/* and (backlink(index) or link(done))'
+
+It's clear that the influence lists for these are identical. And they
+contain index, plus all matching pages.
+
+When matching the first against page P, the `link` will put P on the list.
+The OR needs to be a non-short-circuiting type. (In perl, `or`, not `||` --
+so, `pagespec_translate` will need to be changed to not use `||`.)
+Given that, the `backlink` will always be evaluated, and will put index
+onto the influence list. If we combine the influences from each
+successful match, we get the right result.
+
+> This is implemented, seems to work ok. --[[Joey]]
+
+> `or` short-circuits too, but the implementation correctly uses `|`,
+> which I assume is what you meant. --[[smcv]]
+
+>> Er, yeah. --[[Joey]]
+
+----
+
+What about: "!link(done)"
+
+Specifically, I want to make sure it works now that I've changed
+`match_link` to only return a page as an influence if it *does*
+link to done.
+
+So, when matching against page P, that does not link to done,
+there are no influences, and the pagespec matches. If P is later
+changed to add a link to done, then the dependency resolver will directly
+notice that.
+
+When matching against page P, that does link to done, P
+is an influence, and the pagespec does not match. If P is later changed
+to not link to done, the influence will do its job.
+
+Looks good!
+
+----
+
+Here is a case where this approach has some false positives.
+
+"bugs/* and link(patch)"
+
+This finds as influences all pages that link to patch, even
+if they are not under bugs/, and so can never match.
+
+To fix this, the influence calculation would need to consider boolean
+operators. Currently, this turns into roughly:
+
+`FailReason() & SuccessReason(patch)`
+
+Let's say that the glob instead returns a HardFailReason, which when
+ANDed with another object, blocks their influences. (But when ORed,
+combines them.)
+
+Question: Are all pagespec terms that return reason objects without any
+influence info suitable to block influence in this way?
+
+To be suitable to block, a term should never change from failing to match a
+page to successfully matching it, unless that page is directly changed in a
+way that influences are not needed for ikiwiki to notice. But, if a term
+did not meet these criteria, it would have an influence. QED.
+
+#### Influence types
+
+Note that influences can also have types, same as dependency types.
+For example, "backlink(foo)" has an influence of foo, of type links.
+"created_before(foo)" also is influenced by foo, but it's a presence
+type. Etc.
+
+> This is an interesting concept that I hadn't considered. It might
+> allow significant computational savings, but I suspect will be tricky
+> to implement. -- [[Will]]
+
+>> It was actually really easy to implement it, assuming I picked the right
+>> dependency types of course. --[[Joey]]
diff --git a/doc/todo/description_meta_param_passed_to_templates.mdwn b/doc/todo/description_meta_param_passed_to_templates.mdwn
new file mode 100644
index 000000000..712471258
--- /dev/null
+++ b/doc/todo/description_meta_param_passed_to_templates.mdwn
@@ -0,0 +1,10 @@
+[[!tag wishlist patch]]
+
+I'd like to use the description parameter from [[meta|/ikiwiki/directive/meta]] directives in custom [[inline|/ikiwiki/directive/inline]] templates. I guess this could be useful to others too.
+
+The only change required is on [line 266](http://github.com/joeyh/ikiwiki/blob/master/IkiWiki/Plugin/meta.pm#L266) of `meta.pm`:
+
+ - foreach my $field (qw{author authorurl permalink}) {
+ + foreach my $field (qw{author authorurl description permalink}) {
+
+> Good idea, [[done]]. --[[Joey]]
diff --git a/doc/todo/double-click_protection_for_form_buttons.mdwn b/doc/todo/double-click_protection_for_form_buttons.mdwn
new file mode 100644
index 000000000..501be4498
--- /dev/null
+++ b/doc/todo/double-click_protection_for_form_buttons.mdwn
@@ -0,0 +1,5 @@
+A small piece of JS to prevent double-submitting forms would be quite nice. I seem to have developed a habit of doing this and having to resolve a merge conflict for two initial commits. -- [[Jon]]
+
+> By the time you see that merge conflict, the first commit has
+> already successfully happened, so you can just hit cancel
+> and throw away the second submit. --[[Joey]]
diff --git a/doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn b/doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn
index 4c9c2352a..77e46049f 100644
--- a/doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn
+++ b/doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn
@@ -10,7 +10,7 @@ On longer pages its not very comfortable to edit pages with such a small box. Th
> }
>
> Perhaps you have replaced it with a modified style sheet that does not
-> include that? --[[Joey]] [[!tag done]]
+> include that? --[[Joey]]
>> The screen shot was made with http://ikiwiki.info/ where i didn't change anything. The width is optimally used. The problem is the height.
@@ -32,3 +32,21 @@ On longer pages its not very comfortable to edit pages with such a small box. Th
>>> --[[Joey]]
>>>>>> the javascript approach would need to work something like this: you need to know about the "bottom-most" item on the edit page, and get a handle for that object in the DOM. You can then obtain the absolute position height-wise of this element and the absolute position of the bottom of the window to determine the pixel-difference. Then, you set the height of the textarea to (current height in px) + determined-value. This needs to be re-triggered on various resize events, at least for the window and probably for other elements too. I may have a stab at this at some point. -- [[Jon]]
+
+Google Chrome has a completely elegant fix for this problem: All textareas
+have a small resize handle in a corner, that can be dragged around. No
+nasty javascript needed. IMHO, this is the right solution, and I hope other
+browsers emulate it. [[done]]
+--[[Joey]]
+
+Wouldn't it be possible to just implement an integer-valued setting for this, accessible via the "Setup" wiki page? This would require a wiki regen, but such a setting would not be changed frequently, I suppose. Also, Mediawiki has this implemented as a per-user setting (two settings, actually -- the number of rows and columns of the edit area); such a per-user setting would be the best possible implementation, but I'm not sure if ikiwiki already supports per-user settings. Please consider implementing this, as the current 20 rows is a great PITA for any non-trivial page.
+
+> I don't think it would need a wiki rebuild, as the textarea is generated dynamically by the CGI when you perform a CGI action, and (as far as I know) is not cooked into any static content. -- [[Jon]]
+
+>> There is no need for a configuration setting for this -- to change
+>> the default height from 20 rows to something else, you can just put
+>> something like this in your `local.css`: --[[Joey]]
+
+ #editcontent {
+ height: 50em;
+ }
diff --git a/doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn b/doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn
new file mode 100644
index 000000000..4bc10e432
--- /dev/null
+++ b/doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn
@@ -0,0 +1,8 @@
+[[plugins/edittemplate]] looks for the specified template relative to the
+page the directive appears on. That can be handy: eg, make a
+blog/mytemplate and put the directive on blog, and it will find
+"mytemplate". However, it can also be confusing, since other templates
+always are looked for in `templates/`.
+
+I think it should probably fall back to looking for `templates/$foo`.
+--[[Joey]]
diff --git a/doc/todo/enable-htaccess-files.mdwn b/doc/todo/enable-htaccess-files.mdwn
index e302a49ed..3b9721d50 100644
--- a/doc/todo/enable-htaccess-files.mdwn
+++ b/doc/todo/enable-htaccess-files.mdwn
@@ -12,6 +12,13 @@
qr/(^|\/).svn\//, qr/.arch-ids\//, qr/{arch}\//],
wiki_link_regexp => qr/\[\[(?:([^\]\|]+)\|)?([^\s\]#]+)(?:#([^\s\]]+))?\]\]/,
+> Note that the above patch is **completely broken**.
+> It removes the crucial excludes of all files starting with a dot.
+> The negative regexps for htaccess have no effect, so the whole
+> thing only "works" because it allows *any* file starting with a dot.
+> If you applied this patch to your ikiwiki, you opened a huge security
+> hole. --[[Joey]]
+
[[!tag patch patch/core]]
This lets the site administrator have a `.htaccess` file in their underlay
@@ -57,5 +64,17 @@ It should be off by default of course. --Max
---
+1 I want `.htaccess` so I can rewrite some old Wordpress URLs to make feeds work again. --[[hendry]]
+> Unless you cannot modify apache's configuration, you do not need htaccess
+> to do that. Apache's documentation recommends against using htaccess
+> unless you're a user who cannot modify the main server configuration.
+> --[[Joey]]
+
---
+1 for various purposes (but sometimes the filename isn't `.htaccess`, so please make it configurable) --[[schmonz]]
+
+> I've described a workaround for one use case at the [[plugins/rsync]] [[plugins/rsync/discussion]] page. --[[schmonz]]
+
+---
+
+[[done]], you can use the `include` setting to override the default
+excludes now. Please use extreme caution when doing so. --[[Joey]]
diff --git a/doc/todo/enable_arbitrary_markup_for_directives.mdwn b/doc/todo/enable_arbitrary_markup_for_directives.mdwn
new file mode 100644
index 000000000..c1f0f86ed
--- /dev/null
+++ b/doc/todo/enable_arbitrary_markup_for_directives.mdwn
@@ -0,0 +1,47 @@
+One of the good things about [PmWiki](http://www.pmwiki.org) is the ability to treat arbitrary markup as directives.
+In ikiwiki, all directives have the same format:
+
+\[[!name arguments]]
+
+But with PmWiki, directives can be added to the engine (with the "Markup" hook) with the usual name and function passing, but also with a regexp containing capturing parentheses; the results of the match are passed to the given function.
+Would it be possible to alter the "preprocess" hook to have an optional regex argument which acted in a similar fashion?
+
+For example, one could then write a plugin which would treat
+
+Category: Foo, Bar
+
+as a tag, by using a regex such as /^Category:\s*([\w\s,]+)$/; the result "Foo, Bar" could then be further processed by the hook function.
+
+This could also make it easier to support more styles of markup, rather than having to do all the processing in "htmlize" and/or "filter".
+
+-- [[KathrynAndersen]]
+
+[[!taglink wishlist]]
+
+> Arbitrary text transformations can already be done via the filter and
+> sanitize hooks. That's how the smiley and typography plugins do their
+> thing.
+>
+> AFAICS, the only benefit to having a regexp-based-hook interface is less
+> overhead in passing page content into the hooks. But that overhead is a
+> small amount of the total render time.
+>
+> Also, I notice that smiley does such complicated things in its sanitize
+> hook (ie, it looks at html context around the smilies) that a simple
+> matching regexp would not be sufficient. Furthermore, typography needs to
+> pass the page content into the library it uses, which does not expose
+> regexps to match on. So ikiwiki's more general filtering interface seems
+> to allow both of these to do things that could not be done with the
+> PmWiki interface. --[[Joey]]
+
+>>You have some good points. I was aware of using filter, but it didn't occur to me that one could use sanitize to do processing also, probably because "sanitize" brought to mind removing harmful content rather than doing other alterations.
+>>It has also occurred to me, on further thought, that if one wants one's chosen markup to actually be processed during the "preprocess" stage, one could do so by converting the chosen markup to directive-style markup during the "filter" stage and then processing the directive during the "preprocess" stage as per usual. Is there a tag for "no longer on the wishlist"? --[[KathrynAndersen]]
+
+>>> Yeah, sanitize is a misleading name for the relatively few things that
+>>> use it this way.
+>>>
+>>> While you could do a filter to preprocess step, it is a bit
+>>> of a long way round, since filter always runs just before
+>>> preprocess.
+>>>
+>>> Anyway, guess this is [[done]] --[[Joey]]
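+
+A rough, untested sketch of that filter-to-directive approach (the plugin
+name is made up):
+
+<pre>
+package IkiWiki::Plugin::categorymarkup;
+
+use warnings;
+use strict;
+use IkiWiki 3.00;
+
+sub import {
+	hook(type => "filter", id => "categorymarkup", call => \&filter);
+}
+
+# rewrite "Category: Foo, Bar" lines into a tag directive before
+# the preprocess stage runs
+sub filter (@) {
+	my %params=@_;
+	$params{content} =~ s{^Category:\s*([\w ,]+)$}{
+		my @tags=split(/\s*,\s*/, $1);
+		"\[[!tag @tags]]"
+	}emg;
+	return $params{content};
+}
+
+1
+</pre>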
diff --git a/doc/todo/finer_control_over___60__object___47____62__s.mdwn b/doc/todo/finer_control_over___60__object___47____62__s.mdwn
new file mode 100644
index 000000000..50c4d43bf
--- /dev/null
+++ b/doc/todo/finer_control_over___60__object___47____62__s.mdwn
@@ -0,0 +1,98 @@
+IIUC, the current version of [HTML::Scrubber][] allows `object` tags to be either enabled or disabled entirely. However, while `object` can be used to add *code* (which is indeed a potential security hole) to a document, reading [Objects, Images, and Applets in HTML documents][objects-html] reveals that the &ldquo;dangerous&rdquo; ones are not all `object`s, but rather those having the following attributes:
+
+ classid %URI; #IMPLIED -- identifies an implementation --
+ codebase %URI; #IMPLIED -- base URI for classid, data, archive--
+ codetype %ContentType; #IMPLIED -- content type for code --
+ archive CDATA #IMPLIED -- space-separated list of URIs --
+
+It seems that the following attributes are, OTOH, safe:
+
+ declare (declare) #IMPLIED -- declare but don't instantiate flag --
+ data %URI; #IMPLIED -- reference to object's data --
+ type %ContentType; #IMPLIED -- content type for data --
+ standby %Text; #IMPLIED -- message to show while loading --
+ height %Length; #IMPLIED -- override height --
+ width %Length; #IMPLIED -- override width --
+ usemap %URI; #IMPLIED -- use client-side image map --
+ name CDATA #IMPLIED -- submit as part of form --
+ tabindex NUMBER #IMPLIED -- position in tabbing order --
+
+Should the former attributes be *scrubbed* while the latter are left intact, the use of the `object` tag would seemingly become safe.
+
+Note also that allowing `object` (either restricted in such a way or not) automatically solves the [[/todo/svg]] issue.
+
+For Ikiwiki, it may be nice to be able to restrict [URI's][URI] (as required by the `data` and `usemap` attributes) to, say, relative and `data:` (as per [RFC 2397][]) ones as well, though it requires some more consideration.
+
+&mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+[[wishlist]]
+
+> SVG can contain embedded javascript.
+
+>> Indeed.
+
+>> So, a more general tool (`XML::Scrubber`?) will be necessary to
+>> refine both [XHTML][] and SVG.
+
+>> &hellip; And to leave [MathML][] as is (?.)
+
+>> &mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+> The spec that you link to contains
+> examples of objects that contain python scripts, Microsoft OLE
+> objects, and Java. And then there's flash. I don't think ikiwiki can
+> assume all the possibilities are handled securely, particularly WRT XSS
+> attacks.
+> --[[Joey]]
+
+>> I've scanned over all the `object` examples in the specification and
+>> all of those that hold references to code (as opposed to data) have a
+>> distinguishing `classid` attribute.
+
+>> While I won't assert that it's impossible to reference code with
+>> `data` (and, thanks to `text/xhtml+xml` and `image/svg+xml`, it is
+>> *not* impossible), throwing away any of the &ldquo;insecure&rdquo;
+>> attributes listed above together with limiting the possible URI's
+>> (i.&nbsp;e., only *local* and certain `data:` ones for `data` and
+>> `usemap`) should make `object` almost as harmless as, say, `img`.
+
+>>> But with local data, one could not embed youtube videos, which surely
+>>> is the most obvious use case?
+
+>>>> Allowing a &ldquo;remote&rdquo; object to render on one's page is a
+>>>> security issue by itself.
+>>>> Though, of course, having an explicit whitelist of URI's may make
+>>>> this issue more tolerable.
+>>>> &mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+>>> Note that youtube embedding uses an
+>>> object element with no classid. The swf file is provided via an
+>>> enclosed param element. --[[Joey]]
+
+>>>> I've just checked a random video on YouTube and I see that the
+>>>> `.swf` file is provided via an enclosed `embed` element. Whether
+>>>> to allow those or not is a different issue.
+>>>> &mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+>> (Though it certainly won't solve the [[SVG_problem|/todo/SVG]] being
+>> restricted in such a way.)
+
+>> Of the remaining issues I could only think of recursive
+>> `object` &mdash; the one that references its container document.
+
+>> &mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
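+A rough, untested sketch of what the corresponding [HTML::Scrubber][] rules
+might look like (URI restriction for `data` and `usemap` would still need
+extra code, as discussed above):
+
+<pre>
+use HTML::Scrubber;
+
+my $scrubber = HTML::Scrubber->new(
+	rules => [
+		object => {
+			# the "safe" attributes listed above
+			declare => 1, data => 1, type => 1, standby => 1,
+			height => 1, width => 1, usemap => 1, name => 1,
+			tabindex => 1,
+			# classid, codebase, codetype and archive are
+			# omitted, so they get stripped
+		},
+	],
+);
+</pre>
+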
+## See also
+
+* [Objects, Images, and Applets in HTML documents][objects-html]
+* [[plugins/htmlscrubber|/plugins/htmlscrubber]]
+* [[todo/svg|/todo/svg]]
+* [RFC 2397: The &ldquo;data&rdquo; URL scheme. L.&nbsp;Masinter. August 1998.][RFC 2397]
+* [Uniform Resource Identifier &mdash; the free encyclopedia][URI]
+
+[HTML::Scrubber]: http://search.cpan.org/~podmaster/HTML-Scrubber-0.08/Scrubber.pm
+[MathML]: http://en.wikipedia.org/wiki/MathML
+[objects-html]: http://www.w3.org/TR/1999/REC-html401-19991224/struct/objects.html
+[RFC 2397]: http://tools.ietf.org/html/rfc2397
+[URI]: http://en.wikipedia.org/wiki/Uniform_Resource_Identifier
+[XHTML]: http://en.wikipedia.org/wiki/XHTML
diff --git a/doc/todo/generated_po_stuff_not_ignored_by_git.mdwn b/doc/todo/generated_po_stuff_not_ignored_by_git.mdwn
index 1d24fd385..29c017c5d 100644
--- a/doc/todo/generated_po_stuff_not_ignored_by_git.mdwn
+++ b/doc/todo/generated_po_stuff_not_ignored_by_git.mdwn
@@ -1,4 +1,3 @@
-[[!template id=gitbranch branch=smcv/gitignore author="[[smcv]]"]]
[[!tag patch]]
The recent merge of the po branch didn't come with a .gitignore.
diff --git a/doc/todo/generic_insert_links b/doc/todo/generic_insert_links
new file mode 100644
index 000000000..050f32ee7
--- /dev/null
+++ b/doc/todo/generic_insert_links
@@ -0,0 +1,24 @@
+The attachment plugin's Insert Links button currently only knows
+how to insert plain wikilinks and img directives (for images).
+
+[[wishlist]]: Generalize this, so a plugin can cause arbitrary text
+to be inserted for a particular file. --[[Joey]]
+
+Design:
+
+Add an insertlinks hook. Each plugin using the hook would be called,
+and passed the filename of the attachment. If it knows how to handle
+the file type, it returns the text that should be inserted on the page.
+If not, it returns undef, and the next plugin is tried.
+
+This would mean writing plugins in order to handle links for
+special kinds of attachments. To avoid that for simple stuff,
+a fallback plugin could run last and look for a template
+named like `templates/embed_$extension`, and insert a directive like:
+
+ \[[!template id=embed_vp8 file=my_movie.vp8]]
+
+Then to handle a new file type, a user could just make a template
+that expands to some relevant html. In the example above,
+`templates/embed_vp8` could make an html5 video tag, possibly even with
+some flash fallback code.
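+
+A rough, untested sketch of that fallback plugin under the proposed (not
+yet existing) insertlinks hook (`IkiWiki::template_file`, used here for the
+existence check, is an internal function):
+
+<pre>
+sub import {
+	hook(type => "insertlinks", id => "embedfallback",
+		call => \&insertlinks, last => 1);
+}
+
+sub insertlinks ($) {
+	my $filename=shift;
+	my ($ext)=$filename=~/\.([^.]+)$/;
+	return undef unless defined $ext;
+	# only offer the directive if an embed template exists
+	return undef unless defined IkiWiki::template_file("embed_$ext.tmpl");
+	return "\[[!template id=embed_$ext file=$filename]]";
+}
+</pre>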
diff --git a/doc/todo/git_attribution/discussion.mdwn b/doc/todo/git_attribution/discussion.mdwn
index dfb490bc2..6905d9b4b 100644
--- a/doc/todo/git_attribution/discussion.mdwn
+++ b/doc/todo/git_attribution/discussion.mdwn
@@ -72,7 +72,7 @@ no determination of uniqueness)
> GIT_AUTHOR_EMAIL can also be set.
>
> There is one thing yet to be solved, and that is how to tell the
-> difference between a web commit by 'Joey Hess <joey@kitenet.net>',
+> difference between a web commit by 'Joey Hess <joey\@kitenet.net>',
> and a git commit by the same. I think we do want to differentiate these,
> and the best way to do it seems to be to add a line to the end of the
> commit message. Something like: "\n\nWeb-commit: true"
@@ -94,5 +94,5 @@ no determination of uniqueness)
> * github pushes to twitter ;-)
>
> So while I tried that way at first, I'm now leaning toward encoding the
-> username in the email address. Like "user <user@web>", or
-> "joey <http://joey.kitenet.net/@web>".
+> username in the email address. Like "user <user\@web>", or
+> "joey <http://joey.kitenet.net/\@web>".
diff --git a/doc/todo/html.mdwn b/doc/todo/html.mdwn
index 44f20c876..4f4542be2 100644
--- a/doc/todo/html.mdwn
+++ b/doc/todo/html.mdwn
@@ -1,6 +1,6 @@
Create some nice(r) stylesheets.
Should be doable w/o touching a single line of code, just
-editing the [[wikitemplates]] and/or editing [[style.css]].
+editing the [[templates]] and/or editing [[style.css]].
[[done]] ([[css_market]] ..)
diff --git a/doc/todo/htpasswd_mirror_of_the_userdb.mdwn b/doc/todo/htpasswd_mirror_of_the_userdb.mdwn
new file mode 100644
index 000000000..e4a411780
--- /dev/null
+++ b/doc/todo/htpasswd_mirror_of_the_userdb.mdwn
@@ -0,0 +1,29 @@
+[[!tag wishlist]]
+
+Ikiwiki is static, so access control for viewing the wiki must be
+implemented on the web server side. To manage wiki users and access
+together, we can currently
+
+* use [[httpauth|plugins/httpauth/]], but some [[passwordauth|plugins/passwordauth]] functionality [[is missing|todo/httpauth_feature_parity_with_passwordauth/]];
+* use [[passwordauth|plugins/passwordauth]] plus [[an Apache `mod_perl` authentication mechanism|plugins/passwordauth/discussion/]], but this is Apache-centric and enabling `mod_perl` just for auth seems overkill.
+
+Moreover, when ikiwiki is just a part of a wider web project, we may want
+to use the same userdb for the other parts of this project.
+
+I think an ikiwiki plugin which would (re)generate an htpasswd version of
+the user/passwd base (better, two htpasswd files, one with only the wiki
+admins and one with everyone) each time a user is added or modified would
+solve this problem:
+
+* access control can be managed from the web server
+* user management is handled by the passwordauth plugin
+* htpasswd format is understood by various servers (Apache, lighttpd, nginx, ...) and languages commonly used for web development (perl, python, ruby)
+* htpasswd files can be mirrored on other machines when the web site is distributed
+
+-- [[nil]]
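+
+A rough sketch of the regeneration step (the `htpasswd` field and the
+output location are assumptions, not existing ikiwiki API):
+
+	# in a plugin: use IkiWiki 3.00;
+	sub regen_htpasswd {
+		my $userinfo = IkiWiki::userinfo_retrieve() || {};
+		open(my $fh, ">", "$config{wikistatedir}/htpasswd")
+			|| error("cannot write htpasswd mirror: $!");
+		foreach my $user (sort keys %$userinfo) {
+			# only useful if the stored hash is htpasswd-compatible
+			my $hash = $userinfo->{$user}->{htpasswd};
+			print $fh "$user:$hash\n" if defined $hash;
+		}
+		close $fh;
+	}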
+
+> I think this is a good idea. Although unless the password hashes that
+> are stored in the userdb are compatible with htpasswd hashes,
+> the htpasswd hashes will need to be stored in the userdb too. Then
+> any userdb change can just regenerate the htpasswd file, dumping out
+> the right kind of hashes. --[[Joey]]
diff --git a/doc/todo/http_bl_support.mdwn b/doc/todo/http_bl_support.mdwn
new file mode 100644
index 000000000..f7a46ee6c
--- /dev/null
+++ b/doc/todo/http_bl_support.mdwn
@@ -0,0 +1,67 @@
+[Project Honeypot](http://projecthoneypot.org/) has an HTTP:BL API available to subscribed people/orgs (subscription is free; they accept donations). Someone wrote a basic Perl package for it; I'm including a copy here.
+
+[from here](http://projecthoneypot.org/board/read.php?f=10&i=112&t=112)
+
+> The [[plugins/blogspam]] service already checks urls against
+> the surbl, and has its own IP blacklist. The best way to
+> support the HTTP:BL may be to add a plugin
+> [there](http://blogspam.repository.steve.org.uk/file/cc858e497cae/server/plugins/).
+> --[[Joey]]
+
+<pre>
+package Honeypot;
+
+use strict;
+use warnings;
+use Socket qw/inet_ntoa/;
+
+my $dns = 'dnsbl.httpbl.org';
+
+# Visitor types are encoded as a bitmask in the last octet of the
+# reply. Type 0 (search engine) is special-cased below, since a
+# bitwise test against 0 can never match.
+my %types = (
+	1 => 'Suspicious',
+	2 => 'Harvester',
+	4 => 'Comment Spammer',
+);
+
+sub query {
+	my $key = shift || die 'You need a key for this; get one at http://www.projecthoneypot.org';
+	my $ip = shift || do {
+		warn 'no IP for request in Honeypot::query().';
+		return;
+	};
+
+	# the lookup name is: key.reversed-ip.dnsbl.httpbl.org
+	my @parts = reverse split /\./, $ip;
+	my $lookup_name = join '.', $key, @parts, $dns;
+
+	my $answer = gethostbyname($lookup_name);
+	return unless $answer;	# not listed
+	$answer = inet_ntoa($answer);
+
+	# reply is 127.days-since-last-activity.threat-score.visitor-type
+	my (undef, $days, $threat, $type) = split /\./, $answer;
+	my @matched = $type == 0 ? ('Search Engine') : ();
+	while (my ($bit, $typename) = each %types) {
+		push @matched, $typename if $bit & $type;
+	}
+	return {
+		days => $days,
+		threat => $threat,
+		type => join ',', @matched,
+	};
+}
+
+1;
+</pre>
+
+From the page:
+
+> The usage is simple:
+>
+>     use Honeypot;
+>     my $key = 'XXXXXXX'; # your key
+>     my $ip = '....';     # the IP you want to check
+>     my $q = Honeypot::query($key, $ip);
+>
+>     use Data::Dumper;
+>     print Dumper $q;
+
+Any chance of having this as a plugin?
+
+I could give it a go, too. Would be fun to try my hand at Perl. --[[simonraven]]
+
+[[!tag wishlist]]
diff --git a/doc/todo/inline_plugin:_specifying_ordered_page_names.mdwn b/doc/todo/inline_plugin:_specifying_ordered_page_names.mdwn
index bbde04f83..85bb4ff5a 100644
--- a/doc/todo/inline_plugin:_specifying_ordered_page_names.mdwn
+++ b/doc/todo/inline_plugin:_specifying_ordered_page_names.mdwn
@@ -1,5 +1,3 @@
-[[!template id=gitbranch branch=smcv/ready/inline-pagenames author="[[smcv]]"]]
-
A [[!taglink patch]] in my git repository (the inline-pagenames branch) adds
the following parameter to the [[ikiwiki/directive/inline]] directive:
diff --git a/doc/todo/l10n.mdwn b/doc/todo/l10n.mdwn
index bba103494..a539f0424 100644
--- a/doc/todo/l10n.mdwn
+++ b/doc/todo/l10n.mdwn
@@ -1,3 +1,21 @@
+ikiwiki should be fully internationalized.
+
+----
+
+As to the hardcoded strings in ikiwiki, I've internationalized the program,
+and there is a po/ikiwiki.pot in the source that can be translated.
+--[[Joey]]
+
+----
+
+> The now merged po plugin handles l10n of wiki pages. The only missing
+> piece now is l10n of the templates.
+> --[[Joey]]
+
+----
+
+## template i18n
+
From [[Recai]]:
> Here is my initial work on ikiwiki l10n infrastructure (I'm sending it
> before finalizing, there may be errors).
@@ -48,54 +66,19 @@ Birisi[1], ki muhtemelen bu sizsiniz, <TMPL_VAR WIKINAME>[2] üzerindeki
bulundu. Parola: <TMPL_VAR USER_PASSWORD> -- ikiwiki [1] Parolayı isteyen
kullanıcının ait IP adresi: <TMPL_VAR REMOTE_ADDR>[2] <TMPL_VAR WIKIURL>
-> Looks like, tmpl_process3 cannot preserve line breaks in template files.
-> For example, it processed the following template:
-
This could be easily worked around in tmpl_process3, but I wouldn't like to
maintain a separate utility.
----
-As to the hardcoded strings in ikiwiki, I've internationalized the program,
-and there is a po/ikiwiki.pot in the source that can be translated.
---[[Joey]]
-
-----
-
-Danish l10n of templates and basewiki is available with the following commands:
-
- git clone http://source.jones.dk/ikiwiki.git newsite
- cd newsite
- make
-
-Updates are retrieved with this single command:
+Another way to approach this would be to write a small program that outputs
+the current set of templates. Now i18n of that program is trivial,
+and it can be run once per language to generate localized templates.
- make
+Then it's just a matter of installing the templates somewhere, and having
+them be used when a different language is enabled.
-l10n is maintained using po4a for basewiki, smiley and templates - please send me PO files if you
-translate to other languagess than the few I know about myself: <dr@jones.dk>
-
-As upstream ikiwiki is now maintained in GIT too, keeping the master mirror in sync with upstream
-could probably be automated even more - but an obstacle seems to be that content is not maintained
-separately but as an integral part of upstream source (GIT seems to not support subscribing to
-only parts of a repository).
-
-For example use, here's how to roll out a clone of the [Redpill support site](http://support.redpill.dk/):
-
- mkdir -p ~/public_cgi/support.redpill.dk
- git clone git://source.jones.dk/bin
- bin/localikiwikicreatesite -o git://source.redpill.dk/support rpdemo
-
-(Redpill support is inspired by <http://help.riseup.net> but needs to be reusable for several similarly configured networks)
-
---[[JonasSmedegaard]]
-
-> I don't understand at all why you're using git the way you are.
->
-> I think that this needs to be reworked into a patch against the regular
-> ikiwiki tree, that adds the po4a stuff needed to generate the pot files for the
-> basewiki and template content, as well as the stuff that generates the
-> translated versions of those from the po files.
->
-> The now merged po plugin also handles some of this.
-> --[[Joey]]
+It would make sense to make the existing `locale` setting control which
+templates are used. But the [[plugins/po]] plugin would probably want to do
+something more, and use the actual language the page is written in.
+--[[Joey]]
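+
+A minimal illustration of the template-generating program idea (just a
+sketch, using the procedural `Locale::gettext` interface; the text domain
+is hypothetical):
+
+	#!/usr/bin/perl
+	# run once per language, with the locale environment set, so that
+	# gettext picks up the right catalog for each output file
+	use POSIX qw(setlocale LC_ALL);
+	use Locale::gettext;
+	setlocale(LC_ALL, "");
+	textdomain("ikiwiki-templates");	# hypothetical domain
+	print gettext("This page is locked and cannot be edited."), "\n";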
diff --git a/doc/todo/language_definition_for_the_meta_plugin.mdwn b/doc/todo/language_definition_for_the_meta_plugin.mdwn
index 90bfbef3b..44fcf32bb 100644
--- a/doc/todo/language_definition_for_the_meta_plugin.mdwn
+++ b/doc/todo/language_definition_for_the_meta_plugin.mdwn
@@ -95,7 +95,24 @@ This may be useful for sites with a few pages in different languages, but no ful
>>> Now that po has been merged, this patch should probably also be adapted
>>> so that the po plugin forces the meta::lang of every page to what po
->>> thinks it should be. Perhaps [[the_special_po_pagespecs|ikiwiki/pagespec/po]]
->>> should also work with meta-assigned languages? --[[smcv]]
+>>> thinks it should be. --[[smcv]]
+
+>>>> Agreed, users of the po plugin would greatly benefit from it.
+>>>> Seems doable. --[[intrigeri]]
+
+>>> Perhaps [[the_special_po_pagespecs|ikiwiki/pagespec/po]] should
+>>> also work with meta-assigned languages? --[[smcv]]
+
+>>>> Yes. But then, these special pagespecs should be moved outside of
+>>>> [[plugins/po]], as they could be useful to anyone using the
+>>>> currently discussed patch even when not using the po plugin.
+>>>>
+>>>> We could add these pagespecs to the core and make them use
+>>>> a simple language-guessing system based on a new hook. Any plugin
+>>>> that implements such a hook could decide whether it should
+>>>> override the language guessed by another one, and optionally use
+>>>> the `first`/`last` options (e.g. the po plugin will want to be
+>>>> authoritative on the pages of type po, and will then use
+>>>> `last`). --[[intrigeri]]
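+
+A sketch of how such stacked guesses might combine (hook name and
+semantics are assumptions; nothing here is implemented):
+
+	sub guess_lang ($) {
+		my $page = shift;
+		my $guess;
+		# @lang_hooks: registered guessers, ordered by first/last
+		foreach my $hook (@lang_hooks) {
+			my $lang = $hook->($page);
+			$guess = $lang if defined $lang;	# later hooks win
+		}
+		return $guess;	# undef if no plugin knows
+	}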
[[!tag wishlist patch plugins/meta translation]]
diff --git a/doc/todo/link_plugin_perhaps_too_general__63__.mdwn b/doc/todo/link_plugin_perhaps_too_general__63__.mdwn
new file mode 100644
index 000000000..8a5fd50eb
--- /dev/null
+++ b/doc/todo/link_plugin_perhaps_too_general__63__.mdwn
@@ -0,0 +1,25 @@
+[[!tag wishlist blue-sky]]
+(This isn't important to me - I don't use MediaWiki or Creole syntax myself -
+but just thinking out loud...)
+
+The [[ikiwiki/wikilink]] syntax IkiWiki uses sometimes conflicts with page
+languages' syntax (notably, [[plugins/contrib/MediaWiki]] and [[plugins/Creole]]
+want their wikilinks the other way round, like
+`\[[plugins/write|how to write a plugin]]`). It would be nice if there were
+some way for page language plugins to opt in/out of the normal wiki link
+processing - then MediaWiki and Creole could have their own `linkify` hook
+that was only active for *their* page types, and used the appropriate
+syntax.
+
+In [[todo/matching_different_kinds_of_links]] I wondered about adding a
+`\[[!typedlink to="foo" type="bar"]]` directive. This made me wonder whether
+a core `\[[!link]]` directive would be useful; this could be a fallback for
+page types where a normal wikilink can't be done for whatever reason, and
+could also provide extension points more easily than WikiLinks' special
+syntax with extra punctuation, which doesn't really scale?
+
+Straw-man:
+
+ \[[!link to="ikiwiki/wikilink" desc="WikiLinks"]]
+
+--[[smcv]]
diff --git a/doc/todo/make_link_target_search_all_paths_as_fallback.mdwn b/doc/todo/make_link_target_search_all_paths_as_fallback.mdwn
new file mode 100644
index 000000000..f971fd8e0
--- /dev/null
+++ b/doc/todo/make_link_target_search_all_paths_as_fallback.mdwn
@@ -0,0 +1,27 @@
+[[!tag wishlist]]
+
+## Idea
+
+After searching from the most local to the root for a wikilinkable page, descend into the tree of pages looking for a matching page.
+
+For example, if I link to \[\[Pastrami\]\] from /users/eric, the current behavior is to look for
+
+ * /users/eric/pastrami
+ * /users/pastrami
+ * /pastrami
+
+I'd like it to find /sandwiches/pastrami.
+
+## Issues
+
+I know this is a tougher problem, especially to scale efficiently. There is also no clear ordering unless it is the recursive dictionary ordering (i.e. the order of a breadth-first search with natural ordering). It would probably require some sort of static lookup table keyed by pagename and yielding a path, generated initially by a breadth-first search and then updated incrementally when pages are added/removed/renamed. In some ways a global table might not be ideal, since one might argue that the link above should match /users/eric/sandwiches/pastrami before /sandwiches/pastrami. I guess you could put all matching paths in the lookup table and then evaluate the ordering at parse time.
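+
+A sketch of that lookup table (not existing ikiwiki code; `%pagesources`
+is the core hash of known pages):
+
+	my %by_basename;	# e.g. "pastrami" => ["sandwiches/pastrami", ...]
+	foreach my $page (keys %pagesources) {
+		my ($base) = $page =~ m{([^/]+)$};
+		push @{$by_basename{lc $base}}, $page;
+	}
+	# consulted only after the normal subpage search has failed;
+	# the ordering of multiple candidates is the open question above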
+
+## Motivation
+
+Since I often access my documents using a text editor, I find it useful to keep them ordered in a hierarchy, about 3 levels deep with a branching factor of perhaps 10. When linking though, I'd like the wiki to find the document for me, since I am lazy.
+
+Also, many of my wiki pages comprise the canonical local representation of some unique entity, for example I might have /software/ikiwiki. The nesting, however, is only to aid navigation, and shouldn't be considered part of the resource's name.
+
+## Alternatives
+
+If an alias could be specified in the page body (for example, /ikiwiki for /software/ikiwiki) which would then stand in for a regular page when searching, then the navigational convenience of folders could be preserved while selectively flattening the search namespace.
diff --git a/doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn b/doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn
new file mode 100644
index 000000000..2b2b0242e
--- /dev/null
+++ b/doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn
@@ -0,0 +1,11 @@
+One feature of MediaWiki which I quite like is the ability to mark a change as 'minor', or 'trivial'. This can then be used to filter the 'recentchanges' page, to only show substantial edits.
+
+The utility of this depends entirely on whether the editors use it properly.
+
+I currently use an inline on the front page of my personal homepage to show the most recent pages (by creation date) within a subsection of my site (a blog). Blog posts are rarely modified much after they are 'created' (or published; I bodge the creation time via meta when I publish a post, since it might have sat in draft form indefinitely), so this effectively shows only non-trivial changes.
+
+I would like to have a short list of the most recent modifications to the site on the front page. I therefore want to sort by modified time rather than creation time, but exclude edits that I self-identify as minor. I also only want to take a small number of items, say the top 5, and display only their titles (which may be derived from filename, or set via meta again).
+
+I'm still thinking through how this might be achieved in an ikiwiki-suitable fashion, but I think I need a scheme to identify certain edits as trivial. This would have to work via web edits (easier: could add a check box to the edit form) and plain changes in the VCS (harder: scan for keywords in a commit message? in a VCS-agnostic fashion?)
+
+[[!tag wishlist]]
diff --git a/doc/todo/matching_different_kinds_of_links.mdwn b/doc/todo/matching_different_kinds_of_links.mdwn
index 26c5a072b..da3ea49f6 100644
--- a/doc/todo/matching_different_kinds_of_links.mdwn
+++ b/doc/todo/matching_different_kinds_of_links.mdwn
@@ -36,6 +36,11 @@ Besides pagespecs, the `rel=` attribute could be used for styles. --Ivan Z.
> normal links.) Might be better to go ahead and add the variable to
> core though. --[[Joey]]
+>> I've implemented this with the data structure you suggested, except that
+>> I called it `%typedlinks` instead of `%linktype` (it seemed to make more
+>> sense that way). I also ported `tag` to it, and added a `tagged_is_strict`
+>> config option. See below! --[[smcv]]
+
I saw somewhere else here some suggestions for the wiki-syntax for specifying the relation name of a link. One more suggestion---[the syntax used in Semantic MediaWiki](http://en.wikipedia.org/wiki/Semantic_MediaWiki#Basic_usage), like this:
<pre>
@@ -45,3 +50,147 @@ I saw somewhere else here some suggestions for the wiki-syntax for specifying th
So a part of the effect of [[`\[[!taglink TAG\]\]`|plugins/tag]] could be represented as something like `\[[tag::TAG]]` or (more understandable relation name in what concerns the direction) `\[[tagged::TAG]]`.
I don't have any opinion on this syntax (whether it's good or not)...--Ivan Z.
+
+-------
+
+>> [[!template id=gitbranch author="[[Simon_McVittie|smcv]]" branch=smcv/ready/link-types]]
+>> [[!tag patch]]
+
+## Documentation for smcv's branch
+
+### added to [[ikiwiki/pagespec]]
+
+* "`typedlink(type glob)`" - matches pages that link to a given page (or glob)
+ with a given link type. Plugins can create links with a specific type:
+ for instance, the tag plugin creates links of type `tag`.
+
+### added to [[plugins/tag]]
+
+If the `tagged_is_strict` config option is set, `tagged()` will only match
+tags explicitly set with [[ikiwiki/directive/tag]] or
+[[ikiwiki/directive/taglink]]; if not (the default), it will also match
+any other [[WikiLinks|ikiwiki/WikiLink]] to the tag page.
+
+### added to [[plugins/write]]
+
+#### `%typedlinks`
+
+The `%typedlinks` hash records links of specific types. Do not modify this
+hash directly; call `add_link()`. The keys are page names, and the values
+are hash references. In each page's hash reference, the keys are link types
+defined by plugins, and the values are hash references with link targets
+as keys, and 1 as a dummy value, something like this:
+
+ $typedlinks{"foo"} = {
+ tag => { short_word => 1, metasyntactic_variable => 1 },
+ next_page => { bar => 1 },
+ };
+
+Ordinary [[WikiLinks|ikiwiki/WikiLink]] appear in `%links`, but not in
+`%typedlinks`.
+
+#### `add_link($$;$)`
+
+This adds a link to `%links`, ensuring that duplicate links are not
+added. Pass it the page that contains the link, and the link text.
+
+An optional third parameter sets the link type (`undef` produces an ordinary
+[[ikiwiki/WikiLink]]).
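+
+For example, the tag plugin adds its links along these lines:
+
+	add_link($page, $tagpage, "tag");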
+
+## Review
+
+Some code refers to `oldtypedlinks`, and other to `oldlinktypes`. --[[Joey]]
+
+> Oops, I'll fix that. That must mean missing test coverage, too :-(
+> --s
+
+>> A test suite for the dependency resolver *would* be nice. --[[Joey]]
+
+>>> Bug fixed, I think. A test suite for the dependency resolver seems
+>>> more ambitious than I want to get into right now, but I added a
+>>> unit test for this part of it... --s
+
+I'm curious what your reasoning was for adding a new variable
+rather than using `pagestate`. Was it only because you needed
+the `old` version to detect change, or was there other complexity?
+--J
+
+> You seemed to be more in favour of adding it to the core in
+> your proposal above, so I assumed that'd be more likely to be
+> accepted :-) I don't mind one way or the other - `%typedlinks`
+> costs one core variable, but saves one level of hash nesting. If
+> you're not sure either, then I think the decision should come down
+> to which one is easier to document clearly - I'm still unhappy with
+> my docs for `%typedlinks`, so I'll try to write docs for it as
+> `pagestate` and see if they work any better. --s
+
+>> On reflection, I don't think it's any better as a pagestate, and
+>> the contents of pagestates (so far) aren't documented for other
+>> plugins' consumption, so I'm inclined to leave it as-is, unless
+>> you want to veto that. Loose rationale: it needs special handling
+>> in the core to be a dependency type (I re-used the existing link
+>> type), it's API beyond a single plugin, and it's really part of
+>> the core parallel to pagestate rather than being tied to a
+>> specific plugin. Also, I'd need to special-case it to have
+>> ikiwiki not delete it from the index, unless I introduced a
+>> dummy typedlinks plugin (or just hook) that did nothing... --s
+
+I have not convinced myself this is a real problem, but..
+If a page has a typed link, there seems to be no way to tell
+if it also has a separate, regular link. `add_link` will add
+to `@links` when adding a typed, or untyped link. If only untyped
+links were recorded there, one could tell the difference. But then
+typed links would not show up at all in eg, a linkmap,
+unless it was changed to check for typed links too.
+(Or, regular links could be recorded in typedlinks too,
+with an empty type. (Bloaty.)) --J
+
+> I think I like the semantics as-is - I can't think of any
+> reason why you'd want to ask the question "does A link to B,
+> not counting tags and other typed links?". A typed link is
+> still a link, in my mind at least. --s
+
+>> Me neither, let's not worry about it. --[[Joey]]
+
+I suspect we could get away without having `tagged_is_strict`
+without too much transitional trouble. --[[Joey]]
+
+> If you think so, I can delete about 5 LoC. I don't particularly
+> care either way; [[Jon]] expressed concern about people relying
+> on the current semantics, on one of the pages requesting this
+> change. --s
+
+>> Removed in a newer version of the branch. --s
+
+I might have been wrong to introduce `typedlink(tag foo)`. It's not
+very user-friendly, and is more useful as a backend for other plugins
+that as a feature in its own right - any plugin introducing a link
+type will probably also want to have its own preprocessor directive
+to set that link type, and its own pagespec function to match it.
+I wonder whether to make a `typedlink` plugin that has the typedlink
+pagespec match function and a new `\[[!typedlink to="foo" type="bar"]]`
+directive, though... --[[smcv]]
+
+> I agree, per-type matchers are more friendly and I'm not enamored of the
+> multi-parameter pagespec syntax. --[[Joey]]
+
+>> Removed in a newer version of the branch. I re-introduced it as a
+>> plugin in `smcv/typedlink`, but I don't think we really need it. --s
+
+----
+
+I am ready to merge this, but I noticed one problem -- since `match_tagged`
+now only matches pages with the tag linktype, a wiki will need to be
+rebuilt on upgrade in order to get the linktype of existing tags in it
+recorded. So there needs to be a NEWS item about this and
+the postinst modified to force the rebuild.
+
+> Done, although you'll need to plug in an appropriate version number when
+> you release it. Is there a distinctive reminder string you grep for
+> during releases? I've used `UNRELEASED` for now. --[[smcv]]
+
+Also, the ready branch adds `typedlink()` to [[ikiwiki/pagespec]],
+but you removed that feature as documented above.
+--[[Joey]]
+
+> [[Done]]. --s
diff --git a/doc/todo/mercurial.mdwn b/doc/todo/mercurial.mdwn
index e71c8106a..de1f148e5 100644
--- a/doc/todo/mercurial.mdwn
+++ b/doc/todo/mercurial.mdwn
@@ -119,3 +119,11 @@ I have a few notes on mercurial usage after trying it out for a while:
>> I think the ideal solution would be to build `$destdir/recentchanges/*` directly from the output of `hg log`. --[[buo]]
>>>> That would be 100 times as slow, so I chose not to do that. --[[Joey]]
+
+>>>> Since this is confusing people, allow me to clarify: Ikiwiki's
+>>>> recentchanges generation pulls log information directly out of the VCS as
+>>>> needed. It caches it in recentchanges/* in the `srcdir`. These cache
+>>>> files need not be preserved, should never be checked into VCS, and if
+>>>> you want to you can configure your VCSignore file to ignore them,
+>>>> just as you can configure it to ignore the `.ikiwiki` directory in the
+>>>> `scrdir`. --[[Joey]]
diff --git a/doc/todo/mirrorlist_with_per-mirror_usedirs_settings.mdwn b/doc/todo/mirrorlist_with_per-mirror_usedirs_settings.mdwn
new file mode 100644
index 000000000..6ca9962ba
--- /dev/null
+++ b/doc/todo/mirrorlist_with_per-mirror_usedirs_settings.mdwn
@@ -0,0 +1,24 @@
+I've got a wiki that is built in two places:
+
+* a static copy, aimed at being viewed without any web server, using
+ a web browser's `file:///` urls => usedirs is disabled to get nice
+ and working links
+* an online copy, with usedirs enabled in order to benefit from the
+ language negotiation using the po plugin
+
+I need to use mirrorlist on the static copy, so that one can easily
+reach the online, possibly updated, pages. But as documented, "pages are
+assumed to exist in the same location under the specified url on each
+mirror", so the generated urls are wrong.
+
+My `mirrorlist` branch contains a patch that allows one to configure usedirs
+per-mirror. Note: the old configuration format is still supported, so this should
+not break existing wikis.
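+
+One possible shape for the new format (a sketch; the branch may differ):
+
+	mirrorlist => {
+		'static' => { url => 'file:///home/me/wiki', usedirs => 0 },
+		'online' => { url => 'http://example.com/wiki', usedirs => 1 },
+	},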
+
+OT: as a bonus, this branch contains a patch to support {hashes,arrays} of
+{hashes,arrays} in `$config`, which I had missed when writing the po plugin;
+this time I decided it was really needed to implement this feature.
+
+--[[intrigeri]]
+
+[[!tag patch]]
diff --git a/doc/todo/more_flexible_inline_postform.mdwn b/doc/todo/more_flexible_inline_postform.mdwn
index bc8bc0809..414476bd7 100644
--- a/doc/todo/more_flexible_inline_postform.mdwn
+++ b/doc/todo/more_flexible_inline_postform.mdwn
@@ -16,3 +16,8 @@ logical first step towards doing comment-like things with inlined pages).
> Perhaps what we need is a `postform` plugin/directive that inline depends
> on (automatically enables); its preprocess method could automatically be
> invoked from preprocess_inline when needed. --[[smcv]]
+
+>> I've been looking at this stuff again. I think you are right, this would
+>> be the right approach. The comments plugin could use it similarly, allowing
+>> sites which desire it to have an inline comment submission form on all
+>> pages with comments enabled. I'm going to take a look. -- [[Jon]]
diff --git a/doc/todo/multiple_template_directories.mdwn b/doc/todo/multiple_template_directories.mdwn
index c09a9595f..6a474b4f3 100644
--- a/doc/todo/multiple_template_directories.mdwn
+++ b/doc/todo/multiple_template_directories.mdwn
@@ -11,3 +11,63 @@ ought to do the trick.
> global dir when it cannot find a template. For me, this is good enough.
> And it is even documented in the man page. Sigh. I guess this could be
> considered [[done]].
+
+I have a use case for this: a site composed of blogs and wikis, with templates divided into three categories: common, blog and wiki. The only solution I found is maintaining hard links; being able to have multiple template dirs would obviously be better. -- Changaco
+
+> [[plugins/underlay]] used to allow adding extra templatedirs, but Joey
+> removed that functionality when he made templates search the wiki's
+> own `templates` directory.
+>
+> You can get a 3-level hierarchy like this:
+>
+> * instance-specific overrides: $srcdir/templates
+> * common to the entire site: a directory that is the value of all
+> instances' `templatedir` parameters
+> * common to every ikiwiki in the world: /usr/share/ikiwiki/templates
+> (implicitly searched)
+>
+> (by "instance" I mean an instance of ikiwiki - a .setup file, basically.)
+>
+> For a more complex hierarchy you'd need the old [[plugins/underlay]]
+> functionality, i.e. you'd need to (ask Joey to) revert the patch that
+> removed it. For instance, if anyone has a hierarchy like this, then
+> they need the old functionality back in order to split the template
+> search path for the things marked `(???)`:
+>
+> every ikiwiki in the world (/usr/share/ikiwiki/templates)
+> \--- your site (???)
+> \--- your blogs (???)
+> \--- travel blog ($srcdir/templates)
+> \--- code blog ($srcdir/templates)
+> \--- your wikis (???)
+> \--- travel wiki ($srcdir/templates)
+> \--- code wiki ($srcdir/templates)
+>
+> This looks pretty hypothetical to me, though...
+> --[[smcv]]
+
+>> The reason I removed it is because the same functionality of having
+>> multiple template directories is still present. Just put them in
+>> the templates/ subdirectory of multiple underlay directories instead.
+>> --[[Joey]]
+
+>>>Thanks, I didn't realize this was possible. Problem solved. -- Changaco
+
+>>>> We can consider this [[done]], then. For reference, the solution
+>>>> to the hierarchy I mentioned above would be:
+>>>>
+>>>> all your sites have $your_underlay as an underlay
+>>>>
+>>>> the blogs and wikis all have $blog_underlay or $wiki_underlay
+>>>> (as appropriate) as a higher priority underlay
+>>>>
+>>>> every ikiwiki in the world (/usr/share/ikiwiki/templates)
+>>>> \--- your site ($your_underlay/templates, or templatedir)
+>>>> \--- your blogs ($blog_underlay/templates)
+>>>> \--- travel blog ($srcdir/templates)
+>>>> \--- code blog ($srcdir/templates)
+>>>> \--- your wikis ($wiki_underlay/templates)
+>>>> \--- travel wiki ($srcdir/templates)
+>>>> \--- code wiki ($srcdir/templates)
+>>>>
+>>>> --[[smcv]]
diff --git a/doc/todo/multiple_templates.mdwn b/doc/todo/multiple_templates.mdwn
index 72783c556..30fb8d6ee 100644
--- a/doc/todo/multiple_templates.mdwn
+++ b/doc/todo/multiple_templates.mdwn
@@ -1,4 +1,4 @@
-> Another useful feature might be to be able to choose a different [[template|wikitemplates]]
+> Another useful feature might be to be able to choose a different [[template|templates]]
> file for some pages; [[blog]] pages would use a template different from the
> home page, even if both are managed in the same repository, etc.
diff --git a/doc/todo/openid_user_filtering.mdwn b/doc/todo/openid_user_filtering.mdwn
index 8b2d0082e..6a318c4c0 100644
--- a/doc/todo/openid_user_filtering.mdwn
+++ b/doc/todo/openid_user_filtering.mdwn
@@ -7,3 +7,7 @@ So I suggest an ikiwiki configuration like:
users => ["*.webvm.net"],
Would only allow edits from openIDs of that form.
+
+> This kind of thing can be [[done]] now: --[[Joey]]
+>
+> locked_pages => "* and !user(http://*.webvm.net/)"
diff --git a/doc/todo/optimize_simple_dependencies.mdwn b/doc/todo/optimize_simple_dependencies.mdwn
new file mode 100644
index 000000000..6f6284303
--- /dev/null
+++ b/doc/todo/optimize_simple_dependencies.mdwn
@@ -0,0 +1,95 @@
+I'm still trying to optimize ikiwiki for a site using
+[[plugins/contrib/album]], and checking which pages depend on which pages
+is still taking too long. Here's another go at fixing that, using [[Will]]'s
+suggestion from [[todo/should_optimise_pagespecs]]:
+
+> A hash, by itself, is not optimal because
+> the dependency list holds two things: page names and page specs. The hash would
+> work well for the page names, but you'll still need to iterate through the page specs.
+> I was thinking of keeping a list and a hash. You use the list for pagespecs
+> and the hash for individual page names. To make this work you need to adjust the
+> API so it knows which you're adding. -- [[Will]]
+
+If you have P pages and refresh after changing C of them, where an average
+page has E dependencies on exact page names and D other dependencies, this
+branch should drop the complexity of checking dependencies from
+O(P * (D+E) * C) to O(C + P*E + P*D*C). Pages that use inline or map have
+a large value for E (e.g. one per inlined page) and a small value for D (e.g.
+one per inline).
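+
+Concretely, the split looks something like this (hypothetical contents):
+
+	# exact page names: checked with a hash lookup per changed page
+	$depends_exact{"plugins/map"}{"plugins/map/discussion"} = 1;
+	# real pagespecs: still matched against every changed page
+	$depends{"plugins/map"}{"plugins/* and !*/discussion"} = 1;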
+
+Benchmarking:
+
+Test 1: a wiki with about 3500 pages and 3500 photos, and a change that
+touches about 350 pages and 350 photos
+
+Test 2: the docwiki (about 700 objects not excluded by docwiki.setup, mostly
+pages), docwiki.setup modified to turn off verbose, and a change that touches
+the 98 pages of plugins/*.mdwn
+
+In both tests I rebuilt the wiki with the target ikiwiki version, then touched
+the appropriate pages and refreshed.
+
+Results of test 1: without this branch it took around 5:45 to rebuild and
+around 5:45 again to refresh (so rebuilding 10% of the pages, then deciding
+that most of the remaining 90% didn't need to change, took about as long as
+rebuilding everything). With this branch it took 5:47 to rebuild and 1:16
+to refresh.
+
+Results of test 2: rebuilding took 14.11s without, 13.96s with; refreshing
+three times took 7.29/7.40/7.37s without, 6.62/6.56/6.63s with.
+
+(This benchmarking was actually done with my [[plugins/contrib/album]] branch,
+since that's what the huge wiki needs; that branch doesn't alter core code
+beyond the ready/depends-exact branch point, so the results should be
+equally valid.)
+
+--[[smcv]]
+
+> Now [[merged|done]] --[[smcv]]
+
+----
+
+> We discussed this on irc; I had some worries that things may have been
+> switched to `add_depends_exact` that were not pure page names. My current
+> feeling is it's all safe, but who knows. It's easy to miss something.
+> Which makes me think this is not a good interface.
+>
+> Why not, instead, make `add_depends` smart. If it's passed something
+> that is clearly a raw page name, it can add it to the exact depends hash.
+> Else, add it to the pagespec hash. You can tell if it's a pure page name
+> by matching on `$config{wiki_file_regexp}`.
+
+>> Good thinking. Done in commit 68ce514a 'Auto-detect "simple dependencies"',
+>> with a related bugfix in e8b43825 "Force %depends_exact to lower case".
+>>
+>> Performance impact: Test 2 above takes 0.2s longer to rebuild (probably
+>> from all the calls to lc, which are, however, necessary for correctness)
+>> and has indistinguishable performance for a refresh.
+>>
+>> Test 1 took about 6 minutes to rebuild and 1:25 to refresh; those are
+>> pessimistic figures, since I've added 90 more photos and 90 more pages
+>> (both to the wiki as a whole, and the number touched before refreshing)
+>> since testing the previous version of this branch. --[[smcv]]
+
+> Also I think there may be little optimisation value left in
+> 7227c2debfeef94b35f7d81f42900aa01820caa3, since the "regular" dependency
+> lists will be much shorter.
+
+>> You're probably right, but IMO it's not worth reverting it - a set (hash with
+>> dummy values) is still the right data structure. --[[smcv]]
+
+> Sounds like inline pagenames has an already extant bug WRT
+> pages moving, which this should not make worse. Would be good to verify.
+
+>> If you mean the standard "add a better match for a link-like construct" bug
+>> that also affects sidebar, then yes, it does have the bug, but I'm pretty
+>> sure this branch doesn't make it any worse. I could solve this at the cost
+>> of making pagenames less useful for interactive use, by making it not
+>> respect [[ikiwiki/subpage/LinkingRules]], but instead always interpret
+>> its paths as relative to the top of the wiki - that's actually all that
+>> [[plugins/contrib/album]] needs. --[[smcv]]
+
+> Re coding, it would be nice if `refresh()` could avoid duplicating
+> the debug message, etc in the two cases. --[[Joey]]
+
+>> Fixed in commit f805d566 "Avoid duplicating debug message..." --[[smcv]]
diff --git a/doc/todo/optional_underlaydir_prefix.mdwn b/doc/todo/optional_underlaydir_prefix.mdwn
new file mode 100644
index 000000000..06900a904
--- /dev/null
+++ b/doc/todo/optional_underlaydir_prefix.mdwn
@@ -0,0 +1,46 @@
+For security reasons, symlinks are disabled in IkiWiki. That's fair enough, but that means that some problems, which one could otherwise solve by using a symlink, cannot be solved. The specific problem in this case is that all underlays are placed at the root of the wiki, when it could be more convenient to place some underlays in specific sub-directories.
+
+Use-case 1 (to keep things tidy):
+
+Currently IkiWiki has some javascript files in `underlays/javascript`; that directory is given as one of the underlay directories. Thus, all the javascript files appear in the root of the generated site. But it would be tidier if one could say "put the contents of *this* underlaydir under the `js` directory".
+
+> Of course, this could be accomplished, if we wanted to, by moving the
+> files to `underlays/javascript/js`. --[[Joey]]
+
+Use-case 2 (a read-only external dir):
+
+Suppose I want to include a subset of `/usr/local/share/docs` on my wiki, say the docs about `foo`. But I want them to be under the `docs/foo` sub-directory on the generated site. Currently I can't do that. If I give `/usr/local/share/docs/foo` as an underlaydir, then the contents of that will be in the root of the site, rather than under `docs/foo`. And if I give `/usr/local/share/docs` as an underlaydir, then the contents of the `foo` dir will be under `foo`, but it will also include every other thing in `/usr/local/share/docs`.
+
+Since we can't use symlinks in an underlay dir to link to these directories, perhaps one could give a specific underlay dir a specific prefix, which defines the sub-directory that the underlay should appear in.
+
+I'm not sure how this would be implemented, but I guess it could be configured something like this:
+
+ prefixed_underlay => {
+ 'js' => '/usr/local/share/ikiwiki/javascript',
+ 'docs/foo' => '/usr/local/share/docs/foo',
+ }
+
+> So, let me review why symlinks are an issue. For normal, non-underlay
+> pages, users who do not have filesystem access to the server may have
+> commit access, and so could commit eg, a symlink to `/etc/passwd` (or
+> to `/` !). The guards are there to prevent ikiwiki either exposing the
+> symlink target's contents, or potentially overwriting it.
+>
+> Is this a concern for underlays? Most of the time, certainly not;
+> the underlay tends to be something only the site admin controls.
+> Not all the security checks that are done on the srcdir are done
+> on the underlays, either. Most checks done on files in the underlay
+> are only done because the same code handles srcdir files. The one
+> exception is the test that skips processing symlinks in the underlay dir.
+> (But note that the underlay directory can be a symlink to elsewhere,
+> which the srcdir, by default, cannot be.)
+>
+> So, one way to approach this is to make ikiwiki follow directory symlinks
+> inside the underlay directory. Just a matter of passing `follow => 1` to
+> find. (This would still not allow individual files to be symlinks, because
+> `readfile` does not allow reading symlinks. But I don't see much need
+> for that.) --[[Joey]]
+
+>> If you think that enabling symlinks in underlay directories wouldn't be a security issue, then I'm all for it! That would be much simpler to implement, I'm sure. --[[KathrynAndersen]]
+
+[[!taglink wishlist]]
diff --git a/doc/todo/pagespec_to_disable_ikiwiki_directives.mdwn b/doc/todo/pagespec_to_disable_ikiwiki_directives.mdwn
new file mode 100644
index 000000000..4211c2d10
--- /dev/null
+++ b/doc/todo/pagespec_to_disable_ikiwiki_directives.mdwn
@@ -0,0 +1,5 @@
+I would like some pages (identified by pagespec) to not expand ikiwiki directives (wikilinks, or \[[! directives, or both, or perhaps either one selectively).
+
+I will tag this [[wishlist]]. It's something I might try myself. It's part of my thinking about how to handle [[comments]], as I'm still ruminating on alternatives to [[smcv]]'s approach. (with the greatest of respect to smcv!) (Perhaps my attempt will try to factor out the no-directives-allowed logic from the comments plugin).
+
+ -- [[Jon]]
diff --git a/doc/todo/pagestats_among_a_subset_of_pages.mdwn b/doc/todo/pagestats_among_a_subset_of_pages.mdwn
index fd15d6a42..33f9258fd 100644
--- a/doc/todo/pagestats_among_a_subset_of_pages.mdwn
+++ b/doc/todo/pagestats_among_a_subset_of_pages.mdwn
@@ -1,5 +1,4 @@
[[!tag patch plugins/pagestats]]
-[[!template id=gitbranch branch=smcv/ready/among author="[[smcv]]"]]
My `among` branch fixes [[todo/backlinks_result_is_lossy]], then uses that
to provide pagestats for links from a subset of pages. From the docs included
diff --git a/doc/todo/paste_plugin.mdwn b/doc/todo/paste_plugin.mdwn
new file mode 100644
index 000000000..83384a8d7
--- /dev/null
+++ b/doc/todo/paste_plugin.mdwn
@@ -0,0 +1,36 @@
+It was suggested that using ikiwiki as an alternative to pastebin services
+could be useful, especially if you want pastes to not expire and be
+cloneable.
+
+All you really need is a special purpose ikiwiki instance that you commit
+to by git. But a web interface for pasting could also be nice.
+
+There could be a directive that inserts a paste form onto a page. The form
+would have a big textarea for pasting into, and might also have a file
+upload button (for uploading instead of pasting). It could also copy the
+page edit form's dropdown of markup types, which would be especially useful
+if using the highlight plugin to support programming languages. The default
+should probably be txt, not mdwn, if the txt plugin is enabled.
+
+(There's a lot of overlap between that and editpage of course .. similar
+to the overlap between the comment form and editpage.)
+
+When posted, the form would just come up with a new, numeric subpage
+of the page it appears on, and save the paste there.
+
+Another thing that might be useful is a "copy" (or "paste as new") action
+on the action bar. This would take an existing paste and copy it into the
+paste edit form, for editing and saving under a new id.
+
+---
+
+A sample wiki configuration using this might be:
+
+* enable highlight and txt
+* enable anonok so anyone can paste; lock anonymous users down to only
+ creating new pastes, not editing other pages
+* disable modification of existing pastes (how? disabling editpage would
+ work, but that would disallow setting up anonymous git push)
+* enable comments, so that each paste can be commented on
+* enable getsource, so the source to a paste can easily be downloaded
+* optionally, enable untrusted git push
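+
+A setup-file fragment matching the list above might look like (a sketch;
+the `paste/*` layout and the lockdown pagespec are assumptions):
+
+	add_plugins => [qw{highlight txt anonok comments getsource}],
+	anonok_pagespec => "postcomment(*) or paste/*",
+	locked_pages => "* and !paste/*",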
diff --git a/doc/todo/po:_avoid_rebuilding_to_fix_meta_titles.mdwn b/doc/todo/po:_avoid_rebuilding_to_fix_meta_titles.mdwn
new file mode 100644
index 000000000..402f70b79
--- /dev/null
+++ b/doc/todo/po:_avoid_rebuilding_to_fix_meta_titles.mdwn
@@ -0,0 +1,45 @@
+Re the meta title escaping issue worked around by `change`.
+
+> I suppose this does not only affect meta, but other things
+> at scan time too. Also, handling it only on rebuild feels
+> suspicious -- a refresh could involve changes to multiple
+> pages and trigger the same problem, I think. Also, exposing
+> this rebuild to the user seems really ugly, not confidence-inducing.
+>
+> So I wonder if there's a better way. Such as making po, at scan time,
+> re-run the scan hooks, passing them modified content (either converted
+> from po to mdwn or with the escaped stuff cheaply de-escaped). (Of
+> course the scan hook would need to avoid calling itself!)
+>
+> (This doesn't need to block the merge, but I hope it can be addressed
+> eventually..)
+>
+> --[[Joey]]
+>>
+>> I'll think about it soon.
+>>
+>> --[[intrigeri]]
+>>
+>>> Did you get a chance to? --[[Joey]]
+
+>>>> I eventually did, and got rid of the ugly double rebuild of pages
+>>>> at build time. This involved adding a `rescan` hook. Rationale
+>>>> and details are in my po branch commit messages. I believe this
+>>>> new way of handling meta title escaping to be far more robust.
+>>>> Moreover this new implementation is more generic, feels more
+>>>> logical to me, and probably fixes other similar bugs outside the
+>>>> meta plugin scope. Please have a look when you can.
+>>>> --[[intrigeri]]
+
+>>>>> Glad you have tackled this. Looking at
+>>>>> 25447bccae0439ea56da7a788482a4807c7c459d,
+>>>>> I wonder how this rescan hook is different from a scan hook
+>>>>> with `last => 1` ? Ah, it comes *after* the preprocess hook
+>>>>> in scan mode. Hmm, I wonder if there's any reason to have
+>>>>> the scan hook called before those as it does now. Reordering
+>>>>> those 2 lines could avoid adding a new hook. --[[Joey]]
+
+>>>>>> Sure. I was fearing to break other plugins if I did so, so I
+>>>>>> did not dare to. I'll try this. --[[intrigeri]]
+
+>>>>>>> Done in my po branch, please have a look. --[[intrigeri]]
diff --git a/doc/todo/po:_better_documentation.mdwn b/doc/todo/po:_better_documentation.mdwn
new file mode 100644
index 000000000..6e9804df4
--- /dev/null
+++ b/doc/todo/po:_better_documentation.mdwn
@@ -0,0 +1,3 @@
+Maybe write separate documentation for the po plugin, depending on the
+people it targets: translators, wiki administrators, hackers. This
+plugin may be complex enough to deserve this.
diff --git a/doc/todo/po:_better_links.mdwn b/doc/todo/po:_better_links.mdwn
new file mode 100644
index 000000000..af879a56a
--- /dev/null
+++ b/doc/todo/po:_better_links.mdwn
@@ -0,0 +1,12 @@
+Once the fix to
+[[bugs/pagetitle_function_does_not_respect_meta_titles]] from
+[[intrigeri]]'s `meta` branch is merged into ikiwiki upstream, the
+generated links' text will be optionally based on the page titles set
+with the [[meta|plugins/meta]] plugin, and will thus be translatable.
+It will also allow displaying the translation status in links to slave
+pages. Both were implemented, and reverted in commit
+ea753782b222bf4ba2fb4683b6363afdd9055b64, which should be reverted
+once [[intrigeri]]'s `meta` branch is merged.
+
+An integration branch, called `meta-po`, merges [[intrigeri]]'s `po`
+and `meta` branches, and thus has these additional features.
diff --git a/doc/todo/po:_better_translation_interface.mdwn b/doc/todo/po:_better_translation_interface.mdwn
new file mode 100644
index 000000000..e66a77b85
--- /dev/null
+++ b/doc/todo/po:_better_translation_interface.mdwn
@@ -0,0 +1,2 @@
+Add a message-by-message translation interface to the PO plugin,
+with automatic escaping of special chars.
diff --git a/doc/todo/po:_remove_po_files_when_disabling_plugin.mdwn b/doc/todo/po:_remove_po_files_when_disabling_plugin.mdwn
new file mode 100644
index 000000000..0801f7fcd
--- /dev/null
+++ b/doc/todo/po:_remove_po_files_when_disabling_plugin.mdwn
@@ -0,0 +1,11 @@
+ikiwiki now has a `disable` hook. Should the po plugin remove the po
+files from the source repository when it has been disabled?
+
+> pot files, possibly, but the po files contain work, so no. --[[Joey]]
+
+>> I tried to implement this in my `po-disable` branch, but AFAIK, the
+>> current rcs plugins interface provides no way to tell whether a
+>> given file (e.g. a POT file in my case) is under version control;
+>> in most cases, it is not, thanks to .gitignore or similar, but we
+>> can't be sure. So I just can't decide it is needed to call
+>> `rcs_remove` rather than a good old `unlink`. --[[intrigeri]]
diff --git a/doc/todo/po:_rethink_pagespecs.mdwn b/doc/todo/po:_rethink_pagespecs.mdwn
new file mode 100644
index 000000000..50c10c5db
--- /dev/null
+++ b/doc/todo/po:_rethink_pagespecs.mdwn
@@ -0,0 +1,11 @@
+I was surprised that, when using the map directive, a pagespec of "*"
+listed all the translated pages as well as regular pages. That can
+make a big difference to an existing wiki when po is turned on,
+and seems generally not wanted.
+(OTOH, you do want to match translated pages by
+default when locking pages.) --[[Joey]]
+
+> It seems hard to me to tell apart the pagespecs whose list of matching
+> pages must be restricted to pages in the master (or current?) language
+> from the ones that should not be restricted. The only solution I can
+> see to this surprising behaviour is: documentation. --[[intrigeri]]
diff --git a/doc/todo/po:_translation_of_directives.mdwn b/doc/todo/po:_translation_of_directives.mdwn
new file mode 100644
index 000000000..89fc93620
--- /dev/null
+++ b/doc/todo/po:_translation_of_directives.mdwn
@@ -0,0 +1,8 @@
+If a translated page contains a directive, it may expand to some English
+text, or text in whatever single language ikiwiki is configured to "speak".
+
+Maybe there could be a way to switch ikiwiki to speaking another language
+when building a non-English page? Then the directives would get translated.
+
+(We also will need this in order to use translated templates, when they are
+available.)
diff --git a/doc/todo/po_needstranslation_pagespec.mdwn b/doc/todo/po_needstranslation_pagespec.mdwn
new file mode 100644
index 000000000..45b7377ea
--- /dev/null
+++ b/doc/todo/po_needstranslation_pagespec.mdwn
@@ -0,0 +1,12 @@
+Commit b225fdc44d4b3d in my po branch adds a `needstranslation()`
+PageSpec. It makes it easy to list pages that need translation work.
+Please review. --[[intrigeri]]
+
+> Looks good, cherry-picked. The only improvement I can
+> think of is that `needstranslation(50)` could match
+> only pages less than 50% translated. --[[Joey]]
+
+>> This improvement has been implemented as 98cc946 in my po branch.
+>> --[[intrigeri]]
+
+[[!tag patch done]]
diff --git a/doc/todo/preview_changes_before_git_commit.mdwn b/doc/todo/preview_changes_before_git_commit.mdwn
new file mode 100644
index 000000000..187497cf4
--- /dev/null
+++ b/doc/todo/preview_changes_before_git_commit.mdwn
@@ -0,0 +1,17 @@
+ikiwiki allows committing changes to the doc wiki over the `git://...` protocol.
+It would be nice if there were a uniform way to view these changes before `git
+push`ing. For the GNU Hurd's web pages, we include a *render_locally* script,
+<http://www.gnu.org/software/hurd/render_locally>, with instructions on
+<http://www.gnu.org/software/hurd/contributing/web_pages.html>, section
+*Preview Changes*. With ikiwiki, one can use `make docwiki`, but that excludes
+a set of pages, as per `docwiki.setup`. --[[tschwinge]]
+
+> `ikiwiki -setup some.setup --render file.mdwn` will build the page and
+> dump it to stdout. So, for example:
+
+ ikiwiki -setup docwiki.setup --render doc/todo/preview_changes_before_git_commit.mdwn | w3m -T text/html
+
+> You have to have a setup file, though it suffices to make up your own
+> if you don't have the real one. Using ikiwiki.info's real setup file
+> won't actually work since it uses a search plugin that gets unhappy
+> if this is not in `/srv/web/ikiwiki.info`. --[[Joey]]
diff --git a/doc/todo/rewrite_ikiwiki_in_haskell.mdwn b/doc/todo/rewrite_ikiwiki_in_haskell.mdwn
index 204c48cd7..48ed744b1 100644
--- a/doc/todo/rewrite_ikiwiki_in_haskell.mdwn
+++ b/doc/todo/rewrite_ikiwiki_in_haskell.mdwn
@@ -29,6 +29,7 @@ It's appealing for a lot of reasons, including:
edit in html editors currently.
- This would be a chance to make WikiLinks with link texts read
"the right way round" (ie, vaguely wiki creole compatably).
+ *[See also [[todo/link_plugin_perhaps_too_general?]] --[[smcv]]]*
- The data structures would probably be quite different.
- I might want to drop a lot of the command-line flags, either
requiring a setup file be used for those things, or leaving the
diff --git a/doc/todo/salmon_protocol_for_comment_sharing.mdwn b/doc/todo/salmon_protocol_for_comment_sharing.mdwn
new file mode 100644
index 000000000..1e56b0a8b
--- /dev/null
+++ b/doc/todo/salmon_protocol_for_comment_sharing.mdwn
@@ -0,0 +1,21 @@
+The <a href="http://www.salmon-protocol.org/home">Salmon protocol</a>
+provides for aggregating comments across sites. If a site that syndicates
+a feed receives a comment on an item in that feed, it can re-post the
+comment to the original source.
+
+> Ikiwiki does not allow comments to be posted on items it aggregates.
+> So salmon protocol support would only need to handle the comment
+> receiving side of the protocol.
+>
+> The current draft protocol document confuses me when it starts talking
+> about using OAuth in the abuse prevention section, since their example
+> does not show use of OAuth, and it's not at all clear to me where the
+> OAuth relationship between aggregator and original source is supposed
+> to come from.
+>
+> Their security model, which goes on to include Webfinger,
+> third-party validation services, XRD, and Magic Signatures, looks sorta
+> like they kept throwing technology at it, hoping something would stick. :-P
+> --[[Joey]]
+
+[[!tag wishlist]]
diff --git a/doc/todo/should_optimise_pagespecs.mdwn b/doc/todo/should_optimise_pagespecs.mdwn
index 3ccef62fe..728ab8994 100644
--- a/doc/todo/should_optimise_pagespecs.mdwn
+++ b/doc/todo/should_optimise_pagespecs.mdwn
@@ -79,8 +79,6 @@ I can think about reducung the size of my wiki source and making it available on
>
> --[[Joey]]
-[[!template id=gitbranch branch=smcv/ready/optimize-depends author="[[smcv]]"]]
-
>> I've been looking at optimizing ikiwiki for a site using
>> [[plugins/contrib/album]] (which produces a lot of pages) and it seems
>> that checking which pages depend on which pages does take a significant
@@ -90,6 +88,8 @@ I can think about reducung the size of my wiki source and making it available on
>> rather than a single pagespec. This does turn out to be faster, although
>> not as much as I'd like. --[[smcv]]
+>>> [[Merged|done]] --[[smcv]]
+
>>> I just wanted to note that there is a whole long discussion of dependencies and pagespecs on the [[todo/tracking_bugs_with_dependencies]] page. -- [[Will]]
>>>> Yeah, I had a look at that (as the only other mention of `pagespec_merge`).
@@ -98,11 +98,216 @@ I can think about reducung the size of my wiki source and making it available on
>>>> I haven't actually deleted it), because the "or" operation is now done in
>>>> the Perl code, rather than by merging pagespecs and translating. --[[smcv]]
-[[!template id=gitbranch branch=smcv/ready/remove-pagespec-merge author="[[smcv]]"]]
-
>>>>> I've now added a patch to the end of that branch that deletes
>>>>> `pagespec_merge` almost entirely (we do need to keep a copy around, in
>>>>> ikiwiki-transition, but that copy doesn't have to be optimal or support
>>>>> future features like [[tracking_bugs_with_dependencies]]). --[[smcv]]
+---
+
+Some questions on your optimize-depends branch. --[[Joey]]
+
+In saveindex the depends list is still or'd together, but the `{depends}`
+field seems only useful for backwards compatibility (ie, ikiwiki-transition
+still uses it), and otherwise just bloats the index.
+
+> If it's acceptable to declare that downgrading IkiWiki requires a complete
+> rebuild, I'm happy with that. I'd prefer to keep the (simple form of the)
+> transition done automatically during a load/save cycle, rather than
+> requiring ikiwiki-transition to be run; we should probably say in NEWS
+> that the performance increase won't fully apply until the next
+> rebuild. --[[smcv]]
+
+>> It is acceptable not to support downgrades.
+>> I don't think we need a NEWS file update since any sort of refresh,
+>> not just a full rebuild, will cause the indexdb to be loaded and saved,
+>> enabling the optimisation. --[[Joey]]
+
+>>> A refresh will load the current dependencies from `{depends}` and save
+>>> them as-is as a one-element `{dependslist}`; only a rebuild will replace
+>>> the single complex pagespec with a long list of simpler pagespecs.
+>>> --[[smcv]]
+
+Is an array the right data structure? `add_depends` has to loop through the
+array to avoid dups; it would be better if a hash were used there. Since
+inline (and other plugins) explicitly add all linked pages, each as a
+separate item, the list can get rather long, and that single add_depends
+loop has suddenly become O(N^2) in the number of pages, which is something
+to avoid.
+
+> I was also thinking about this (I've been playing with some stuff based on the
+> `remove-pagespec-merge` branch). A hash, by itself, is not optimal because
+> the dependency list holds two things: page names and page specs. The hash would
+> work well for the page names, but you'll still need to iterate through the page specs.
+> I was thinking of keeping a list and a hash. You use the list for pagespecs
+> and the hash for individual page names. To make this work you need to adjust the
+> API so it knows which you're adding. -- [[Will]]
+
+> I wasn't thinking about a lookup hash, just a dedup hash, FWIW.
+> --[[Joey]]
+
+>> I was under the impression from previous code review that you preferred
+>> to represent unordered sets as lists, rather than hashes with dummy
+>> values. If I was wrong, great, I'll fix that and it'll probably go
+>> a bit faster. --[[smcv]]
+
+>>> It depends, really. And it'd certainly make sense to benchmark such a
+>>> change. --[[Joey]]
+
+>>>> Benchmarked, below. --[[smcv]]
+
+Also, since a lot of places are calling add_depends in a loop, it probably
+makes sense to just make it accept a list of dependencies to add. It'll be
+marginally faster, probably, and should allow for better optimisation
+when adding a lot of depends at once.
+
+> That'd be an API change; perhaps marginally faster, but I don't
+> see how it would allow better optimisation if we're de-duplicating
+> anyway? --[[smcv]]
+
+>> Well, I was thinking that it might be sufficient to build a `%seen`
+>> hash of dependencies inside `add_depends`, if the places that call
+>> it lots were changed to just call it once. Of course the only way to
+>> tell is benchmarking. --[[Joey]]
+
+>>> It doesn't seem that it significantly affects performance either way.
+>>> --[[smcv]]
+
+In Render.pm, we now have a triply nested loop, which is a bit
+scary for efficiency. It seems there should be a way to
+rework this code so it can use the optimised `pagespec_match_list`,
+and/or hoist some of the inner loop calculations (like the `pagename`)
+out.
+
+> I don't think the complexity is any greater than it was: I've just
+> moved one level of "loop" out of the generated Perl, to be
+> in visible code. I'll see whether some of it can be hoisted, though.
+> --[[smcv]]
+
+>> The call to `pagename` is the only part I can see that's clearly
+>> run more often than before. That function is pretty inexpensive, but..
+>> --[[Joey]]
+
+>>> I don't see anything that can be hoisted without significant refactoring,
+>>> actually. Beware that there are two pagename calls in the loop: one for
+>>> `$f` (which is the page we might want to rebuild), and one for `$file`
+>>> (which is the changed page that it might depend on). Note that I didn't
+>>> choose those names!
+>>>
+>>> The three loops are over source files, their lists of dependency pagespecs,
+>>> and files that might have changed. I see the following things we might be
+>>> doing redundantly:
+>>>
+>>> * If `$file` is considered as a potential dependency for more than
+>>> one `$f`, we evaluate `pagename($file)` more than once. Potential fix:
+>>> cache them (this turns out to save about half a second on the docwiki,
+>>> see below).
+>>> * If several pages depend on the same pagespec, we evaluate whether each
+>>> changed page matches that pagespec more than once: however, we do so
+>>> with a different location parameter every time, so repeated calls are,
+>>> in the general case, the only correct thing to do. Potential fix:
+>>> perhaps special-case "page x depends on page y and nothing else"
+>>> (i.e. globs that have no wildcards) into a separate hash? I haven't
+>>> done anything in this direction.
+>>> * Any preparatory work done by pagespec_match (converting the pagespec
+>>> into Perl, mostly?) is done in the inner loop; switching to
+>>> pagespec_match_list (significant refactoring) saves more than half a
+>>> second on the docwiki.
+>>>
+>>> --[[smcv]]
+
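+(For orientation, the rough shape of the loops under discussion,
+paraphrased from the description above with made-up variable names; this
+is not the actual Render.pm code:)
+
+    PAGE: foreach my $f (@files) {        # pages that might need a rebuild
+        my $p=pagename($f);
+        foreach my $spec (@{$dependslist{$p}}) {
+            foreach my $file (@changed) {
+                # pagename($file) is recomputed for every $f;
+                # this is the caching opportunity noted above
+                my $page=pagename($file);
+                if (pagespec_match($page, $spec, location => $p)) {
+                    push @rebuild, $f;
+                    next PAGE;
+                }
+            }
+        }
+    }
+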
+Very good catch on img/meta using the wrong dependency; verified in the wild!
+(I've cherry-picked those bug fixes.)
+
+----
+
+Benchmarking results: I benchmarked by altering docwiki.setup to switch off
+verbose, running "make clean && ./Makefile.PL && make", and timing one rebuild
+of the docwiki followed by three refreshes. Before each refresh I used
+`touch plugins/*.mdwn` to have something significant to refresh.
+
+I'm assuming that "user" CPU time is the important thing here (system time was
+relatively small in all cases, up to 0.35 seconds per run).
+
+master at the time of rebasing: 14.20s to rebuild, 10.04/12.07/14.01s to
+refresh. I think you can see the bug clearly here - the pagespecs are getting
+more complicated every time!
+
+> I can totally see a bug here, and it's one I didn't think existed. Ie,
+> I thought that after the first refresh, the pagespec should stabilize,
+> and what it stabilized to was probably unnecessarily long, but not
+> growing w/o bounds!
+>
+> a) Explains why ikiwiki.info has been so slow lately. Well that and some
+> other things that overloaded the system.
+> b) Suggests to me we will probably want to force a rebuild on upgrade
+> when fixing this (via the mechanism in the postinst).
+>
+> I've investigated why the pagespecs keep growing: When page A changes,
+> its old depends are cleared. Then
+> page B that inlines A gets rebuilt, and its old depends are also cleared.
+> But page B also inlines page C; which means C gets re-rendered. And this
+> happens w/o its old depends being cleared, so C's depends are doubled.
+> --[[Joey]]
+
+After the initial optimization: 14.27s to rebuild, 8.26/8.33/8.26 to refresh.
+Success!
+
+Not pre-joining dependencies actually took about 0.2s more; I don't know why.
+I'm worried that duplicates will just build up (again) in less simple cases,
+though, so 0.2s is probably a small price to pay for that not happening (it
+might well be experimental error, for that matter).
+
+> It's weird that the suggested optimisations to
+> `add_depends` had no effect. So, the commit message to
+> b6fcb1cb0ef27e5a63184440675d465fad652acf is actually wrong.. ? --[[Joey]]
+
+>> I'll try benchmarking again on the non-public wiki where I had the 4%
+>> speedup. The docwiki is so small that 4% is hard to measure... --[[smcv]]
+
+Not saving {depends} to the index, using a hash instead of a list to
+de-duplicate, and allowing add_depends to take an arrayref instead of a single
+pagespec had no noticeable positive or negative effect on this test.
+
+> I see e4cd168ebedd95585290c97ff42234344bfed46c is still in your branch
+> though. I don't like using an arrayref; it could just take `($page, @depends)`,
+> and I don't see the need to keep it if it doesn't currently help.
+
+>> I'll drop it. --[[smcv]]
+
+> Is there any reason to keep 7227c2debfeef94b35f7d81f42900aa01820caa3
+> if it doesn't improve speed?
+> --[[Joey]]
+
+>> I'll try benchmarking on a more complex wiki and see whether it has a
+>> positive or negative effect. It does avoid being O(n**2) in number of
+>> dependencies. --[[smcv]]
+
+Memoizing the results of pagename brought the rebuild time down to 14.06s
+and the refresh time down to 7.96/7.92/7.92, a significant win.
+
+> Ok, that seems safe to memoize. (It's a real function and it isn't
+> called with a great many inputs.) Why did you choose to memoize it
+> explicitly rather than adding it to the memoize list at the top?
+
+>> It does depend on global variables, so using Memoize seemed like asking for
+>> trouble. I suppose what I did is equivalent to Memoize though... --[[smcv]]
+
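+(The explicit memoization amounts to something like this;
+`%pagename_cache` is a name invented for this sketch:)
+
+    my %pagename_cache;
+    sub cached_pagename ($) {
+        my $file=shift;
+        $pagename_cache{$file}=pagename($file)
+            unless exists $pagename_cache{$file};
+        return $pagename_cache{$file};
+    }
+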
+Refactoring to use pagespec_match_list looks more risky from a code churn
+point of view; rebuild now takes 14.35s, but refresh is only 7.30/7.29/7.28,
+another significant win.
+
+--[[smcv]]
+
+> I had mostly convinced myself that
+> `pagespec_match_list` would not lead to a speed gain here. My reasoning
+> was that you want to stop after finding one match, while `pagespec_match_list`
+> checks all pages for matches. So what we're seeing is that
+> on a rebuild, `@changed` is all pages, and not short-circuiting leads
+> to unnecessary work. OTOH, on refresh, `@changed` is small and I suppose
+> `pagespec_match_list`'s other slight efficiencies win out somehow.
+>
+> Welcome to the "I made ikiwiki twice as fast
+> and all I got was this lousy git sha1sum" club BTW :-) --[[Joey]]
+
[[!tag wishlist patch patch/core]]
diff --git a/doc/todo/smarter_sorting.mdwn b/doc/todo/smarter_sorting.mdwn
new file mode 100644
index 000000000..901e143a7
--- /dev/null
+++ b/doc/todo/smarter_sorting.mdwn
@@ -0,0 +1,141 @@
+I benchmarked a build of a large wiki (my home wiki), and it was spending
+quite a lot of time sorting; `CORE::sort` was called only 1138 times, but
+still flagged as the #1 time sink. (I'm not sure I trust NYTProf fully
+about that FWIW, since it also said 27238263 calls to `cmp_age` were
+the #3 time sink, and I suspect it may not entirely accurately measure
+the overhead of so many short function calls.)
+
+`pagespec_match_list` currently always sorts *all* pages first, and then
+finds the top M that match the pagespec. That's inefficient when M is
+small (as for example in a typical blog, where only 20 posts are shown,
+out of maybe thousands).
+
+As [[smcv]] noted, it could be flipped, so the pagespec is applied first
+and the smaller matching set is sorted afterwards. But checking pagespecs
+is likely more expensive than sorting. (Also, influence calculation
+complicates doing that.)
+
+Another option, when there is a limit on M pages to return, might be to
+cull the M top pages without sorting the rest.
+
+> The patch below implements this.
+>
+> But, I have not thought enough about influence calculation.
+> I need to figure out which pagespec matches influences need to be
+> accumulated for, in order to determine when all possible influences of a
+> pagespec are known.
+>
+> The old code accumulates influences from matching all successful pages
+> up to the num cutoff, as well as influences from an arbitrary (sometimes
+> zero) number of failed matches. The new code does not accumulate influences
+> from all the top successful matches, only from an arbitrary group of
+> successes and some failures.
+>
+> Also, by the time I finished this, it was not measurably faster than
+> the old method, at least not with a few thousand pages; it
+> might be worth revisiting this sometime for many more pages? [[done]]
+> --[[Joey]]
+
+<pre>
+diff --git a/IkiWiki.pm b/IkiWiki.pm
+index 1730e47..bc8b23d 100644
+--- a/IkiWiki.pm
++++ b/IkiWiki.pm
+@@ -2122,36 +2122,54 @@ sub pagespec_match_list ($$;@) {
+ my $num=$params{num};
+ delete @params{qw{num deptype reverse sort filter list}};
+
+- # when only the top matches will be returned, it's efficient to
+- # sort before matching to pagespec,
+- if (defined $num && defined $sort) {
+- @candidates=IkiWiki::SortSpec::sort_pages(
+- $sort, @candidates);
+- }
+-
++ # Find the first num matches (or all), before sorting.
+ my @matches;
+- my $firstfail;
+ my $count=0;
+ my $accum=IkiWiki::SuccessReason->new();
+- foreach my $p (@candidates) {
+- my $r=$sub->($p, %params, location => $page);
++ my $i;
++ for ($i=0; $i < @candidates; $i++) {
++ my $r=$sub->($candidates[$i], %params, location => $page);
+ error(sprintf(gettext("cannot match pages: %s"), $r))
+ if $r->isa("IkiWiki::ErrorReason");
+ $accum |= $r;
+ if ($r) {
+- push @matches, $p;
++ push @matches, $candidates[$i];
+ last if defined $num && ++$count == $num;
+ }
+ }
+
++	# We have num matches, but they may not be the best.
++ # Efficiently find and add the rest, without sorting the full list of
++ # candidates.
++ if (defined $num && defined $sort) {
++ @matches=IkiWiki::SortSpec::sort_pages($sort, @matches);
++
++ for ($i++; $i < @candidates; $i++) {
++ # Comparing candidate with lowest match is cheaper,
++ # so it's done before testing against pagespec.
++ if (IkiWiki::SortSpec::cmptwo($candidates[$i], $matches[-1], $sort) < 0 &&
++ $sub->($candidates[$i], %params, location => $page)
++ ) {
++ # this could be done less expensively
++ # using a binary search
++ for (my $j=0; $j < @matches; $j++) {
++ if (IkiWiki::SortSpec::cmptwo($candidates[$i], $matches[$j], $sort) < 0) {
++ splice @matches, $j, $#matches-$j+1, $candidates[$i],
++ @matches[$j..$#matches-1];
++ last;
++ }
++ }
++ }
++ }
++ }
++
+ # Add simple dependencies for accumulated influences.
+- my $i=$accum->influences;
+- foreach my $k (keys %$i) {
+- $depends_simple{$page}{lc $k} |= $i->{$k};
++ my $inf=$accum->influences;
++ foreach my $k (keys %$inf) {
++ $depends_simple{$page}{lc $k} |= $inf->{$k};
+ }
+
+- # when all matches will be returned, it's efficient to
+- # sort after matching
++ # Sort if we didn't already.
+ if (! defined $num && defined $sort) {
+ return IkiWiki::SortSpec::sort_pages(
+ $sort, @matches);
+@@ -2455,6 +2473,12 @@ sub sort_pages {
+ sort $f @_
+ }
+
++sub cmptwo {
++ $a=$_[0];
++ $b=$_[1];
++ $_[2]->();
++}
++
+ sub cmp_title {
+ IkiWiki::pagetitle(IkiWiki::basename($a))
+ cmp
+</pre>
+
+This would be bad when M is very large, and particularly, of course, when
+there is no limit and all pages are being matched on. (For example, an
+archive page shows all pages that match a pagespec specifying a creation
+date range.) Well, in this case, it *does* make sense to flip it, limit by
+pagespec first, and do a (quick)sort second. (No influence complications,
+either.)
+
+> Flipping when there's no limit is now implemented, and it knocked 1/3 off
+> the rebuild time of my blog's archive pages. --[[Joey]]
+
+Adding these special cases will be more complicated, but I think it gives
+the best of both worlds. --[[Joey]]
diff --git a/doc/todo/sort_parameter_for_map_plugin_and_directive.mdwn b/doc/todo/sort_parameter_for_map_plugin_and_directive.mdwn
new file mode 100644
index 000000000..f6ccaf538
--- /dev/null
+++ b/doc/todo/sort_parameter_for_map_plugin_and_directive.mdwn
@@ -0,0 +1,7 @@
+## sort= parameter
+
+Having a `sort=` parameter for the map plugin/directive would be really nice: like `inline`'s parameter, with `age`, `title`, etc.
+
+I may hack one in from `inline` if it seems within my skill level.
+
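+Hypothetical usage, assuming the directive grows the same sort values
+that `inline` accepts:
+
+    \[[!map pages="blog/*" sort=age]]
+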
+[[!tag wishlist]]
diff --git a/doc/todo/source_link.mdwn b/doc/todo/source_link.mdwn
index 4a8285948..cf3e69487 100644
--- a/doc/todo/source_link.mdwn
+++ b/doc/todo/source_link.mdwn
@@ -11,8 +11,7 @@ All of this code is licensed under the GPLv2+. -- [[Will]]
> by loading the index with IkiWiki::loadindex, like [[plugins/goto]] does?
> --[[smcv]]
-[[!template id=gitbranch branch=smcv/ready/getsource
- author="[[Will]]/[[smcv]]"]]
+[[done]]
>> I've applied the patch below in a git branch, fixed my earlier criticism,
>> and also fixed a couple of other issues I noticed:
@@ -132,3 +131,5 @@ All of this code is licensed under the GPLv2+. -- [[Will]]
}
1
+
+[[done]] --[[smcv]]
diff --git a/doc/todo/structured_page_data.mdwn b/doc/todo/structured_page_data.mdwn
index 72bfd8dea..9f21fab7f 100644
--- a/doc/todo/structured_page_data.mdwn
+++ b/doc/todo/structured_page_data.mdwn
@@ -1,5 +1,7 @@
This is an idea from [[JoshTriplett]]. --[[Joey]]
+* See further discussion at [[forum/an_alternative_approach_to_structured_data]].
+
Some uses of ikiwiki, such as for a bug-tracking system (BTS), move a bit away from the wiki end
of the spectrum, and toward storing structured data about a page or instead
of a page.
@@ -251,6 +253,9 @@ in a large number of other cases.
> dependencies between bugs from arbitrary links.
>> This issue (the need for distinguished kinds of links) has also been brought up in other discussions: [[tracking_bugs_with_dependencies#another_kind_of_links]] (deps vs. links) and [[tag_pagespec_function]] (tags vs. links). --Ivan Z.
+>>> And multiple link types are now supported; plugins can set the link
+>>> type when registering a link, and pagespec functions can match on them. --[[Joey]]
+
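+For illustration (the "dependency" link type is invented for this
+example; `add_link` and `typedlink()` are the interfaces as they now
+exist):
+
+    # in a plugin's scan hook, record a typed link
+    add_link($page, $bug, "dependency");
+
+A pagespec can then match such links with `typedlink(dependency bugs/*)`.
+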
----
#!/usr/bin/perl
diff --git a/doc/todo/support_link__40__.__41___in_pagespec.mdwn b/doc/todo/support_link__40__.__41___in_pagespec.mdwn
new file mode 100644
index 000000000..79809662a
--- /dev/null
+++ b/doc/todo/support_link__40__.__41___in_pagespec.mdwn
@@ -0,0 +1,13 @@
+[[!tag wishlist]]
+
+It would be nice to have pagespecs support "link(.)" as syntax.
+This would match pages that link to the page that invokes the pagespec.
+The use case is a blog with tags, where each tag has a page
+that uses `!inline` to list all posts with that tag.
+
+Joey said on IRC that "probably changing the derel() function in
+IkiWiki.pm is the best way to do it".
+
+> I implemented this suggestion in the simplest possible way, [[!taglink patch]] available [[here|http://git.oblomov.eu/ikiwiki/patch/f4a52de556436fdee00fd92ca9a3b46e876450fa]].
+> An alternative approach, very similar, would be to make the empty page parameter mean current page (e.g. `link()` would mean pages linking here). The patch would be very similar.
+> -- GB
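+
+For illustration only, a sketch of what the `derel()` special case might
+look like (based on the description above, not GB's actual patch; the
+existing relative-path handling here is paraphrased from memory):
+
+    sub derel ($$) {
+        my $path=shift;
+        my $from=shift;
+
+        if ($path eq '.') {
+            # new case: bare "." means the page the pagespec is
+            # being evaluated against
+            return $from if defined $from && length $from;
+        }
+        elsif ($path =~ m!^\./!) {
+            $from=~s#/?[^/]+$## if defined $from;
+            $path=~s#^\./##;
+            $path="$from/$path" if defined $from && length $from;
+        }
+
+        return $path;
+    }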
diff --git a/doc/todo/svg.mdwn b/doc/todo/svg.mdwn
index 0a15af4cd..274ebf3e3 100644
--- a/doc/todo/svg.mdwn
+++ b/doc/todo/svg.mdwn
@@ -3,6 +3,7 @@ We should support SVG. In particular:
* We could support rendering SVGs to PNGs when compiling the wiki. Not all browsers support SVG yet.
* We could support editing SVGs via the web interface. SVG can contain unsafe content such as scripting, so we would need to whitelist safe markup.
+ * I am interested in seeing [svg-edit](http://code.google.com/p/svg-edit/) integrated -- [[EricDrechsel]]
--[[JoshTriplett]]
@@ -56,3 +57,21 @@ in the trunk if other people think it's useful.
[htmlscrubber.pm]:http://xbeta.org/gitweb/?p=xbeta/ikiwiki.git;a=blob;f=IkiWiki/Plugin/htmlscrubber.pm;h=3c0ddc8f25bd8cb863634a9d54b40e299e60f7df;hb=fe333c8e5b4a5f374a059596ee698dacd755182d
[diff]: http://xbeta.org/gitweb/?p=xbeta/ikiwiki.git;a=blobdiff;f=IkiWiki/Plugin/htmlscrubber.pm;h=3c0ddc8f25bd8cb863634a9d54b40e299e60f7df;hp=3bdaccea119ec0e1b289a0da2f6d90e2219b8d66;hb=fe333c8e5b4a5f374a059596ee698dacd755182d;hpb=be0b4f603f918444b906e42825908ddac78b7073
+
+> Unfortunately, these links are broken. --[[Joey]]
+
+* * *
+
+Actually, there's a way to embed SVG into MarkDown sources using the [data: URI scheme][rfc2397], [like this](data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBzdGFuZGFsb25lPSJubyI/Pgo8c3ZnIHdpZHRoPSIxOTIiIGhlaWdodD0iMTkyIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KIDwhLS0gQ3JlYXRlZCB3aXRoIFNWRy1lZGl0IC0gaHR0cDovL3N2Zy1lZGl0Lmdvb2dsZWNvZGUuY29tLyAtLT4KIDx0aXRsZT5IZWxsbywgd29ybGQhPC90aXRsZT4KIDxnPgogIDx0aXRsZT5MYXllciAxPC90aXRsZT4KICA8ZyB0cmFuc2Zvcm09InJvdGF0ZSgtNDUsIDk3LjY3MTksIDk3LjY2OCkiIGlkPSJzdmdfNyI+CiAgIDxyZWN0IHN0cm9rZS13aWR0aD0iNSIgc3Ryb2tlPSIjMDAwMDAwIiBmaWxsPSIjRkYwMDAwIiBpZD0ic3ZnXzUiIGhlaWdodD0iNTYuMDAwMDAzIiB3aWR0aD0iMTc1IiB5PSI2OS42Njc5NjkiIHg9IjEwLjE3MTg3NSIvPgogICA8dGV4dCB4bWw6c3BhY2U9InByZXNlcnZlIiB0ZXh0LWFuY2hvcj0ibWlkZGxlIiBmb250LWZhbWlseT0ic2VyaWYiIGZvbnQtc2l6ZT0iMjQiIHN0cm9rZS13aWR0aD0iMCIgc3Ryb2tlPSIjMDAwMDAwIiBmaWxsPSIjZmZmZjAwIiBpZD0ic3ZnXzYiIHk9IjEwNS42NjgiIHg9Ijk5LjY3MTkiPkhlbGxvLCB3b3JsZCE8L3RleHQ+CiAgPC9nPgogPC9nPgo8L3N2Zz4=).
+Of course, with this way of displaying an image one needs to click a link, but that may be considered a feature.
+&mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+[rfc2397]: http://tools.ietf.org/html/rfc2397
+
+> You can do the same with img src actually.
+>
+> If svg markup allows unsafe elements (ie, javascript),
+> which it appears to,
+> then this is a security hole, and the htmlscrubber
+> needs to lock it down more. Darn, now I have to spend my afternoon making
+> security releases! --[[Joey]]
diff --git a/doc/todo/tagging_with_a_publication_date.mdwn b/doc/todo/tagging_with_a_publication_date.mdwn
index 80240ec5a..39fc4e220 100644
--- a/doc/todo/tagging_with_a_publication_date.mdwn
+++ b/doc/todo/tagging_with_a_publication_date.mdwn
@@ -38,3 +38,34 @@ on vacation".
> >
> > I no longer have the original wiki for which I wanted this feature, but I can
> > see using it on future ones. -- [[DonMarti]]
+
+>>> FWIW, for the case where one wants to update a site offline,
+>>> using an ikiwiki instance on a laptop, and include some deferred
+>>> posts in the push, the ad-hoc cron-job approach will be annoying.
+>>>
+>>> In modern ikiwiki, I guess the way to accomplish this would be to
+>>> add a pagespec that matches only pages posted in the present or past.
+>>> Then a page can have its post date set to the future, using meta date,
+>>> and only show up when its post date rolls around.
+>>>
+>>> Ikiwiki will need to somehow notice that a pagespec began matching
+>>> a page it did not match previously, despite said page not actually
+>>> changing. I'm not sure what the best way is.
+>>>
+>>> * One way could be to
+>>> use a needsbuild hook and some stored data about which pagespecs
+>>> exclude pages in the future. (But I'm not sure how evaluating the
+>>> pagespec could lead to that metadata and hook being set up.)
+>>> * Another way would be to use an explicit directive to delay a
+>>> page being posted. Then the directive stores the metadata and
+>>> sets up the needsbuild hook.
+>>> * Another way would be for ikiwiki to remember the last
+>>> time it ran. It could then easily find pages that have a post
+>>> date after that time, and treat them the same as it treats actually
+>>> modified files. Or a plugin could do this via a needsbuild hook,
+>>> probably. (The only downside to this is that it would probably need
+>>> to do an O(n) walk of the list of pages -- but only running an integer
+>>> compare per page. A rough sketch of this approach follows below.)
+>>>
+>>> You'd still need a cron job to run ikiwiki -refresh every hour, or
+>>> whatever, so it can update. --[[Joey]]
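+
+A rough, untested sketch of that third option as a plugin (all names
+hypothetical; it assumes a future `meta date` shows up in
+`%IkiWiki::pagectime`):
+
+    package IkiWiki::Plugin::futureposts;
+
+    use warnings;
+    use strict;
+    use IkiWiki 3.00;
+
+    sub import {
+        hook(type => "needsbuild", id => "futureposts",
+            call => \&needsbuild);
+    }
+
+    sub needsbuild {
+        my $needsbuild=shift;
+        my $lastrun=$wikistate{futureposts}{lastrun} || 0;
+        my $now=time;
+
+        # O(n) walk of all pages, but only integer compares: treat
+        # pages whose post date arrived since the last run as if
+        # their source files had changed
+        foreach my $page (keys %IkiWiki::pagectime) {
+            my $ctime=$IkiWiki::pagectime{$page};
+            if ($ctime > $lastrun && $ctime <= $now) {
+                my $file=$pagesources{$page};
+                push @$needsbuild, $file
+                    unless grep { $_ eq $file } @$needsbuild;
+            }
+        }
+
+        $wikistate{futureposts}{lastrun}=$now;
+        return $needsbuild;
+    }
+
+    1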
diff --git a/doc/todo/tmplvars_plugin/discussion.mdwn b/doc/todo/tmplvars_plugin/discussion.mdwn
new file mode 100644
index 000000000..93cb9b414
--- /dev/null
+++ b/doc/todo/tmplvars_plugin/discussion.mdwn
@@ -0,0 +1 @@
+I find this plugin quite useful. But one thing I would like to be able to do is to set a tmplvar, e.g. in a sidebar, so that it affects all pages the sidebar is used in. --martin
diff --git a/doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn b/doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn
index 547c7a80a..07d2d383c 100644
--- a/doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn
+++ b/doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn
@@ -1,3 +1,48 @@
It would be nice if the [[plugins/toc]] plugin let you specify a header level "ceiling" above which (or above and including which) the headers would not be incorporated into the toc.
Currently, the levels=X parameter lets you tweak how deep it will go for small headers, but I'd like to chop off the h1's (as I use them for my page title) -- [[Jon]]
+
+> This change to toc.pm should do it. --[[KathrynAndersen]]
+
+> > The patch looks vaguely OK to me but it's hard to tell without
+> > context. It'd be much easier to review if you used unified diff
+> > (`diff -u`), which is what `git diff` defaults to - almost all
+> > projects prefer to receive changes as unified diffs (or as
+> > branches in their chosen VCS, which is [[git]] here). --[[smcv]]
+
+> > > Done. -- [[KathrynAndersen]]
+
+> > > > Looks like Joey has now [[merged|done]] this. Thanks! --[[smcv]]
+
+ --- /files/git/other/ikiwiki/IkiWiki/Plugin/toc.pm 2009-11-16 12:44:00.352050178 +1100
+ +++ toc.pm 2009-12-26 06:36:06.686512552 +1100
+ @@ -53,8 +53,8 @@
+ my $page="";
+ my $index="";
+ my %anchors;
+ - my $curlevel;
+ - my $startlevel=0;
+ + my $startlevel=($params{startlevel} ? $params{startlevel} : 0);
+ + my $curlevel=$startlevel-1;
+ my $liststarted=0;
+ my $indent=sub { "\t" x $curlevel };
+ $p->handler(start => sub {
+ @@ -67,10 +67,16 @@
+
+ # Take the first header level seen as the topmost level,
+ # even if there are higher levels seen later on.
+ + # unless we're given startlevel as a parameter
+ if (! $startlevel) {
+ $startlevel=$level;
+ $curlevel=$startlevel-1;
+ }
+ + elsif (defined $params{startlevel}
+ + and $level < $params{startlevel})
+ + {
+ + return;
+ + }
+ elsif ($level < $startlevel) {
+ $level=$startlevel;
+ }
+
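+Hypothetical usage with the patch applied (skip the h1 used for the page
+title, then include two heading levels starting at h2):
+
+    \[[!toc startlevel=2 levels=2]]
+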
+[[!tag patch]]
diff --git a/doc/todo/tracking_bugs_with_dependencies.mdwn b/doc/todo/tracking_bugs_with_dependencies.mdwn
index a198530fc..456dadad0 100644
--- a/doc/todo/tracking_bugs_with_dependencies.mdwn
+++ b/doc/todo/tracking_bugs_with_dependencies.mdwn
@@ -81,6 +81,9 @@ I like the idea of [[tips/integrated_issue_tracking_with_ikiwiki]], and I do so
>> I saw that this issue is targeted at by the work on [[structured page data#another_kind_of_links]]. --Ivan Z.
+>>> It's fixed now; links can have a type, such as "tag", or "dependency",
+>>> and pagespecs can match links of a given type. --[[Joey]]
+
Okie - I've had a quick attempt at this. Initial patch attached. This one doesn't quite work.
And there is still a lot of debugging stuff in there.
@@ -360,7 +363,10 @@ account all comments above (which doesn't mean it is above reproach :) ). --[[W
> --[[Joey]]
>> There is one issue that I've been thinking about that I haven't raised anywhere (or checked myself), and that is how this all interacts with page dependencies.
->> Firstly, I'm not sure anymore that the `pagespec_merge` function will continue to work in all cases.
+>>
+>>> I've moved the discussion of that to [[dependency_types]]. --[[Joey]]
+>>
+>> I'm not sure anymore that the `pagespec_merge` function will continue to work in all cases.
>>> The problem I can see there is that if two pagespecs
>>> get merged and both use `~foo` but define it differently,
@@ -410,92 +416,10 @@ account all comments above (which doesn't mean it is above reproach :) ). --[[W
>>>>> then the last definition (baz) takes precedence.
>>>>> In the process of writing this I think I've come up with a way to change this back the way it was, still using closures. -- [[Will]]
->>> Alternatively, my [[remove-pagespec-merge|should_optimise_pagespecs]]
->>> branch solves this, in a Gordian knot sort of way :-) --[[smcv]]
-
->> Secondly, it seems that there are two types of dependency, and ikiwiki
->> currently only handles one of them. The first type is "Rebuild this
->> page when any of these other pages changes" - ikiwiki handles this.
->> The second type is "rebuild this page when set of pages referred to by
->> this pagespec changes" - ikiwiki doesn't seem to handle this. I
->> suspect that named pagespecs would make that second type of dependency
->> more important. I'll try to come up with a good example. -- [[Will]]
-
->>> Hrm, I was going to build an example of this with backlinks, but it
->>> looks like that is handled as a special case at the moment (line 458 of
->>> render.pm). I'll see if I can breapk
->>> things another way. Fixing this properly would allow removal of that special case. -- [[Will]]
-
->>>> I can't quite understand the distinction you're trying to draw
->>>> between the two types of dependencies. Backlinks are a very special
->>>> case though and I'll be suprised if they fit well into pagespecs.
->>>> --[[Joey]]
-
->>>>> The issue is that the existential pagespec matching allows you to build things that have similar
->>>>> problems to backlinks.
->>>>> e.g. the following inline:
-
- \[[!inline pages="define(~done, link(done)) and link(~done)" archive=yes]]
-
->>>>> includes any page that links to a page that links to done. Now imagine I add a new link to 'done' on
->>>>> some random page somewhere - a page which some other page links to which didn't previously get included - the set of pages accepted by the pagespec, and hence the set of
->>>>> pages inlined, will change. But, there is no dependency anywhere on the page that I altered, so
->>>>> ikiwiki will not rebuild the page with the inline in it. What is happening is that the page that I altered affects
->>>>> the set of pages matched by the pagespec without itself being matched by the pagespec, and hence included in the dependency list.
-
->>>>> To make this work well, I think you need to recognise two types of dependencies for each page (and no
->>>>> special cases for particular types of links, eg backlinks). The first type of dependency says, "The content of
->>>>> this page depends upon the content of these other pages". The `add_depends()` in the shortcuts
->>>>> plugin is of this form: any time the shortcuts page is edited, any page with a shortcut on it
->>>>> is rebuilt. The inline plugin also needs to add dependencies of this form to detect when the inlined
->>>>> content changes. By contrast, the map plugin does not need a dependency of this form, because it
->>>>> doesn't actually care about the content of any pages, just which pages it needs to include (which we'll handle next).
-
->>>>> The second type of dependency says, "The content of this page depends upon the exact set of pages matched
->>>>> by this pagespec". The first type of dependency was about the content of some pages, the second type is about
->>>>> which pages get matched by a pagespec. This is the type of dependency tracking that the map plugin needs.
->>>>> If the set of pages matched by map pagespec changes, then the page with the map on it needs to be rebuilt to show a different list of pages.
->>>>> Inline needs this type of dependency as well as the previous type - This type handles a change in which pages
->>>>> are inlined, the previous type handles a change in the content of any of those pages. Shortcut does not need this type of
->>>>> dependency. Most of the places that use `add_depends()` seem to need this type of dependency rather than the first type.
-
->>>>>> Note that inline and map currently achieve the second type of dependency by
->>>>>> explicitly calling `add_depends` for each page the displayed.
->>>>>> If any of those pages are removed, the regular pagespec would not
->>>>>> match them -- since they're gone. However, the explicit dependency
->>>>>> on them does cause them to match. It's an ugly corner I'd like to
->>>>>> get rid of. --[[Joey]]
-
->>>>> Implementation Details: The first type of dependency can be handled very similarly to the current
->>>>> dependency system. You just need to keep a list of pages that the content depends upon. You could
->>>>> keep that list as a pagespec, but if you do this you might want to check that the pagespec doesn't change,
->>>>> possibly by adding a dependency of the second type along with the dependency of the first type.
-
->>>>>> An example of the current system not tracking enough data is
->>>>>> where A inlines B which inlines C. A change to C will cause B to
->>>>>> rebuild, but A will not "notice" that B has implicitly changed.
->>>>>> That example suggests it might be fixable without explicitly storing
->>>>>> data, by causing a rebuild of B to be treated as a change to B.
->>>>>> --[[Joey]]
-
->>>>> The second type of dependency is a little more tricky. For each page, we'd need a list of pagespecs that
->>>>> the page depended on, and for each pagespec you'd want to store the list of pages that currently match it.
->>>>> On refresh, you'd need to check each pagespec to see if the set of pages that match it has changed, and if
->>>>> that set has changed, then rebuild the dependent page(s). Oh, and for this second type of dependency, I
->>>>> don't think you can merge pagespecs. If I wanted to know if either "\*" or "link(done)" changes, then just checking
->>>>> to see if the set of pages matched by "\* or link(done)" changes doesn't work.
-
->>>>> The current system works because even though you usually want dependencies of the second type, the set of pages
->>>>> referred to by a pagespec can only change if one of those pages itself changes. i.e. A dependency check of the
->>>>> first type will catch a dependency change of the second type with current pagespecs.
->>>>> This doesn't work with backlinks, and it doesn't work with existential matching. Backlinks are currently special-cased. I don't know
->>>>> how to special-case existential matching - I suspect you're better off just getting the dependency tracking right.
-
->>>>> I also tried to come up with other possible solutions: e.g. can we find the dependencies for a pagespec? That
->>>>> would be the set of pages where a change on one of those pages could lead to a change in the set of pages matched by the pagespec.
->>>>> For old-style pagespecs without backlinks, the dependency set for a pagespec is the same as the set of pages the pagespec matches.
->>>>> Unfortunately, with existential matching, the set of pages that each
->>>>> pagespec depends upon can quickly become "*", which is not very useful. -- [[Will]]
+>>> My [[remove-pagespec-merge|should_optimise_pagespecs]] branch has now
+>>> solved all this by deleting the offending function :-) --[[smcv]]
+
Patch updated to use closures rather than inline generated code for named pagespecs. Also includes some new use of ErrorReason where appropriate. -- [[Will]]
diff --git a/doc/todo/two-way_convert_of_wikis.mdwn b/doc/todo/two-way_convert_of_wikis.mdwn
new file mode 100644
index 000000000..61f02a30b
--- /dev/null
+++ b/doc/todo/two-way_convert_of_wikis.mdwn
@@ -0,0 +1,15 @@
+
+[[!tag wishlist]]
+
+Ok, the vision is this: Some of you will know git-svn. I want something like
+git-svn, but for wikis. I want to be able to do the following:
+
+1. Convert a moinmoin (or whatever) wiki to a local ikiwiki on my laptop.
+2. Edit my local copy (offline).
+3. Preview the changes with my local ikiwiki installation + browser.
+4. Push the changes back to moinmoin (or whatever) wiki.
+
+I know, I know, ikiwiki wasn't designed for that, but it would be really cool
+and useful, and people ask for that kind of thing too.
+
+--[[David_Riebenbauer]]
diff --git a/doc/todo/user-defined_templates_outside_the_wiki.mdwn b/doc/todo/user-defined_templates_outside_the_wiki.mdwn
new file mode 100644
index 000000000..1d72aa6a7
--- /dev/null
+++ b/doc/todo/user-defined_templates_outside_the_wiki.mdwn
@@ -0,0 +1,10 @@
+[[!tag wishlist]]
+
+The [[plugins/contrib/ftemplate]] plugin looks for templates inside the wiki
+source, but also looks in the system templates directory (the one with
+`page.tmpl`). This means the wiki admin can provide templates that can be
+invoked via `\[[!template]]`, but don't have to "work" as wiki pages in their
+own right. I think the normal [[plugins/template]] plugin could benefit from
+this functionality.
+
+[[done]] --[[Joey]]
diff --git a/doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn b/doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn
index 65b7cd96a..5f17e3ae0 100644
--- a/doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn
+++ b/doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn
@@ -26,4 +26,6 @@ which seems to do the right thing in page.tmpl, but not for change.tmpl. Where i
> The use of an absolute baseurl in change.tmpl is a special case. --[[Joey]]
+So I'm facing this same issue. I have a wiki which needs to be accessed on three different URLs(!), and the hard-coding of the URL from the setup file is becoming a problem for me. Is there anything I can do here? --[[Perry]]
+
[[wishlist]]
diff --git a/doc/todo/wrapperuser.mdwn b/doc/todo/wrapperuser.mdwn
new file mode 100644
index 000000000..4c42b046f
--- /dev/null
+++ b/doc/todo/wrapperuser.mdwn
@@ -0,0 +1,7 @@
+ikiwiki's .setup file can specify wrappergroup, and ikiwiki will set the group
+of the wrappers accordingly. Having seen people encounter difficulty before
+when trying to do the same thing with users (for instance, making all wrappers
+6755 ikiwiki:ikiwiki), I think it would help to have "wrapperuser". This could
+only actually take effect when building the wrappers as root (not really the
+best plan), but ikiwiki could at least warn if wrapperuser does not match the
+user the wrapper will end up with.
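+
+A hypothetical setup fragment, by analogy with the existing wrappergroup
+and wrappermode options (paths and names invented):
+
+    cgi_wrapper => '/var/www/wiki/ikiwiki.cgi',
+    cgi_wrappermode => '06755',
+    wrappergroup => 'ikiwiki',
+    wrapperuser => 'ikiwiki',    # the proposed option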