Diffstat (limited to 'doc/todo')
-rw-r--r-- doc/todo/ACL.mdwn | 27
-rw-r--r-- doc/todo/Add_HTML_support_to_po_plugin.mdwn | 7
-rw-r--r-- doc/todo/Add_label_to_search_form_input_field.mdwn | 8
-rw-r--r-- doc/todo/Add_nicer_math_formatting.mdwn | 26
-rw-r--r-- doc/todo/CSS_classes_for_links.mdwn | 35
-rw-r--r-- doc/todo/Extensible_inlining.mdwn | 263
-rw-r--r-- doc/todo/Fix_selflink_in_po_plugin.mdwn | 21
-rw-r--r-- doc/todo/Google_Analytics_support.mdwn | 31
-rw-r--r-- doc/todo/Google_Sitemap_protocol.mdwn | 13
-rw-r--r-- doc/todo/Improving_the_efficiency_of_match__95__glob.mdwn | 228
-rw-r--r-- doc/todo/Mailing_list.mdwn | 16
-rw-r--r-- doc/todo/More_flexible_po-plugin_for_translation.mdwn | 5
-rw-r--r-- doc/todo/Multiple_categorization_namespaces.mdwn | 103
-rw-r--r-- doc/todo/OpenSearch.mdwn | 20
-rw-r--r-- doc/todo/Option_to_make_title_an_h1__63__.mdwn | 2
-rw-r--r-- doc/todo/Resolve_native_reStructuredText_links_to_ikiwiki_pages.mdwn | 9
-rw-r--r-- doc/todo/Separate_OpenIDs_and_usernames.mdwn | 44
-rw-r--r-- doc/todo/Set_templates_for_whole_sections_of_the_site.mdwn | 2
-rw-r--r-- doc/todo/Support_XML-RPC-based_blogging.mdwn | 3
-rw-r--r-- doc/todo/Tags_list_in_page_footer_uses_basename.mdwn | 3
-rw-r--r-- doc/todo/__42__forward__42__ing_functionality_for_the_meta_plugin.mdwn | 2
-rw-r--r-- doc/todo/abbreviation.mdwn | 2
-rw-r--r-- doc/todo/adjust_commit_message_for_rename__44___remove.mdwn | 5
-rw-r--r-- doc/todo/alias_directive.mdwn | 72
-rw-r--r-- doc/todo/allow_displaying_number_of_comments.mdwn | 30
-rw-r--r-- doc/todo/allow_plugins_to_add_sorting_methods.mdwn | 304
-rw-r--r-- doc/todo/allow_site-wide_meta_definitions.mdwn | 208
-rw-r--r-- doc/todo/anon_push_of_comments.mdwn | 14
-rw-r--r-- doc/todo/auto-create_tag_pages_according_to_a_template.mdwn | 309
-rw-r--r-- doc/todo/auto_getctime_on_fresh_build.mdwn | 13
-rw-r--r-- doc/todo/auto_publish_expire.mdwn | 33
-rw-r--r-- doc/todo/auto_rebuild_on_template_change.mdwn | 78
-rw-r--r-- doc/todo/autoindex_should_use_add__95__autofile.mdwn | 120
-rw-r--r-- doc/todo/avatar.mdwn | 34
-rw-r--r-- doc/todo/avoid_attachement_ui_if_upload_not_allowed.mdwn | 25
-rw-r--r-- doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn | 108
-rw-r--r-- doc/todo/beef_up_signin_page.mdwn | 17
-rw-r--r-- doc/todo/capitalize_title.mdwn | 31
-rw-r--r-- doc/todo/cas_authentication.mdwn | 13
-rw-r--r-- doc/todo/cdate_and_mdate_available_for_templates.mdwn | 15
-rw-r--r-- doc/todo/comment_moderation_feed.mdwn | 9
-rw-r--r-- doc/todo/configurable_markdown_path.mdwn | 64
-rw-r--r-- doc/todo/configurable_tidy_command_for_htmltidy.mdwn | 8
-rw-r--r-- doc/todo/configurable_timezones.mdwn | 5
-rw-r--r-- doc/todo/conflict_free_comment_merges.mdwn | 23
-rw-r--r-- doc/todo/countdown_directive.mdwn | 5
-rw-r--r-- doc/todo/credentials_page.mdwn | 33
-rw-r--r-- doc/todo/dependency_types.mdwn | 29
-rw-r--r-- doc/todo/description_meta_param_passed_to_templates.mdwn | 10
-rw-r--r-- doc/todo/double-click_protection_for_form_buttons.mdwn | 5
-rw-r--r-- doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn | 20
-rw-r--r-- doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn | 8
-rw-r--r-- doc/todo/enable-htaccess-files.mdwn | 19
-rw-r--r-- doc/todo/enable_arbitrary_markup_for_directives.mdwn | 47
-rw-r--r-- doc/todo/feed_enhancements_for_inline_pages.mdwn | 132
-rw-r--r-- doc/todo/finer_control_over___60__object___47____62__s.mdwn | 98
-rw-r--r-- doc/todo/generic_insert_links.mdwn | 24
-rw-r--r-- doc/todo/git_attribution/discussion.mdwn | 6
-rw-r--r-- doc/todo/headless_git_branches.mdwn | 74
-rw-r--r-- doc/todo/html.mdwn | 2
-rw-r--r-- doc/todo/htpasswd_mirror_of_the_userdb.mdwn | 29
-rw-r--r-- doc/todo/http_bl_support.mdwn | 67
-rw-r--r-- doc/todo/inline_raw_files.mdwn | 115
-rw-r--r-- doc/todo/latex.mdwn | 9
-rw-r--r-- doc/todo/link_plugin_perhaps_too_general__63__.mdwn | 25
-rw-r--r-- doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn | 11
-rw-r--r-- doc/todo/matching_different_kinds_of_links.mdwn | 149
-rw-r--r-- doc/todo/mdwn_itex.mdwn | 22
-rw-r--r-- doc/todo/mercurial.mdwn | 8
-rw-r--r-- doc/todo/mirrorlist_with_per-mirror_usedirs_settings.mdwn | 72
-rw-r--r-- doc/todo/more_flexible_inline_postform.mdwn | 5
-rw-r--r-- doc/todo/multiple_template_directories.mdwn | 60
-rw-r--r-- doc/todo/multiple_templates.mdwn | 2
-rw-r--r-- doc/todo/nested_preprocessor_directives.mdwn | 47
-rw-r--r-- doc/todo/openid_user_filtering.mdwn | 4
-rw-r--r-- doc/todo/optional_underlaydir_prefix.mdwn | 46
-rw-r--r-- doc/todo/org_mode.mdwn | 24
-rw-r--r-- doc/todo/pagespec_aliases.mdwn | 93
-rw-r--r-- doc/todo/pagespec_aliases/discussion.mdwn | 13
-rw-r--r-- doc/todo/passwordauth:_sendmail_interface.mdwn | 2
-rw-r--r-- doc/todo/pingback_support.mdwn | 2
-rw-r--r-- doc/todo/po:_avoid_rebuilding_to_fix_meta_titles.mdwn | 60
-rw-r--r-- doc/todo/po:_better_documentation.mdwn | 3
-rw-r--r-- doc/todo/po:_better_links.mdwn | 12
-rw-r--r-- doc/todo/po:_better_translation_interface.mdwn | 2
-rw-r--r-- doc/todo/po:_remove_po_files_when_disabling_plugin.mdwn | 13
-rw-r--r-- doc/todo/po:_rethink_pagespecs.mdwn | 40
-rw-r--r-- doc/todo/po:_translation_of_directives.mdwn | 8
-rw-r--r-- doc/todo/po_needstranslation_pagespec.mdwn | 12
-rw-r--r-- doc/todo/preview_changes_before_git_commit.mdwn | 17
-rw-r--r-- doc/todo/replace_HTML::Template_with_Template_Toolkit.mdwn | 27
-rw-r--r-- doc/todo/rewrite_ikiwiki_in_haskell.mdwn | 1
-rw-r--r-- doc/todo/salmon_protocol_for_comment_sharing.mdwn | 21
-rw-r--r-- doc/todo/selective_more_directive.mdwn | 28
-rw-r--r-- doc/todo/smarter_sorting.mdwn | 141
-rw-r--r-- doc/todo/structured_page_data.mdwn | 5
-rw-r--r-- doc/todo/support_includes_in_setup_files.mdwn | 10
-rw-r--r-- doc/todo/support_link__40__.__41___in_pagespec.mdwn | 21
-rw-r--r-- doc/todo/svg.mdwn | 19
-rw-r--r-- doc/todo/tagging_with_a_publication_date.mdwn | 31
-rw-r--r-- doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn | 45
-rw-r--r-- doc/todo/toplevel_index.mdwn | 2
-rw-r--r-- doc/todo/tracking_bugs_with_dependencies.mdwn | 3
-rw-r--r-- doc/todo/transient_pages.mdwn | 318
-rw-r--r-- doc/todo/two-way_convert_of_wikis.mdwn | 15
-rw-r--r-- doc/todo/untrusted_git_push_hooks.mdwn | 12
-rw-r--r-- doc/todo/use_secure_cookies_for_ssl_logins.mdwn | 36
-rw-r--r-- doc/todo/use_templates_for_the_img_plugin.mdwn | 29
-rw-r--r-- doc/todo/user-defined_templates_outside_the_wiki.mdwn | 10
-rw-r--r-- doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn | 357
-rw-r--r-- doc/todo/web_reversion.mdwn | 73
-rw-r--r-- doc/todo/wrapperuser.mdwn | 7
112 files changed, 4936 insertions(+), 167 deletions(-)
diff --git a/doc/todo/ACL.mdwn b/doc/todo/ACL.mdwn
index e9fb2717f..dd9793233 100644
--- a/doc/todo/ACL.mdwn
+++ b/doc/todo/ACL.mdwn
@@ -21,6 +21,11 @@ something, that I think is very valuable.
>>>> Which would rule out openid, or other fun forms of auth. And routing all access
>>>> through the CGI sort of defeats the purpose of ikiwiki. --[[Ethan]]
+>>>>> I think what Joey is suggesting is to use apache ACLs in conjunction
+>>>>> with basic HTTP auth to control read access, and ikiwiki can use the
+>>>>> information via the httpauth plugin for other ACLs (write, admin). But
+>>>>> yes, that would rule out non-httpauth mechanisms. -- [[Jon]]
+
Also see [[!debbug 443346]].
> Just a few quick thoughts about this:
@@ -69,3 +74,25 @@ Here is how I see it:
<pre>
\[[!acl user=* page=/subsite/* acl=/subsite/acl.mdwn]]
</pre>
+
+Any idea when this is going to be finished? If you want, I am happy to beta test.
+
+> It's already done, though that is sorta hidden in the above. :-)
+> Example of use to only allow two users to edit the tipjar page:
+>
+>	locked_pages => 'tipjar and !(user(joey) or user(bob))',
+>
+> --[[Joey]]
+
+> > Thank you for the hint, but I am still confused (read: dense)... What I am trying to do is this:
+
+> > * No anonymous access.
+> > * Logged in users can edit and create pages.
+> > * Users can set who can edit their pages.
+> > * Some pages are only viewable by admins.
+
+> > Is it possible? If so how?...
+
+>>> I don't believe this is currently possible. What is missing is the concept
+>>> of page 'ownership'. -- [[Jon]]
+
+>>>> GAH! That is really a shame... Any chance of adding that? No, I do not
+>>>> really expect it to be added, after all my requirements are pushing the
+>>>> boundary of what a wikiwiki should be. Nonetheless, thanks for your help!
diff --git a/doc/todo/Add_HTML_support_to_po_plugin.mdwn b/doc/todo/Add_HTML_support_to_po_plugin.mdwn
new file mode 100644
index 000000000..ec29e4f61
--- /dev/null
+++ b/doc/todo/Add_HTML_support_to_po_plugin.mdwn
@@ -0,0 +1,7 @@
+The HTML page type should be fully supported by the PO plugin: po4a's
+HTML support is able to extract translatable strings and to disregard
+the rest.
+
+This is implemented in my po branch, please review. --[[intrigeri]]
+
+[[!tag patch]]
diff --git a/doc/todo/Add_label_to_search_form_input_field.mdwn b/doc/todo/Add_label_to_search_form_input_field.mdwn
index e4e83428c..514108fba 100644
--- a/doc/todo/Add_label_to_search_form_input_field.mdwn
+++ b/doc/todo/Add_label_to_search_form_input_field.mdwn
@@ -47,4 +47,10 @@ The patch below adds a label for the field to improve usability:
> to get it to appear higher up is to put it first, or to use Evil absolute
> positioning. (CSS sucks.) --[[Joey]]
-[[!tag done wishlist]]
+> Update: html5 allows just adding `placeholder="Search"` to the input
+> element. already works in eg, chromium. However, ikiwiki does not use
+> html5 yet. --[[Joey]]
+
+>> [[Done]], placeholder added, in html5 mode only.
+
+[[!tag wishlist bugs/html5_support]]
diff --git a/doc/todo/Add_nicer_math_formatting.mdwn b/doc/todo/Add_nicer_math_formatting.mdwn
new file mode 100644
index 000000000..3a5e94a14
--- /dev/null
+++ b/doc/todo/Add_nicer_math_formatting.mdwn
@@ -0,0 +1,26 @@
+It would be nice to add nicer math formatting. I currently use the
+[[plugins/teximg]] plugin, but I wonder if
+[jsMath](http://www.math.union.edu/~dpvc/jsMath/) wouldn't be a better option.
+
+[[Will]]
+
+> I've looked at jsmath (which is nicely packaged in Debian), and
+> I agree that this is nicer than TeX images. That text-mode browsers
+> get to see LaTeX as a fallback is actually a nice feature (better
+> than nothing, right? :) That browsers w/o javascript will not be able to
+> see the math either is probably ok.
+>
+> A plugin would probably be a pretty trivial thing to write.
+> It just needs to include the javascript files,
+> and slap a `<div class="math">` around the user's code, then
+> call `jsMath.Process(document);` at the end of the page.
+>
+> My only concern is security: Has jsMath's parser been written
+> to be safe when processing untrusted input? Could a user abuse the
+> parser to cause it to emit/run arbitrary javascript code?
+> I've posted a question about this to its forum: --[[Joey]]
+> <https://sourceforge.net/projects/jsmath/forums/forum/592273/topic/3831574>
+
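+To make the shape of such a plugin concrete, here is a minimal, untested
+sketch (the `format` hook is standard ikiwiki API; the jsMath script path
+and the simplistic `</body>` substitution are assumptions):
+
+	package IkiWiki::Plugin::jsmath;
+
+	use warnings;
+	use strict;
+	use IkiWiki 3.00;
+
+	sub import {
+		hook(type => "format", id => "jsmath", call => \&format);
+	}
+
+	# Load jsMath and process the whole page just before </body>.
+	sub format (@) {
+		my %params = @_;
+		my $js = '<script src="/jsMath/easy/load.js"></script>'."\n".
+			'<script>jsMath.Process(document);</script>'."\n";
+		$params{content} =~ s{</body>}{$js</body>}i;
+		return $params{content};
+	}
+
+	1
+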
+I think [mathjax](http://www.mathjax.org/) would be the best option. This is the math rendering engine used by MathOverflow.
+
+[[!tag wishlist]]
diff --git a/doc/todo/CSS_classes_for_links.mdwn b/doc/todo/CSS_classes_for_links.mdwn
index 38db87724..29ed3770e 100644
--- a/doc/todo/CSS_classes_for_links.mdwn
+++ b/doc/todo/CSS_classes_for_links.mdwn
@@ -101,3 +101,38 @@ I find CSS3 support still spotty... Here are some notes on how to do this in Ik
>>>
>>> `htmllink` can never be used to generate an external link. So,
>>> patching it seems the best approach. --[[Joey]]
+
+>>>> I had a quick look to this issue. Internal links are generated at
+>>>> 11 places in the Perl code and would need to be patched (this
+>>>> number could be lowered a bit if a htmllink-like function existed
+>>>> for CGI urls; such a function would use `cgiurl`, and be used in
+>>>> most places where `cgiurl` is currently called by plugins).
+>>>>
+>>>> Also, more than 30 `<a>` links appear in templates, most of those
+>>>> being internal links.
+>>>>
+>>>> Sure, patching those few dozen places is trivial. On the other
+>>>> hand, I'm wondering how doable it would be to make sure, on the
+>>>> long run, any generated internal link has the right CSS class
+>>>> applied. One would need to write tests running against the code
+>>>> with all plugins enabled, all templates put to work, in order to
+>>>> ensure consistency is maintained. --[[intrigeri]]
+
+-----
+If you're going to be patching htmllink anyway, might I suggest something more flexible, like being able to configure the link format?
+(Yes, PmWiki allows this, that's where I got the idea)
+That is, rather than having "&lt;a href=". blah . blah ...
+one could use a sprintf with a default format which could be configured in the setup file.
+
+For example:
+
+	my $format = ($config{createlink_format}
+		? $config{createlink_format}
+		: '<span class="createlink"><a href="%s" rel="nofollow">?</a>%s</span>');
+	return sprintf($format,
+		cgiurl(do => "create", page => lc($link), from => $lpage),
+		$linktext);
+
+I admit, I've been wanting something like this for a long time, because I dislike the existing createlink format...
+
+--[[KathrynAndersen]]
diff --git a/doc/todo/Extensible_inlining.mdwn b/doc/todo/Extensible_inlining.mdwn
new file mode 100644
index 000000000..994ed0759
--- /dev/null
+++ b/doc/todo/Extensible_inlining.mdwn
@@ -0,0 +1,263 @@
+Here's an idea with [[patch]] for extending inline in two directions:
+
+1. Permit the content-fetching function to return undef to skip a page. The limiting of @list to a set size is performed after that filtering.
+2. Permit other directive plugins to pass a function to generate content via an `inliner_` parameter (a usage sketch follows below). The current patch doesn't try to remove that key from the parameters, so hilarity might ensue if someone is too clever. I suppose I should fix that... My *intent* is that other, custom directives can add `inliner_`.
+
+The diff looks large because the first requires switching some loops.
+
+I'm using this along with a custom BibTeX formatter (one item per file) to generate larger pages and tiny listings. I still need to hammer the templates for that, but I think that's possible without further patches.
+
+(Setting up a git branch for a single plugin is a pain, but I can if necessary. I also could separate this into some sequence rather than all at once, but I won't have time for a week or two.)
+
+-- [[JasonRiedy]]
+
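+As a usage sketch, a custom directive's `preprocess` hook could hand a
+content generator to inline like this (`format_bibtex_entry` and the
+`bibitem` template are hypothetical; `inliner_` is the parameter the patch
+below adds):
+
+	return IkiWiki::Plugin::inline::preprocess_inline(
+		%params,
+		pages => "bib/*",
+		template => "bibitem",
+		inliner_ => sub {
+			my ($page, $template, %params) = @_;
+			# returning undef skips the page entirely
+			return undef unless $pagesources{$page} =~ /\.bib$/;
+			return format_bibtex_entry($page);
+		},
+	);
+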
+<pre><code>
+--- /home/ejr/src/git.ikiwiki.info/IkiWiki/Plugin/inline.pm 2011-03-05 14:18:30.261293808 -0500
++++ inline.pm 2011-03-06 21:44:18.887903638 -0500
+@@ -185,6 +185,7 @@
+ }
+
+ my @list;
++ my $num = 0;
+
+ if (exists $params{pagenames}) {
+ foreach my $p (qw(sort pages)) {
+@@ -213,23 +214,121 @@
+ if ($params{feedshow} && $num < $params{feedshow} && $num > 0) {
+ $num=$params{feedshow};
+ }
+- if ($params{skip} && $num) {
+- $num+=$params{skip};
+- }
+
+ @list = pagespec_match_list($params{page}, $params{pages},
+ deptype => deptype($quick ? "presence" : "content"),
+ filter => sub { $_[0] eq $params{page} },
+ sort => exists $params{sort} ? $params{sort} : "age",
+ reverse => yesno($params{reverse}),
+- ($num ? (num => $num) : ()),
+ );
+ }
+
+ if (exists $params{skip}) {
+ @list=@list[$params{skip} .. $#list];
+ }
++
++ if ($params{show} && $params{show} > $num) {
++ $num = $params{show}
++ }
++
++ my $ret="";
++ my @displist;
++ if ($feedonly) {
++ @displist = @list;
++ } else {
++ my $template;
++ if (! $raw) {
++ # cannot use wiki pages as templates; template not sanitized due to
++ # format hook hack
++ eval {
++ $template=template_depends($params{template}.".tmpl", $params{page},
++ blind_cache => 1);
++ };
++ if ($@) {
++ error sprintf(gettext("failed to process template %s"), $params{template}.".tmpl").": $@";
++ }
++ }
++ my $needcontent=$raw || (!($archive && $quick) && $template->query(name => 'content'));
++
++ foreach my $page (@list) {
++ last if ($num && scalar @displist >= $num);
++ my $file = $pagesources{$page};
++ my $type = pagetype($file);
++ if (! $raw) {
++ # Get the content before populating the
++ # template, since getting the content uses
++ # the same template if inlines are nested.
++ if ($needcontent) {
++ my $content;
++ if (exists $params{inliner_} && defined $params{inliner_}) {
++ $content = &{$params{inliner_}}($page, $template, %params);
++ } else {
++ $content=get_inline_content($page, $params{destpage});
++ }
++ next if !defined $content;
++ $template->param(content => $content);
++ push @displist, $page;
++ }
++ $template->param(pageurl => urlto($page, $params{destpage}));
++ $template->param(inlinepage => $page);
++ $template->param(title => pagetitle(basename($page)));
++ $template->param(ctime => displaytime($pagectime{$page}, $params{timeformat}, 1));
++ $template->param(mtime => displaytime($pagemtime{$page}, $params{timeformat}));
++ $template->param(first => 1) if $page eq $list[0];
++ $template->param(last => 1) if ($num && scalar @displist == $num);
++ $template->param(html5 => $config{html5});
+
++ if ($actions) {
++ my $file = $pagesources{$page};
++ my $type = pagetype($file);
++ if ($config{discussion}) {
++ if ($page !~ /.*\/\Q$config{discussionpage}\E$/i &&
++ (length $config{cgiurl} ||
++ exists $pagesources{$page."/".lc($config{discussionpage})})) {
++ $template->param(have_actions => 1);
++ $template->param(discussionlink =>
++ htmllink($page,
++ $params{destpage},
++ $config{discussionpage},
++ noimageinline => 1,
++ forcesubpage => 1));
++ }
++ }
++ if (length $config{cgiurl} &&
++ defined $type &&
++ IkiWiki->can("cgi_editpage")) {
++ $template->param(have_actions => 1);
++ $template->param(editurl => cgiurl(do => "edit", page => $page));
++
++ }
++ }
++
++ run_hooks(pagetemplate => sub {
++ shift->(page => $page, destpage => $params{destpage},
++ template => $template,);
++ });
++
++ $ret.=$template->output;
++ $template->clear_params;
++ }
++ else {
++ if (defined $type) {
++ $ret.="\n".
++ linkify($page, $params{destpage},
++ preprocess($page, $params{destpage},
++ filter($page, $params{destpage},
++ readfile(srcfile($file)))));
++ }
++ else {
++ $ret.="\n".
++ readfile(srcfile($file));
++ }
++ push @displist, $page;
++ }
++ }
++ }
++ @list = @displist;
++
+ my @feedlist;
+ if ($feeds) {
+ if (exists $params{feedshow} &&
+@@ -241,10 +340,6 @@
+ }
+ }
+
+- if ($params{show} && @list > $params{show}) {
+- @list=@list[0..$params{show} - 1];
+- }
+-
+ if ($feeds && exists $params{feedpages}) {
+ @feedlist = pagespec_match_list(
+ $params{page}, "($params{pages}) and ($params{feedpages})",
+@@ -302,8 +397,6 @@
+ }
+ }
+
+- my $ret="";
+-
+ if (length $config{cgiurl} && ! $params{preview} && (exists $params{rootpage} ||
+ (exists $params{postform} && yesno($params{postform}))) &&
+ IkiWiki->can("cgi_editpage")) {
+@@ -355,91 +448,7 @@
+ }
+ $ret.=$linktemplate->output;
+ }
+-
+- if (! $feedonly) {
+- my $template;
+- if (! $raw) {
+- # cannot use wiki pages as templates; template not sanitized due to
+- # format hook hack
+- eval {
+- $template=template_depends($params{template}.".tmpl", $params{page},
+- blind_cache => 1);
+- };
+- if ($@) {
+- error sprintf(gettext("failed to process template %s"), $params{template}.".tmpl").": $@";
+- }
+- }
+- my $needcontent=$raw || (!($archive && $quick) && $template->query(name => 'content'));
+-
+- foreach my $page (@list) {
+- my $file = $pagesources{$page};
+- my $type = pagetype($file);
+- if (! $raw) {
+- if ($needcontent) {
+- # Get the content before populating the
+- # template, since getting the content uses
+- # the same template if inlines are nested.
+- my $content=get_inline_content($page, $params{destpage});
+- $template->param(content => $content);
+- }
+- $template->param(pageurl => urlto($page, $params{destpage}));
+- $template->param(inlinepage => $page);
+- $template->param(title => pagetitle(basename($page)));
+- $template->param(ctime => displaytime($pagectime{$page}, $params{timeformat}, 1));
+- $template->param(mtime => displaytime($pagemtime{$page}, $params{timeformat}));
+- $template->param(first => 1) if $page eq $list[0];
+- $template->param(last => 1) if $page eq $list[$#list];
+- $template->param(html5 => $config{html5});
+-
+- if ($actions) {
+- my $file = $pagesources{$page};
+- my $type = pagetype($file);
+- if ($config{discussion}) {
+- if ($page !~ /.*\/\Q$config{discussionpage}\E$/i &&
+- (length $config{cgiurl} ||
+- exists $pagesources{$page."/".lc($config{discussionpage})})) {
+- $template->param(have_actions => 1);
+- $template->param(discussionlink =>
+- htmllink($page,
+- $params{destpage},
+- $config{discussionpage},
+- noimageinline => 1,
+- forcesubpage => 1));
+- }
+- }
+- if (length $config{cgiurl} &&
+- defined $type &&
+- IkiWiki->can("cgi_editpage")) {
+- $template->param(have_actions => 1);
+- $template->param(editurl => cgiurl(do => "edit", page => $page));
+
+- }
+- }
+-
+- run_hooks(pagetemplate => sub {
+- shift->(page => $page, destpage => $params{destpage},
+- template => $template,);
+- });
+-
+- $ret.=$template->output;
+- $template->clear_params;
+- }
+- else {
+- if (defined $type) {
+- $ret.="\n".
+- linkify($page, $params{destpage},
+- preprocess($page, $params{destpage},
+- filter($page, $params{destpage},
+- readfile(srcfile($file)))));
+- }
+- else {
+- $ret.="\n".
+- readfile(srcfile($file));
+- }
+- }
+- }
+- }
+-
+ if ($feeds && ($emptyfeeds || @feedlist)) {
+ if ($rss) {
+ my $rssp=$feedbase."rss".$feednum;
+</code></pre>
diff --git a/doc/todo/Fix_selflink_in_po_plugin.mdwn b/doc/todo/Fix_selflink_in_po_plugin.mdwn
new file mode 100644
index 000000000..b276c075d
--- /dev/null
+++ b/doc/todo/Fix_selflink_in_po_plugin.mdwn
@@ -0,0 +1,21 @@
+Using the po plugin, a link to /bla is present in the sidebar.
+When viewing /bla in the default language, this link is detected as
+a selflink. When viewing a translation of /bla, it
+isn't. --[[intrigeri]]
+
+Fixed in my po branch. --[[intrigeri]]
+
+[[!tag patch done]]
+
+> bump?
+
+>> I know I've looked at 88c6e2891593fd508701d728602515e47284180c
+>> before, and something about it just seemed wrong. Maybe it's
+>> the triviality of the sub, which it would seem to be easy to
+>> decide to refactor back into its one caller (which would reintroduce the
+>> bug). --[[Joey]]
+
+>>> Well, I can hear and understand this. Apart of adding a comment to
+>>> the sub, explaining the rationale (which is now done in my po
+>>> branch), I don't know what I can do to make it not seem wrong.
+>>> --[[intrigeri]]
diff --git a/doc/todo/Google_Analytics_support.mdwn b/doc/todo/Google_Analytics_support.mdwn
new file mode 100644
index 000000000..8bbb1c69b
--- /dev/null
+++ b/doc/todo/Google_Analytics_support.mdwn
@@ -0,0 +1,31 @@
+[[!template id=gitbranch branch=GiuseppeBilotta/google-analytics
+author="[[GiuseppeBilotta]]"]]
+
+I've extended the google plugin to add support for Google Analytics.
+This is done in two steps:
+
+* a `google_sitesearch` config option is introduced, to allow disabling
+ sitesearch even when the `google` plugin is loaded
+* a `google_analytics_account` config option is introduced. When it's
+ defined, its value is assumed to be a Google Analytics account ID
+ and the corresponding JavaScript code is automatically inserted in all
+  documents. The way this is done is shamelessly stolen from the flattr
+ plugin
+
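+For example, a hypothetical setup excerpt using both options (the account
+ID is a placeholder):
+
+	google_sitesearch => 0,
+	google_analytics_account => 'UA-1234567-1',
+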
+> Putting this in the google plugin does not seem to be a good approach.
+> That this "functionality" is offered by the same company as google search
+> is really of no consequence.
+
+Well, my idea was to put all Google-related functionality (in the sense
+of support for any service provided by Google) into the google plugin.
+The alternative would have been to have one separate plugin per feature,
+but that doesn't sound particularly nice to me. I can split it in a
+separate plugin if you believe it's cleaner that way
+
+> Also, can't this be easily accomplished by editing page.tmpl? --[[Joey]]
+
+Yes, and so would flattr. But precisely because this kind of code would require
+editing page.tmpl, doing it the manual way carries the burden of keeping it in
+sync across Ikiwiki updates (I'm sure I don't need to mention the number of
+help requests that essentially boil down to "oops, I was using custom templates
+and hadn't updated them").
diff --git a/doc/todo/Google_Sitemap_protocol.mdwn b/doc/todo/Google_Sitemap_protocol.mdwn
index 057a88b72..ea8ee7f03 100644
--- a/doc/todo/Google_Sitemap_protocol.mdwn
+++ b/doc/todo/Google_Sitemap_protocol.mdwn
@@ -34,6 +34,9 @@ for an example. You will probably need to strip out the metadata variables I
>>>[xtermin.us rather than localhost](http://xtermin.us/git/?p=website.git;a=blob;f=plugins/googlesitemap.pm) is 404 now.
>>> -- weakish
+
+Although it is not able to read the meta-data from files, google-sitemapgen [works well for me](http://bzed.de/posts/2010/06/creating_a_google_sitemap_for_ikiwiki/) for creating a sitemap for my ikiwiki installation. -- [[bzed|BerndZeimetz]]
+
There is a [sitemap XML standard](http://www.sitemaps.org/protocol.php) that ikiwiki needs to generate for.
# Google Webmaster tools and RSS
@@ -45,3 +48,13 @@ On [Google Webmaster tools](https://www.google.com/webmasters/tools) you can sub
[Google should grok feeds as sitemaps.](http://www.google.com/support/webmasters/bin/answer.py?answer=34654) Or rather [[plugins/inline]] should be improved to support the [sitemap protocol](http://sitemaps.org/protocol.php) natively.
-- [[Hendry]]
+
+
+Took me a minute to figure this out so I figured I'd share the steps I took:
+
+* Added `rss => 1` and `allowrss => 1` to my setup file
+* Created a new page where the RSS would be created with this content, replacing "first_page" with the page in my wiki with the earliest date:
+
+<pre>
+\[[!inline pages="* and !*/Discussion and created_after(first_page)" archive="yes" rss="yes" ]]
+</pre>
diff --git a/doc/todo/Improving_the_efficiency_of_match__95__glob.mdwn b/doc/todo/Improving_the_efficiency_of_match__95__glob.mdwn
new file mode 100644
index 000000000..4e1df3381
--- /dev/null
+++ b/doc/todo/Improving_the_efficiency_of_match__95__glob.mdwn
@@ -0,0 +1,228 @@
+[[!template id=gitbranch branch=smcv/ready/glob-cache
+ author="[[KathrynAndersen]], [[smcv]]"]]
+[[!tag patch]]
+
+I've been profiling my IkiWiki setup to try to improve speed (having many pages makes speed even more important), and I've written a patch to improve the speed of `match_glob`. This matcher is a good one to speed up, because it gets called so many times.
+
+Here's my patch - please consider it! -- [[KathrynAndersen]]
+
+> It seems to me as though changing `glob2re` to return qr/$re/, and calling
+> `memoize(glob2re)` next to the other memoize calls, would be a less
+> verbose way to do this? --[[smcv]]
+
+>> I think so, yeah. Anyway, do you have any benchmark results handy,
+>> Kathryn? --[[Joey]]
+
+>>> See below.
+>>> Also, would it make more sense for `glob2re` to return `qr/^$re$/i` rather than `qr/$re/`? Everything that uses `glob2re` seems to use
+>>>
+>>>	$foo =~ /^$re$/i
+>>>
+>>> rather than `/$re/`, so I think that would make sense.
+>>> -- [[KathrynAndersen]]
+
+>>>> Git branch `smcv/ka-glob-cache` has Kathryn's patch. Git
+>>>> branch `smcv/memoize-glob2re` does as I suggested, which
+>>>> is less verbose than Kathryn's patch but also not as
+>>>> fast; I'm not sure why, tbh. --[[smcv]]
+
+>>>>> I think it's because my patch focuses on match_glob while the memoize patch focuses on `glob2re`, and `glob2re` is called in `filecheck`, `meta` and `po` as well as in `match_glob` and `match_user`; thus the memoized `glob2re` is dealing with a bigger set of globs to look up, and thus could be just that little bit slower. -- [[KathrynAndersen]]
+
+>>>>>> What may be going on is that glob2re is already a fairly fast
+>>>>>> function, so the overhead of memoizing it with the very generic
+>>>>>> `_memoizer` (see its source) swamps the memoization gain. Note
+>>>>>> that the few functions memoized with the Memoizer before were much
+>>>>>> more expensive, so that little overhead was acceptable then.
+>>>>>>
+>>>>>> It also may be that Kathryn's patch is slightly faster due to using
+>>>>>> the construct `$foo =~ $regexp` rather than `$foo =~ /$regexp/`
+>>>>>> (probably avoids a copy or something like that internally) --
+>>>>>> this despite checking both `exists` and `defined` on the hash, which
+>>>>>> should be redundant AFAICS.
+>>>>>>
+>>>>>> My guess is that the best of both worlds would be to move
+>>>>>> the by-hand memoization to glob2re and have it return a compiled
+>>>>>> `/^/i` regexp that can be used without further modification in most
+>>>>>> cases. --[[Joey]]
+
+>>>>>>> Done, see `smcv/ready/glob-cache` and `smcv/glob-cache-too-far`.
+>>>>>>>
+>>>>>>> Kathryn's patch is a significant improvement; my first patch on top of
+>>>>>>> that is a trivial cleanup that speeds it up a little, and the next two
+>>>>>>> patches (using precompiled regexes) have surprisingly little effect
+>>>>>>> (they don't slow it down either though, so either omit them or merge
+>>>>>>> them, whichever). Detailed benchmark results below.
+>>>>>>>
+>>>>>>> Moving the memoization to `glob2re` actually seems to slow things down
+>>>>>>> again - I suspect the docwiki has few enough mentions of `user()` etc.
+>>>>>>> that caching them is a waste of time, but perhaps it's not the most
+>>>>>>> representative.
+>>>>>>> --[[smcv]]
+
+[[done]] --[[Joey]]
+
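+For reference, a minimal sketch of the "memoize inside `glob2re`" variant
+discussed above, returning an anchored, case-insensitive compiled regexp
+(the glob translation mirrors what ikiwiki's `glob2re` does, but treat the
+details as illustrative):
+
+	my %glob2re_cache;
+	sub glob2re ($) {
+		my $glob=shift;
+		if (! exists $glob2re_cache{$glob}) {
+			my $re=quotemeta($glob);
+			$re=~s/\\\*/.*/g;	# glob * matches any string
+			$re=~s/\\\?/./g;	# glob ? matches a single character
+			# anchored and /i, so callers can match directly
+			$glob2re_cache{$glob}=qr/^$re$/i;
+		}
+		return $glob2re_cache{$glob};
+	}
+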
+--------------------------------------------------------------
+
+[[!toggle id="smcv-benchmark" text="current benchmarks"]]
+
+[[!toggleable id="smcv-benchmark" text="""
+master at time of branch:
+
+ time elapsed (wall): 29.6348
+ time running program: 24.9212 (84.09%)
+ time profiling (est.): 4.7136 (15.91%)
+ number of calls: 1360181
+ number of exceptions: 13
+
+ %Time Sec. #calls sec/call F name
+ 13.24 3.2986 3408 0.000968 Text::Balanced::_match_tagged
+ 10.94 2.7253 79514 0.000034 IkiWiki::PageSpec::match_glob
+ 3.19 0.7952 59454 0.000013 <anon>:IkiWiki/Plugin/inline.pm:223
+
+`Improve the speed of match_glob`:
+
+ time elapsed (wall): 27.9755
+ time running program: 23.5293 (84.11%)
+ time profiling (est.): 4.4461 (15.89%)
+ number of calls: 1280875
+ number of exceptions: 13
+
+ %Time Sec. #calls sec/call F name
+ 14.56 3.4257 3408 0.001005 Text::Balanced::_match_tagged
+ 7.82 1.8403 79514 0.000023 IkiWiki::PageSpec::match_glob
+ 3.27 0.7698 59454 0.000013 <anon>:IkiWiki/Plugin/inline.pm:223
+
+`match_glob: streamline glob cache slightly`:
+
+ time elapsed (wall): 27.5753
+ time running program: 23.1714 (84.03%)
+ time profiling (est.): 4.4039 (15.97%)
+ number of calls: 1280875
+ number of exceptions: 13
+
+ %Time Sec. #calls sec/call F name
+ 14.09 3.2637 3408 0.000958 Text::Balanced::_match_tagged
+ 7.74 1.7926 79514 0.000023 IkiWiki::PageSpec::match_glob
+ 3.30 0.7646 59454 0.000013 <anon>:IkiWiki/Plugin/inline.pm:223
+
+`glob2re: return a precompiled, anchored case-insensitiv...`:
+
+ time elapsed (wall): 27.5656
+ time running program: 23.1464 (83.97%)
+ time profiling (est.): 4.4192 (16.03%)
+ number of calls: 1282189
+ number of exceptions: 13
+
+ %Time Sec. #calls sec/call F name
+ 14.21 3.2891 3408 0.000965 Text::Balanced::_match_tagged
+ 7.72 1.7872 79514 0.000022 IkiWiki::PageSpec::match_glob
+ 3.32 0.7678 59454 0.000013 <anon>:IkiWiki/Plugin/inline.pm:223
+
+`make use of precompiled regex objects`:
+
+ time elapsed (wall): 27.5357
+ time running program: 23.1289 (84.00%)
+ time profiling (est.): 4.4068 (16.00%)
+ number of calls: 1281981
+ number of exceptions: 13
+
+ %Time Sec. #calls sec/call F name
+ 14.17 3.2776 3408 0.000962 Text::Balanced::_match_tagged
+ 7.70 1.7814 79514 0.000022 IkiWiki::PageSpec::match_glob
+ 3.35 0.7756 59454 0.000013 <anon>:IkiWiki/Plugin/inline.pm:223
+
+`move memoization from match_glob to glob2re`:
+
+ time elapsed (wall): 28.7677
+ time running program: 23.9473 (83.24%)
+ time profiling (est.): 4.8205 (16.76%)
+ number of calls: 1360181
+ number of exceptions: 13
+
+ %Time Sec. #calls sec/call F name
+ 13.98 3.3469 3408 0.000982 Text::Balanced::_match_tagged
+ 8.85 2.1194 79514 0.000027 IkiWiki::PageSpec::match_glob
+ 3.24 0.7750 59454 0.000013 <anon>:IkiWiki/Plugin/inline.pm:223
+
+--[[smcv]]
+"""]]
+
+--------------------------------------------------------------
+
+[[!toggle id="ka-benchmarks" text="Kathryn's benchmarks"]]
+
+[[!toggleable id="ka-benchmarks" text="""
+Benchmarks done with Devel::Profile on the same testbed IkiWiki setup. I'm just showing the start of the profile output, since that's what's relevant.
+
+Before:
+<pre>
+time elapsed (wall): 27.4173
+time running program: 22.5909 (82.40%)
+time profiling (est.): 4.8264 (17.60%)
+number of calls: 1314729
+number of exceptions: 65
+
+%Time Sec. #calls sec/call F name
+11.05 2.4969 62333 0.000040 IkiWiki::PageSpec::match_glob
+ 4.10 0.9261 679 0.001364 Text::Balanced::_match_tagged
+ 2.72 0.6139 59812 0.000010 IkiWiki::SuccessReason::merge_influences
+</pre>
+
+After:
+<pre>
+time elapsed (wall): 26.1843
+time running program: 21.5673 (82.37%)
+time profiling (est.): 4.6170 (17.63%)
+number of calls: 1252433
+number of exceptions: 65
+
+%Time Sec. #calls sec/call F name
+ 7.66 1.6521 62333 0.000027 IkiWiki::PageSpec::match_glob
+ 4.33 0.9336 679 0.001375 Text::Balanced::_match_tagged
+ 2.81 0.6057 59812 0.000010 IkiWiki::SuccessReason::merge_influences
+</pre>
+
+Note that the seconds per call for match_glob in the "after" case has gone down by about a third.
+
+K.A.
+"""]]
+
+--------------------------------------------------------------
+
+[[!toggle id="ka-patch" text="Kathryn's original patch"]]
+
+[[!toggleable id="ka-patch" text="""
+
+<pre>
+diff --git a/IkiWiki.pm b/IkiWiki.pm
+index 08a3d78..c187b98 100644
+--- a/IkiWiki.pm
++++ b/IkiWiki.pm
+@@ -2482,6 +2482,8 @@ sub derel ($$) {
+ return $path;
+ }
+
++my %glob_cache;
++
+ sub match_glob ($$;@) {
+ my $page=shift;
+ my $glob=shift;
+@@ -2489,8 +2491,15 @@ sub match_glob ($$;@) {
+
+ $glob=derel($glob, $params{location});
+
+- my $regexp=IkiWiki::glob2re($glob);
+- if ($page=~/^$regexp$/i) {
++ # Instead of converting the glob to a regex every time,
++ # cache the compiled regex to save time.
++ if (!exists $glob_cache{$glob}
++ or !defined $glob_cache{$glob})
++ {
++ my $re=IkiWiki::glob2re($glob);
++ $glob_cache{$glob} = qr/^$re$/i;
++ }
++ if ($page =~ $glob_cache{$glob}) {
+ if (! IkiWiki::isinternal($page) || $params{internal}) {
+ return IkiWiki::SuccessReason->new("$glob matches $page");
+ }
+</pre>
+"""]]
+--------------------------------------------------------------
diff --git a/doc/todo/Mailing_list.mdwn b/doc/todo/Mailing_list.mdwn
index b6a207420..67cbbb00b 100644
--- a/doc/todo/Mailing_list.mdwn
+++ b/doc/todo/Mailing_list.mdwn
@@ -18,3 +18,19 @@ Does this sound okay?
> todo/bugs/forum feeds, or to some other feed they create on their user page.
> And there's work on making the discussion pages more structured, on
> accepting comments sent via mail, etc. --[[Joey]]
+
+>>I was going to make the very same request, so I'm glad to know I'm not the only one who felt the need for it.
+
+>>I can see your reasoning, though I don't think ikiwiki has reached the level (yet) of facilitating discussion as well as a mailing list does.
+>>You've already pointed out the need for (a) more structured discussion pages, (b) comments sent via mail, but I'm not sure whether that will be enough. This is because the nature of a wiki means that discussions are scattered all over the site, as people discuss in discussion pages about the given topic - and so they should.
+>>
+>>The consequence of this, however, is that one has a choice (in regard to RSS feeds) of having too much or too little. Too little, if one only feeds on news/todo/bugs/forum, since one misses out on discussions elsewhere. Too much, because the only other option appears to be subscribing to recentchanges, which will give one *everything*, whether it is relevant or not.
+>>Unfortunately, I'm not really sure what the best solution is for this problem.
+
+>> For those who might be interested, I've added the following RSS feeds to <http://www.dreamwidth.org>:
+>>
+>> * ikiwiki_bugs_feed
+>> * ikiwiki_forum_feed
+>> * ikiwiki_news_feed
+>> * ikiwiki_recent_feed
+>> * ikiwiki_todo_feed
+>> * ikiwiki_wishlist_feed
+
+>>--[[KathrynAndersen]]
diff --git a/doc/todo/More_flexible_po-plugin_for_translation.mdwn b/doc/todo/More_flexible_po-plugin_for_translation.mdwn
new file mode 100644
index 000000000..3399f7834
--- /dev/null
+++ b/doc/todo/More_flexible_po-plugin_for_translation.mdwn
@@ -0,0 +1,5 @@
+I have a website with multi-language content, where some content is only in English, some in German, and some is available in both languages.
+
+The po-module currently has only one master-language, with slave languages, and a PageSpec should be considered.
+
+It would be nice to flag the content which should have a translation on a file-by-file basis (with some inline directive?) which could contain the information of the master-language for that file and the desired target-languages.
diff --git a/doc/todo/Multiple_categorization_namespaces.mdwn b/doc/todo/Multiple_categorization_namespaces.mdwn
new file mode 100644
index 000000000..3e9f8feaa
--- /dev/null
+++ b/doc/todo/Multiple_categorization_namespaces.mdwn
@@ -0,0 +1,103 @@
+I came across this when working on converting my old blog into an ikiwiki, but I think it could be of more general use.
+
+The background: I have a (currently suspended, waiting to be converted) blog on the [il Cannocchiale](http://www.ilcannocchiale.it) hosting platform. Aside from the usual metadata (title, author), il Cannocchiale also provides tags and two additional categorization namespaces: a blog-specific user-defined "column" (Rubrica) and a platform-wide "category" (Categoria). The latter is used to group and label a couple of platform-wide lists of latest posts, the former may be used in many different ways (e.g. multi-author blogs could have one column per author or so, or as a form of 'macro-tagging'). Columns are also a little more sophisticated than classical tags because you can assign them a subtitle too.
+
+When I started working on the conversion, my first idea was to convert Rubriche to subdirectories of an ikiwiki blog. However, this left me with a few annoying things: when rebuilding links from the import, I had to (programmatically) dive into each subdirectory to see where each post was; this would also be problematic for future posting. It also meant that moving a post from one Rubrica to another would break all links (unless ikiwiki has a way to fix this automagically). And I wasn't too keen on the fact that the Rubrica would come up in the URL of the post. And finally, of course, I couldn't use this to preserve the Categoria metadata.
+
+Another solution I thought about was to use special deeper tags for the Rubrica and Categoria (like: `\[[!tag "Rubrica/Some name"]]`), but this is horrible, clumsy, and makes special treatment of these tags a pain (for example you wouldn't want the Rubrica to be displayed together with the other tags, and you would want it displayed somewhere else like next to the title of the post). This solution however looks to me like the proper path, as long as tags can support totally separate namespaces. I have a tentative implementation of this `tagtype` feature at [my git clone of ikiwiki](http://git.oblomov.eu/ikiwiki).
+
+The feature is currently implemented as follows: a `tagtypes` config option takes an array of strings: the tag types to be defined _aside from the usual tags_. Each tag type automatically provides a new directive which sets up tags that differ from standard tags by having a different tagbase (the same as the tagtype) and link type (again, the same as the tagtype) (a TODO item for this would be to make the directive, tagbase and link type customizable). For example, for my imported blog I would define
+
+ tagtypes => [qw{Categoria Rubrica}]
+
+and then in the blog posts I would have stuff like
+
+ \[[!Categoria "LAVORO/Vita da impiegato"]]
+ \[[!Rubrica "Il mio mondo"]]
+ \[[!meta title="Blah blah"]]
+ \[[!meta author="oblomov"]]
+
+ The body of the article
+
+ \[[!tag a bunch of tags]]
+
+and the tags would appear at the bottom of the post, the Rubrica next to the title, etc. All of this information would end up as categories in the feeds (although I would like to rework that code to make use of namespaces, terms and labels in a different way).
+
+> Note [[plugins/contrib/report/discussion]]. To quote myself from the latter page:
+> *I find tags as they currently exist to be too limiting. I prefer something that can be used for Faceted Tagging http://en.wikipedia.org/wiki/Faceted_classification; that is, things like Author:Fred Nurk, Genre:Historical, Rating:Good, and so on. Of course, that doesn't mean that each tag is limited to only one value, either; just to take the above examples, something might have more than one author, or have multiple genres (such as Historical + Romance).*
+
+> So you aren't the only one who wants to do more with tags, but I don't think that adding a new directive for each tag type is the way to go; I think it would be simpler to just have one directive, and take advantage of the new [[matching different kinds of links]] functionality, and enhance the tag directive.
+> Perhaps something like this:
+
+ \[[!tag categorica="LAVORO/Vita da impiegato" rubrica="Il mio mondo"]]
+
+> Part of my thinking in this is to also combine tags with [[plugins/contrib/field]], so that the tags for a page could be queried and displayed; that way, one could put them wherever you wanted on the page, using any of [[plugins/contrib/getfield]], [[plugins/contrib/ftemplate]], or [[plugins/contrib/report]].
+> --[[KathrynAndersen]]
+
+>> A very generic metadata framework could cover all possible usages of fields, tags, and related metadata, but keeping its _user interface_ generic would only make it hard to use. Note that this is not an objection to the idea of collapsing the fields and tags functionality (at quick glance, I cannot see a real difference between single-valued custom tagtypes and fields, but see below), but more about the syntax.
+
+>> I had thought about the `\[[!tag type1=value1 type2=value2]]` syntax myself, but ultimately decided against it for a number of reasons, most importantly the fact that (1) it's harder to type, (2) it's harder to spot errors in the tag types (so for example if one misspelled `categoria` as `categorica`, he might not notice it as quickly as seeing the un-parsed `\[[!categorica ]]` directive in the output html) and (3) it encourages collapsing possibly unrelated metadata together (for example, I would never consider putting the categoria information together with the rubrica one; of course with your syntax it's perfectly possible to keep them separate as well).
+
+>> Point (2) may be considered a downside as well as an upside, depending on perspective, of course. And it would be possible to have a set of predefined tag types to match against, like in my tagtype directive approach but with your syntax.
+
+>>> You seem to have answered your own objections already. -- K.A.
+
+>>Point (3) is of course entirely in the hands of the user, but that's exactly what syntax should be about. There is nothing functionally wrong with e.g. `\[[!meta tag=sometag author=someauthor title=sometitle rubrica=somecolumn]]`, but I honestly find it horrible.
+
+>>> So, really, point 3 comes down to differing aesthetics. -- K.A.
+
+>> A solution could be to allow both syntaxes, getting to have for example `\[[!sometagtype "blah"]]` as a shortcut for `\[[!tag sometagtype="blah"]]` (or, in the more general case, `\[[!somefieldname "blah"]]` as a shortcut for `\[[!meta fieldname="blah"]]`).
+
+>> I would like to point out however that there are some functional differences between categorization metadata vs other metadata that might suggest to keep fields and (my extended) tags separate. For examples, in feeds you'd want all categorization metadata to fall in one place, with some appropriate manipulation (which I still have to implement, by the way), while things like author or title would go to the corresponding feed item properties. Although it all would be possible with appropriate report or template juggling, having such default metadata handled natively looks like a bonus to me.
+
+>>> Whereas I prefer being able to control such things with templates, because it gives more flexibility AND control. - K.A.
+
+>>>> Flexibility and control is good for tuning and power-usage, but sensible defaults are a must for a platform to be usable out of the box without much intervention. Moreover, there's a possible problem with what kind of data must be passed over to templates.
+
+Aside from the name of the plugin (and thus of the main directive), which could be `tag`, `meta`, `field` or whatever (maybe extending `meta` would be the most sensible choice), the features we want are:
+
+1. allow multiple values per type/attribute/field/whatever (fields currently only allows one)
+ * Agreed about multiple values; I've been considering whether I should add that to `field`. -- K.A.
+2. allow both hidden and visible references (a la tag vs taglink)
+ * Hidden and visible references; that's fair enough too. My approach with `ymlfront` and `getfield` is that the YAML code is hidden, and the display is done with `getfield`, but there's no reason not to use additional approaches. -- K.A.
+3. allow each type/attribute/field to be exposed under multiple queries (e.g. tags and categories; this is mostly important for backwards compatibility, not sure if it might have other uses too)
+ * I'm not sure what you mean here. -- K.A.
+ * Typical example is tags: they are accessible both as `tags` and as `categories`, although the way they are presented changes a little -- G.B.
+4. allow arbitrary types/attributes/fields/whatever (even 'undefined' ones)
+ * Are you saying that these must be typed, or are you saying that they can be user-defined? -- K.A.
+ * I am saying that the user should be able to define (e.g. in the config) some set of types/fields/attributes/whatever, following the specification illustrated below, but also be able to use something like `\[[!meta somefield="somevalue"]]` where `somefield` was never defined before. In this case `somefield` will have some default values for the properties described in the spec below. -- G.B.
+
+Each type/attribute/field/whatever (predefined, user-defined, arbitrary) would thus have the following parameters:
+
+* `directive` : the name of the directive that can be used to set the value as a hidden reference; we can discuss whether, for pre- or user-defined types, it being undef means no directive or a default directive matching the attribute name would be defined.
+ * I still want there to be able to be enough flexibility in the concept to enable plugins such as `yamlfront`, which sets the data using YAML format, rather than using directives. -- K.A.
+ * The possibility to use a directive does not preclude other ways of defining the field values. IOW, even if the directive `somefield` is defined, the user would still be able to use the syntax `\[[!meta somefield="somevalue"]]`, or any other syntax (such as YAML). -- G.B.
+* `linkdirective` : the name of the directive that can be used for a visible reference; no such directive would be defined by default
+* `linktype` : link type for (hidden and visible) references
+ * Is this the equivalent to "field name"? -- K.A.
+ * This would be such by default, but it could be set to something different. [[Typed links|matching_different_kinds_of_links]] is a very recent ikiwiki feature. -- G.B.
+* `linkbase` : akin to the tagbase parameter
+ * Is this a field-name -> directory mapping? -- K.A.
+ * yes, with each directory having one page per value. It might not make sense for all fields, of course -- G.B.
+ * (nods) I've been working on something similar with my unreleased `tagger` module. In that, by default, the field-name maps to the closest wiki-page of the same name. Thus, if one had the field "genre=poetry" on the page fiction/stories/mary/lamb, then that would map to fiction/genre/poetry if fiction/genre existed. --K.A.
+ * that's the idea. In your case you could have the linkbase of genre be fiction/genre, and it would be created if it was missing. -- G.B.
+* `queries` : list of template queries this type/attribute/field/whatever is exposed to
+ * I'm not sure what you mean here. -- K.A.
+ * as mentioned before, some fields may be made accessible through different template queries, in different form. This is the case already for tags, that also come up in the `categories` query (used by Atom and RSS feeds). -- G.B.
+ * Ah, do you mean that the input value is the same, but the output format is different? Like the difference between TMPL_VAR NAME="FOO" and TMPL_VAR NAME="raw_FOO"; one is htmlized, and the other is not. -- K.A.
+ * Actually this is about the same information appearing in different queries (e.g. NAME="FOO" and NAME="BAR"). Example: say that I defined a "Rubrica" field. I would want both tags and categories to appear in `categories` template query, but only tags would appear in the `tags` query, and only Rubrica values to appear in `rubrica` queries. The issue of different output formats was presented in the next paragraph instead. -- G.B.
+
+Where this approach is limiting is on the kind of data that is passed to (template) queries. The value of the metadata fields might need some massaging (e.g. compare how tags are passed to tags queries vs categories queries, or also see what is done with the fields in the current `meta` plugin). I have problems picturing an easy way to make this possible user-side (i.e. via templates and not in Perl modules). Suggestions welcome.
+
+One possibility could be to have the `queries` configuration allow a hash mapping query names to functions that would transform the data. Lacking that possibility, we might have to leave some predefined fields to have custom Perl-side treatment and leave custom fields to be untransformable.
+
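+A hypothetical setup excerpt for the specification above, including such a
+hash of transforming functions (all names here are illustrative only):
+
+	tagtypes => {
+		Rubrica => {
+			directive     => 'Rubrica',
+			linkdirective => 'rubricalink',
+			linktype      => 'rubrica',
+			linkbase      => 'rubriche',
+			queries       => {
+				rubrica    => sub { $_[0] },
+				# also expose values in the feeds' categories
+				categories => sub { "Rubrica/".$_[0] },
+			},
+		},
+	},
+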
+-----
+
+I've now updated the [[plugins/contrib/field]] plugin to have:
+
+* arrays (multi-valued fields)
+* the "linkbase" option as mentioned above (called field_tags), where the linktype is the field name.
+
+I've also updated [[plugins/contrib/ftemplate]] and [[plugins/contrib/report]] to be able to use multi-valued fields, and [[plugins/contrib/ymlfront]] to correctly return multi-valued fields when they are requested.
+
+--[[KathrynAndersen]]
diff --git a/doc/todo/OpenSearch.mdwn b/doc/todo/OpenSearch.mdwn
index e63ded688..c35da54e1 100644
--- a/doc/todo/OpenSearch.mdwn
+++ b/doc/todo/OpenSearch.mdwn
@@ -15,4 +15,24 @@ contain the wiki title from `ikiwiki.setup`.
--[[JoshTriplett]]
+> I support adding this. I think all that is needed, beyond the simple task
+> of adding the link header, is to make the search plugin write out
+> the xml file, probably based on a template.
+>
+> One problem is that the
+> [specification](http://www.opensearch.org/Specifications/OpenSearch/1.1#OpenSearch_description_document)
+> for the XML file contains a number of silly limits to field lengths.
+> For example, it wants a "ShortName" that identifies the search engine,
+> to be 16 characters or less. The Description is limited to 1024,
+> the LongName to 48. This limits what existing config settings can be
+> reused for those.
+>
+> Another semi-problem is that the specification says:
+>
+>> OpenSearch description documents should include at least one Query element of role="example" that is expected to return search results. Search clients may use this example query to validate that the search engine is working properly.
+>
+> How should ikiwiki know what example query will return actual results?
+> (How would a client know if an HTML page contains results or not, anyway?)
+> Silliness. Ignore this? --[[Joey]]
+
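+For reference, a minimal description document of the sort the plugin would
+need to emit (all values here are placeholders; the `P` query parameter is
+what ikiwiki's omega-based search form uses):
+
+	<?xml version="1.0" encoding="UTF-8"?>
+	<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
+	  <ShortName>MyWiki</ShortName>
+	  <Description>Search MyWiki</Description>
+	  <Url type="text/html"
+	       template="http://example.com/ikiwiki.cgi?P={searchTerms}"/>
+	</OpenSearchDescription>
+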
[[wishlist]]
diff --git a/doc/todo/Option_to_make_title_an_h1__63__.mdwn b/doc/todo/Option_to_make_title_an_h1__63__.mdwn
index f4023d6dd..8345cd010 100644
--- a/doc/todo/Option_to_make_title_an_h1__63__.mdwn
+++ b/doc/todo/Option_to_make_title_an_h1__63__.mdwn
@@ -11,4 +11,4 @@ Currently, the page title (either the name of the page or the title specified wi
> latter, making `#` (only when on the first line) set the page title, removing it from
> the page body. --[[JasonBlevins]], October 22, 2008
- [h1title]: http://code.jblevins.org/ikiwiki/plugins.git/plain/h1title.pm
+ [h1title]: http://jblevins.org/git/ikiwiki/plugins.git/plain/h1title.pm
diff --git a/doc/todo/Resolve_native_reStructuredText_links_to_ikiwiki_pages.mdwn b/doc/todo/Resolve_native_reStructuredText_links_to_ikiwiki_pages.mdwn
index ca7b282fa..6e0f32fd5 100644
--- a/doc/todo/Resolve_native_reStructuredText_links_to_ikiwiki_pages.mdwn
+++ b/doc/todo/Resolve_native_reStructuredText_links_to_ikiwiki_pages.mdwn
@@ -322,3 +322,12 @@ The page is rST-parsed once in 'scan' and once in 'htmlize' (the first to genera
>> However, I think that if the cache does not work for a big load, it should
>> not work at all; small loads are small so they don't matter. --ulrik
+-----
+
+Another possibility is using an empty URL for wikilinks (gitit uses this approach), for example:
+
+ `SomePage <>`_
+
+Since it uses an *empty* URL, I would like to call it *proposal 0* :-) --[weakish]
+
+[weakish]: http://weakish.pigro.net
diff --git a/doc/todo/Separate_OpenIDs_and_usernames.mdwn b/doc/todo/Separate_OpenIDs_and_usernames.mdwn
index 2cd52e8c4..a4940220a 100644
--- a/doc/todo/Separate_OpenIDs_and_usernames.mdwn
+++ b/doc/todo/Separate_OpenIDs_and_usernames.mdwn
@@ -6,6 +6,48 @@ I see this being implemented in one of two possible ways. The easiest seems like
A slightly more complex next step would be to request sreg from the provider and, if provided, automatically set the identity's username and email address from the provided persona. If username login to accounts with blank passwords is disabled, then you have the best of both worlds. Passwordless signin, human-friendly attribution, automatic setting of preferences.
+> Given that openids are a global user identifier, that can look as pretty
+> as the user cares to make it look via delegation, I am not a fan of
+> having a site-local identifier that layered on top of that. Perhaps
+> partly because every site that I have seen that does that has openid
+> implemented as a badly-done wart on the side of their regular login
+> system.
+>
+> The openid plugin now attempts to get an email and a username, and stores
+> them in the session database for later use (ie, when the user edits a
+> page).
+>
+> I am considering displaying the userid or fullname, if available,
+> instead of the munged openid url in recentchanges and comments.
+> It would be nice for those nasty [[google_openids|forum/google_openid_broken?]].
+> But, I first have to find a way to encode the name in the VCS commit log,
+> while still keeping the openid of the committer in there too.
+> Perhaps something like this (for git): --[[Joey]]
+>
+> Author: Joey Hess &lt;http://joey.kitenet.net/@web&gt;
+>
+> Only problem with the above is that the openid will still be displayed
+> by CIA. Other option is this, which solves that, but at the expense of
+> having to munge the username to fit inside the email address,
+> and generally seems backwards: --[[Joey]]
+>
+> Author: http://joey.kitenet.net/ &lt;Joey_Hess@web&gt;
+>
+> So, what needs to be done:
+>
+> * Change `rcs_commit` and `rcs_commit_staged` to take a session object,
+> instead of just a userid. (For back-compat, if the parameter is
+> not an object, it's a userid.) Bump ikiwiki plugin interface version.
+> (done)
+> * Modify all RCS plugins to include the session username somewhere
+> in the commit, and parse it back out in `rcs_recentchanges`.
+> (done for git only so far)
+> * Modify recentchanges plugin to display the username instead of the
+> `openiduser`.
+> (done)
+> * Modify comment plugin to put the session username in the comment
+> template instead of the `openiduser`. (done)
+
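+A rough sketch of the first encoding's round trip (illustrative only; the
+`$session` fields and the author-line format are assumptions, not the
+actual implementation):
+
+	# committing: username as the git author name, openid as the email
+	$ENV{GIT_AUTHOR_NAME}  = $session->param("name");
+	$ENV{GIT_AUTHOR_EMAIL} = $session->param("openid")."\@web";
+
+	# rcs_recentchanges: split the two back apart
+	if ($author =~ /^(.*) <(.*)\@web>$/) {
+		my ($username, $openid) = ($1, $2);
+	}
+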
Unfortunately I don't speak Perl, so hopefully someone thinks these suggestions are good enough to code up. I've hacked on openid code in Ruby before, so hopefully these changes aren't all that difficult to implement. Even if you don't get any data via sreg, you're no worse off than where you are now, so I don't think there'd need to be much in the way of error/sanity-checking of returned data. If it's null or not available then no big deal, typing in a username is no sweat.
-[[!tag wishlist]]
+[[!tag wishlist done]]
diff --git a/doc/todo/Set_templates_for_whole_sections_of_the_site.mdwn b/doc/todo/Set_templates_for_whole_sections_of_the_site.mdwn
index d0c09796f..b130f4ec5 100644
--- a/doc/todo/Set_templates_for_whole_sections_of_the_site.mdwn
+++ b/doc/todo/Set_templates_for_whole_sections_of_the_site.mdwn
@@ -29,3 +29,5 @@ I've written a new plugin, sectiontemplate, available in the `page_tmpl` branch
>>>
>>> I do still think combining this with pagetemplate would be good.
>>> --[[Joey]]
+
+>>>> This is exactly what I was looking for and it took me a while to find it. I very much support the idea to provide this as a regular plugin, be it merged with pagetemplate or stand-alone. Thank you for your work and code! --BenTo
diff --git a/doc/todo/Support_XML-RPC-based_blogging.mdwn b/doc/todo/Support_XML-RPC-based_blogging.mdwn
index f9685be73..6a0593b17 100644
--- a/doc/todo/Support_XML-RPC-based_blogging.mdwn
+++ b/doc/todo/Support_XML-RPC-based_blogging.mdwn
@@ -9,6 +9,9 @@ blog names would work. --[[JoshTriplett]]
>> I'd love to see support for this and would be happy to contribute towards a bounty (say US$100) :-). [PmWiki](http://www.pmwiki.org/) has a plugin which [implements this](http://www.pmwiki.org/wiki/Cookbook/XMLRPC) in a way which seems fairly sensible as an end user. --[[AdamShand]]
+>>> Bump. This would be a nice feature, and with the talent on this project I'm sure it could be done safely, too.
+
+
[[!tag soc]]
[[!tag wishlist]]
diff --git a/doc/todo/Tags_list_in_page_footer_uses_basename.mdwn b/doc/todo/Tags_list_in_page_footer_uses_basename.mdwn
index e2221bb84..603e82b20 100644
--- a/doc/todo/Tags_list_in_page_footer_uses_basename.mdwn
+++ b/doc/todo/Tags_list_in_page_footer_uses_basename.mdwn
@@ -6,3 +6,6 @@ I think the tag list should always contain the full path to the tag, with the ta
> What if tagbase is not used? I know this would clutter up the display of
> my tags on several wikis, including this one. --[[Joey]]
+
+>> Since Giuseppe's patches to fix [[bugs/tag_behavior_changes_introduced_by_typed_link_feature]],
+>> the tag list has what Josh requested, but only if a tagbase is used. [[done]] --[[smcv]]
diff --git a/doc/todo/__42__forward__42__ing_functionality_for_the_meta_plugin.mdwn b/doc/todo/__42__forward__42__ing_functionality_for_the_meta_plugin.mdwn
index 61b19d302..b3804d652 100644
--- a/doc/todo/__42__forward__42__ing_functionality_for_the_meta_plugin.mdwn
+++ b/doc/todo/__42__forward__42__ing_functionality_for_the_meta_plugin.mdwn
@@ -4,7 +4,7 @@ to the [[`meta`_plugin|plugins/meta]].
> [[done]], with some changes --[[Joey]]
Find the most recent version at
-<http://www.schwinge.homeip.net/~thomas/tmp/meta_forward.patch>.
+<http://schwinge.homeip.net/~thomas/tmp/meta_forward.patch>.
I can't use `scrub(...)`, as that will strip out the forwarding HTML command.
How to deal with that?
diff --git a/doc/todo/abbreviation.mdwn b/doc/todo/abbreviation.mdwn
index d24166710..f2880091c 100644
--- a/doc/todo/abbreviation.mdwn
+++ b/doc/todo/abbreviation.mdwn
@@ -2,4 +2,6 @@ We might want some kind of abbreviation and acronym plugin. --[[JoshTriplett]]
* Not sure if this is what you mean, but I'd love a way to make words which match existing page names automatically become links (e.g. if there is a page called "MySQL" then any time the word MySQL is mentioned it becomes a link to that page). -- [[AdamShand]]
+ * The python-markdown-extras package has support for [abbreviations](http://www.freewisdom.org/projects/python-markdown/Abbreviations), with the syntax that you just use the abbreviation in text (e.g. HTML) and then define the abbreviations at the end (like "footnote-style" links). For consistency, it might be good to use the same syntax, which apparently derives from [PHP-markdown-extra](http://michelf.com/projects/php-markdown/extra/#abbr).
+
[[wishlist]]
diff --git a/doc/todo/adjust_commit_message_for_rename__44___remove.mdwn b/doc/todo/adjust_commit_message_for_rename__44___remove.mdwn
new file mode 100644
index 000000000..3d0d1aff4
--- /dev/null
+++ b/doc/todo/adjust_commit_message_for_rename__44___remove.mdwn
@@ -0,0 +1,5 @@
+When you rename or remove pages using the relevant plugins, a commit message is generated automatically by the plugin.
+
+It would be nice to provide a text field in the remove/rename form, pre-populated with the automatic message, so that a user may customize or append to the message (modulo VCS support).
+
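+A rough sketch of the sort of thing this means, assuming the remove
+plugin's confirmation form is a CGI::FormBuilder object like ikiwiki's
+other forms (the field name here is hypothetical):
+
+	# let the user edit the commit message before confirming
+	$form->field(name => "commitmessage",
+		type => "text",
+		value => sprintf(gettext("remove %s"), $page));
+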
+-- [[Jon]]
diff --git a/doc/todo/alias_directive.mdwn b/doc/todo/alias_directive.mdwn
new file mode 100644
index 000000000..71a2efc76
--- /dev/null
+++ b/doc/todo/alias_directive.mdwn
@@ -0,0 +1,72 @@
+An alias directive could work like an inverse redirect, but in a more
+maintainable way. Currently, a page might have several redirects leading to it,
+without an easy way of enumerating them. Therefore, the following directive is
+suggested for addition (possibly by means of a plugin):
+
+> The `alias` and `aliastext` directives implicitly create
+> redirect pages to the page they are used on. If two or more pages claim a
+> non-existing page to be an alias, a disambiguation page will automatically
+> generated. If an existing page is claimed as an alias, it will be prefixed
+> with a note that its topic is also an alias for other pages.
+>
+> All aliases to a page are automatically listed below the backlink and tag
+> lists at the bottom of a page by default. This can be configured globally by
+> setting the `alias_list` configuration option to `false`, or set explicitly
+> per alias by specifying `list=true` or `list=false`.
+>
+> Similar to the `taglink` directive, `aliastext` produces the alias name as
+> well as registering it.
+>
+> ## Usage example
+>
+> `Greece.mdwn`:
+>
+> > Greece, also known as \[[!aliastext Hellas]] and officially the
+> > \[[!aliastext "Hellenic Republic"]], is a …
+> >
+> > <!-- there are so many people who misspell this, let's create a redirect -->
+> > \[[!alias Grece list=false]]
+>
+> This page by itself will redirect from the "Hellas", "Hellenic Republic" and
+> "Grece" pages as if each of them contained just:
+>
+> > \[[!meta redir="Greece"]]
+>
+> If, on the other hand, `Hellas Planitia` also claims `[[!alias Hellas]]`, the
+> Hellas page will look like this:
+>
+> > **Hellas** is an alias for the following pages:
+> >
+> > * \[[Greece]]
+> > * \[[Hellas Planitia]]
+
+The proposed plugin/directive could be extended, e.g. by also including
+old-style redirects in the alias list, but that might introduce unwanted
+coupling with the meta directive.
+
+-----------------
+
+On second thought, implementing this might have similarities with
+[[todo/auto-create tag pages according to a template]] -- the auto-created
+pages would, if the alias directive's approach were followed, not be physical
+files, but would only be created when someone edits them.
+
+If multiple plugins do such a trick, they would have to fight over who comes
+first. If, for example, we have a setup where not yet created tag pages are
+automatically generated as "\[[!inline pages="link(<TMPL_VAR TAG>)"
+archive="yes"]]" and aliases are enabled, and a non-tag page grabs a tag as an
+alias (as to redirect all taglinks of the tag to itself), there are two
+possibilities:
+
+* The autotag plugin comes first:
+ * autotag sees the missing tag and creates its "\[[!inline" stuff
+ * alias sees that there is already content and adds its prefix
+* The alias plugin comes first (this is the preferred way):
+ * alias sees the empty page, sees it is not contested by other alias
+ directives and creates its "\[[!meta" redirect
+ * autotag sees there is already content and doesn't do anything
+
+That issue could be handled with a "priority number" on the hook, with plugins
+with a lower number being called first.
+
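+A rough sketch of what that could look like (both the `priority`
+parameter and the hook type here are hypothetical):
+
+	# lower numbers would be called first, so alias beats autotag
+	hook(type => "scan", id => "alias", call => \&scan, priority => 10);
+	hook(type => "scan", id => "autotag", call => \&scan, priority => 50);
+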
+[[!tag wishlist]]
diff --git a/doc/todo/allow_displaying_number_of_comments.mdwn b/doc/todo/allow_displaying_number_of_comments.mdwn
new file mode 100644
index 000000000..02d55fc9b
--- /dev/null
+++ b/doc/todo/allow_displaying_number_of_comments.mdwn
@@ -0,0 +1,30 @@
+My `numcomments` Git branch adds a `NUMCOMMENTS` `TMPL_VAR`, which is
+useful to add to the `forumpage.tmpl` template to emulate (the nice
+bits of) a more usual webforum.
+
+Please review... and pull :)
+
+-- [[intrigeri]]
+
+> How is having this variable for showing a count of the comments
+> better (or more forum-ish) than the COMMENTSLINK variable which
+> includes a count and a link to the comments, and is already displayed
+> in inlinepage.tmpl?
+>
+> `num_comments` will never return undef.
+>
+> I see no need to add a second pagetemplate hook.
+> The existing one can be added to. Probably inside its `if ($shown)`
+> block.
+>
+> It may also be a good idea to either combine the calls to `num_comments`
+> used for this and for the commentslink,
+> or to memoize it. I'm thinking generally memoizing it may be a good idea
+> since the comments for a page will typically be counted twice when it's
+> inlined.
+> --[[Joey]]
+
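+A rough sketch of the memoization suggested above, using the core
+Memoize module (the exact signature of `num_comments` doesn't matter to
+it, as long as the arguments are plain strings):
+
+	use Memoize;
+	# a page's comments are typically counted twice when it is inlined
+	# (once here, once for COMMENTSLINK), so cache by argument list
+	memoize('num_comments');
+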
+[[patch]]
+
+>> Well, the COMMENTSLINK variable fits my needs. Sorry for
+>> the disturbance. [[done]] --[[intrigeri]]
diff --git a/doc/todo/allow_plugins_to_add_sorting_methods.mdwn b/doc/todo/allow_plugins_to_add_sorting_methods.mdwn
new file mode 100644
index 000000000..b523cd19f
--- /dev/null
+++ b/doc/todo/allow_plugins_to_add_sorting_methods.mdwn
@@ -0,0 +1,304 @@
+[[!tag patch]]
+
+The available [[ikiwiki/pagespec/sorting]] methods are currently hard-coded in
+IkiWiki.pm, making it difficult to add any extra sorting mechanisms. I've
+prepared a branch which adds 'sort' as a hook type and uses it to implement a
+new `meta_title` sort type.
+
+Someone could use this hook to make `\[[!inline sort=title]]` prefer the meta
+title over the page name, but for compatibility, I'm not going to (I do wonder
+whether it would be worth making sort=name an alias for the current sort=title,
+and changing the meaning of sort=title in 4.0, though).
+
+> What compatibility concerns, exactly, are there that prevent making that
+> change now? --[[Joey]]
+
+*[sort-hooks branch now withdrawn in favour of sort-package --s]*
+
+I briefly tried to turn *all* the current sort types into hook functions, and
+have some of them pre-registered, but decided that probably wasn't a good idea.
+That earlier version of the branch is also available for comparison:
+
+*[also withdrawn in favour of sort-package --s]*
+
+>> I wonder if IkiWiki would benefit from the concept of a "sortspec", like a [[ikiwiki/PageSpec]] but dedicated to sorting lists of pages rather than defining lists of pages? Rather than defining a sort-hook, define a SortSpec class, and enable people to add their own sort methods as functions defined inside that class, similarly to the way they can add their own pagespec definitions. --[[KathrynAndersen]]
+
+>>> [[!template id=gitbranch branch=smcv/ready/sort-package author="[[Simon_McVittie|smcv]]"]]
+>>> I'd be inclined to think that's overkill, but it wasn't very hard to
+>>> implement, and in a way is more elegant. I set it up so sort mechanisms
+>>> share the `IkiWiki::PageSpec` package, but with a `cmp_` prefix. Gitweb:
+>>> <http://git.pseudorandom.co.uk/smcv/ikiwiki.git?a=shortlog;h=refs/heads/sort-package>
+
+>>>> I agree it seems more elegant, so I have focused on it.
+>>>>
+>>>> I don't know about reusing `IkiWiki::PageSpec` for this.
+>>>> --[[Joey]]
+
+>>>>> Fair enough, `IkiWiki::SortSpec::cmp_foo` would be just
+>>>>> as easy, or `IkiWiki::Sorting::cmp_foo` if you don't like
+>>>>> introducing "sort spec" in the API. I took a cue from
+>>>>> [[ikiwiki/pagespec/sorting]] being a subpage of
+>>>>> [[ikiwiki/pagespec]], and decided that yes, sorting is
+>>>>> a bit like a pagespec :-) Which name would you prefer? --s
+
+>>>>>> `SortSpec` --[[Joey]]
+
+>>>>>>> [[Done]]. --s
+
+>>>> I would be inclined to drop the `check_` stuff. --[[Joey]]
+
+>>>>> It basically exists to support `title_natural`, to avoid
+>>>>> firing up the whole import mechanism on every `cmp`
+>>>>> (although I suppose that could just be a call to a
+>>>>> memoized helper function). It also lets sort specs that
+>>>>> *must* have a parameter, like
+>>>>> [[field|plugins/contrib/field/discussion]], fail early
+>>>>> (again, not so valuable).
+>>>>>
+>>>>>> AFAIK, `use foo` has very low overhead when the module is already
+>>>>>> loaded. There could be some evaluation overhead in `eval q{use foo}`,
+>>>>>> if so it would be worth addressing across the whole codebase.
+>>>>>> --[[Joey]]
+>>>>>>
+>>>>>>> check_cmp_foo now dropped. --s
+>>>>>
+>>>>> The former function could be achieved at a small
+>>>>> compatibility cost by putting `title_natural` in a new
+>>>>> `sortnatural` plugin (that fails to load if you don't
+>>>>> have `title_natural`), if you'd prefer - that's what would
+>>>>> have happened if `title_natural` was written after this
+>>>>> code had been merged, I suspect. Would you prefer this? --s
+
+>>>>>> Yes! (Assuming it does not make sense to support
+>>>>>> natural order sort of other keys than the title, at least..)
+>>>>>> --[[Joey]]
+
+>>>>>>> Done. I added some NEWS.Debian for it, too. --s
+
+>>>> Wouldn't it make sense to have `meta(title)` instead
+>>>> of `meta_title`? --[[Joey]]
+
+>>>>> Yes, you're right. I added parameters to support `field`,
+>>>>> and didn't think about making `meta` use them too.
+>>>>> However, `title` does need a special case to make it
+>>>>> default to the basename instead of the empty string.
+>>>>>
+>>>>> Another special case for `title` is to use `titlesort`
+>>>>> first (the name `titlesort` is derived from Ogg/FLAC
+>>>>> tags, which can have `titlesort` and `artistsort`).
+>>>>> I could easily extend that to other metas, though;
+>>>>> in fact, for e.g. book lists it would be nice for
+>>>>> `field(bookauthor)` to behave similarly, so you can
+>>>>> display "Douglas Adams" but sort by "Adams, Douglas".
+>>>>>
+>>>>> `meta_title` is also meant to be a prototype of how
+>>>>> `sort=title` could behave in 4.0 or something - sorting
+>>>>> by page name (which usually sorts in approximately the
+>>>>> same place as the meta-title, but occasionally not), while
+>>>>> displaying meta-titles, does look quite odd. --s
+
+>>>>>> Agreed. --[[Joey]]
+
+>>>>>>> I've implemented meta(title). meta(author) also has the
+>>>>>>> `sortas` special case; meta(updated) and meta(date)
+>>>>>>> should also work how you'd expect them to (but they're
+>>>>>>> earliest-first, unlike age). --s
+
+>>>> As I read the regexp in `cmpspec_translate`, the "command"
+>>>> is required to have params. They should be optional,
+>>>> to match the documentation and because most sort methods
+>>>> do not need parameters. --[[Joey]]
+
+>>>>> No, `$2` is either `\w+\([^\)]*\)` or `[^\s]+` (with the
+>>>>> latter causing an error later if it doesn't also match `\w+`).
+>>>>> This branch doesn't add any parameterized sort methods,
+>>>>> in fact, although I did provide one on
+>>>>> [[field's_discussion_page|plugins/contrib/report/discussion]]. --s
+
+>>>> I wonder if it would make sense to add some combining keywords, so
+>>>> a sortspec reads like `sort="age then ascending title"`
+>>>> In a way, this reduces the amount of syntax that needs to be learned.
+>>>> I like the "then" (and it could allow other operations than
+>>>> simple combination, if any others make sense). Not so sure about the
+>>>> "ascending", which could be "reverse" instead, but "descending age" and
+>>>> "ascending age" both seem useful to be able to explicitly specify.
+>>>> --[[Joey]]
+
+>>>>> Perhaps. I do like the simplicity of [[KathrynAndersen]]'s syntax
+>>>>> from [[plugins/contrib/report]] (which I copied verbatim, except for
+>>>>> turning sort-by-`field` into a parameterized spec).
+>>>>>
+>>>>> If we're getting into English-like (or at least SQL-like) queries,
+>>>>> it might make sense to change the signature of the hook function
+>>>>> so it's a function to return a key, e.g.
+>>>>> `sub key_age { return -$pagemtime{$_[0]} }`. Then we could sort like
+>>>>> this:
+>>>>>
+>>>>> field(artistsort) or field(artist) or constant(Various Artists) then meta(titlesort) or meta(title) or title
+>>>>>
+>>>>> with "or" binding more closely than "then". Does this seem valuable?
+>>>>> I think the implementation would be somewhat more difficult, and
+>>>>> it's probably getting too complicated to be worthwhile, though?
+>>>>> (The keys that actually benefit from this could just
+>>>>> have smarter cmp functions, I think.)
+>>>>>
+>>>>> If the hooks return keys rather than cmp results, then we could even
+>>>>> have "lowercase" as an adjective used like "ascending"... maybe.
+>>>>> However, there are two types of adjective here: "lowercase"
+>>>>> really applies to the keys, whereas "ascending" applies to the "cmp"
+>>>>> result. Again, I think this is getting too complex, and could just
+>>>>> be solved with smarter cmp functions.
+>>>>>
+>>>>>> I agree. (Also, I think returning keys may make it harder to write
+>>>>>> smarter cmp functions.) --[[Joey]]
+>>>>>
+>>>>> Unfortunately, `sort="ascending mtime"` actually sorts by *descending*
+>>>>> timestamp (but `sort=age` is fine, because `age` could be defined as
+>>>>> now minus `ctime`). `sort=freshness` isn't right either, because
+>>>>> "sort by freshness" seems as though it ought to mean freshest first,
+>>>>> but "sort by ascending freshness" means put the least fresh first. If
+>>>>> we have ascending and descending keywords which are optional, I don't
+>>>>> think we really want different sort types to have different default
+>>>>> directions - it seems clearer to have `ascending` always be a no-op,
+>>>>> and `descending` always negate.
+>>>>>
+>>>>>> I think you've convinced me that ascending/descending impose too
+>>>>>> much semantics on it, so "-" is better. --[[Joey]]
+
+>>>>>>> I've kept the semantics from `report` as-is, then:
+>>>>>>> e.g. `sort="age -title"`. --s
+
+>>>>> Perhaps we could borrow from `meta updated` and use `update_age`?
+>>>>> `updateage` would perhaps be a more normal IkiWiki style - but that
+>>>>> makes me think that updateage is a quantity analogous to tonnage or
+>>>>> voltage, with more or less recently updated pages being said to have
+>>>>> more or less updateage. I don't know whether that's good or bad :-)
+>>>>>
+>>>>> I'm sure there's a much better word, but I can't see it. Do you have
+>>>>> a better idea? --s
+
+[Regarding the `meta title=foo sort=bar` special case]
+
+> I feel it would be clearer to call that "sortas", since "sort=" is used
+> to specify a sort method in other directives. --[[Joey]]
+>> Done. --[[smcv]]
+
+## speed
+
+I notice the implementation does not use the magic `$a` and `$b` globals.
+That nasty perl optimisation is still worthwhile:
+
+	perl -e 'use warnings; use strict; use Benchmark; sub a { $a <=> $b } sub b ($$) { $_[0] <=> $_[1] }; my @list=reverse(1..9999); timethese(10000, {a => sub {my @f=sort a @list}, b => sub {my @f=sort b @list}, c => sub {my @f=sort { b($a,$b) } @list}})'
+ Benchmark: timing 10000 iterations of a, b, c...
+ a: 80 wallclock secs (76.74 usr + 0.05 sys = 76.79 CPU) @ 130.23/s (n=10000)
+ b: 112 wallclock secs (106.14 usr + 0.20 sys = 106.34 CPU) @ 94.04/s (n=10000)
+ c: 330 wallclock secs (320.25 usr + 0.17 sys = 320.42 CPU) @ 31.21/s (n=10000)
+
+Unfortunately, I think that c is closest to the new implementation.
+--[[Joey]]
+
+> Unfortunately, `$a` isn't always `$main::a` - it's `$Package::a` where
+> `Package` is the call site of the sort call. This was a showstopper when
+> `sort` was a hook implemented in many packages, but now that it's a
+> `SortSpec`, I may be able to fix this by putting a `sort` wrapper in the
+> `SortSpec` namespace, so it's like this:
+>
+> sub sort ($@)
+> {
+> my $cmp = shift;
+> return sort $cmp @_;
+> }
+>
+> which would mean that the comparison used `$IkiWiki::SortSpec::a`.
+> --s
+
+>> I've now done this. On a wiki with many [[plugins/contrib/album]]s
+>> (a full rebuild takes half an hour!), I tested a refresh after
+>> `touch tags/*.mdwn` (my tag pages contain inlines of the form
+>> `tagged(foo)` sorted by date, so they exercise sorting).
+>> I also tried removing sorting from `pagespec_match_list`
+>> altogether, as an upper bound for how fast we can possibly make it.
+>>
+>> * `master` at branch point: 63.72user 0.29system
+>> * `master` at branch point: 63.91user 0.37system
+>> * my branch, with `@_`: 65.28user 0.29system
+>> * my branch, with `@_`: 65.21user 0.28system
+>> * my branch, with `$a`: 64.09user 0.28system
+>> * my branch, with `$a`: 63.83user 0.36system
+>> * not sorted at all: 58.99user 0.29system
+>> * not sorted at all: 58.92user 0.29system
+>>
+>> --s
+
+> I do notice that `pagespec_match_list` performs the sort before the
+> filter by pagespec. Is this a deliberate design choice, or
+> coincidence? I can see that when `limit` is used, this could be
+> used to only run the pagespec match function until `limit` pages
+> have been selected, but the cost is that every page in the wiki
+> is sorted. Or, it might be useful to do the filtering first, then
+> sort the sub-list thus produced, then finally apply the limit? --s
+
+>> Yes, it was deliberate: pagespec matching can be expensive enough that
+>> sorting a lot of pages first seems likely to be less work. (I don't
+>> remember what benchmarking was done though.) --[[Joey]]
+
+>>> We discussed this on IRC and Joey pointed out that this also affects
+>>> dependency calculation, so I'm not going to get into this now... --s
+
+Joey pointed out on IRC that the `titlesort` feature duplicates all the
+meta titles. I did that in order to sort by the unescaped version, but
+I've now changed the branch to only store that if it makes a difference.
+--s
+
+## Documentation from sort-package branch
+
+### advanced sort orders (conditionally added to [[ikiwiki/pagespec/sorting]])
+
+* `title_natural` - Orders by title, but numbers in the title are treated
+  as such ("1 2 9 10 20" instead of "1 10 2 20 9").
+* `meta(title)` - Order according to the `\[[!meta title="foo" sortas="bar"]]`
+ or `\[[!meta title="foo"]]` [[ikiwiki/directive]], or the page name if no
+ full title was set. `meta(author)`, `meta(date)`, `meta(updated)`, etc.
+ also work.
+
+### Multiple sort orders (added to [[ikiwiki/pagespec/sorting]])
+
+In addition, you can combine several sort orders and/or reverse the order of
+sorting, with a string like `age -title` (which would sort by age, then by
+title in reverse order if two pages have the same age).
+
+### meta sortas parameter (added to [[ikiwiki/directive/meta]])
+
+[in title]
+
+An optional `sortas` parameter will be used preferentially when
+[[ikiwiki/pagespec/sorting]] by `meta(title)`:
+
+	\[[!meta title="The Beatles" sortas="Beatles, The"]]
+
+	\[[!meta title="David Bowie" sortas="Bowie, David"]]
+
+[in author]
+
+An optional `sortas` parameter will be used preferentially when
+[[ikiwiki/pagespec/sorting]] by `meta(author)`:
+
+ \[[!meta author="Joey Hess" sortas="Hess, Joey"]]
+
+### Sorting plugins (added to [[plugins/write]])
+
+Similarly, it's possible to write plugins that add new functions as
+[[ikiwiki/pagespec/sorting]] methods. To achieve this, add a function to
+the IkiWiki::SortSpec package named `cmp_foo`, which will be used when sorting
+by `foo` or `foo(...)` is requested.
+
+The names of pages to be compared are in the global variables `$a` and `$b`
+in the IkiWiki::SortSpec package. The function should return the same thing
+as Perl's `cmp` and `<=>` operators: negative if `$a` is less than `$b`,
+positive if `$a` is greater, or zero if they are considered equal. It may
+also raise an error using `error`, for instance if it needs a parameter but
+one isn't provided.
+
+The function will also be passed one or more parameters. The first is
+`undef` if invoked as `foo`, or the parameter `"bar"` if invoked as `foo(bar)`;
+it may also be passed additional, named parameters.
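+
+For instance, a rough sketch of a plugin adding an `mtime` sort order
+(just an illustration, not part of the branch):
+
+	package IkiWiki::SortSpec;
+
+	# usable as sort="mtime" (oldest first) or sort="-mtime"
+	sub cmp_mtime {
+		$IkiWiki::pagemtime{$a} <=> $IkiWiki::pagemtime{$b};
+	}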
diff --git a/doc/todo/allow_site-wide_meta_definitions.mdwn b/doc/todo/allow_site-wide_meta_definitions.mdwn
index 70ccc2b68..82670250e 100644
--- a/doc/todo/allow_site-wide_meta_definitions.mdwn
+++ b/doc/todo/allow_site-wide_meta_definitions.mdwn
@@ -5,8 +5,159 @@ I'd like to define [[plugins/meta]] values to apply across all pages
site-wide unless the pages define their own: default values for meta
definitions essentially.
-Here's a patch to achieve this (also in the "defaultmeta" branch of
-my github ikiwiki fork):
+ <snip old patch, see below for latest>
+
+-- [[Jon]]
+
+> This doesn't support multiple-argument meta directives like
+> `link=x rel=y`, or meta directives with special side-effects like
+> `updated`.
+>
+> The first could be solved (if you care) by a syntax like this:
+>
+> meta_defaults => [
+> { copyright => "© me" },
+> { link => "about:blank", rel => "silly", },
+> ]
+>
+> The second could perhaps be solved by invoking `meta::preprocess` from within
+> `scan` (which might be a simplification anyway), although this is complicated
+> by the fact that some (but not all!) meta headers are idempotent.
+>
+> --[[smcv]]
+
+>> Thanks for your comment. I've revised the patch to use the config syntax
+>> you suggest. I need to perform some more testing to make sure I've
+>> addressed the issues you highlight.
+>>
+>> I had to patch part of IkiWiki core, the merge routine in Setup, because
+>> the use of `possibly_foolish_untaint` was causing the hashrefs at the deep
+>> end of the data structure to be converted into strings. The specific change
+>> I've made may not be acceptable, though -- I'd appreciate someone providing
+>> some feedback on that hunk!
+
+>>> Well, re that hunk, taint checking is currently disabled, but
+>>> if the perl bug that disallows it is fixed and it is turned back on,
+>>> the hash values will remain tainted, which will probably lead to
+>>> problems.
+>>>
+>>> I'm also leery of using such a complex data structure in config.
+>>> The websetup plugin would be hard pressed to provide a UI for such a
+>>> data structure. (It lacks even UI for a single hash ref yet, let alone
+>>> a list.)
+>>>
+>>> Also, it seems sorta wrong to have two so very different syntaxes to
+>>> represent the same meta data. A user without a lot of experience will
+>>> be hard pressed to map from a directive to this in the setup file.
+>>>
+>>> All of which leads me to think the setup file could just contain
+>>> a text that could hold meta directives. Which generalizes really to
+>>> a text that contains any directives, and is, perhaps appended to the
+>>> top of every page. Which nearly generalizes to the sidebar plugin,
+>>> or perhaps something more general than that...
+>>>
+>>> However, excessive generalization is the root of all evil, so
+>>> I'm not necessarily saying that's a good idea. Indeed, my memory
+>>> concerns below invalidate this idea pretty well. --[[Joey]]
+
+ diff --git a/IkiWiki/Plugin/meta.pm b/IkiWiki/Plugin/meta.pm
+ index 6fe9cda..2f8c098 100644
+ --- a/IkiWiki/Plugin/meta.pm
+ +++ b/IkiWiki/Plugin/meta.pm
+ @@ -13,6 +13,8 @@ sub import {
+ hook(type => "needsbuild", id => "meta", call => \&needsbuild);
+ hook(type => "preprocess", id => "meta", call => \&preprocess, scan => 1);
+ hook(type => "pagetemplate", id => "meta", call => \&pagetemplate);
+ + hook(type => "scan", id => "meta", call => \&scan)
+ + if $config{"meta_defaults"};
+ }
+
+ sub getsetup () {
+ @@ -305,6 +307,15 @@ sub match {
+ }
+ }
+
+ +sub scan() {
+ + my %params = @_;
+ + my $page = $params{page};
+ + foreach my $default (@{$config{"meta_defaults"}}) {
+ + preprocess(%$default, page => $page,
+ + destpage => $page, preview => 0);
+ + }
+ +}
+ +
+ package IkiWiki::PageSpec;
+
+ sub match_title ($$;@) {
+ diff --git a/IkiWiki/Setup.pm b/IkiWiki/Setup.pm
+ index 8a25ecc..e4d50c9 100644
+ --- a/IkiWiki/Setup.pm
+ +++ b/IkiWiki/Setup.pm
+ @@ -51,7 +51,13 @@ sub merge ($) {
+ $config{$c}=$setup{$c};
+ }
+ else {
+ - $config{$c}=[map { IkiWiki::possibly_foolish_untaint($_) } @{$setup{$c}}]
+ + $config{$c}=[map {
+ + if(ref $_ eq 'HASH') {
+ + $_
+ + } else {
+ + IkiWiki::possibly_foolish_untaint($_)
+ + }
+ + } @{$setup{$c}}];
+ }
+ }
+ elsif (ref $setup{$c} eq 'HASH') {
+ diff --git a/doc/ikiwiki/directive/meta.mdwn b/doc/ikiwiki/directive/meta.mdwn
+ index 000f461..8d34ee4 100644
+ --- a/doc/ikiwiki/directive/meta.mdwn
+ +++ b/doc/ikiwiki/directive/meta.mdwn
+ @@ -12,6 +12,16 @@ also specifies some additional sub-parameters.
+ The field values are treated as HTML entity-escaped text, so you can include
+ a quote in the text by writing `&quot;` and so on.
+
+ +You can also define site-wide defaults for meta values by including them
+ +in your setup file. The key used is `meta_defaults` and the value is a list
+ +of hashes, one per meta directive. e.g.:
+ +
+ + meta_defaults = [
+ + { copyright => "Copyright 2007 by Joey Hess" },
+ + { license => "GPL v2+" },
+ + { link => "somepage", rel => "site entrypoint", },
+ + ],
+ +
+ Supported fields:
+
+ * title
+
+-- [[Jon]]
+
+>> Ok, I've had a bit of a think about this. There are currently 15 supported
+>> meta fields. Of these: title, licence, copyright, author, authorurl,
+>> and robots might make sense to define globally and override on a per-page
+>> basis.
+>>
+>> Less so, description (due to its impact on map); openid (why would
+>> someone want more than one URI to act as an openid endpoint to the same
+>> place?); updated. I can almost see why someone might want to set a global
+>> updated value. Almost.
+>>
+>> Not useful are permalink, date, stylesheet (you already have a global
+>> stylesheet), link, redir, and guid.
+>>
+>> In other words, the limitations of my first patch that [[smcv]] outlined
+>> are only relevant to defined fields that you wouldn't want to specify a
+>> global default for anyway.
+>>
+>>> I generally agree with this. It is *possible* that meta would have a new
+>>> field added, that takes parameters and make sense to use globally.
+>>> --[[Joey]]
+>>
+>> Due to this, and the added complexity of the second patch (having to adjust
+>> `IkiWiki/Setup.pm`), I think the first patch makes more sense. I've thus
+>> reverted to it here.
+>>
+>> Is this merge-worthy?
diff --git a/IkiWiki/Plugin/meta.pm b/IkiWiki/Plugin/meta.pm
index b229592..3132257 100644
@@ -56,19 +207,40 @@ my github ikiwiki fork):
-- [[Jon]]
-> This doesn't support multiple-argument meta directives like
-> `link=x rel=y`, or meta directives with special side-effects like
-> `updated`.
->
-> The first could be solved (if you care) by a syntax like this:
->
-> meta_defaults => [
-> { copyright => "© me" },
-> { link => "about:blank", rel => "silly", },
-> ]
->
-> The second could perhaps be solved by invoking `meta::preprocess` from within
-> `scan` (which might be a simplification anyway), although this is complicated
-> by the fact that some (but not all!) meta headers are idempotent.
->
-> --[[smcv]]
+>>> Merry Christmas/festive season/happy new year folks. I've been away from
+>>> ikiwiki for the break, and now I've returned to watching recentchanges.
+>>> Hopefully I'll be back in the mix soon, too. In the meantime, Joey, have
+>>> you had a chance to look at this yet? -- [[Jon]]
+
+>>>> Ping :) Hi. [[Joey]], would you consider this patch for the next
+>>>> ikiwiki release? -- [[Jon]]
+
+>>> For this to work with websetup and --dumpsetup, it needs to define the
+>>> `meta_*` settings in the getsetup function.
+>>>>
+>>>> I think this will be problematic with the current implementation of this
+>>>> patch. The datatype here is an array of hash references, with each hash
+>>>> having a variable (and arbitrary) number of key/value pairs. I can't
+>>>> think of an intuitive way of implementing a way of editing such a
+>>>> datatype in the web interface, let alone registering the option in
+>>>> getsetup.
+>>>>
+>>>> Perhaps a limited set of defined meta values could be exposed via
+>>>> websetup (the obvious ones: author, copyright, license, etc.) -- [[Jon]]
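+>>>>
+>>>> A rough sketch of what the `getsetup` entry might look like
+>>>> (websetup would still lack a UI for actually editing the value):
+>>>>
+>>>>	meta_defaults => {
+>>>>		type => "string", # no websetup type fits a list of hashes
+>>>>		example => [ { copyright => "© me" } ],
+>>>>		description => "site-wide default meta values",
+>>>>		safe => 0,
+>>>>		rebuild => 1,
+>>>>	},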
+>>>
+>>> I also have some concerns about both these patches, since both throw
+>>> a lot of redundant data at meta, which then stores it in a very redundant
+>>> way. Specifically, meta populates a per-page `%metaheaders` hash
+>>> as well as storing per-page metadata in `%pagestate`. So, if you have
+>>> a wiki with 10 thousand pages, and you add a 1k site-wide license text,
+>>> that will bloat the memory usage of ikiwiki by in excess of 2
+>>> megabytes. It will also cause ikiwiki to write a similar amount more data
+>>> to its state file which has to be loaded back in each
+>>> run.
+>>>
+>>> Seems that this could be managed much more efficiently by having
+>>> meta special-case the site-wide settings, not store them in these
+>>> per-page data structures, and just make them be used if no per-page
+>>> metadata of the given type is present. --[[Joey]]
+>>>>
+>>>> that should be easy enough to do. I will work on a patch. -- [[Jon]]
diff --git a/doc/todo/anon_push_of_comments.mdwn b/doc/todo/anon_push_of_comments.mdwn
new file mode 100644
index 000000000..b472ea13f
--- /dev/null
+++ b/doc/todo/anon_push_of_comments.mdwn
@@ -0,0 +1,14 @@
+It should be possible to use anonymous git push to post comments
+(created, say, by a ikiwiki-comment program). Currently, that is not
+allowed, because users cannot edit, or create internal page files.
+But, comments in allowed locations are an exception to that rule, and
+that exception should be communicated somehow to `IkiWiki::Receive`.
+--[[Joey]]
+
+> Complications include:
+>
+> * Hard to see a way to prevent users from committing a comment that
+> claims to be written by someone else.
+> * `checkcontent` hooks need to be run, but can't accept a comment
+> for later moderation, since it's coming in as part of a commit.
+> Best they could do is reject the commit.
diff --git a/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn b/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn
index f1d33114f..7b65eba2e 100644
--- a/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn
+++ b/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn
@@ -4,7 +4,7 @@ Tags are mainly specific to the object to which they’re stuck. However, I ofte
Also see: <http://madduck.net/blog/2008.01.06:new-blog/> and <http://users.itk.ppke.hu/~cstamas/code/ikiwiki/autocreatetagpage/>
-[[!tag wishlist plugins/tag patch]]
+[[!tag wishlist plugins/tag patch patch/core]]
I would love to see this as well. -- dato
@@ -15,88 +15,9 @@ A new setting is used to enable or disable auto-create tag pages, `tag_autocreat
The new tag file is created during the preprocess phase.
The new tag file is then compiled during the change phase.
-_tag.pm from version 3.01_
-
-
- --- tag.pm 2009-02-06 10:26:03.000000000 -0700
- +++ tag_new.pm 2009-02-06 12:17:19.000000000 -0700
- @@ -14,6 +14,7 @@
- hook(type => "preprocess", id => "tag", call => \&preprocess_tag, scan => 1);
- hook(type => "preprocess", id => "taglink", call => \&preprocess_taglink, scan => 1);
- hook(type => "pagetemplate", id => "tag", call => \&pagetemplate);
- + hook(type => "change", id => "tag", call => \&change);
- }
-
- sub getopt () {
- @@ -36,6 +37,36 @@
- safe => 1,
- rebuild => 1,
- },
- + tag_autocreate => {
- + type => "boolean",
- + example => 0,
- + description => "Auto-create the new tag pages, uses autotagpage.tmpl ",
- + safe => 1,
- + rebulid => 1,
- + },
- +}
- +
- +my $autocreated_page = 0;
- +
- +sub gen_tag_page($) {
- + my $tag=shift;
- +
- + my $tag_file=$tag.'.'.$config{default_pageext};
- + return if (-f $config{srcdir}.$tag_file);
- +
- + my $template=template("autotagpage.tmpl");
- + $template->param(tag => $tag);
- + writefile($tag_file, $config{srcdir}, $template->output);
- + $autocreated_page = 1;
- +
- + if ($config{rcs}) {
- + IkiWiki::disable_commit_hook();
- + IkiWiki::rcs_add($tag_file);
- + IkiWiki::rcs_commit_staged(
- + gettext("Automatic tag page generation"),
- + undef, undef);
- + IkiWiki::enable_commit_hook();
- + }
- }
-
- sub tagpage ($) {
- @@ -47,6 +78,10 @@
- $tag=~y#/#/#s; # squash dups
- }
-
- + if (defined $config{tag_autocreate} && $config{tag_autocreate} ) {
- + gen_tag_page($tag);
- + }
- +
- return $tag;
- }
-
- @@ -125,4 +160,18 @@
- }
- }
-
- +sub change(@) {
- + return unless($autocreated_page);
- + $autocreated_page = 0;
- +
- + # This refresh/saveindex is to complie the autocreated tag pages
- + IkiWiki::refresh();
- + IkiWiki::saveindex();
- +
- + # This refresh/saveindex is to fix the Tags link
- + # With out this additional refresh/saveindex the tag link displays ?tag
- + IkiWiki::refresh();
- + IkiWiki::saveindex();
- +}
- +
+*see git history of this page if you want the patch --[[smcv]]*
-
-This uses a [[template|wikitemplates]] called `autotagpage.tmpl`, here is my template file:
+This uses a [[template|templates]] called `autotagpage.tmpl`, here is my template file:
\[[!inline pages="link(<TMPL_VAR TAG>)" archive="yes"]]
@@ -123,3 +44,227 @@ On the second extra pass, it doesn't notice that it has to update the "?"-link.
}
is not satisfied for the newly created tag page. I shall put debug msgs into Render.pm to find out better how it works. --Ivan Z.
+
+---
+
+I've made another attempt at fixing this.
+
+The current progress can be found at my [git repository][gitweb] on branch
+`autotag`:
+
+ git://git.liegesta.at/git/ikiwiki
+
+[gitweb]: http://git.liegesta.at/?p=ikiwiki.git;a=shortlog;h=refs/heads/autotag (gitweb for branch autotag)
+
+It's not entirely finished yet, but already quite usable. Testing and comments
+on code quality, implementation details, as well as other patches would be
+appreciated.
+
+Here's what it does right now:
+
+* enabled by setting `tag_autocreate=1` in the configuration.
+* Tag pages will be created in `tagbase` from the template `autotag.tmpl`.
+* Will correctly render all links, and dependencies. Well, AFAIK.
+* When a tag page is deleted it will automatically be recreated from the
+template. (I consider this a feature, not a bug)
+* Requires a rebuild on first use.
+* Adds a function `add_autofile()` to the plugin API, to do all this (see
+the sketch below).
+
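+A rough sketch of how a plugin calls it (the tag name is illustrative,
+and the exact callback signature is assumed):
+
+	my $tagfile = "tags/foo.mdwn";
+	add_autofile($tagfile, "tag", sub {
+		# only called if something actually wants the file to exist
+		my $template = template("autotag.tmpl");
+		$template->param(tag => "foo");
+		writefile($tagfile, $config{srcdir}, $template->output);
+	});
+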
+Todo/Bugs:
+
+* Will still create a page even if there's a page other than `$tag` under
+`tagbase` satisfying the tag link. (details? --[[Joey]])
+* Call from `IkiWiki.pm` to `Render.pm`, which adds a module dependency in the
+wrong direction. (fixed --[[Joey]] )
+* Add files to RCS.
+* Unit tests.
+* Proper documentation. (fixed (mostly) --[[Joey]])
+
+--[[David_Riebenbauer]]
+
+> Starting review of this. Some of your commits are to very delicate,
+> optimised, and security-sensitive ground, so I have to look at them very
+> carefully. --[[Joey]]
+
+>> First off, sorry that it took me so damn long to answer. I didn't lose
+>> interest but it took a while for me to find the time and motivation
+>> to address you suggestions. --[[David_Riebenbauer]]
+
+> * In the refactoring in [f3abeac919c4736429bd3362af6edf51ede8e7fe][],
+> you introduced at least 2 bugs, one a possible security hole.
+> Now one part of the code tests `if ($file)` and the other
+> caller tests `if ($f)`. These two tests both tested `if (! defined $f)`
+> before. Notice that the variable needs to be the untainted variable
+> for both. Also notice that `if ($f)` fails if `$f` contains `0`,
+> which is a very common perl gotcha.
+> * Your refactored code changes `-l $_ || -d _` to `-l $file || -d $file`.
+> The latter makes one more stat system call; note the use of a
+> bare `_` in the first to make perl reuse the stat buffer.
+> * (As a matter of style, could you put a space after the commas in your
+> perl?)
+
+>> The first two points should be addressed in
+>> [da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0][]. And sure, I can add the
+>> spaces. --[[David_Riebenbauer]]
+
+> I'd like to cherry-pick the above commit, once it's in shape, before
+> looking at the rest in detail. So just a few other things that stood out.
+>
+> * Commit [4af4d26582f0c2b915d7102fb4a604b176385748][] seems unnecessary.
+> `srcfile($file, 1)` already is documented to return undef if the
+> file does not exist. (But without the second parameter, it throws
+> an error.)
+
+>> You're right. I must have been somewhat confused by some other problem I
+>> introduced then. Reverted. --[[David_Riebenbauer]]
+
+> * Commit [f58f3e1bec41ccf9316f37b014ce0b373c8e49e1][] adds a line
+> that is indented by a space, not a tab.
+
+>> Sorry, That one was reverted anyway. --[[David_Riebenbauer]]
+
+> * Commit [f58f3e1bec41ccf9316f37b014ce0b373c8e49e1][] says that auto-added
+> files will be recreated if the user deletes them. That seems bad.
+> `autoindex` goes to some trouble to not recreate deleted files.
+
+>> I reverted the commit and addressed the issue in
+>> [a358d74bef51dae31332ff27e897fe04834571e6][] and
+>> [981400177d68a279f485727be3f013e68f0bf691][].
+ --[[David_Riebenbauer]]
+
+>>> This doesn't seem to have all the refinements that autoindex has:
+>>>
+>>> * `autoindex` attaches the record of deletions to the `index` page, which
+>>> is (nearly) guaranteed to exist; this one attaches the record of
+>>> deletions to the deleted page's page state. Won't that tend to result
+>>> in losing the record along with the deleted page?
+
+>>>> This is probably one of the harder things to do, 'cause there are (most of the
+>>>> time) several pages that are responsible for the creation of a single tag page.
+>>>> Of course I could attach the info to all of them.
+
+>>>> With current behaviour I think the information in `%pagestate` is kept around
+>>>> regardless of whether the corresponding page exists or not.
+>>>> --[[David_Riebenbauer]]
+
+>>>>> Sorry, I'll try to be clearer: `autoindex` hard-codes that the index page
+>>>>> of the entire wiki is the one responsible for storing the page state. That
+>>>>> page isn't responsible for the creation of the tag page, it's just an
+>>>>> arbitrary page that's (more or less) guaranteed to exist. --[[smcv]]
+
+>>>>> I don't like that [[plugins/autoindex]] has to do that,
+>>>>> but `%pagestate` values are only stored for pages that exist,
+>>>>> so it was necessary. (Another way to look at this is that
+>>>>> `%pagestate` is not the ideal data structure.) --[[Joey]]
+
+>>>>>> Aha! Having looked at [[plugins/write]] again, it turns out that what this
+>>>>>> feature should really use is `%wikistate`, I think? :-) --[[smcv]]
+
+>>>>>>> Ah, indeed, that came after I wrote autoindex. I've fixed autoindex to
+>>>>>>> use it. --[[Joey]]
+
+>>>>> Ok, now I know what you mean. --[[David_Riebenbauer]]
+
+>>> * `autoindex` forgets that a page was deleted when that page is
+>>> re-created
+
+>>>> Yes, I forgot about that and that is a bug. I'll fix that.
+>>>> --[[David_Riebenbauer]]
+
+>>>>> In my branch, it keeps a list of autofiles that were created,
+>>>>> not deleted. And I think that turns out to be necessary, really.
+>>>>> However, I see no way to clean out that list on deletion and
+>>>>> manual recreation -- it still needs to remember it was once an autofile,
+>>>>> in order to avoid recreating it if it's deleted yet again. --[[Joey]]
+
+>>>>>> Are these really the semantics we want? It seems strange to me
+>>>>>> that this:
+>>>>>>
+>>>>>> * tag a page as foo
+>>>>>> * tags/foo automatically appears
+>>>>>> * delete tags/foo
+>>>>>> * create tags/foo manually
+>>>>>> * delete tags/foo again
+>>>>>> * tags/foo isn't automatically created
+>>>>>>
+>>>>>> isn't the same as this:
+>>>>>>
+>>>>>> * create tags/foo
+>>>>>> * delete tags/foo
+>>>>>> * tag a page as foo
+>>>>>> * tags/foo automatically appears
+>>>>>>
+>>>>>> or even this:
+>>>>>>
+>>>>>> * create tags/foo
+>>>>>> * tag a page as foo
+>>>>>> * delete tags/foo
+>>>>>> * tags/foo automatically appears (?)
+>>>>>>
+>>>>>> --[[smcv]]
+
+>>>>>>> I agree that the last of these is not desired. It could be avoided
+>>>>>>> by extending the list of autofiles to include those that were not
+>>>>>>> created due to the file/page already existing.
+>>>>>>>
+>>>>>>> Hmm, that would fix the previous scenario too. --[[Joey]]
+
+>>> * `autoindex` forgets that a page was deleted when it's no longer needed
+>>> anyway (this may be harder for `autotag`?)
+
+>>>> I don't think so. AFAIK ikiwiki can detect whether there are taglinks to a page
+>>>> anyway, so it should be quite easy. I'll try to implement that too.
+>>>> --[[David_Riebenbauer]]
+
+>>> It'd probably be an interesting test of the core change to port
+>>> `autoindex` to use it? (Adding the file to the RCS would be
+>>> necessary to get parity with `autoindex`.) --[[smcv]]
+
+>>>> Good suggestion. Adding the files to RCS is on my todo list anyway.
+>>>> --[[David_Riebenbauer]]
+
+>>>>> I think it may be better to allow the `add_autofile` caller
+>>>>> to specify if it is added to RCS. In my branch, it can do
+>>>>> so by just making the callback it registers call `rcs_add`;
+>>>>> and I have tag do this. Other plugins might want autofiles
+>>>>> that do not get checked in, conceivably.
+>>>>> --[[Joey]]
+
+> Regarding the call from `IkiWiki.pm` to `Render.pm`, wouldn't this be
+> quite easy to solve by moving `verify_src_file` to IkiWiki.pm? --[[smcv]]
+
+>> True. I'll do that. --[[David_Riebenbauer]]
+>> Fixed in my branch --[[Joey]]
+
+[[!template id=gitbranch branch=origin/autotag author="[[Joey]]"]]
+I've pushed an autotag branch of my own, which refactors
+things a bit and fixes bugs around deletion/recreation.
+I've tested it fairly thoroughly. --[[Joey]]
+
+[f3abeac919c4736429bd3362af6edf51ede8e7fe]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=f3abeac919c4736429bd3362af6edf51ede8e7fe (commitdiff for f3abeac919c4736429bd3362af6edf51ede8e7fe)
+[4af4d26582f0c2b915d7102fb4a604b176385748]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=4af4d26582f0c2b915d7102fb4a604b176385748 (commitdiff for 4af4d26582f0c2b915d7102fb4a604b176385748)
+[f58f3e1bec41ccf9316f37b014ce0b373c8e49e1]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=f58f3e1bec41ccf9316f37b014ce0b373c8e49e1 (commitdiff for f58f3e1bec41ccf9316f37b014ce0b373c8e49e1)
+[da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0 (commitdiff for da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0)
+[a358d74bef51dae31332ff27e897fe04834571e6]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=a358d74bef51dae31332ff27e897fe04834571e6 (commitdiff for a358d74bef51dae31332ff27e897fe04834571e6)
+[981400177d68a279f485727be3f013e68f0bf691]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=981400177d68a279f485727be3f013e68f0bf691 (commitdiff for 981400177d68a279f485727be3f013e68f0bf691)
+
+-------------------
+
+Even if this is already marked as done, I'd like to suggest an alternative
+solution:
+
+Instead of creating a file that gets checked in into the RCS, the source files
+could be left out and the output files be written as long as there is no
+physical source file (think of a virtual underlay). Something similar would be
+required to implement [[todo/alias directive]], which couldn't be easily done
+by writing to the RCS as the page's contents can change depending on which
+other pages claim it as an alias. --[[chrysn]]
+
+I agree with [[chrysn]]. In fact, is there any good reason that the core tag
+plugin doesn't do this? The current usability is horrible, to the point that
+I have gone 2.5 years with Ikiwiki and haven't yet started using tags.
+--[Eric](http://wiki.pdxhub.org/people/eric)
+
+> See [[todo/transient_pages]] for progress on this. --[[smcv]]
+
+[[!tag done]]
diff --git a/doc/todo/auto_getctime_on_fresh_build.mdwn b/doc/todo/auto_getctime_on_fresh_build.mdwn
new file mode 100644
index 000000000..760c56fa1
--- /dev/null
+++ b/doc/todo/auto_getctime_on_fresh_build.mdwn
@@ -0,0 +1,13 @@
+[[!tag wishlist]]
+
+It might be a good idea to enable --gettime when `.ikiwiki` does not
+exist. This way a new checkout of a `srcdir` would automatically get
+ctimes right. (Running --gettime whenever a rebuild is done would be too
+slow.) --[[Joey]]
+
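+A rough sketch of the check, assuming `$config{wikistatedir}` points at
+the `.ikiwiki` directory and `gettime` is the config switch behind
+`--gettime`:
+
+	# fresh checkout with no state dir: get ctimes from the VCS once
+	$config{gettime} = 1 if ! -e $config{wikistatedir};
+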
+Could this be too annoying in some cases, eg, checking out a large wiki
+that needs to get set up right away? --[[Joey]]
+
+> Not for git with the new, optimised --getctime. For other VCS.. well,
+> pity they're not as fast as git ;), but it is a one-time expense...
+> [[done]] --[[Joey]]
diff --git a/doc/todo/auto_publish_expire.mdwn b/doc/todo/auto_publish_expire.mdwn
new file mode 100644
index 000000000..7a5a17517
--- /dev/null
+++ b/doc/todo/auto_publish_expire.mdwn
@@ -0,0 +1,33 @@
+It could be nice to mark some page such that:
+
+* the page is automatically published on some date (i.e. built, linked, syndicated, inlined/mapped, etc.)
+* the page is automatically unpublished at some other date (i.e. removed)
+
+I know that ikiwiki is a wiki compiler, so something has to refresh the wiki periodically to enforce the rules (a cronjob for instance). It seems to me that the calendar plugin relies on something similar.
+
+The dates for publishing and expiring could be set by using some new directives; an alternative could be to expand the [[plugin/meta]] plugin with [<span/>[!meta date="auto publish date"]] and [<span/>[!meta expires="auto expire date"]].
+
+--[[JeanPrivat]]
+
+> This is a duplicate, and expansion, of
+> [[todo/tagging_with_a_publication_date]].
+> There, I suggest using a branch to develop
+> prepublication versions of a site, and merge from it
+> when the thing is published.
+>
+> Another approach I've seen used is to keep such pages in a pending/
+> directory, and move them via cron job when their publication time comes.
+> But that requires some familiarity with, and access to, cron.
+>
+> On [[todo/tagging_with_a_publication_date]], I also suggested using meta
+> date to set a page's date into the future,
+> and adding a pagespec that matches only pages with dates in the past,
+> which would allow filtering out the unpublished ones.
+> Sounds like you are thinking along these lines, but possibly using
+> something other than the page's creation or modification date to do it.
+>
+> I do think the general problem with that approach is that you have to be
+> careful to prevent the unpublished pages from leaking out in any
+> inlines, maps, etc. --[[Joey]]
+
+[[!tag wishlist]]
diff --git a/doc/todo/auto_rebuild_on_template_change.mdwn b/doc/todo/auto_rebuild_on_template_change.mdwn
new file mode 100644
index 000000000..ea990b877
--- /dev/null
+++ b/doc/todo/auto_rebuild_on_template_change.mdwn
@@ -0,0 +1,78 @@
+If `page.tmpl` is changed, it would be nice if ikiwiki automatically
+noticed, and rebuilt all pages. If `inlinepage.tmpl` is changed, a rebuild
+of all pages using it in an inline would be stellar.
+
+This would allow setting:
+
+ templatedir => "$srcdir/templates",
+
+.. and then the [[templates]] are managed like other wiki files; and
+like other wiki files, a change to them automatically updates dependent
+pages.
+
+Originally, it made good sense not to have the templatedir inside the wiki.
+Those templates can be used to bypass the htmlscrubber, and you don't want
+just anyone to edit them. But the same can be said of `style.css` and
+`ikiwiki.js`, which *are* in the wiki. We rely on `allowed_attachments`
+being set to secure those to prevent users uploading replacements. And we
+assume that users who can directly (non-anon) commit *can* edit them, and
+that's ok.
+
+So, perhaps the easiest way to solve this [[wishlist]] would be to
+make templatedir *default* to "$srcdir/templates/", and make ikiwiki
+register dependencies on `page.tmpl`, `inlinepage.tmpl`, etc, as they're
+used. Although, having every page declare an explicit dep on `page.tmpl`
+is perhaps a bit much; might be better to implement a special case for that
+one. Also, having the templates be copied to `destdir` is not desirable.
+In a sense, these templates would be like internal pages, except not wiki
+pages, but raw files.
+
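+A rough sketch of the dependency registration this implies, assuming it
+is done wherever a template is loaded for a page (`add_depends` takes a
+pagespec, so this presumes the template file can be matched by one):
+
+	# rebuild $page whenever the template it uses changes
+	add_depends($page, "templates/page.tmpl");
+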
+The risk is that a site might have `allowed_attachments` set to
+`templates/*` or `*.tmpl` or something like that. I think such a configuration
+is the *only* risk, and it's unlikely enough that a NEWS warning should
+suffice.
+
+(This would also help to clear up the tricky distinction between
+wikitemplates and in-wiki templates.)
+
+Note also that when using templates from "$srcdir/templates/", `no_includes`
+needs to be set. Currently this is done by the two plugins that use
+such templates, while includes are allowed in `templatedir`.
+
+Have started working on this.
+[[!template id=gitbranch branch=origin/templatemove author="[[Joey]]"]]
+
+> But would this require that templates be parseable as wiki pages? Because that would be a nuisance. --[[KathrynAndersen]]
+
+>> It would be better for them not to be rendered separately at all.
+>> --[[Joey]]
+
+>>> I don't follow you. --[[KathrynAndersen]]
+
+>>>> If they don't render to output files, they clearly don't
+>>>> need to be treated as wiki pages. (They need to be treated
+>>>> as raw files anyway, because you don't want random users editing them
+>>>> in the online editor.) --[[Joey]]
+
+>>>>> Just to be clear, the raw files would not be copied across to the output
+>>>>> directory? -- [[Jon]]
+
+>>>>>> Without modifying ikiwiki, they'd be copied to the output directory as
+>>>>>> (e.g.) http://ikiwiki.info/templates/inlinepage.tmpl; to not copy them,
+>>>>>> it'd either be necessary to make them be internal pages
+>>>>>> (templates/inlinepage._tmpl) or special-case them in some other way.
+>>>>>> --[[smcv]]
+
+>>>>>>> In my branch, I left in support for the templatedir, and also
+>>>>>>> /usr/share/ikiwiki/templates. So, users do not have to put their
+>>>>>>> custom templates in templates/ in the wiki. If they do,
+>>>>>>> the templates are copied to the destdir like other non-wiki page files
+>>>>>>> are. The templates are not wiki pages, except those used by a few
+>>>>>>> things like the [[plugins/template]] plugin.
+>>>>>>>
+>>>>>>> That seems acceptable, since users probably don't need to modify
+>>>>>>> many templates, so the clutter is small. (Especially when
+>>>>>>> compared to the other clutter the basewiki always puts in destdir.)
+>>>>>>> This could be revisted later. --[[Joey]]
+
+[[done]]
diff --git a/doc/todo/autoindex_should_use_add__95__autofile.mdwn b/doc/todo/autoindex_should_use_add__95__autofile.mdwn
new file mode 100644
index 000000000..f3fb24c16
--- /dev/null
+++ b/doc/todo/autoindex_should_use_add__95__autofile.mdwn
@@ -0,0 +1,120 @@
+`add_autofile` is a generic version of [[plugins/autoindex]]'s code,
+so the latter should probably use the former. --[[smcv]]
+
+> [[merged|done]] --[[Joey]]
+
+----
+
+[[!template id=gitbranch branch=smcv/ready/autoindex-autofile author="[[smcv]]"]]
+
+I'm having trouble fixing this:
+
+ # FIXME: some of this is probably redundant with add_autofile now, and
+ # the rest should perhaps be added to the autofile machinery
+
+By "a generic version of" above, it seems I mean "almost, but not
+quite, entirely unlike".
+
+> As long as it's not Tea. ;) --[[Joey]]
+
+I tried digging through the git history for the
+reasoning behind the autofile and autoindex implementations, but now I'm
+mostly confused.
+
+## autofile
+
+The autofile machinery records a list of every file that has ever been proposed
+as an autofile: for instance, the tag plugin has a list of every tag that
+has ever been named in a \[[!tag]] or \[[!taglink]], even if no file was
+actually needed (e.g. because it already existed). Checks for files that
+already exist (or whatever) are deferred until after this list has been
+updated, and files in this list are never auto-created again unless the wiki
+is rebuilt.
+
+This avoids re-creating the tag `create-del` in this situation, which is
+the third one that I noted on
+[[todo/auto-create tag pages according to a template]]:
+
+* create tags/create-del manually
+* tag a page as create-del
+* delete tags/create-del
+
+and also avoids re-creating `auto-del` in this similar situation (which I
+think is probably the most important one to get right):
+
+* tag a page as auto-del, which is created automatically
+* delete tags/auto-del
+
+I think both of these are desirable.
+
+However, this infrastructure also results in the tag page not being
+re-created in either of these situations (the first and second that I noted
+on the other page):
+
+* tag a page as auto-del-create-del, which is created automatically
+* delete tags/auto-del-create-del
+* create tags/auto-del-create-del manually
+* delete tags/auto-del-create-del again
+
+or
+
+* create tags/create-del-auto
+* delete tags/create-del-auto
+* tag a page as create-del-auto
+
+I'm less sure that these shouldn't create the tag page: we deleted the
+manually-created version, but that doesn't necessarily mean we don't want
+*something* to exist.
+
+> That could be argued, but it's a very DWIM thing. Probably best to keep
+> the behavior simple and predictable, so one only needs to remember that
+> when a page is deleted, nothing will ever re-create it behind ones back.
+> --[[Joey]]
+
+>> Fair enough, I'll make autoindex do that. --s
+
+## autoindex
+
+The autoindex machinery records a more complex set. Items are added to the
+set when they are deleted, but would otherwise have been added as an
+autoindex: they don't exist, do have children (by which I mean subpages or
+attachments), and are a directory in the srcdir. They're removed if this
+particular run wouldn't have added them as an autoindex (they exist, or
+don't have children).
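+
+In rough pseudo-Perl (the `%deleted` set is real; the helper names are
+invented for illustration):
+
+    foreach my $dir (@candidate_dirs) {
+        my $wanted = !page_exists($dir) && has_children($dir)
+            && -d "$config{srcdir}/$dir";
+        if ($wanted && was_just_deleted($dir)) {
+            $deleted{$dir}=1;       # remember: never re-create this
+        }
+        elsif (! $wanted) {
+            delete $deleted{$dir};  # forget (the source of the bugs below)
+        }
+    }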
+
+Here's what happens in situations mirroring those above.
+
+The "create-del" case still doesn't create the page:
+
+* create create-del manually
+* create create-del/child
+* delete create-del
+* it's added to `%deleted` and not re-created
+
+Neither does the "auto-del" case:
+
+* create auto-del/child, resulting in auto-del being created automatically
+* delete auto-del
+* it's added to `%deleted` and not re-created
+
+However, unlike the generic autofile infrastructure, `autoindex` forgets
+that it shouldn't re-create the deleted page in the latter two situations:
+
+* create auto-del-create-del/child, resulting in auto-del-create-del being
+ created automatically
+* delete auto-del-create-del; it's added to `%deleted` and not re-created
+* create auto-del-create-del manually; it's removed from `%deleted`
+* delete auto-del-create-del again (it's re-created)
+
+and
+
+* create create-del-auto
+* delete create-del-auto; it's not added to `%deleted` because there's no
+ child that would cause it to exist
+* create create-del-auto/child
+
+> I doubt there is any good reason for this behavior. These are probably
+> bugs. --[[Joey]]
+
+>> OK, I believe my updated branch gives `autoindex` the same behaviour
+>> as auto-creation of tags. The `auto-del-create-del` and
+>> `create-del-auto` use cases work the same as for tags on my demo wiki. --s
diff --git a/doc/todo/avatar.mdwn b/doc/todo/avatar.mdwn
index b8aa2327f..7fa3762da 100644
--- a/doc/todo/avatar.mdwn
+++ b/doc/todo/avatar.mdwn
@@ -1,38 +1,22 @@
[[!tag wishlist]]
It would be nice if ikiwiki, particularly [[plugins/comments]]
-supported user avatar icons. I was considering adding a directive for this,
-as designed below.
+(but also, ideally, recentchanges) supported user avatar icons.
-However, there is no *good* service for mapping openids to avatars --
-openavatar has many issues, including not supporting delegated openids, and
-after trying it, I don't trust it to push users toward.
-Perhaps instead ikiwiki could get the email address from the openid
-provider, though I think the perl openid modules don't support the openid
-2.x feature that allows that.
+> Update: Done for comments, but not for anything else, and the directive
+> below would be a nice addition. --[[Joey]]
-At the moment, working on this doesn't feel like a good use of my time.
---[[Joey]]
-
-Hmm.. unless is just always used a single provider (gravatar) and hashed
-the openid. Then wavatars could be used to get a unique avatar per openid
-at least. --[[Joey]]
-
-----
-
-The directive displays a small avatar image for a user. Pass it the
-email address, openid, or wiki username of the user.
+Idea is to add a directive that displays a small avatar image for a user.
+Pass it a user's email address, openid, username, or the md5 hash
+of their email address:
\[[!avatar user@example.com]]
\[[!avatar http://joey.kitenet.net/]]
\[[!avatar user]]
+ \[[!avatar hash]]
-The avatars are provided by various sites. For email addresses, it uses a
-[gravatar](http://gravatar.com/). For openid,
-[openavatar](http://www.openvatar.com/) is used. For a wiki username, the
-user's email address is looked up and the gravatar for that user is
-displayed. (Of course, the user has to have filled in their email address
-on their Preferences page for that to work.)
+These directives can then be hand-inserted onto pages, or more likely,
+included in eg, a comment post via a template.
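+
+For the email case the lookup is straightforward; a sketch of the core of
+such a directive (gravatar's scheme is the md5 of the lowercased address;
+the other parameter types would first need resolving to an email or hash):
+
+    use Digest::MD5 qw(md5_hex);
+
+    sub avatar_url ($) {
+        my $email=shift;
+        return "http://www.gravatar.com/avatar/".md5_hex(lc $email);
+    }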
An optional second parameter can be included, containing additional
options to pass in the
diff --git a/doc/todo/avoid_attachement_ui_if_upload_not_allowed.mdwn b/doc/todo/avoid_attachement_ui_if_upload_not_allowed.mdwn
new file mode 100644
index 000000000..487915850
--- /dev/null
+++ b/doc/todo/avoid_attachement_ui_if_upload_not_allowed.mdwn
@@ -0,0 +1,25 @@
+Any way to make it so an edit page doesn't offer the attachment capability
+unless the visitor matches a specific user, is an admin, and/or the page is
+one where attachments are allowed? (For now, the UI appears on all pages,
+and uploads are only prohibited after I submit, based on
+`allowed_attachments`.)
+
+> To do that, ikiwiki would have to try to match the `allowed_attachments`
+> pagespec against a sort of dummy upload to the current page. Then if it
+> failed, assume all real uploads would fail. Now consider a pagespec like
+> "user(joey) and mimetype(audio/mpeg)" -- it'd be hard to make a dummy
+> upload to test this pagespec against.
+>
+> So, there would need to be some sort of test mode, where terms like
+> `mimetype()` always succeed. But then consider a pagespec like
+> "user(joey) and !mimetype(video/mpeg)" -- if mimetype succeeds, this
+> fails.
+>
+> So, maybe we can instead just filter out all the pagespec terms aside
+> from `user()`, `ip()`, and `admin()`, transforming the example above into
+> just "user(joey)", which would succeed in the test.
+>
+> That'd work, I guess. Pulling a pagespec apart, filtering out terms, and
+> putting it back together is nontrivial, but doable.
+>
+> Another approach would be to have a separate pagespec that explicitly
+> controls which pages to show the attachment UI on. --[[Joey]]
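+
+A sketch of that last idea, with a hypothetical `attachment_ui_pagespec`
+setting; the `user` and `ip` parameters mirror what `allowed_attachments`
+is already matched against:
+
+    if (pagespec_match($page, $config{attachment_ui_pagespec},
+            user => $session->param("name"),
+            ip => $session->remote_addr())) {
+        # render the attachment UI on the edit form
+    }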
diff --git a/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn b/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn
index fb942a495..fdaa09f26 100644
--- a/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn
+++ b/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn
@@ -13,5 +13,113 @@ those contents instead.
> In mine I just copied sidebar out and made some extra "sidebars", but they go elsewhere. Ugly hack, but it works. --[[simonraven]]
+>> Here is a simple [[patch]] for multiple sidebars. Not too fancy, but better than having multiple copies of the sidebar plugin. --[[jeanprivat]]
+
+>>> I made a [[git]] branch for it [[!template id=gitbranch branch="privat/multiple_sidebars" author="[[jeanprivat]]"]] --[[jeanprivat]]
+
+>>>> Ping for [[Joey]]. Do you have any comment? I could improve it if there are things you do not like. I prefer to have such a feature integrated upstream. --[[JeanPrivat]]
+
+>>>>> The code is fine.
+>>>>>
+>>>>> I did think about having it examine
+>>>>> the `page.tmpl` for parameters with names like `FOO_SIDEBAR`
+>>>>> and automatically enable page `foo` as a sidebar in that case,
+>>>>> instead of using the setup file to enable. But I'm not sure about
+>>>>> that idea..
+>>>>>
+>>>>> The full complement of sidebars would be a header, a footer,
+>>>>> a left, and a right sidebar. It would make sense to go ahead
+>>>>> and add the parameters to `page.tmpl` so enabling each just works,
+>>>>> and add whatever basic CSS makes sense. Although I don't know
+>>>>> if I want to try to get a 3 column CSS going, so perhaps leave the
+>>>>> left sidebar out of that.
+
+-------------------
+
+<pre>
+--- /usr/share/perl5/IkiWiki/Plugin/sidebar.pm 2010-02-11 22:53:17.000000000 -0500
++++ plugins/IkiWiki/Plugin/sidebar.pm 2010-02-27 09:54:12.524412391 -0500
+@@ -19,12 +19,20 @@
+ safe => 1,
+ rebuild => 1,
+ },
++ active_sidebars => {
++ type => "string",
++ example => [qw(sidebar banner footer)],
++ description => "Which sidebars must be activated and processed.",
++ safe => 1,
++ rebuild => 1
++ },
+ }
+
+-sub sidebar_content ($) {
++sub sidebar_content ($$) {
+ my $page=shift;
++ my $sidebar=shift;
+
+- my $sidebar_page=bestlink($page, "sidebar") || return;
++ my $sidebar_page=bestlink($page, $sidebar) || return;
+ my $sidebar_file=$pagesources{$sidebar_page} || return;
+ my $sidebar_type=pagetype($sidebar_file);
+
+@@ -49,11 +57,17 @@
+
+ my $page=$params{page};
+ my $template=$params{template};
+-
+- if ($template->query(name => "sidebar")) {
+- my $content=sidebar_content($page);
+- if (defined $content && length $content) {
+- $template->param(sidebar => $content);
++
++ my @sidebars;
++ if (defined $config{active_sidebars} && @{$config{active_sidebars}}) { @sidebars = @{$config{active_sidebars}}; }
++ else { @sidebars = qw(sidebar); }
++
++ foreach my $sidebar (@sidebars) {
++ if ($template->query(name => $sidebar)) {
++ my $content=sidebar_content($page, $sidebar);
++ if (defined $content && length $content) {
++ $template->param($sidebar => $content);
++ }
+ }
+ }
+ }
+</pre>
+
+----------------------------------------
+## Further thoughts about this
+
+(since the indentation level was getting rather high.)
+
+What about using pagespecs in the config to map pages and sidebar pages together? Something like this:
+
+<pre>
+ sidebar_pagespec => {
+ "foo/*" => 'sidebars/foo_sidebar',
+ "bar/* and !bar/*/*' => 'bar/bar_top_sidebar',
+ "* and !foo/* and !bar/*" => 'sidebars/general_sidebar',
+ },
+</pre>
+
+One could do something similar for *pageheader*, *pagefooter* and *rightbar* if desired.
+
+Another thing which I find compelling - but probably because I am using [[plugins/contrib/field]] - is to be able to treat the included page as if it were *part* of the page it was included into, rather than as an included page. I mean things like \[[!if ...]] would test against the page name of the page it's included into rather than the name of the sidebar/header/footer page. It's even more powerful if one combines this with field/getfield/ftemplate/report, since one could make "generic" headers and footers that could apply to a whole set of pages.
+
+Header example:
+<pre>
+#{{$title}}
+\[[!ftemplate id="nice_data_table"]]
+</pre>
+
+Footer example:
+<pre>
+------------
+\[[!report template="footer_trail" trail="trailpage" here_only=1]]
+</pre>
+
+(Yes, I am already doing something like this on my own site. It's like the PmWiki concept of GroupHeader/GroupFooter)
+
+-- [[KathrynAndersen]]
[[!tag wishlist]]
diff --git a/doc/todo/beef_up_signin_page.mdwn b/doc/todo/beef_up_signin_page.mdwn
new file mode 100644
index 000000000..ee322b663
--- /dev/null
+++ b/doc/todo/beef_up_signin_page.mdwn
@@ -0,0 +1,17 @@
+ikiwiki's signin page is too sparse for people who don't live in the Web 2.0.
+
+We occasionally have GNU Hurd web pages contributors wonder what they have to
+do on that page. They don't know / the page doesn't explain what an *account
+provider* is, and that cross-indentification (using an existing OpenID account)
+is possible to begin with. And, if they don't have such an OpenID account,
+it's not easily understandable that the *other* option is for creating a local
+site-only account (like in the old days).
+
+--[[tschwinge]]
+
+> I agree that this would be good. It could be done by leaving
+> the compact widget at the top and adding some verbose explanations
+> and/or further forms below.
+>
+> All it takes is editing `templates/openid-selector.tmpl`,
+> so I welcome suggestions. --[[Joey]]
diff --git a/doc/todo/capitalize_title.mdwn b/doc/todo/capitalize_title.mdwn
new file mode 100644
index 000000000..3e8366dd3
--- /dev/null
+++ b/doc/todo/capitalize_title.mdwn
@@ -0,0 +1,31 @@
+Here I propose an option (with a [[patch]]) to capitalize the first letter (ucfirst) of default titles: filenames and urls can be lowercase but titles are displayed with a capital first character (filename = "foo.mdwn", pagetitle = "Foo"). Note that titles set via \[[!meta title]] are unaffected (no automatic capitalization). Comments please :) --[[JeanPrivat]]
+<pre><code>
+diff --git a/IkiWiki.pm b/IkiWiki.pm
+index 6da2819..fd36ec4 100644
+--- a/IkiWiki.pm
++++ b/IkiWiki.pm
+@@ -281,6 +281,13 @@ sub getsetup () {
+ safe => 0,
+ rebuild => 1,
+ },
++ capitalize => {
++ type => "boolean",
++ default => undef,
++ description => "capitalize the first letter of page titles",
++ safe => 1,
++ rebuild => 1,
++ },
+ userdir => {
+ type => "string",
+ default => "",
+@@ -989,6 +996,10 @@ sub pagetitle ($;$) {
+ $page=~s/(__(\d+)__|_)/$1 eq '_' ? ' ' : "&#$2;"/eg;
+ }
+
++ if ($config{capitalize}) {
++ $page = ucfirst $page;
++ }
++
+ return $page;
+ }
+</code></pre>
diff --git a/doc/todo/cas_authentication.mdwn b/doc/todo/cas_authentication.mdwn
index 8bf7042df..ed8010518 100644
--- a/doc/todo/cas_authentication.mdwn
+++ b/doc/todo/cas_authentication.mdwn
@@ -21,6 +21,19 @@ follows) ?
> license statement at the top. I have a few questions that I'll insert
> inline with the patch below. --[[Joey]]
+>> I have made some corrections to this patch (my cas plugin) in order to use
+>> the IkiWiki 3.00 interface and take your comments into account. It should work
+>> fine now.
+>>
+>> You can pull it from my git repo at
+>> http://git.boulgour.com/bbb/ikiwiki.git/ and maybe add it to your main
+>> repo.
+>>
+>> I will add GNU GPL copyright license statement as soon as I get some free
+>> time.
+>>
+>> --[[/users/bbb]]
+
------------------------------------------------------------------------------
diff --git a/IkiWiki/Plugin/cas.pm b/IkiWiki/Plugin/cas.pm
new file mode 100644
diff --git a/doc/todo/cdate_and_mdate_available_for_templates.mdwn b/doc/todo/cdate_and_mdate_available_for_templates.mdwn
new file mode 100644
index 000000000..70d8fc8c9
--- /dev/null
+++ b/doc/todo/cdate_and_mdate_available_for_templates.mdwn
@@ -0,0 +1,15 @@
+[[!tag wishlist]]
+
+`CDATE_3339`, `CDATE_822`, `MDATE_3339` and `MDATE_822` template variables would be useful for every page, at least for my templates with Dublin Core metadata.
+
+I tried to pick the relevant lines of the [[inline|plugins/inline]] plugin and hack them into a custom plugin, but it failed miserably because of my obvious lack of perl literacy...
+
+Anyway, I'm sure this is almost nothing...
+
+* `sub date_822 ($) {}`
+* `sub date_3339 ($) {}`
+* and something like `$template->param('cdate_822' => date_822($IkiWiki::pagectime{$page}));`
+
+Anyone can fill the missing lines?
+
+-- [[nil]]
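+
+For reference, the corresponding functions in [[plugins/inline]] look
+roughly like this; a custom plugin would need something similar plus a
+`pagetemplate` hook (a sketch, not a drop-in plugin):
+
+    use POSIX ();
+
+    sub date_822 ($) {
+        my $time=shift;
+        # force the C locale, since RFC 822 dates must use English names
+        my $lc_time=POSIX::setlocale(&POSIX::LC_TIME);
+        POSIX::setlocale(&POSIX::LC_TIME, "C");
+        my $ret=POSIX::strftime("%a, %d %b %Y %H:%M:%S %z",
+            localtime($time));
+        POSIX::setlocale(&POSIX::LC_TIME, $lc_time);
+        return $ret;
+    }
+
+    sub date_3339 ($) {
+        my $time=shift;
+        # all-numeric, so no locale juggling needed
+        return POSIX::strftime("%Y-%m-%dT%H:%M:%SZ", gmtime($time));
+    }
+
+    # then, in a pagetemplate hook:
+    $template->param(cdate_822 => date_822($IkiWiki::pagectime{$page}));
+    $template->param(mdate_822 => date_822($IkiWiki::pagemtime{$page}));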
diff --git a/doc/todo/comment_moderation_feed.mdwn b/doc/todo/comment_moderation_feed.mdwn
new file mode 100644
index 000000000..267706b1b
--- /dev/null
+++ b/doc/todo/comment_moderation_feed.mdwn
@@ -0,0 +1,9 @@
+There should be a way to generate a feed that is updated whenever a new
+comment needs moderation. Otherwise, it can be hard to remember to check
+sites, which may rarely get comments.
+
+The feed should not include the comment subject or body, but could mention
+the author. It would be especially handy if it was generated statically.
+One way would be to generate internal pages corresponding to each comment
+that needs moderation; then the feed could be constructed via a usual
+inline.
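+
+With the internal-pages approach, the feed page could then be as simple as
+this (the `_comment_pending` naming is hypothetical):
+
+    \[[!inline pages="internal(*/_comment_pending*)" feedonly=yes]]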
diff --git a/doc/todo/configurable_markdown_path.mdwn b/doc/todo/configurable_markdown_path.mdwn
new file mode 100644
index 000000000..63fa2dcbd
--- /dev/null
+++ b/doc/todo/configurable_markdown_path.mdwn
@@ -0,0 +1,64 @@
+[[!template id=gitbranch branch=wtk/mdwn author="[[wtk]]"]]
+
+summary
+=======
+
+Make it easy to configure the Markdown implementation used by the
+[[plugins/mdwn]] plugin. With this patch, you can set the path to an
+external Markdown executable in your ikiwiki config file. If you do
+not set a path, the plugin will use the usual config options to
+determine which Perl module to use.
+
+> This adds a configuration in which a new process has to be forked
+> for every single page rendered. Actually, it doesn't only add
+> such a configuration, it makes that the *default*.
+>
+> Markdown is ikiwiki's default, standard renderer. A configuration
+> that makes it slow will make ikiwiki look bad.
+>
+> I would not recommend using Gruber's perl markdown. It is old, terminally
+> buggy, and unmaintained. --[[Joey]] [[!tag reviewed]]
+
+----
+
+I wasn't trying to make an external markdown the default, I was trying
+to make the currently hardcoded `/usr/bin/markdown` configurable. It
+should only use an external process if `markdown_path` is set, which
+it is not by default. Consider the following tests from clean checkouts:
+
+Current ikiwiki trunk:
+
+ $ PERL5LIB="." time ikiwiki --setup docwiki.setup
+ ...
+ 38.73user 0.62system 1:20.90elapsed 48%CPU (0avgtext+0avgdata 103040maxresident)k
+ 0inputs+6472outputs (0major+19448minor)pagefaults 0swaps
+
+My mdwn branch:
+
+ $ PERL5LIB="." time ikiwiki --setup docwiki.setup
+ ...
+ Markdown: Text::Markdown::markdown()
+ ...
+ 39.17user 0.73system 1:21.77elapsed 48%CPU (0avgtext+0avgdata 103072maxresident)k
+ 0inputs+6472outputs (0major+19537minor)pagefaults 0swaps
+
+My mdwn branch with `markdown_path => "/usr/bin/markdown"` added in
+`docwiki.setup` (on my system, `/usr/bin/markdown` is a command-line
+wrapper for `Text::Markdown::markdown`).
+
+ $ PERL5LIB="." time ikiwiki --setup docwiki.setup
+ ...
+ Markdown: /usr/bin/markdown
+ ...
+ 175.35user 18.99system 6:38.19elapsed 48%CPU (0avgtext+0avgdata 92320maxresident)k
+ 0inputs+17608outputs (0major+2189080minor)pagefaults 0swaps
+
+So my patch doesn't make ikiwiki slow unless the user explicitly
+requests an external markdown, which they would presumably only do to
+work around bugs in their system's Perl implementation.
+ -- [[wtk]]
+
+> I was wrong about it being enabled by default, but I still don't like
+> the idea of a configuration that makes ikiwiki slow on mdwn files,
+> even if it is a nonstandard configuration. How hard can it be to install
+> the Text::Markdown library? --[[Joey]]
diff --git a/doc/todo/configurable_tidy_command_for_htmltidy.mdwn b/doc/todo/configurable_tidy_command_for_htmltidy.mdwn
new file mode 100644
index 000000000..e317184b5
--- /dev/null
+++ b/doc/todo/configurable_tidy_command_for_htmltidy.mdwn
@@ -0,0 +1,8 @@
+[[!tag patch]]
+
+I was trying to get htmltidy to [play nicely with MathML][play]. Unfortunately, I couldn't construct a command line that I was happy with, but along the way I altered htmltidy to allow a configurable command line. This seemed like a generally useful thing, so I've published my [patch][] as a Git branch.
+
+[play]: http://lists.w3.org/Archives/Public/html-tidy/2006JanMar/0052.html
+[patch]: http://www.physics.drexel.edu/~wking/code/git/git.php?p=ikiwiki.git&a=commitdiff&h=408ee89fd7c1dc70510385a7cf263a05862dda97&hb=e65ce4f0937eaf622846c02a9d39fa7aebe4af12
+
+> Thanks, [[done]] --[[Joey]]
diff --git a/doc/todo/configurable_timezones.mdwn b/doc/todo/configurable_timezones.mdwn
index f8b1dbbab..36f2e9dbb 100644
--- a/doc/todo/configurable_timezones.mdwn
+++ b/doc/todo/configurable_timezones.mdwn
@@ -4,7 +4,4 @@ This is nice for shared hosting, and other situation where the user doesn't have
> [[done]] via the ENV setting in the setup file. --[[Joey]]
-
-Example (ikiwiki.setup):
-
- ENV => { TZ => "Europe/Sofia" }
+>> Now via a timezone setting that is web configurable. --[[Joey]]
diff --git a/doc/todo/conflict_free_comment_merges.mdwn b/doc/todo/conflict_free_comment_merges.mdwn
new file mode 100644
index 000000000..e84400c17
--- /dev/null
+++ b/doc/todo/conflict_free_comment_merges.mdwn
@@ -0,0 +1,23 @@
+Currently, new comments are named with an incrementing ID (comment_N). So
+if a wiki has multiple disconnected servers, and comments are made to the
+same page on both, merging is guaranteed to result in conflicts.
+
+I propose avoiding such merge problems by naming a comment with a sha1sum
+of its (full) content. Keep the incrementing ID too, so there is an
+ordering (and so duplicate comments are allowed).
+So: "comment_N_SHA1".
+
+Note: The comment body will need to use meta title in the case where no
+title is specified, to retain the current behavior of the default title
+being "comment N".
+
+What do you think [[smcv]]? --[[Joey]]
+
+> I had to use md5sums, as the sha1 perl module may not be available and I
+> didn't want to drag it in. But I think that's ok; this doesn't need to be
+> cryptographically secure, and even the chances of being able to
+> purposefully cause an md5 collision and thus an undesired merge conflict
+> are quite low, since it modifies the input text and adds a date stamp to
+> it.
+>
+> Anyway, I think it's good, [[done]] --[[Joey]]
diff --git a/doc/todo/countdown_directive.mdwn b/doc/todo/countdown_directive.mdwn
new file mode 100644
index 000000000..61c36204c
--- /dev/null
+++ b/doc/todo/countdown_directive.mdwn
@@ -0,0 +1,5 @@
+I'd love to have a countdown directive, which would take a timestamp to count down to and generate a JavaScript timer in the page.
+
+Ideally I'd also like to either have parameters providing content to show before and after the time passes, or integration with existing conditional directives to do the same thing.
+
+[[!tag wishlist]]
diff --git a/doc/todo/credentials_page.mdwn b/doc/todo/credentials_page.mdwn
new file mode 100644
index 000000000..6b90af144
--- /dev/null
+++ b/doc/todo/credentials_page.mdwn
@@ -0,0 +1,33 @@
+pushing [[this|todo/httpauth feature parity with passwordauth]] and [[this|todo/htpasswd mirror of the userdb]] further (although rather in the [[wishlist]] priority): would it make sense for users to have a `$USER/credentials` page that is by default locked to the user and admins, where the user can state one or more of the below?
+
+* OpenID
+* ssh public key (would require an additional mechanism for writing this to an `authorized_keys` file with appropriate environment variables or a prefix that makes sure the commit is checked against the right user and that the user names agree)
+* gpg public key (once there is a mechanism that relies on gpg for authentication)
+* https certificate hash (don't know details; afair the creation of such certificates is typically initiated server-side)
+* password hash (this is generally considered a valuable secret; is this still true with good hashes and proper salting?)
+
+such a page could have a form as described in [[todo/structured page data]] and could even serve as a way of managing users. --[[chrysn]]
+
+> I was just thinking about something along these lines myself. The
+> idea, if I understand correctly, is to allow users to have multiple
+> login options all leading to the same identity. This would allow a
+> user to login for example via either their Google account or their
+> WordPress account, while still being identified as the same user.
+
+> However, I'm not sure this should be a static page (I guess you
+> mean `$USER/credentials`, I don't think ‘creditentials’ actually
+> exists). Something entirely managed at the CGI level is probably
+> better, as it also helps keeping the data in its place (such as ssh
+> public keys in `authorized_keys` etc).
+
+> -- GB
+
+>> having multiple login options leading to the same identity, and (more important to me) giving the user an easy way to review and edit them. i'm thinking a bit of foaf+ssl style "i am $USER and you can recognize me by my client certificate $CERTIFICATE" statements.
+>>
+>> the reason why i want this in a static place instead of cgi level is that it can be used, for example, for automatically creating htpasswd files for read-only (cgi-less) replicas of private wikis. furthermore, it all gets versioned and it can easily be seen where the data really is. the credentials have to be filed appropriately by plugins anyway, but that can happen as a part of the regular rebuild process.
+>>
+>> and yes, you're right about the word misusage; thanks for pointing it out and fixing it.
+>>
+>> --[[chrysn]]
+
+an issue to be considered: for ways of authentication that don't explicitly mention the user name (and that would be everything but password; especially OpenID), there has to be a way to prevent users from hijacking an admin's account. the user wouldn't get more privileges, but the admin could find himself logged in as a user instead of an admin when he logs in using his OpenID, for example. he could fix it by removing the openid from the user's ("his") page, but it has to be taken care of nevertheless. --[[chrysn]]
diff --git a/doc/todo/dependency_types.mdwn b/doc/todo/dependency_types.mdwn
index da9b5e6cf..4db633ead 100644
--- a/doc/todo/dependency_types.mdwn
+++ b/doc/todo/dependency_types.mdwn
@@ -553,29 +553,16 @@ operators. Currently, this turns into roughly:
`FailReason() & SuccessReason(patch)`
Let's say that the glob instead returns a HardFailReason, which when
-ANDed with another object, drops their influences. (But when ORed, combines
-them.) Fixes the above, but does it always work?
+ANDed with another object, blocks their influences. (But when ORed,
+combines them.)
-"(bugs/* or link(patch)) and backlink(index)" =>
-`( HardFailReason() | SuccessReason(page) ) & SuccessReason(index)`` =>
-`SuccessReason(page & SuccessReason(index)` =>
-SuccessReason(page, index) => right
+Question: Are all pagespec terms that return reason objects w/o any
+influence info suitable to block influence in this way?
-"(bugs/* and link(patch)) or backlink(index)" =>
-`( HardFailReason() & SuccessReason(page) ) | SuccessReason(index)`` =>
-`HardFailReason() | SuccessReason(index)` =>
-`SuccessReason(index)` => right
-
-Ok so far, but:
-
-"!bugs/* and link(patch)" =>
-`!SuccessReason() | SuccessReason(bugs/foo)` =>
-'FailReason() | SuccessReason(bugs/foo)
-`FailReason(bugs/foo)` => wrong!
-
-This could be fixed by adding a HardSuccessReason that glob also returns.
-Maybe just a field of the object that is set if it is "hard" is a better
-approach though.
+To be suitable to block, a term should never change from failing to match a
+page to successfully matching it, unless that page is directly changed in a
+way that influences are not needed for ikiwiki to notice. But, if a term
+did not meet these criteria, it would have an influence. QED.
#### Influence types
diff --git a/doc/todo/description_meta_param_passed_to_templates.mdwn b/doc/todo/description_meta_param_passed_to_templates.mdwn
new file mode 100644
index 000000000..712471258
--- /dev/null
+++ b/doc/todo/description_meta_param_passed_to_templates.mdwn
@@ -0,0 +1,10 @@
+[[!tag wishlist patch]]
+
+I'd like to use the description parameter from [[meta|/ikiwiki/directive/meta]] directives in custom [[inline|/ikiwiki/directive/inline]] templates. I guess this could be useful to others too.
+
+The only change required is on [line 266](http://github.com/joeyh/ikiwiki/blob/master/IkiWiki/Plugin/meta.pm#L266) of `meta.pm`
+
+ - foreach my $field (qw{author authorurl permalink}) {
+ + foreach my $field (qw{author authorurl description permalink}) {
+
+> Good idea, [[done]]. --[[Joey]]
diff --git a/doc/todo/double-click_protection_for_form_buttons.mdwn b/doc/todo/double-click_protection_for_form_buttons.mdwn
new file mode 100644
index 000000000..501be4498
--- /dev/null
+++ b/doc/todo/double-click_protection_for_form_buttons.mdwn
@@ -0,0 +1,5 @@
+A small piece of JS to prevent double-submitting forms would be quite nice. I seem to have developed a habit of doing this and having to resolve a merge conflict for two initial commits. -- [[Jon]]
+
+> By the time you see that merge conflict, the first commit has
+> already successfully happened, so you can just hit cancel
+> and throw away the second submit. --[[Joey]]
diff --git a/doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn b/doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn
index 4c9c2352a..77e46049f 100644
--- a/doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn
+++ b/doc/todo/edit_form:_no_fixed_size_for_textarea.mdwn
@@ -10,7 +10,7 @@ On longer pages its not very comfortable to edit pages with such a small box. Th
> }
>
> Perhaps you have replaced it with a modified style sheet that does not
-> include that? --[[Joey]] [[!tag done]]
+> include that? --[[Joey]]
>> The screen shot was made with http://ikiwiki.info/ where i didn't change anything. The width is optimally used. The problem is the height.
@@ -32,3 +32,21 @@ On longer pages its not very comfortable to edit pages with such a small box. Th
>>> --[[Joey]]
>>>>>> the javascript approach would need to work something like this: you need to know about the "bottom-most" item on the edit page, and get a handle for that object in the DOM. You can then obtain the absolute position height-wise of this element and the absolute position of the bottom of the window to determine the pixel-difference. Then, you set the height of the textarea to (current height in px) + determined-value. This needs to be re-triggered on various resize events, at least for the window and probably for other elements too. I may have a stab at this at some point. -- [[Jon]]
+
+Google Chrome has a completely elegant fix for this problem: all textareas
+have a small resize handle in a corner that can be dragged around. No
+nasty javascript needed. IMHO, this is the right solution, and I hope other
+browsers emulate it. [[done]]
+--[[Joey]]
+
+Wouldn't it be possible to just implement an integer-valued setting for this, accessible via the "Setup" wiki page? This would require a wiki regen, but such a setting would not be changed frequently, I suppose. Also, MediaWiki has this implemented as a per-user setting (two settings, actually: the number of rows and columns of the edit area); such a per-user setting would be the best possible implementation, but I'm not sure if ikiwiki already supports per-user settings. Please consider implementing this, as the current 20 rows is a great PITA for any non-trivial page.
+
+> I don't think it would need a wiki rebuild, as the textarea is generated dynamically by the CGI when you perform a CGI action, and (as far as I know) is not cooked into any static content. -- [[Jon]]
+
+>> There is no need for a configuration setting for this -- to change
+>> the default height from 20 rows to something else, you can just put
+>> something like this in your `local.css`: --[[Joey]]
+
+ #editcontent {
+ height: 50em;
+ }
diff --git a/doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn b/doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn
new file mode 100644
index 000000000..4bc10e432
--- /dev/null
+++ b/doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn
@@ -0,0 +1,8 @@
+[[plugins/edittemplate]] looks for the specified template relative to the
+page the directive appears on. Which can be handy, eg, make a
+blog/mytemplate and put the directive on blog, and it will find
+"mytemplate". However, it can also be confusing, since other templates
+always are looked for in `templates/`.
+
+I think it should probably fall back to looking for `templates/$foo`.
+--[[Joey]]
diff --git a/doc/todo/enable-htaccess-files.mdwn b/doc/todo/enable-htaccess-files.mdwn
index e302a49ed..3b9721d50 100644
--- a/doc/todo/enable-htaccess-files.mdwn
+++ b/doc/todo/enable-htaccess-files.mdwn
@@ -12,6 +12,13 @@
qr/(^|\/).svn\//, qr/.arch-ids\//, qr/{arch}\//],
wiki_link_regexp => qr/\[\[(?:([^\]\|]+)\|)?([^\s\]#]+)(?:#([^\s\]]+))?\]\]/,
+> Note that the above patch is **completely broken**.
+> It removes the crucial excludes of all files starting with a dot.
+> The negative regexps for htaccess have no effect, so the whole
+> thing only "works" because it allows *any* file starting with a dot.
+> If you applied this patch to your ikiwiki, you opened a huge security
+> hole. --[[Joey]]
+
[[!tag patch patch/core]]
This lets the site administrator have a `.htaccess` file in their underlay
@@ -57,5 +64,17 @@ It should be off by default of course. --Max
---
+1 I want `.htaccess` so I can rewrite some old Wordpress URLs to make feeds work again. --[[hendry]]
+> Unless you cannot modify apache's configuration, you do not need htaccess
+> to do that. Apache's documentation recommends against using htaccess
+> unless you're a user who cannot modify the main server configuration.
+> --[[Joey]]
+
---
+1 for various purposes (but sometimes the filename isn't `.htaccess`, so please make it configurable) --[[schmonz]]
+
+> I've described a workaround for one use case at the [[plugins/rsync]] [[plugins/rsync/discussion]] page. --[[schmonz]]
+
+---
+
+[[done]], you can use the `include` setting to override the default
+excludes now. Please use extreme caution when doing so. --[[Joey]]
diff --git a/doc/todo/enable_arbitrary_markup_for_directives.mdwn b/doc/todo/enable_arbitrary_markup_for_directives.mdwn
new file mode 100644
index 000000000..c1f0f86ed
--- /dev/null
+++ b/doc/todo/enable_arbitrary_markup_for_directives.mdwn
@@ -0,0 +1,47 @@
+One of the good things about [PmWiki](http://www.pmwiki.org) is the ability to treat arbitrary markup as directives.
+In ikiwiki, all directives have the same format:
+
+\[[!name arguments]]
+
+But with PmWiki, directives can be added to the engine (with the "Markup" hook) with the usual name and function passing, but also with a regexp which has capturing parentheses, and the results of the match are passed to the given function.
+Would it be possible to alter the "preprocess" hook to have an optional regex argument which acted in a similar fashion?
+
+For example, one could then write a plugin which would treat
+
+Category: Foo, Bar
+
+as a tag, by using a regex such as /^Category:\s*([\w\s,]+)$/; the result "Foo, Bar" could then be further processed by the hook function.
+
+This could also make it easier to support more styles of markup, rather than having to do all the processing in "htmlize" and/or "filter".
+
+-- [[KathrynAndersen]]
+
+[[!taglink wishlist]]
+
+> Arbitrary text transformations can already be done via the filter and
+> sanitize hooks. That's how the smiley and typography plugins do their
+> thing.
+>
+> AFAICS, the only benefit to having a regexp-based-hook interface is less
+> overhead in passing page content into the hooks. But that overhead is a
+> small amount of the total render time.
+>
+> Also, I notice that smiley does such complicated things in its sanitize
+> hook (ie, it looks at html context around the smilies) that a simple
+> matching regexp would not be sufficient. Furthermore, typography needs to
+> pass the page content into the library it uses, which does not expose
+> regexps to match on. So ikiwiki's more general filtering interface seems
+> to allow both of these to do things that could not be done with the
+> PmWiki interface. --[[Joey]]
+
+>>You have some good points. I was aware of using filter, but it didn't occur to me that one could use sanitize to do processing also, probably because "sanitize" brought to mind removing harmful content rather than doing other alterations.
+>>It has also occurred to me, on further thought, that if one wants one's chosen markup to actually be processed during the "preprocess" stage, one could do so by converting the chosen markup to directive-style markup during the "filter" stage and then processing the directive during the "preprocess" stage as per usual. Is there a tag for "no longer on the wishlist"? --[[KathrynAndersen]]
+
+>>> Yeah, sanitize is a misleading name for the relatively few things that
+>>> use it this way.
+>>>
+>>> While you could do a filter to preprocess step, it is a bit
+>>> of a long way round, since filter always runs just before
+>>> preprocess.
+>>>
+>>> Anyway, guess this is [[done]] --[[Joey]]
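+
+A sketch of the filter-to-directive approach described above (the plugin
+name and the Category markup are illustrative):
+
+    package IkiWiki::Plugin::category;
+
+    use warnings;
+    use strict;
+    use IkiWiki 3.00;
+
+    sub import {
+        hook(type => "filter", id => "category", call => \&filter);
+    }
+
+    # Rewrite "Category: Foo, Bar" lines into a tag directive, which the
+    # normal preprocess stage then handles. Assumes single-word categories.
+    sub filter (@) {
+        my %params=@_;
+        $params{content}=~s{^Category:\s*([\w\s,]+)$}
+            {"\[[!tag ".join(" ", split(/,\s*/, $1))."]]"}meg;
+        return $params{content};
+    }
+
+    1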
diff --git a/doc/todo/feed_enhancements_for_inline_pages.mdwn b/doc/todo/feed_enhancements_for_inline_pages.mdwn
new file mode 100644
index 000000000..b48c37d7b
--- /dev/null
+++ b/doc/todo/feed_enhancements_for_inline_pages.mdwn
@@ -0,0 +1,132 @@
+[[!template id=gitbranch branch=GiuseppeBilotta/inlinestuff author="Giuseppe Bilotta"]]
+
+I rearranged my patchset once again, to clearly identify the origin and
+motivation of each patch, which is explained in the following.
+
+In my ikiwiki-based website I have the following situation:
+
+* `$config{usedirs}` is 1
+* there are a number of subdirectories (A/, B/, C/, etc)
+ with pages under each of them (A/page1, A/page2, B/page3, etc)
+* 'index pages' for each subdirectory: A.mdwn, B.mdwn, C.mdwn;
+ these are rather barebone, only contain an inline directive for their
+ respective subpages and become A/index.html, etc
+* there is also the main index.mdwn, which inlines A.mdwn, B.mdwn, C.mdwn,
+ etc (i.e. the top-level index files are also inlined on the homepage)
+
+With the upstream `inline` plugin, the feeds for A, B, C etc are located
+in `A/index.atom`, `B/index.atom`, etc; their title is the wiki name and
+their main link goes to the wiki homepage rather than to their
+respective subdir (e.g. I would expect `A/index.atom` to have a link to
+`http://website/A` but it actually points to `http://website/`).
+
+This is due to them being generated from the main index page, and is
+fixed by the first patch: ‘inline: base feed urls on included page
+name’. As explained in the commit message for the patch itself, this is
+a ‘forgotten part’ from a previous page vs destpage fix which has
+already been included upstream.
+
+> Applied. --[[Joey]]
+
+>> Thanks.
+
+The second patch, ‘inline: improve feed title and description
+management’, aligns feed title and description management by introducing
+a `title` option to complement `description`, and by basing the
+description on the page description if the entry is missing. If no
+description is provided by either the directive parameter or the page
+metadata, we use a user-configurable default based on both the page
+title and wiki name rather than hard-coding the wiki name as description.
+
+> Reviewing, this seems ok, but I don't like that
+> `feed_desc_fmt` is "safe => 0". And I question if that needs
+> to be configurable at all. I say, drop that configurable, and
+> only use the page meta description (or wikiname for index).
+>
+> Oh, and could you indent your `elsif` the same as I? --[[Joey]]
+
+>> I hadn't even realized that I was nesting ifs inside else clauses,
+>> sorry. I think you're also right about the safety of the key, after
+>> all it only gets interpolated with known, safe strings.
+
+>>> I did not mean to imply that I thought it safe. --[[Joey]]
+
+>>>> Sorry for assuming you implied that. I do think it is safe, though
+>>>> (I defaulted to not safe just to err on the safe side).
+
+>> The question is what to do for pages that do not have a description
+>> (and are not the index). With your proposal, the Atom feed subtitle
+>> would turn up empty. We could make it conditional in the default
+>> template, or we could have `$desc` default to `$title` if nothing
+>> else is provided, but at this point I see no reason to _not_ allow
+>> the user to choose a way to build a default description.
+
+>>> RSS requires the `<description>` element be present, it can't
+>>> be conditionalized away. But I see no reason to add the complexity
+>>> of an option to configure a default value for a field that
+>>> few RSS consumers likely even use. That's about 3 levels below useful.
+>>> --[[Joey]]
+
+>>>> The way I see it, there are three possibilities for non-index pages
+>>>> which have no description meta: (1) we leave the
+>>>> description/subtitle in feed blank, per your current proposal here
+>>>> (2) we hard-code some string to put there and (3) we make the
+>>>> string to put there configurable. Honestly, I think option #1 sucks
+>>>> aesthetically and option #2 is conceptually wrong (I'm against
+>>>> hard-coding stuff in general), which leaves option #3: however
+>>>> rarely used it would be, I still think it'd be better than #2 and
+>>>> less unaesthetic than #1.
+
+>>>> I'm also not sure what's ‘complex’ about having such an option:
+>>>> it's definitely not going to get much use, but does it hurt to have
+>>>> it? I could understand not wasting time putting it in, but since
+>>>> the code is written already … (but then again I'm known for being a
+>>>> guy who loves options).
+
+The third patch, ‘inline: allow assigning an id to postform/feedlink’,
+does just that. I don't currently use it, but it can be particularly
+useful in the postform case for example for scriptable management of
+multiple postforms in the same page.
+
+> Applied. --[[Joey]]
+
+>> Thanks.
+
+In one of my wiki setups I had a terminating '/' in `$config{url}`. You
+mention that it should not be present, but I have not seen this
+requirement described anywhere. Rather than restricting the user input,
+I propose a patch that prevents double slashes from appearing in links
+created by `urlto()` by fixing the routine itself.
+
+> If this is fixed I would rather not put the overhead of fixing it in
+> every call to `urlto`. And I'm not sure this is a comprehensive
+> fix to every problem a trailing slash in the url could cause. --[[Joey]]
+
+>> Maybe something that sanitizes the config value would be better instead?
+>> What is the policy about automatic changing user config?
+
+>>> It's impossible to do for perl-format setup files. --[[Joey]]
+
+>>>> Ok. In that case I think that we should document that it must be
+>>>> slash-less. I'll cook up a patch in that sense.
+
+The inline plugin is also updated (in a separate patch) to use `urlto()`
+rather than hand-coding the feed urls. You might want to keep this
+change even if you discard the urlto patch.
+
+> IIRC, I was missing a proof that this always resulted in identical urls,
+> which is necessary to prevent flooding. I need such a proof before I can
+> apply that. --[[Joey]]
+
+>> Well, the URL would obviously change if the `$config{url}` ended in
+>> slash and the `urlto` patch (or other equivalent) went into effect.
+
+>> Aside from that, if I read the code correctly, the only other extra
+>> thing that `urlto` does is to `beautify_url_path` the `"/".$to` part,
+>> and the only way this would cause the url to be altered is if the
+>> feed name was "index" (which can easily happen) and
+>> `$config{htmlext}` was set to something like `.rss` or
+>> `.rss.1`.
+
+>> So there is a remote possibility that a different URL would be
+>> produced.
diff --git a/doc/todo/finer_control_over___60__object___47____62__s.mdwn b/doc/todo/finer_control_over___60__object___47____62__s.mdwn
new file mode 100644
index 000000000..50c4d43bf
--- /dev/null
+++ b/doc/todo/finer_control_over___60__object___47____62__s.mdwn
@@ -0,0 +1,98 @@
+IIUC, the current version of [HTML::Scrubber][] only allows `object` tags to be either enabled or disabled entirely. However, while `object` can be used to add *code* (which is indeed a potential security hole) to a document, reading [Objects, Images, and Applets in HTML documents][objects-html] reveals that the &ldquo;dangerous&rdquo; ones are not all `object`s, but rather those having the following attributes:
+
+ classid %URI; #IMPLIED -- identifies an implementation --
+ codebase %URI; #IMPLIED -- base URI for classid, data, archive--
+ codetype %ContentType; #IMPLIED -- content type for code --
+ archive CDATA #IMPLIED -- space-separated list of URIs --
+
+It seems that the following attributes are, OTOH, safe:
+
+ declare (declare) #IMPLIED -- declare but don't instantiate flag --
+ data %URI; #IMPLIED -- reference to object's data --
+ type %ContentType; #IMPLIED -- content type for data --
+ standby %Text; #IMPLIED -- message to show while loading --
+ height %Length; #IMPLIED -- override height --
+ width %Length; #IMPLIED -- override width --
+ usemap %URI; #IMPLIED -- use client-side image map --
+ name CDATA #IMPLIED -- submit as part of form --
+ tabindex NUMBER #IMPLIED -- position in tabbing order --
+
+Should the former attributes be *scrubbed* while the latter left intact, the use of the `object` tag would seemingly become safe.
+
+Note also that allowing `object` (either restricted in such a way or not) automatically solves the [[/todo/svg]] issue.
+
+For Ikiwiki, it may be nice to be able to restrict [URI's][URI] (as required by the `data` and `usemap` attributes) to, say, relative and `data:` (as per [RFC 2397][]) ones as well, though it requires some more consideration.
+
+&mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+[[wishlist]]
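+
+A sketch of what that restriction could look like using HTML::Scrubber's
+per-attribute rules (the URI pattern is only a first approximation):
+
+    use HTML::Scrubber;
+
+    my $uri_ok=qr/^(?:data:|(?![a-z][a-z0-9+.-]*:))/i;  # data: or relative
+    my $scrubber=HTML::Scrubber->new(
+        allow => [qw(object param)],
+        rules => [
+            object => {
+                data     => $uri_ok,
+                usemap   => $uri_ok,
+                type     => 1, standby => 1, declare => 1,
+                height   => 1, width   => 1, name    => 1,
+                tabindex => 1,
+                '*'      => 0,  # drops classid, codebase, codetype, archive
+            },
+        ],
+    );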
+
+> SVG can contain embedded javascript.
+
+>> Indeed.
+
+>> So, a more general tool (`XML::Scrubber`?) will be necessary to
+>> refine both [XHTML][] and SVG.
+
+>> &hellip; And to leave [MathML][] as is (?.)
+
+>> &mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+> The spec that you link to contains
+> examples of objects that contain python scripts, Microsoft OLE
+> objects, and Java. And then there's flash. I don't think ikiwiki can
+> assume all the possibilities are handled securely, particularly WRT XSS
+> attacks.
+> --[[Joey]]
+
+>> I've scanned over all the `object` examples in the specification and
+>> all of those that hold references to code (as opposed to data) have a
+>> distinguishing `classid` attribute.
+
+>> While I won't assert that it's impossible to reference code with
+>> `data` (and, thanks to `text/xhtml+xml` and `image/svg+xml`, it is
+>> *not* impossible), throwing away any of the &ldquo;insecure&rdquo;
+>> attributes listed above together with limiting the possible URI's
+>> (i.&nbsp;e., only *local* and certain `data:` ones for `data` and
+>> `usemap`) should make `object` almost as harmless as, say, `img`.
+
+>>> But with local data, one could not embed youtube videos, which surely
+>>> is the most obvious use case?
+
+>>>> Allowing a &ldquo;remote&rdquo; object to render on one's page is a
+>>>> security issue by itself.
+>>>> Though, of course, having an explicit whitelist of URI's may make
+>>>> this issue more tolerable.
+>>>> &mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+>>> Note that youtube embedding uses an
+>>> object element with no classid. The swf file is provided via an
+>>> enclosed param element. --[[Joey]]
+
+>>>> I've just checked a random video on YouTube and I see that the
+>>>> `.swf` file is provided via an enclosed `embed` element. Whether
+>>>> to allow those or not is a different issue.
+>>>> &mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+>> (Though it certainly won't solve the [[SVG_problem|/todo/SVG]] being
+>> restricted in such a way.)
+
+>> Of the remaining issues I could only think of recursive
+>> `object` &mdash; the one that references its container document.
+
+>> &mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+## See also
+
+* [Objects, Images, and Applets in HTML documents][objects-html]
+* [[plugins/htmlscrubber|/plugins/htmlscrubber]]
+* [[todo/svg|/todo/svg]]
+* [RFC 2397: The &ldquo;data&rdquo; URL scheme. L.&nbsp;Masinter. August 1998.][RFC 2397]
+* [Uniform Resource Identifier &mdash; the free encyclopedia][URI]
+
+[HTML::Scrubber]: http://search.cpan.org/~podmaster/HTML-Scrubber-0.08/Scrubber.pm
+[MathML]: http://en.wikipedia.org/wiki/MathML
+[objects-html]: http://www.w3.org/TR/1999/REC-html401-19991224/struct/objects.html
+[RFC 2397]: http://tools.ietf.org/html/rfc2397
+[URI]: http://en.wikipedia.org/wiki/Uniform_Resource_Identifier
+[XHTML]: http://en.wikipedia.org/wiki/XHTML
diff --git a/doc/todo/generic_insert_links.mdwn b/doc/todo/generic_insert_links.mdwn
new file mode 100644
index 000000000..050f32ee7
--- /dev/null
+++ b/doc/todo/generic_insert_links.mdwn
@@ -0,0 +1,24 @@
+The attachment plugin's Insert Links button currently only knows
+how to insert plain wikilinks and img directives (for images).
+
+[[wishlist]]: Generalize this, so a plugin can cause arbitrary text
+to be inserted for a particular file. --[[Joey]]
+
+Design:
+
+Add an insertlinks hook. Each plugin using the hook would be called,
+and passed the filename of the attachment. If it knows how to handle
+the file type, it returns the text that should be inserted on the page.
+If not, it returns undef, and the next plugin is tried.
+
+This would mean writing plugins in order to handle links for
+special kinds of attachments. To avoid that for simple stuff,
+a fallback plugin could run last and look for a template
+named like `templates/embed_$extension`, and insert a directive like:
+
+ \[[!template id=embed_vp8 file=my_movie.vp8]]
+
+Then to handle a new file type, a user could just make a template
+that expands to some relevant html. In the example above,
+`templates/embed_vp8` could make an html5 video tag, possibly even with
+some flash fallback code.
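+
+A sketch of what the fallback plugin could look like (the `insertlinks`
+hook type is the proposed, not-yet-existing API; this also assumes
+`IkiWiki::template_file` returns undef for a missing template):
+
+    hook(type => "insertlinks", id => "embedfallback", call => sub {
+        my $filename=shift;
+        my ($ext)=$filename=~/\.([^.]+)$/;
+        return undef unless defined $ext;
+        # only handle the file if an embed template exists for it
+        return undef unless defined IkiWiki::template_file("embed_$ext.tmpl");
+        return "\[[!template id=embed_$ext file=$filename]]";
+    });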
diff --git a/doc/todo/git_attribution/discussion.mdwn b/doc/todo/git_attribution/discussion.mdwn
index dfb490bc2..6905d9b4b 100644
--- a/doc/todo/git_attribution/discussion.mdwn
+++ b/doc/todo/git_attribution/discussion.mdwn
@@ -72,7 +72,7 @@ no determination of uniqueness)
> GIT_AUTHOR_EMAIL can also be set.
>
> There is one thing yet to be solved, and that is how to tell the
-> difference between a web commit by 'Joey Hess <joey@kitenet.net>',
+> difference between a web commit by 'Joey Hess <joey\@kitenet.net>',
> and a git commit by the same. I think we do want to differentiate these,
> and the best way to do it seems to be to add a line to the end of the
> commit message. Something like: "\n\nWeb-commit: true"
@@ -94,5 +94,5 @@ no determination of uniqueness)
> * github pushes to twitter ;-)
>
> So while I tried that way at first, I'm now leaning toward encoding the
-> username in the email address. Like "user <user@web>", or
-> "joey <http://joey.kitenet.net/@web>".
+> username in the email address. Like "user <user\@web>", or
+> "joey <http://joey.kitenet.net/\@web>".
diff --git a/doc/todo/headless_git_branches.mdwn b/doc/todo/headless_git_branches.mdwn
new file mode 100644
index 000000000..1dd867765
--- /dev/null
+++ b/doc/todo/headless_git_branches.mdwn
@@ -0,0 +1,74 @@
+Ikiwiki should really survive being asked to work with a git branch that has no existing commits.
+
+ mkdir iki-gittest
+ cd iki-gittest
+ GIT_DIR=barerepo.git git init
+ git clone barerepo.git srcdir
+ ikiwiki --rcs=git srcdir destdir
+
+I've fixed this initial construction case, and, based on my testing, I've also fixed the cases of post-update executing on a new master and ikiwiki.cgi executing on a non-existent master.
+
+Please commit so my users stop whining at me about having clean branches to push to, the big babies.
+
+Summary: Change three scary loud failure cases related to empty branches into three mostly quiet success cases.
+
+[[!tag patch]]
+
+<pre>
+diff --git a/IkiWiki/Plugin/git.pm b/IkiWiki/Plugin/git.pm
+index cf7fbe9..e5bafcf 100644
+--- a/IkiWiki/Plugin/git.pm
++++ b/IkiWiki/Plugin/git.pm
+@@ -439,17 +439,21 @@ sub git_commit_info ($;$) {
+
+ my @opts;
+ push @opts, "--max-count=$num" if defined $num;
+-
+- my @raw_lines = run_or_die('git', 'log', @opts,
+- '--pretty=raw', '--raw', '--abbrev=40', '--always', '-c',
+- '-r', $sha1, '--', '.');
+-
++ my @raw_lines;
+ my @ci;
+- while (my $parsed = parse_diff_tree(\@raw_lines)) {
+- push @ci, $parsed;
+- }
++
++ # Test to see if branch actually exists yet.
++ if (run_or_non('git', 'show-ref', '--quiet', '--verify', '--', 'refs/heads/' . $config{gitmaster_branch}) ) {
++ @raw_lines = run_or_die('git', 'log', @opts,
++ '--pretty=raw', '--raw', '--abbrev=40', '--always', '-c',
++ '-r', $sha1, '--', '.');
++
++ while (my $parsed = parse_diff_tree(\@raw_lines)) {
++ push @ci, $parsed;
++ }
+
+- warn "Cannot parse commit info for '$sha1' commit" if !@ci;
++ warn "Cannot parse commit info for '$sha1' commit" if !@ci;
++ };
+
+ return wantarray ? @ci : $ci[0];
+ }
+@@ -474,7 +478,10 @@ sub rcs_update () {
+ # Update working directory.
+
+ if (length $config{gitorigin_branch}) {
+- run_or_cry('git', 'pull', '--prune', $config{gitorigin_branch});
++ run_or_cry('git', 'fetch', '--prune', $config{gitorigin_branch});
++ if (run_or_non('git', 'show-ref', '--quiet', '--verify', '--', 'refs/remotes/' . $config{gitorigin_branch} . '/' . $config{gitmaster_branch}) ) {
++ run_or_cry('git', 'merge', $config{gitorigin_branch} . '/' . $config{gitmaster_branch});
++ }
+ }
+ }
+
+@@ -559,7 +566,7 @@ sub rcs_commit_helper (@) {
+ # So we should ignore its exit status (hence run_or_non).
+ if (run_or_non('git', 'commit', '-m', $params{message}, '-q', @opts)) {
+ if (length $config{gitorigin_branch}) {
+- run_or_cry('git', 'push', $config{gitorigin_branch});
++ run_or_cry('git', 'push', $config{gitorigin_branch}, $config{gitmaster_branch});
+ }
+ }
+
+</pre>
diff --git a/doc/todo/html.mdwn b/doc/todo/html.mdwn
index 44f20c876..4f4542be2 100644
--- a/doc/todo/html.mdwn
+++ b/doc/todo/html.mdwn
@@ -1,6 +1,6 @@
Create some nice(r) stylesheets.
Should be doable w/o touching a single line of code, just
-editing the [[wikitemplates]] and/or editing [[style.css]].
+editing the [[templates]] and/or editing [[style.css]].
[[done]] ([[css_market]] ..)
diff --git a/doc/todo/htpasswd_mirror_of_the_userdb.mdwn b/doc/todo/htpasswd_mirror_of_the_userdb.mdwn
new file mode 100644
index 000000000..e4a411780
--- /dev/null
+++ b/doc/todo/htpasswd_mirror_of_the_userdb.mdwn
@@ -0,0 +1,29 @@
+[[!tag wishlist]]
+
+Ikiwiki is static, so access control for viewing the wiki must be
+implemented on the web server side. Managing wiki users and access
+together, we can currently
+
+* use [[httpauth|plugins/httpauth/]], but some [[passwordauth|plugins/passwordauth]] functionality [[is missing|todo/httpauth_feature_parity_with_passwordauth/]];
+* use [[passwordauth|plugins/passwordauth]] plus [[an Apache `mod_perl` authentication mechanism|plugins/passwordauth/discussion/]], but this is Apache-centric and enabling `mod_perl` just for auth seems overkill.
+
+Moreover, when ikiwiki is just a part of a wider web project, we may want
+to use the same userdb for the other parts of this project.
+
+I think an ikiwiki plugin which would (re)generate an htpasswd version of
+the user/passwd base (better, two htpasswd files, one with only the wiki
+admins and one with everyone) each time a user is added or modified would
+solve this problem:
+
+* access control can be managed from the web server
+* user management is handled by the passwordauth plugin
+* htpasswd format is understood by various servers (Apache, lighttpd, nginx, ...) and languages commonly used for web development (perl, python, ruby)
+* htpasswd files can be mirrored on other machines when the web site is distributed
+
+-- [[nil]]
+
+> I think this is a good idea. Although unless the password hashes that
+> are stored in the userdb are compatible with htpasswd hashes,
+> the htpasswd hashes will need to be stored in the userdb too. Then
+> any userdb change can just regenerate the htpasswd file, dumping out
+> the right kind of hashes. --[[Joey]]
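+
+A sketch of the regeneration step, assuming a compatible hash is kept in
+each user's record under a hypothetical `htpasswd` field:
+
+    require IkiWiki::UserInfo;
+
+    sub dump_htpasswd ($) {
+        my $file=shift;
+        my $userinfo=IkiWiki::userinfo_retrieve();
+        my @lines;
+        foreach my $user (sort keys %$userinfo) {
+            my $hash=$userinfo->{$user}->{htpasswd};
+            push @lines, "$user:$hash" if defined $hash;
+        }
+        writefile(IkiWiki::basename($file), IkiWiki::dirname($file),
+            join("\n", @lines)."\n");
+    }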
diff --git a/doc/todo/http_bl_support.mdwn b/doc/todo/http_bl_support.mdwn
new file mode 100644
index 000000000..f7a46ee6c
--- /dev/null
+++ b/doc/todo/http_bl_support.mdwn
@@ -0,0 +1,67 @@
+[Project Honeypot](http://projecthoneypot.org/) has an HTTP:BL API available to subscribed people/orgs (it's free; they accept donations). There's a basic perl package someone wrote; I'm including a copy here.
+
+[from here](http://projecthoneypot.org/board/read.php?f=10&i=112&t=112)
+
+> The [[plugins/blogspam]] service already checks urls against
+> the surbl, and has its own IP blacklist. The best way to
+> support the HTTP:BL may be to add a plugin
+> [there](http://blogspam.repository.steve.org.uk/file/cc858e497cae/server/plugins/).
+> --[[Joey]]
+
+<pre>
+package Honeypot;
+
+use Socket qw/inet_ntoa/;
+
+my $dns = 'dnsbl.httpbl.org';
+my %types = (
+    0 => 'Search Engine',   # note: bit 0 can never match a bitwise test
+    1 => 'Suspicious',
+    2 => 'Harvester',
+    4 => 'Comment Spammer',
+);
+
+sub query {
+    my $key = shift || die 'You need a key for this, you get one at http://www.projecthoneypot.org';
+    my $ip = shift || do {
+        warn 'no IP for request in Honeypot::query().';
+        return;
+    };
+
+    # The lookup name is key.reversed-ip.dnsbl.httpbl.org.
+    my @parts = reverse split /\./, $ip;
+    my $lookup_name = join '.', $key, @parts, $dns;
+
+    my $answer = gethostbyname($lookup_name);
+    return unless $answer;
+    $answer = inet_ntoa($answer);
+    # The answer encodes days-since-last-activity, threat score and
+    # visitor type in its last three octets.
+    my (undef, $days, $threat, $type) = split /\./, $answer;
+    my @types;
+    while (my ($bit, $typename) = each %types) {
+        push @types, $typename if $bit & $type;
+    }
+    return {
+        days => $days,
+        threat => $threat,
+        type => join ',', @types,
+    };
+}
+
+1;
+</pre>
+
+From the page:
+
+> The usage is simple:
+
+> use Honeypot;
+> my $key = 'XXXXXXX'; # your key
+> my $ip = '....'; # the IP you want to check
+> my $q = Honeypot::query($key, $ip);
+
+> use Data::Dumper;
+> print Dumper $q;
+
+Any chance of having this as a plugin?
+
+I could give it a go, too. Would be fun to try my hand at Perl. --[[simonraven]]
+
+[[!tag wishlist]]
diff --git a/doc/todo/inline_raw_files.mdwn b/doc/todo/inline_raw_files.mdwn
new file mode 100644
index 000000000..8228186f9
--- /dev/null
+++ b/doc/todo/inline_raw_files.mdwn
@@ -0,0 +1,115 @@
+[[!template id=gitbranch branch=wtk/raw_inline author="[[wtk]]"]]
+
+summary
+=======
+
+Extend inlining to handle raw files (files with unrecognized extensions).
+
+Also raise an error in `IkiWiki::pagetype($file)` if `$file` is blank, which avoids trying to do much with missing files, etc.
+
+I'm using the new code in my [blog][].
+
+[blog]: http://www.physics.drexel.edu/~wking/unfolding-disasters/posts/yacc2dot/
+
+usage
+=====
+
+ \[[!inline pagenames="somefile.txt" template="raw" feeds="no"]]
+
+
+> But inline already supports raw files in two ways:
+>
+> * setting raw=yes will cause a page to be inlined raw without
+> using any template, as if it were part of the page at the location
+> of the inline
+> * otherwise, the file becomes an enclosure in the rss feed, for use with
+> podcasting.
+>
+> So I don't see the point of your patch. Although since your text
+> editor seems to like to make lots of whitespace changes, it's possible
+> I missed something in the large quantity of noise introduced by it.
+> --[[Joey]]
+
+>> As I understand it, setting `raw=yes` causes the page to be inlined
+>> as if the page contents had appeared in place of the directive. The
+>> content is then processed by whatever `htmlize()` applies to the
+>> inlining page. I want the inlined page to be unprocessed, and
+>> wrapped in `<pre><code>...</code></pre>` (as they are on the blog
+>> post I link to above).
+>>
+>> Enclosures do not include the page contents at all, just a link to
+>> them. I'm trying to inline the content so I can comment on it from
+>> the inlining page.
+>>
+>> Apologies for my cluttered version history, I should have branched my
+>> earlier changes off to make things clearer. I tried to isolate my
+>> whitespace changes (fixes?) in c9ae012d245154c3374d155958fcb0b60fda57ce.
+>> 157389355d01224b2d3c3f6e4c1eb42a20ec8a90 should hold all the content
+>> changes.
+>>
+>> A list of other things globbed into my master branch that should have
+>> been separate branches:
+>>
+>> * Make it easy to select a Markdown executable for mdwn.pm.
+>> * Included an updated form of
+>> [[Javier Rojas' linktoimgonly.pm|forum/link_to_an_image_inside_the_wiki_without_inlining_it]].
+>> * Included an updated form of
+>> [Jason Blevins' mdwn_itex.pm](http://jblevins.org/git/ikiwiki/plugins.git/plain/mdwn_itex.pm).
+>> * Assorted minor documentation changes.
+>>
+>> --[[wtk]]
+
+>>> I haven't heard anything in a while, so I've reorganized my version
+>>> history and rebased it on the current ikiwiki head. Perhaps now it
+>>> will be easier to merge or reject. Note the new branch name:
+>>> `raw_inline`. I'll open separate todo items for items mentioned in my
+>>> previous comment. --[[wtk]]
+
+----
+
+Reviewing your patch the first thing I see is this:
+
+<pre>
++ if (! $file) {
++ error("Missing file.");
++ }
+</pre>
+
+This fails if the filename is "0". Also, `pagetype()`
+currently cannot fail; allowing it to crash the entire
+wiki build if the filename is somehow undefined seems
+unwise.
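+
+(A check that only rejects undefined or empty filenames would at least
+sidestep the "0" problem, something like:
+
+<pre>
+	# "0" is a valid filename; only undef/"" are really missing
+	if (! defined $file || ! length $file) {
+		error("Missing file.");
+	}
+</pre>
+
+though per the above, `pagetype()` arguably should not fail at all.)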
+
+I didn't look much further, because it seems to me what you're trying to do
+can be better accomplished by using the highlight plugin. Assuming the raw
+file you want to inline and comment on is some source-code-like thing,
+which seems likely.
+
+Or, another way to do it would be to use the templates plugin, and make
+a template there that puts an inline directive inside pre tags.
+ --[[Joey]] [[!tag reviewed]]
+
+----
+
+If `pagetype()` cannot fail, then I suppose that check has to go ;).
+
+I was under the impression that [[plugins/highlight]] didn't support
+inlining code. It looks like it supports highlighting stand-alone
+files or embedded code. Perhaps I should extend it to support inlined
+code instead of pushing this patch?
+
+> If you configure highlight to support standalone files, then you can
+> inline the resulting pages and get nicely highlighted source code
+> inlined into the page. --[[Joey]]
+
+The `raw.tmpl` included in the patch *does* include the inlined
+content inside `pre` tags. The problem is that the current inline
+code insists on running `htmlize()` on the content before inserting it
+in the template. The heart of my patch is an altered
+`get_inline_content()` that makes the `htmlize()` call dependent on a
+`$read_raw` flag. If the flag is set, the raw (non-htmlized) content
+is used instead.
+
+I just rebased my patches against the current Ikiwiki trunk (no major
+changes) to make them easier to review.
+ --[[wtk]]
diff --git a/doc/todo/latex.mdwn b/doc/todo/latex.mdwn
index 4363003c1..fb273c1ab 100644
--- a/doc/todo/latex.mdwn
+++ b/doc/todo/latex.mdwn
@@ -9,6 +9,8 @@ of the ikiwiki [[/logo]].
> [[users/JasonBlevins]] has also a plugin for including [[LaTeX]] expressions (by means of `itex2MML`) -- [[plugins/mdwn_itex]] (look at his page for the link). --Ivan Z.
+>> I've [[updated|mdwn_itex]] Jason's plugin for ikiwiki 3.x. --[[wtk]]
+
----
ikiwiki could also support LaTeX as a document type, again rendering to HTML.
@@ -227,5 +229,12 @@ Ah yes.. sorry forgot to update the plugin in my public_html folder %-). This wa
>
> --[[Joey]]
+-----
+
+I'm using a [plugin](http://metameso.org/~joe/math/tex.pm) created by [Josef Urban](http://www.cs.ru.nl/~urban) that gets LaTeX into ikiwiki by using [LaTeXML](http://dlmf.nist.gov/LaTeXML). This could well be "the right way" to go (long term), but the plugin still does not render math expressions right, because ikiwiki is filtering out requisite header information. Examples (I recommend you use Firefox to view these!) are available [here](http://metameso.org/aa/math/) and [here](http://metameso.org/aa/simple/). Compare that last example to the [file generated by LaTeXML](http://metameso.org/~joe/math/math.xml). I posted the sources [here](http://metameso.org/aa/sources/) for easy perusal. How can I get ikiwiki to keep the original DOCTYPE and html fields? I could use some help getting this polished off. --[[jcorneli]]
+
+> Update: it seems important to force the browser to treat the content as XML; e.g. [http://metameso.org/~joe/math/example.xml](http://metameso.org/~joe/math/example.xml) has the same source code as [http://metameso.org/~joe/math/example.html](http://metameso.org/~joe/math/example.html), and the former shows math working but the latter doesn't. --[[jcorneli]]
+
+
[[!tag soc]]
[[!tag wishlist]]
diff --git a/doc/todo/link_plugin_perhaps_too_general__63__.mdwn b/doc/todo/link_plugin_perhaps_too_general__63__.mdwn
new file mode 100644
index 000000000..8a5fd50eb
--- /dev/null
+++ b/doc/todo/link_plugin_perhaps_too_general__63__.mdwn
@@ -0,0 +1,25 @@
+[[!tag wishlist blue-sky]]
+(This isn't important to me - I don't use MediaWiki or Creole syntax myself -
+but just thinking out loud...)
+
+The [[ikiwiki/wikilink]] syntax IkiWiki uses sometimes conflicts with page
+languages' syntax (notably, [[plugins/contrib/MediaWiki]] and [[plugins/Creole]]
+want their wikilinks the other way round, like
+`\[[plugins/write|how to write a plugin]]`). It would be nice if there was
+some way for page language plugins to opt in/out of the normal wiki link
+processing - then MediaWiki and Creole could have their own `linkify` hook
+that was only active for *their* page types, and used the appropriate
+syntax.
+
+In [[todo/matching_different_kinds_of_links]] I wondered about adding a
+`\[[!typedlink to="foo" type="bar"]]` directive. This made me wonder whether
+a core `\[[!link]]` directive would be useful; this could be a fallback for
+page types where a normal wikilink can't be done for whatever reason, and
+could also provide extension points more easily than WikiLinks' special
+syntax with extra punctuation, which doesn't really scale?
+
+Straw-man:
+
+ \[[!link to="ikiwiki/wikilink" desc="WikiLinks"]]
+
+--[[smcv]]
diff --git a/doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn b/doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn
new file mode 100644
index 000000000..2b2b0242e
--- /dev/null
+++ b/doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn
@@ -0,0 +1,11 @@
+One feature of MediaWiki which I quite like is the ability to mark a change as 'minor', or 'trivial'. This can then be used to filter the 'recentchanges' page, to show only substantial edits.
+
+The utility of this depends entirely on whether the editors use it properly.
+
+I currently use an inline on the front page of my personal homepage to show the most recent pages (by creation date) within a subsection of my site (a blog). Blog posts are rarely modified much after they are 'created' (or published; I bodge the creation time via meta when I publish a post, since it might sit in draft form indefinitely), so this effectively shows only non-trivial changes.
+
+I would like to have a short list of the most recent modifications to the site on the front page. I therefore want to sort by modified time rather than creation time, but exclude edits that I self-identify as minor. I also only want to take a small number of items, the top 5, and display only their titles (which may be derived from the filename, or set via meta again).
+
+I'm still thinking through how this might be achieved in an ikiwiki-suitable fashion, but I think I need a scheme to identify certain edits as trivial. This would have to work via web edits (easier: a check box could be added to the edit form) and plain changes in the VCS (harder: scan for keywords in a commit message, in a VCS-agnostic fashion? One such convention is sketched below.)
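+
+For the commit-message route, one VCS-agnostic convention might be a
+keyword in the message itself, e.g. (sketch; the convention and helper
+name are invented):
+
+	# treat commits whose message contains "[minor]" or "[trivial]"
+	# (any case) as trivial changes
+	sub is_trivial_change {
+		my $message = shift;
+		return $message =~ /\[(?:minor|trivial)\]/i;
+	}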
+
+[[!tag wishlist]]
diff --git a/doc/todo/matching_different_kinds_of_links.mdwn b/doc/todo/matching_different_kinds_of_links.mdwn
index 26c5a072b..da3ea49f6 100644
--- a/doc/todo/matching_different_kinds_of_links.mdwn
+++ b/doc/todo/matching_different_kinds_of_links.mdwn
@@ -36,6 +36,11 @@ Besides pagespecs, the `rel=` attribute could be used for styles. --Ivan Z.
> normal links.) Might be better to go ahead and add the variable to
> core though. --[[Joey]]
+>> I've implemented this with the data structure you suggested, except that
+>> I called it `%typedlinks` instead of `%linktype` (it seemed to make more
+>> sense that way). I also ported `tag` to it, and added a `tagged_is_strict`
+>> config option. See below! --[[smcv]]
+
I saw somewhere else here some suggestions for the wiki-syntax for specifying the relation name of a link. One more suggestion---[the syntax used in Semantic MediaWiki](http://en.wikipedia.org/wiki/Semantic_MediaWiki#Basic_usage), like this:
<pre>
@@ -45,3 +50,147 @@ I saw somewhere else here some suggestions for the wiki-syntax for specifying th
So a part of the effect of [[`\[[!taglink TAG\]\]`|plugins/tag]] could be represented as something like `\[[tag::TAG]]` or (more understandable relation name in what concerns the direction) `\[[tagged::TAG]]`.
I don't have any opinion on this syntax (whether it's good or not)...--Ivan Z.
+
+-------
+
+>> [[!template id=gitbranch author="[[Simon_McVittie|smcv]]" branch=smcv/ready/link-types]]
+>> [[!tag patch]]
+
+## Documentation for smcv's branch
+
+### added to [[ikiwiki/pagespec]]
+
+* "`typedlink(type glob)`" - matches pages that link to a given page (or glob)
+ with a given link type. Plugins can create links with a specific type:
+ for instance, the tag plugin creates links of type `tag`.
+
+### added to [[plugins/tag]]
+
+If the `tagged_is_strict` config option is set, `tagged()` will only match
+tags explicitly set with [[ikiwiki/directive/tag]] or
+[[ikiwiki/directive/taglink]]; if not (the default), it will also match
+any other [[WikiLinks|ikiwiki/WikiLink]] to the tag page.
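+
+For example, with the YAML setup format this is just a boolean:
+
+	tagged_is_strict: 1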
+
+### added to [[plugins/write]]
+
+#### `%typedlinks`
+
+The `%typedlinks` hash records links of specific types. Do not modify this
+hash directly; call `add_link()`. The keys are page names, and the values
+are hash references. In each page's hash reference, the keys are link types
+defined by plugins, and the values are hash references with link targets
+as keys, and 1 as a dummy value, something like this:
+
+ $typedlinks{"foo"} = {
+ tag => { short_word => 1, metasyntactic_variable => 1 },
+ next_page => { bar => 1 },
+ };
+
+Ordinary [[WikiLinks|ikiwiki/WikiLink]] appear in `%links`, but not in
+`%typedlinks`.
+
+#### `add_link($$;$)`
+
+This adds a link to `%links`, ensuring that duplicate links are not
+added. Pass it the page that contains the link, and the link text.
+
+An optional third parameter sets the link type (`undef` produces an ordinary
+[[ikiwiki/WikiLink]]).
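+
+For illustration, a plugin recording a typed link might call it like
+this (the page and link names are made up):
+
+	# recorded in %links, and in $typedlinks{"foo"}{"tag"}{"bar"}
+	add_link("foo", "bar", "tag");
+	# an ordinary WikiLink, recorded in %links only
+	add_link("foo", "baz");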
+
+## Review
+
+Some code refers to `oldtypedlinks`, and other code to `oldlinktypes`. --[[Joey]]
+
+> Oops, I'll fix that. That must mean missing test coverage, too :-(
+> --s
+
+>> A test suite for the dependency resolver *would* be nice. --[[Joey]]
+
+>>> Bug fixed, I think. A test suite for the dependency resolver seems
+>>> more ambitious than I want to get into right now, but I added a
+>>> unit test for this part of it... --s
+
+I'm curious what your reasoning was for adding a new variable
+rather than using `pagestate`. Was it only because you needed
+the `old` version to detect change, or was there other complexity?
+--J
+
+> You seemed to be more in favour of adding it to the core in
+> your proposal above, so I assumed that'd be more likely to be
+> accepted :-) I don't mind one way or the other - `%typedlinks`
+> costs one core variable, but saves one level of hash nesting. If
+> you're not sure either, then I think the decision should come down
+> to which one is easier to document clearly - I'm still unhappy with
+> my docs for `%typedlinks`, so I'll try to write docs for it as
+> `pagestate` and see if they work any better. --s
+
+>> On reflection, I don't think it's any better as a pagestate, and
+>> the contents of pagestates (so far) aren't documented for other
+>> plugins' consumption, so I'm inclined to leave it as-is, unless
+>> you want to veto that. Loose rationale: it needs special handling
+>> in the core to be a dependency type (I re-used the existing link
+>> type), it's API beyond a single plugin, and it's really part of
+>> the core parallel to pagestate rather than being tied to a
+>> specific plugin. Also, I'd need to special-case it to have
+>> ikiwiki not delete it from the index, unless I introduced a
+>> dummy typedlinks plugin (or just hook) that did nothing... --s
+
+I have not convinced myself this is a real problem, but..
+If a page has a typed link, there seems to be no way to tell
+if it also has a separate, regular link. `add_link` will add
+to `@links` when adding a typed, or untyped link. If only untyped
+links were recorded there, one could tell the difference. But then
+typed links would not show up at all in eg, a linkmap,
+unless it was changed to check for typed links too.
+(Or, regular links could be recorded in typedlinks too,
+with an empty type. (Bloaty.)) --J
+
+> I think I like the semantics as-is - I can't think of any
+> reason why you'd want to ask the question "does A link to B,
+> not counting tags and other typed links?". A typed link is
+> still a link, in my mind at least. --s
+
+>> Me neither, let's not worry about it. --[[Joey]]
+
+I suspect we could get away without having `tagged_is_strict`
+without too much transitional trouble. --[[Joey]]
+
+> If you think so, I can delete about 5 LoC. I don't particularly
+> care either way; [[Jon]] expressed concern about people relying
+> on the current semantics, on one of the pages requesting this
+> change. --s
+
+>> Removed in a newer version of the branch. --s
+
+I might have been wrong to introduce `typedlink(tag foo)`. It's not
+very user-friendly, and is more useful as a backend for other plugins
+than as a feature in its own right - any plugin introducing a link
+type will probably also want to have its own preprocessor directive
+to set that link type, and its own pagespec function to match it.
+I wonder whether to make a `typedlink` plugin that has the typedlink
+pagespec match function and a new `\[[!typedlink to="foo" type="bar"]]`
+directive, though... --[[smcv]]
+
+> I agree, per-type matchers are more friendly and I'm not enamored of the
+> multi-parameter pagespec syntax. --[[Joey]]
+
+>> Removed in a newer version of the branch. I re-introduced it as a
+>> plugin in `smcv/typedlink`, but I don't think we really need it. --s
+
+----
+
+I am ready to merge this, but I noticed one problem -- since `match_tagged`
+now only matches pages with the tag linktype, a wiki will need to be
+rebuilt on upgrade in order to get the linktype of existing tags in it
+recorded. So there needs to be a NEWS item about this and
+the postinst modified to force the rebuild.
+
+> Done, although you'll need to plug in an appropriate version number when
+> you release it. Is there a distinctive reminder string you grep for
+> during releases? I've used `UNRELEASED` for now. --[[smcv]]
+
+Also, the ready branch adds `typedlink()` to [[ikiwiki/pagespec]],
+but you removed that feature as documented above.
+--[[Joey]]
+
+> [[Done]]. --s
diff --git a/doc/todo/mdwn_itex.mdwn b/doc/todo/mdwn_itex.mdwn
new file mode 100644
index 000000000..3e304fa76
--- /dev/null
+++ b/doc/todo/mdwn_itex.mdwn
@@ -0,0 +1,22 @@
+[[!template id=gitbranch branch=wtk/mdwn_itex author="[[wtk]]"]]
+
+summary
+=======
+
+Extend the [[plugins/mdwn]] plugin to support [itex][] using Jacques
+Distler's [itex2MML][].
+
+notes
+=====
+
+This is an updated form of [[users/JasonBlevins]]' plugin. You can
+see the plugin [in action][example] on my blog. The blog post lists a
+few additional changes you may need to make to use the plugin,
+including changing your page template to a MathML-friendly doctype and
+disabling plugins like [[plugins/htmlscrubber]] and
+[[plugins/htmltidy]] which would otherwise strip out the generated
+MathML.
+
+[itex]: http://golem.ph.utexas.edu/~distler/blog/itex2MMLcommands.html
+[itex2MML]: http://golem.ph.utexas.edu/~distler/blog/itex2MML.html
+[example]: http://www.physics.drexel.edu/~wking/unfolding-disasters/posts/mdwn_itex/
diff --git a/doc/todo/mercurial.mdwn b/doc/todo/mercurial.mdwn
index e71c8106a..de1f148e5 100644
--- a/doc/todo/mercurial.mdwn
+++ b/doc/todo/mercurial.mdwn
@@ -119,3 +119,11 @@ I have a few notes on mercurial usage after trying it out for a while:
>> I think the ideal solution would be to build `$destdir/recentchanges/*` directly from the output of `hg log`. --[[buo]]
>>>> That would be 100 times as slow, so I chose not to do that. --[[Joey]]
+
+>>>> Since this is confusing people, allow me to clarify: Ikiwiki's
+>>>> recentchanges generation pulls log information directly out of the VCS as
+>>>> needed. It caches it in recentchanges/* in the `srcdir`. These cache
+>>>> files need not be preserved, should never be checked into VCS, and if
+>>>> you want to you can configure your VCS ignore file to ignore them,
+>>>> just as you can configure it to ignore the `.ikiwiki` directory in the
+>>>> `srcdir`. --[[Joey]]
diff --git a/doc/todo/mirrorlist_with_per-mirror_usedirs_settings.mdwn b/doc/todo/mirrorlist_with_per-mirror_usedirs_settings.mdwn
index 6ca9962ba..baad063ef 100644
--- a/doc/todo/mirrorlist_with_per-mirror_usedirs_settings.mdwn
+++ b/doc/todo/mirrorlist_with_per-mirror_usedirs_settings.mdwn
@@ -21,4 +21,76 @@ and decided this time it was really needed to implement this feature.
--[[intrigeri]]
+> Ping. --[[intrigeri]]
+
[[!tag patch]]
+
+>> (I'm not an ikiwiki committer, opinions may vary.)
+>>
+>>> In my opinion, you're an ikiwiki committer! --[[Joey]]
+>>
+>> This would be easier to review if there weren't a million merges from
+>> master; perhaps either leave a branch as-is, or rebase it, or merge
+>> only at "significant" times like after a release?
+>>
+>> I believe Joey's main objection to complex $config entries is that
+>> it's not at all clear what [[plugins/websetup]] would do with them.
+>> Would something like this make a reasonable alternative?
+>>
+>> $config{mirrorlist} = ["nousedirs|file:///home/intrigeri/wiki",
+>> "usedirs|http://example.com/wiki", "http://example.net"];
+>>
+>> From how I understand tainting, this:
+>>
+>> $untainted{$_} = possibly_foolish_untaint($tainted->{$_})
+>>
+>> probably needs to untaint the key too:
+>>
+>> my $key = possibly_foolish_untaint($_);
+>>     $untainted{$key} = possibly_foolish_untaint($tainted->{$key});
+>>
+>> --[[smcv]]
+
+>>> You are fully right about the complex `$config` entries. I'll
+>>> convert this to use what you are suggesting, i.e. what we ended up
+>>> choosing for the `po_slave_languages` setting.
+>>>
+>>> About the merges in this branch: Joey told me once he did not care
+>>> about this; moreover the `--no-merges` git log option makes it
+>>> easy to filter these out. I'll try merging tagged releases only in
+>>> the future, though.
+>>>
+>>> --[[intrigeri]]
+
+>>>> FWIW, I don't care about merge commits etc because I review
+>>>> `git diff ...intrigeri/mirrorlist` -- and if I want to dig deeper
+>>>> into the why of some code, I'll probably checkout the branch and
+>>>> use git blame.
+>>>>
+>>>> I agree with what smcv said, my other concern though is that
+>>>> this is such an edge case, that supporting it just adds clutter.
+>>>> Have to wonder if it wouldn't perhaps be better to do something
+>>>> using the goto plugin and cgiurl, so that the mirror doesn't have
+>>>> to know about the configuration of the other mirror. --[[Joey]]
+
+>>>>> I have implemented something using the cgi + goto in my (history
+>>>>> rewrite warning) mirrorlist branch. Please review, please pull.
+>>>>> --[[intrigeri]]
+
+>>>>>> Ping? I've merged 3.20110321 in my `mirrorlist` branch and
+>>>>>> checked it still works properly. --[[intrigeri]]
+
+>>>>> Concerning goto/cgiurl, what about having that as the default in
+>>>>> mirrorlist, but keeping ``nousedirs|file:///home/intrigeri/wiki`` and
+>>>>> ``usedirs|http://example.com/wiki`` valid for CGI-less cases?
+>>>>> That would keep the typical installation clutter-free,
+>>>>> while also supporting more individual setups.
+>>>>> --[[chrysn]]
+
+>>>>>> I would not mind. On the other hand Joey was concerned about
+>>>>>> cluttering the code to support edge cases, which I fully
+>>>>>> understand. The case you (chrysn) are describing being even
+>>>>>> more specific than the one I was initially talking of, I think
+>>>>>> this should not block the merge of the branch I have been
+>>>>>> proposing. Support for the usecase you are suggesting can
+>>>>>> always be added later if needed. --[[intrigeri]]
diff --git a/doc/todo/more_flexible_inline_postform.mdwn b/doc/todo/more_flexible_inline_postform.mdwn
index bc8bc0809..414476bd7 100644
--- a/doc/todo/more_flexible_inline_postform.mdwn
+++ b/doc/todo/more_flexible_inline_postform.mdwn
@@ -16,3 +16,8 @@ logical first step towards doing comment-like things with inlined pages).
> Perhaps what we need is a `postform` plugin/directive that inline depends
> on (automatically enables); its preprocess method could automatically be
> invoked from preprocess_inline when needed. --[[smcv]]
+
+>> I've been looking at this stuff again. I think you are right, this would
+>> be the right approach. The comments plugin could use it similarly, allowing
+>> sites which desire it to have an inline comment submission form on all
+>> pages with comments enabled. I'm going to take a look. -- [[Jon]]
diff --git a/doc/todo/multiple_template_directories.mdwn b/doc/todo/multiple_template_directories.mdwn
index c09a9595f..6a474b4f3 100644
--- a/doc/todo/multiple_template_directories.mdwn
+++ b/doc/todo/multiple_template_directories.mdwn
@@ -11,3 +11,63 @@ ought to do the trick.
> global dir when it cannot find a template. For me, this is good enough.
> And it is even documented in the man page. Sigh. I guess this could be
> considered [[done]].
+
+I have a use case for this: a site composed of blogs and wikis, with templates divided into three categories: common, blog, and wiki. The only solution I found was maintaining hard links; being able to have multiple template dirs would obviously be better. -- Changaco
+
+> [[plugins/underlay]] used to allow adding extra templatedirs, but Joey
+> removed that functionality when he made templates search the wiki's
+> own `templates` directory.
+>
+> You can get a 3-level hierarchy like this:
+>
+> * instance-specific overrides: $srcdir/templates
+> * common to the entire site: a directory that is the value of all
+> instances' `templatedir` parameters
+> * common to every ikiwiki in the world: /usr/share/ikiwiki/templates
+> (implicitly searched)
+>
+> (by "instance" I mean an instance of ikiwiki - a .setup file, basically.)
+>
+> For a more complex hierarchy you'd need the old [[plugins/underlay]]
+> functionality, i.e. you'd need to (ask Joey to) revert the patch that
+> removed it. For instance, if anyone has a hierarchy like this, then
+> they need the old functionality back in order to split the template
+> search path for the things marked `(???)`:
+>
+> every ikiwiki in the world (/usr/share/ikiwiki/templates)
+> \--- your site (???)
+> \--- your blogs (???)
+> \--- travel blog ($srcdir/templates)
+> \--- code blog ($srcdir/templates)
+> \--- your wikis (???)
+> \--- travel wiki ($srcdir/templates)
+> \--- code wiki ($srcdir/templates)
+>
+> This looks pretty hypothetical to me, though...
+> --[[smcv]]
+
+>> The reason I removed it is because the same functionality of having
+>> multiple template directories is still present. Just put them in
+>> the templates/ subdirectory of multiple underlay directories instead.
+>> --[[Joey]]
+
+>>>Thanks, I didn't realize this was possible. Problem solved. -- Changaco
+
+>>>> We can consider this [[done]], then. For reference, the solution
+>>>> to the hierarchy I mentioned above would be:
+>>>>
+>>>> all your sites have $your_underlay as an underlay
+>>>>
+>>>> the blogs and wikis all have $blog_underlay or $wiki_underlay
+>>>> (as appropriate) as a higher priority underlay
+>>>>
+>>>> every ikiwiki in the world (/usr/share/ikiwiki/templates)
+>>>> \--- your site ($your_underlay/templates, or templatedir)
+>>>> \--- your blogs ($blog_underlay/templates)
+>>>> \--- travel blog ($srcdir/templates)
+>>>> \--- code blog ($srcdir/templates)
+>>>> \--- your wikis ($wiki_underlay/templates)
+>>>> \--- travel wiki ($srcdir/templates)
+>>>> \--- code wiki ($srcdir/templates)
+>>>>
+>>>> --[[smcv]]
diff --git a/doc/todo/multiple_templates.mdwn b/doc/todo/multiple_templates.mdwn
index 72783c556..30fb8d6ee 100644
--- a/doc/todo/multiple_templates.mdwn
+++ b/doc/todo/multiple_templates.mdwn
@@ -1,4 +1,4 @@
-> Another useful feature might be to be able to choose a different [[template|wikitemplates]]
+> Another useful feature might be to be able to choose a different [[template|templates]]
> file for some pages; [[blog]] pages would use a template different from the
> home page, even if both are managed in the same repository, etc.
diff --git a/doc/todo/nested_preprocessor_directives.mdwn b/doc/todo/nested_preprocessor_directives.mdwn
index b5080dc3c..4a2795e30 100644
--- a/doc/todo/nested_preprocessor_directives.mdwn
+++ b/doc/todo/nested_preprocessor_directives.mdwn
@@ -16,3 +16,50 @@ nesting, a new syntax would be needed. Maybe something xml-like?
> """]]
>
> --[[JoshTriplett]]
+
+>> Yes it's definitely possible to do something like that. I'm not 100%
+>> sure if it can be done in perl regexp or needs a real recursive descent
+>> parser though.
+>>
+>> In the meantime, this is an interesting approach:
+>> <https://github.com/timo/ikiwiki/commit/410bbaf141036164f92009599ae12790b1530886>
+>> (the link has since been fixed twice)
+>>
+>> \[[!directive text=<<FOO
+>> ...
+>> FOO]]
+>>
+>> Since that's implemented, I will probably just merge it,
+>> once I satisfy myself it doesn't blow up in any edge cases.
+>> (It also adds triple single quotes as a third, distinct type of quotes,
+>> which feels a bit redundant given the here docs.) --[[Joey]]
+>>
+>> Hmm, that patch changes a `m///sgx` to a `m///msgx`. Meaning
+>> that any '^' or '$' inside the regexp will change behavior from matching
+>> the start/end of string to matching the start/end of individual lines
+>> within the string. And there is one legacy '$' which must then
+>> change behavior; the "delimiter to next param".
+>>
+>> So, I'm not sure what behavior that will cause, but I suspect it will
+>> be a bug. Unless the `\s+|$` already stops matching at a newline within
+>> the string like it's whitespace. That needs more analysis.
+>> Update: seems it does, I'm fairly satisfied that is not a bug.
+>>
+>> Also, the patch seems incomplete, only patching the first regexp
+>> but not the other two in the same function, which also are quoting-aware. --[[Joey]]
+>>
+>> Yes, I'm terribly sorry. I actually did edit the other two regexps, but
+>> I apparently missed copying it over as well. Should have been doing this
+>> in a git repo all along. Look at the new commit I put atop it that has
+>> the rest as well:
+>> (redacted: is now part of the commit linked to from above)
+>> Also: I'm not sure any more, why I added the m modifier. It was very
+>> late at night and I was getting a bit desperate (turned out, the next
+>> morning, I put my extra regexes after the "unquoted value" one. heh.)
+>> So, feel free to fix that. --Timo
+>>
+>> I've fixed the patch by rebasing, fixed the link above. I'm still not
+>> sure if the m modifier for the regex is still needed (apparently I
+>> didn't put it in the other regexes. Not completely sure about the
+>> implications.) Am now trying to wrap my head around a test case to
+>> test the new formats for a bit. --Timo
diff --git a/doc/todo/openid_user_filtering.mdwn b/doc/todo/openid_user_filtering.mdwn
index 8b2d0082e..6a318c4c0 100644
--- a/doc/todo/openid_user_filtering.mdwn
+++ b/doc/todo/openid_user_filtering.mdwn
@@ -7,3 +7,7 @@ So I suggest an ikiwiki configuration like:
users => ["*.webvm.net"],
Would only allow edits from openIDs of that form.
+
+> This kind of thing can be [[done]] now: --[[Joey]]
+>
+> locked_pages => "* and !user(http://*.webvm.net/)"
diff --git a/doc/todo/optional_underlaydir_prefix.mdwn b/doc/todo/optional_underlaydir_prefix.mdwn
new file mode 100644
index 000000000..06900a904
--- /dev/null
+++ b/doc/todo/optional_underlaydir_prefix.mdwn
@@ -0,0 +1,46 @@
+For security reasons, symlinks are disabled in IkiWiki. That's fair enough, but it means that some problems, which one could otherwise solve by using a symlink, cannot be solved. The specific problem in this case is that all underlays are placed at the root of the wiki, when it could be more convenient to place some underlays in specific sub-directories.
+
+Use-case 1 (to keep things tidy):
+
+Currently IkiWiki has some javascript files in `underlays/javascript`; that directory is given as one of the underlay directories. Thus, all the javascript files appear in the root of the generated site. But it would be tidier if one could say "put the contents of *this* underlaydir under the `js` directory".
+
+> Of course, this could be accomplished, if we wanted to, by moving the
+> files to `underlays/javascript/js`. --[[Joey]]
+
+Use-case 2 (a read-only external dir):
+
+Suppose I want to include a subset of `/usr/local/share/docs` on my wiki, say the docs about `foo`. But I want them to be under the `docs/foo` sub-directory on the generated site. Currently I can't do that. If I give `/usr/local/share/docs/foo` as an underlaydir, then the contents of that will be in the root of the site, rather than under `docs/foo`. And if I give `/usr/local/share/docs` as an underlaydir, then the contents of the `foo` dir will be under `foo`, but it will also include every other thing in `/usr/local/share/docs`.
+
+Since we can't use symlinks in an underlay dir to link to these directories, then perhaps one could give a specific underlay dir a specific prefix, which defines the sub-directory that the underlay should appear in.
+
+I'm not sure how this would be implemented, but I guess it could be configured something like this:
+
+ prefixed_underlay => {
+ 'js' => '/usr/local/share/ikiwiki/javascript',
+ 'docs/foo' => '/usr/local/share/docs/foo',
+ }
+
+> So, let me review why symlinks are an issue. For normal, non-underlay
+> pages, users who do not have filesystem access to the server may have
+> commit access, and so could commit eg, a symlink to `/etc/passwd` (or
+> to `/` !). The guards are there to prevent ikiwiki either exposing the
+> symlink target's contents, or potentially overwriting it.
+>
+> Is this a concern for underlays? Most of the time, certainly not;
+> the underlay tends to be something only the site admin controls.
+> Not all the security checks that are done on the srcdir are done
+> on the underlays, either. Most checks done on files in the underlay
+> are only done because the same code handles srcdir files. The one
+> exception is the test that skips processing symlinks in the underlay dir.
+> (But note that the underlay directory can itself be a symlink to elsewhere,
+> which the srcdir, by default, cannot.)
+>
+> So, one way to approach this is to make ikiwiki follow directory symlinks
+> inside the underlay directory. Just a matter of passing `follow => 1` to
+> find. (This would still not allow individual files to be symlinks, because
+> `readfile` does not allow reading symlinks. But I don't see much need
+> for that.) --[[Joey]]
+
+>> If you think that enabling symlinks in underlay directories wouldn't be a security issue, then I'm all for it! That would be much simpler to implement, I'm sure. --[[KathrynAndersen]]
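+
+For reference, the `follow => 1` change Joey describes might look roughly
+like this where ikiwiki walks an underlay (untested sketch; `$underlaydir`
+and the `wanted` body stand in for the real code):
+
+	use File::Find;
+
+	find({
+		no_chdir => 1,
+		# follow directory symlinks inside the underlay, so e.g.
+		# docs/foo can be a symlink to /usr/local/share/docs/foo
+		follow => 1,
+		wanted => sub {
+			my $file = $File::Find::name;
+			# ... existing per-file scanning goes here ...
+		},
+	}, $underlaydir);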
+
+[[!taglink wishlist]]
diff --git a/doc/todo/org_mode.mdwn b/doc/todo/org_mode.mdwn
new file mode 100644
index 000000000..3e9d95376
--- /dev/null
+++ b/doc/todo/org_mode.mdwn
@@ -0,0 +1,24 @@
+[[!template id=gitbranch branch=wtk/org author="[[wtk]]"]]
+
+summary
+=======
+
+Add a plugin for handling files written in [org-mode][].
+
+notes
+=====
+
+This is an updated form of [Manoj Srivastava's plugin][MS]. You can
+see the plugin [in action][example] on my blog.
+
+For reasons discussed in the [[reStructuredText plugin|plugins/rst]],
+wikilinks and other ikiwiki markup that inserts raw HTML can cause
+problems. Org-mode provides a [means for processing raw HTML][raw],
+but Ikiwiki currently (as far as I know) lacks a method to escape
+inserted HTML depending on which plugins will be used during the
+[[htmlize phase|plugins/write#index11h3]].
+
+[org-mode]: http://orgmode.org/
+[MS]: http://www.golden-gryphon.com/blog/manoj/blog/2008/06/08/Using_org-mode_with_Ikiwiki/
+[example]: http://www.physics.drexel.edu/~wking/unfolding-disasters/posts/Git/notes/
+[raw]: http://orgmode.org/manual/Quoting-HTML-tags.html
diff --git a/doc/todo/pagespec_aliases.mdwn b/doc/todo/pagespec_aliases.mdwn
new file mode 100644
index 000000000..2db53d545
--- /dev/null
+++ b/doc/todo/pagespec_aliases.mdwn
@@ -0,0 +1,93 @@
+[[!tag patch wishlist]]
+
+I quite often find myself repeating a boiler-plate
+pagespec chunk, e.g.
+
+ and !*.png and !*.jpg...
+
+it would be quite nice if I could conveniently bundle them together into a
+pagespec "alias", and instead write
+
+ and !image()...
+
+I wrote the following plugin to achieve this:
+
+ commit f3a9dd113338fe5d2b717de1dc69679ff74e2f8d
+ Author: Jon Dowland <jmtd@debian.org>
+ Date: Tue May 3 17:40:16 2011 +0100
+
+ new plugin: alias.pm - pagespec aliases
+
+ diff --git a/IkiWiki/Plugin/alias.pm b/IkiWiki/Plugin/alias.pm
+ new file mode 100644
+ index 0000000..b8d4574
+ --- /dev/null
+ +++ b/IkiWiki/Plugin/alias.pm
+ @@ -0,0 +1,47 @@
+ +package IkiWiki::Plugin::alias;
+ +
+ +use warnings;
+ +use strict;
+ +use IkiWiki '3.00';
+ +
+ +sub import {
+ + hook(type => "getsetup", id=> "alias", call => \&getsetup);
+ + hook(type => "checkconfig", id=> "alias", call => \&checkconfig);
+ +}
+ +
+ +sub getsetup () {
+ + return
+ + plugin => {
+ + description => "allows the definition of pagespec aliases",
+ + safe => 1,
+ + rebuild => 1,
+ + section => "misc",
+ + },
+ + pagespec_aliases => {
+ + type => "string",
+ + example => {"image" => "*jpg or *jpeg or *png or *gif or *ico" },
+ + description => "a set of mappings from alias name to pagespec",
+ + safe => 1,
+ + rebuild => 0,
+ + },
+ +}
+ +
+ +sub checkconfig () {
+ + no strict 'refs';
+ + no warnings 'redefine';
+ +
+ + if ($config{pagespec_aliases}) {
+ + foreach my $key (keys %{$config{pagespec_aliases}}) {
+ + my $value = ${$config{pagespec_aliases}}{$key};
+ + # XXX: validate key?
+ + my $subname = "IkiWiki::PageSpec::match_$key";
+ + *{ $subname } = sub {
+ + my $path = shift;
+ + return IkiWiki::pagespec_match($path, $value);
+ + }
+ + }
+ + }
+ +}
+ +
+ +1;
+
+I need to reflect on this a bit more before I send a pull request. In
+particular I imagine the strict/warnings stuff will make you puke. Also, I'm
+not sure whether I should name-grab 'alias' since [[todo/alias_directive]] is
+an existing wishlist item.
+
+Here's an example setup chunk:
+
+ pagespec_aliases:
+ image: "*.png or *.jpg or *.jpeg or *.gif or *.ico"
+ helper: "*.css or *.js"
+ boring: "image() or helper()"
+
+The above demonstrates self-referential dynamic pagespec aliases. Adding
+` or internal()` to `boring` doesn't work, however, for some reason.
+
+-- [[Jon]]
+
+> Another useful pagespec alias for large maps:
+>
+>     basewiki: "sandbox or templates or templates/* or ikiwiki or ikiwiki/* or shortcuts or recentchanges or wikiicons/*"
+>
+> -- [[Jon]]
diff --git a/doc/todo/pagespec_aliases/discussion.mdwn b/doc/todo/pagespec_aliases/discussion.mdwn
new file mode 100644
index 000000000..abbe80e6a
--- /dev/null
+++ b/doc/todo/pagespec_aliases/discussion.mdwn
@@ -0,0 +1,13 @@
+Something similar to aliases is the "trail" concept I use in the [[plugins/contrib/report]] plugin (also in my "pmap" plugin, but that's only in my "experimental" branch on github). One can define a "trail" by making a report with the "doscan" option (I should probably change the name of that); that page then has a "trail" which matches the pagespec in that report.
+One can then reference that page as a "trail" without having to repeat the pagespec.
+(This is also very useful for speeding up processing, because the matching pages have been remembered and don't have to be searched for again.)
+
+So, for example, one could make a page "all_images" and have a report (or pmap, which is simpler) like so:
+
+ \[[!pmap pages="*.png or *.jpg or *.jpeg or *.gif or *.ico"]]
+
+And then later, somewhere else
+
+ \[[!report template="images.tmpl" trail="all_images" pages="album/*"]]
+
+and that would show all the images under "album".
diff --git a/doc/todo/passwordauth:_sendmail_interface.mdwn b/doc/todo/passwordauth:_sendmail_interface.mdwn
index 29f28ca32..556240964 100644
--- a/doc/todo/passwordauth:_sendmail_interface.mdwn
+++ b/doc/todo/passwordauth:_sendmail_interface.mdwn
@@ -35,7 +35,7 @@ in the ikiwiki source code, where emailing is done.
OK, so I'll have a look at replacing all email handling with *Email::Send*.
[[!tag patch]]
-*<http://www.thomas.schwinge.homeip.net/tmp/ikiwiki-sendmail.patch>*
+*<http://schwinge.homeip.net/~thomas/tmp/ikiwiki-sendmail.patch>*
Remaining TODOs:
diff --git a/doc/todo/pingback_support.mdwn b/doc/todo/pingback_support.mdwn
index b10366bda..7b3b158ee 100644
--- a/doc/todo/pingback_support.mdwn
+++ b/doc/todo/pingback_support.mdwn
@@ -37,3 +37,5 @@ case I will consider this done with an entry in [[tips]]; otherwise a
> whenever a page is posted or edited, and gets the changed content, it can
> simply scan it for urls (may have to htmlize first?), and send pings to
> all urls found. --[[Joey]]
+
+>> Is there any update on this? This would be highly useful, and it is the main reason why I am not using my blog more regularly yet. (And yes, now that git-annex is doing everything I need and more, I thought I should revisit this one as well.) -- RichiH
diff --git a/doc/todo/po:_avoid_rebuilding_to_fix_meta_titles.mdwn b/doc/todo/po:_avoid_rebuilding_to_fix_meta_titles.mdwn
new file mode 100644
index 000000000..9bb9c72c4
--- /dev/null
+++ b/doc/todo/po:_avoid_rebuilding_to_fix_meta_titles.mdwn
@@ -0,0 +1,60 @@
+Re the meta title escaping issue worked around by `change`.
+
+> I suppose this does not only affect meta, but other things
+> at scan time too. Also, handling it only on rebuild feels
+> suspicious -- a refresh could involve changes to multiple
+> pages and trigger the same problem, I think. Also, exposing
+> this rebuild to the user seems really ugly, not confidence inducing.
+>
+> So I wonder if there's a better way. Such as making po, at scan time,
+> re-run the scan hooks, passing them modified content (either converted
+> from po to mdwn or with the escaped stuff cheaply de-escaped). (Of
+> course the scan hook would need to avoid calling itself!)
+>
+> (This doesn't need to block the merge, but I hope it can be addressed
+> eventually..)
+>
+> --[[Joey]]
+>>
+>> I'll think about it soon.
+>>
+>> --[[intrigeri]]
+>>
+>>> Did you get a chance to? --[[Joey]]
+
+>>>> I eventually did, and got rid of the ugly double rebuild of pages
+>>>> at build time. This involved adding a `rescan` hook. Rationale
+>>>> and details are in my po branch commit messages. I believe this
+>>>> new way of handling meta title escaping to be far more robust.
+>>>> Moreover this new implementation is more generic, feels more
+>>>> logical to me, and probably fixes other similar bugs outside the
+>>>> meta plugin scope. Please have a look when you can.
+>>>> --[[intrigeri]]
+
+>>>>> Glad you have tackled this. Looking at
+>>>>> 25447bccae0439ea56da7a788482a4807c7c459d,
+>>>>> I wonder how this rescan hook is different from a scan hook
+>>>>> with `last => 1` ? Ah, it comes *after* the preprocess hook
+>>>>> in scan mode. Hmm, I wonder if there's any reason to have
+>>>>> the scan hook called before those as it does now. Reordering
+>>>>> those 2 lines could avoid adding a new hook. --[[Joey]]
+
+>>>>>> Sure. I was fearing to break other plugins if I did so, so I
+>>>>>> did not dare to. I'll try this. --[[intrigeri]]
+
+>>>>>>> Done in my po branch, please have a look. --[[intrigeri]]
+
+>>>>>>>> I've merged it. Didn't look at the po.pm changes closely;
+>>>>>>>> assume they're ok. [[done]] --[[Joey]]
+>>>>>>>>
+>>>>>>>> My thinking about the reordering being safe is that
+>>>>>>>> the relative ordering of scan and preprocess in scan mode hooks
+>>>>>>>> has not been defined before, so it should be ok to define it. :)
+>>>>>>>>
+>>>>>>>> And as to possible breakage from things that assumed the old
+>>>>>>>> ordering, such a thing would need to have a scan hook and a
+>>>>>>>> preprocess in scan mode hook, and the two hooks would need to
+>>>>>>>> populate the same data structure with conflicting information,
+>>>>>>>> in order for there to be a problem. That seems highly unlikely
+>>>>>>>> and would be pretty broken on its own. And no plugin in ikiwiki
+>>>>>>>> itself has both types of hooks. --[[Joey]]
diff --git a/doc/todo/po:_better_documentation.mdwn b/doc/todo/po:_better_documentation.mdwn
new file mode 100644
index 000000000..6e9804df4
--- /dev/null
+++ b/doc/todo/po:_better_documentation.mdwn
@@ -0,0 +1,3 @@
+Maybe write separate documentation for the po plugin, depending on the
+people it targets: translators, wiki administrators, hackers. This
+plugin may be complex enough to deserve this.
diff --git a/doc/todo/po:_better_links.mdwn b/doc/todo/po:_better_links.mdwn
new file mode 100644
index 000000000..af879a56a
--- /dev/null
+++ b/doc/todo/po:_better_links.mdwn
@@ -0,0 +1,12 @@
+Once the fix to
+[[bugs/pagetitle_function_does_not_respect_meta_titles]] from
+[[intrigeri]]'s `meta` branch is merged into ikiwiki upstream, the
+generated links' text will be optionally based on the page titles set
+with the [[meta|plugins/meta]] plugin, and will thus be translatable.
+It will also allow displaying the translation status in links to slave
+pages. Both were implemented, and reverted in commit
+ea753782b222bf4ba2fb4683b6363afdd9055b64, which should be reverted
+once [[intrigeri]]'s `meta` branch is merged.
+
+An integration branch, called `meta-po`, merges [[intrigeri]]'s `po`
+and `meta` branches, and thus has these additional features.
diff --git a/doc/todo/po:_better_translation_interface.mdwn b/doc/todo/po:_better_translation_interface.mdwn
new file mode 100644
index 000000000..e66a77b85
--- /dev/null
+++ b/doc/todo/po:_better_translation_interface.mdwn
@@ -0,0 +1,2 @@
+Add a message-by-message translation interface to the PO plugin,
+with automatic escaping of special chars.
diff --git a/doc/todo/po:_remove_po_files_when_disabling_plugin.mdwn b/doc/todo/po:_remove_po_files_when_disabling_plugin.mdwn
new file mode 100644
index 000000000..5d0318ae1
--- /dev/null
+++ b/doc/todo/po:_remove_po_files_when_disabling_plugin.mdwn
@@ -0,0 +1,13 @@
+ikiwiki now has a `disable` hook. Should the po plugin remove the po
+files from the source repository when it has been disabled?
+
+> pot files, possibly, but the po files contain work, so no. --[[Joey]]
+
+>> I tried to implement this in my `po-disable` branch, but AFAIK, the
+>> current rcs plugins interface provides no way to tell whether a
+>> given file (e.g. a POT file in my case) is under version control;
+>> in most cases, it is not, thanks to .gitignore or similar, but we
+>> can't be sure. So I just can't decide whether it is needed to call
+>> `rcs_remove` rather than a good old `unlink`. --[[intrigeri]]
+
+>>> I guess you could call `rcs_remove` followed by `unlink`. --[[Joey]]
diff --git a/doc/todo/po:_rethink_pagespecs.mdwn b/doc/todo/po:_rethink_pagespecs.mdwn
new file mode 100644
index 000000000..98c7ff655
--- /dev/null
+++ b/doc/todo/po:_rethink_pagespecs.mdwn
@@ -0,0 +1,40 @@
+I was surprised that, when using the map directive, a pagespec of "*"
+listed all the translated pages as well as regular pages. That can
+make a big difference to an existing wiki when po is turned on,
+and seems generally not wanted.
+(OTOH, you do want to match translated pages by
+default when locking pages.) --[[Joey]]
+
+> Seems hard to me to sort apart the pagespec whose matching pages
+> list must be restricted to pages in the master (or current?)
+> language, and the ones that should not. The only solution I can see
+> to this surprising behaviour is: documentation. --[[intrigeri]]
+
+>> Well, a sorting criterion might be that if a PageSpec is used
+>> with a specified location, as happens whenever a PageSpec is
+>> used on a page, then it should match only `currentlang()`. If it
+>> is used without a location, as in the setup file, then there is no such limit.
+
+>>> Ok. --[[intrigeri]]
+
+>> Note that
+>> `match_currentlang` currently dies if called w/o a location -- if
+>> it instead was always true w/o a location, this would just mean that
+>> all pagespecs should have `and currentlang()` added to them. How to
+>> implement that? All I can think of doing is wrapping
+>> `pagespec_translate`.
+
+>>> Seems doable. --[[intrigeri]]
+
+>> The only case I've found where it does make sense to match other
+>> language pages is on `l10n.ikiwiki.info` when listing pages that
+>> need translation.
+>>
+>> Otherwise, it can be documented, but that's not really enough;
+>> a user who makes a site using auto-blog.setup and enables po will
+>> get a really screwed up blog that lists translations as separate posts
+>> and needs significant work to fix. I have thought about making
+>> `match_currentlang` a stub in IkiWiki (done in my currentlang branch),
+>> so I can use it in all the PageSpecs in the example blog etc, but I
+>> can't say I love the idea.
+>> --[[Joey]]
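+
+For illustration, the `pagespec_translate` wrapping mentioned above might
+look roughly like this (untested sketch; it ignores prototypes and
+influence handling):
+
+	my $orig = \&IkiWiki::pagespec_translate;
+	no warnings 'redefine';
+	*IkiWiki::pagespec_translate = sub {
+		my $spec = shift;
+		# restrict every pagespec to pages in the current language
+		return $orig->("($spec) and currentlang()", @_);
+	};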
diff --git a/doc/todo/po:_translation_of_directives.mdwn b/doc/todo/po:_translation_of_directives.mdwn
new file mode 100644
index 000000000..89fc93620
--- /dev/null
+++ b/doc/todo/po:_translation_of_directives.mdwn
@@ -0,0 +1,8 @@
+If a translated page contains a directive, it may expand to some English
+text, or text in whatever single language ikiwiki is configured to "speak".
+
+Maybe there could be a way to switch ikiwiki to speaking another language
+when building a non-english page? Then the directives would get translated.
+
+(We also will need this in order to use translated templates, when they are
+available.)
diff --git a/doc/todo/po_needstranslation_pagespec.mdwn b/doc/todo/po_needstranslation_pagespec.mdwn
new file mode 100644
index 000000000..45b7377ea
--- /dev/null
+++ b/doc/todo/po_needstranslation_pagespec.mdwn
@@ -0,0 +1,12 @@
+Commit b225fdc44d4b3d in my po branch adds a `needstranslation()`
+PageSpec. It makes it easy to list pages that need translation work.
+Please review. --[[intrigeri]]
+
+> Looks good, cherry-picked. The only improvment I can
+> think of is that `needstranslation(50)` could match
+> only pages less than 50% translated. --[[Joey]]
+
+>> This improvement has been implemented as 98cc946 in my po branch.
+>> --[[intrigeri]]
+
+[[!tag patch done]]
diff --git a/doc/todo/preview_changes_before_git_commit.mdwn b/doc/todo/preview_changes_before_git_commit.mdwn
new file mode 100644
index 000000000..187497cf4
--- /dev/null
+++ b/doc/todo/preview_changes_before_git_commit.mdwn
@@ -0,0 +1,17 @@
+ikiwiki allows committing changes to the doc wiki over the `git://...` protocol.
+It would be nice if there were a uniform way to view these changes before `git
+push`ing. For the GNU Hurd's web pages, we include a *render_locally* script,
+<http://www.gnu.org/software/hurd/render_locally>, with instructions on
+<http://www.gnu.org/software/hurd/contributing/web_pages.html>, section
+*Preview Changes*. With ikiwiki, one can use `make docwiki`, but that excludes
+a set of pages, as per `docwiki.setup`. --[[tschwinge]]
+
+> `ikiwiki -setup some.setup --render file.mdwn` will build the page and
+> dump it to stdout. So, for example:
+
+ ikiwiki -setup docwiki.setup --render doc/todo/preview_changes_before_git_commit.mdwn | w3m -T text/html
+
+> You have to have a setup file, though it suffices to make up your own
+> if you don't have the real one. Using ikiwiki.info's real setup file
+> won't actually work since it uses a search plugin that gets unhappy
+> if this is not in `/srv/web/ikiwiki.info`. --[[Joey]]
diff --git a/doc/todo/replace_HTML::Template_with_Template_Toolkit.mdwn b/doc/todo/replace_HTML::Template_with_Template_Toolkit.mdwn
index c4e78ca0b..d55fc0aa8 100644
--- a/doc/todo/replace_HTML::Template_with_Template_Toolkit.mdwn
+++ b/doc/todo/replace_HTML::Template_with_Template_Toolkit.mdwn
@@ -5,9 +5,14 @@ features and thus makes it rather hard to give an ikiwiki site a consistent
look. If you browse the templates provided in the tarball, you'll notice that
more than one of them contain the `<html>` tag, which is unnecessary.
+> Note that this is no longer true, and I didn't have to make such an intrusive
+> change to fix it either. --[[Joey]]
+
Maybe it's just me, I also find HTML::Template cumbersome to use, due in part
to its use of capital letters.
+> Its entirely optional use of capital letters? --[[Joey]]
+
Finally, the software seems unmaintained: the mailing list and searchable
archives linked from
<http://html-template.sourceforge.net/html_template.html#frequently%20asked%20questions>
@@ -58,3 +63,25 @@ Yes, Template::Toolkit is very powerful. But I think it's somehow overkill for a
I'd have to agree that Template::Toolkit is overkill and personally I'm not a fan, but it is very popular (there is even a book) and the new version (3) is alleged to be much more nimble than current version. --[[ajt]]
HTML::Template's HTML-like markup prevents me from editing templates in KompoZer or other WYSIWYG HTML editors. The editor tries to render the template markup rather than display it verbatim, and large parts of the template become invisible. A markup syntax that doesn't confuse editors (such as Template::Toolkit's "[% FOO %]") may promote template customization. The ability to replace the template engine would be within the spirit of ikiwiki's extensibility. --Rocco
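+
+For comparison, the same conditional in the two syntaxes (illustrative
+only):
+
+	<!-- HTML::Template -->
+	<TMPL_IF NAME="AUTHOR"><TMPL_VAR NAME="AUTHOR"></TMPL_IF>
+
+	<!-- Template::Toolkit -->
+	[% IF author %][% author %][% END %]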
+
+
+I agree that being able to replace the template toolkit would be a great piece of modularity, and one I would use. If I could use the slot-based filling and the conditional logic from Template::Toolkit, we could build much more flexible inline and archivepage templates that would look different depending on where in the wiki we use them. Some of this can currently be accomplished with separate templates for each use case and a manual call to the right template in the !inline directive, but this is limited, cumbersome, and makes it difficult to reuse bits of formatting by trapping all of that information in multiple template files. -Ian
+
+> I don't wish HTML::Template to be *replaced* by Template::Toolkit - as
+> others have said above, it's overkill for my needs. However, I also
+> agree that HTML::Template has its own problems too. The idea of making
+> the template system modular, with a choice of which backend to use - I
+> really like that idea. It would enable me to use some other template
+> system I like better, such as Text::Template or Text::NeatTemplate. But I
+> think it would be a lot of work to implement, though perhaps no more work
+> than making the revision-control backend modular, I guess. One would
+> need to write an IkiWiki template interface that didn't care what the
+> backend was, and yet is somehow still flexible enough to take advantage
+> of special features of different backends. There are an *awful lot* of
+> things that use templates - not just the `pagetemplate` and `template`
+> plugins, but a number of others which have specialized templates of their
+> own. -- [[KathrynAndersen]]
+
+>> A modular template system in ikiwiki is unlikely, as template objects
+>> are part of the API, notably the `pagetemplate` hook. Unless the other
+>> system has a compatible template object. --[[Joey]]
diff --git a/doc/todo/rewrite_ikiwiki_in_haskell.mdwn b/doc/todo/rewrite_ikiwiki_in_haskell.mdwn
index 204c48cd7..48ed744b1 100644
--- a/doc/todo/rewrite_ikiwiki_in_haskell.mdwn
+++ b/doc/todo/rewrite_ikiwiki_in_haskell.mdwn
@@ -29,6 +29,7 @@ It's appealing for a lot of reasons, including:
edit in html editors currently.
- This would be a chance to make WikiLinks with link texts read
"the right way round" (ie, vaguely wiki creole compatably).
+ *[See also [[todo/link_plugin_perhaps_too_general?]] --[[smcv]]]*
- The data structures would probably be quite different.
- I might want to drop a lot of the command-line flags, either
requiring a setup file be used for those things, or leaving the
diff --git a/doc/todo/salmon_protocol_for_comment_sharing.mdwn b/doc/todo/salmon_protocol_for_comment_sharing.mdwn
new file mode 100644
index 000000000..1e56b0a8b
--- /dev/null
+++ b/doc/todo/salmon_protocol_for_comment_sharing.mdwn
@@ -0,0 +1,21 @@
+The <a href="http://www.salmon-protocol.org/home">Salmon protocol</a>
+provides for aggregating comments across sites. If a site that syndicates
+a feed receives a comment on an item in that feed, it can re-post the
+comment to the original source.
+
+> Ikiwiki does not allow comments to be posted on items it aggregates.
+> So salmon protocol support would only need to handle the comment
+> receiving side of the protocol.
+>
+> The current draft protocol document confuses me when it starts talking
+> about using OAuth in the abuse prevention section, since their example
+> does not show use of OAuth, and it's not at all clear to me where the
+> OAuth relationship between aggregator and original source is supposed
+> to come from.
+>
+> Their security model, which goes on to include Webfinger,
+> thirdparty validation services, XRD, and Magic Signatures, looks sorta
+> like they kept throwing technology at it, hoping something would stick. :-P
+> --[[Joey]]
+
+[[!tag wishlist]]
diff --git a/doc/todo/selective_more_directive.mdwn b/doc/todo/selective_more_directive.mdwn
new file mode 100644
index 000000000..2a9998205
--- /dev/null
+++ b/doc/todo/selective_more_directive.mdwn
@@ -0,0 +1,28 @@
+I'm setting up a blog for NaNoWriMo and other story-writing, which means long posts every day. I want excerpts on the front page, which link to the full-length story posts, and a dedicated page for each story which inlines the story in full and in chronological order. I can use the "more" directive to achieve this effect on the front page, but then it spoils the story page. My solution was to add a pages= parameter to the more directive to make it more selective.
+
+ --- /usr/share/perl5/IkiWiki/Plugin/more.pm 2010-10-09 00:09:24.000000000 +0000
+ +++ .ikiwiki/IkiWiki/Plugin/more.pm 2010-11-01 20:24:59.000000000 +0000
+ @@ -26,7 +26,10 @@
+
+ $params{linktext} = $linktext unless defined $params{linktext};
+
+ - if ($params{page} ne $params{destpage}) {
+ + if ($params{page} ne $params{destpage} &&
+ + (! exists $params{pages} ||
+ + pagespec_match($params{destpage}, $params{pages},
+ + location => $params{page}))) {
+ return "\n".
+ htmllink($params{page}, $params{destpage}, $params{page},
+ linktext => $params{linktext},
+
+I can now call it as
+
+ \[[!more pages="index" linktext="Chapter 1" text="""
+ etc
+ """]]
+
+I'm not entirely happy with the design, since I would rather put this information in the inline directive instead of in every story post. Unfortunately I found no way to pass parameters from the inline directive to the inlined page.
+
+-- [[dark]]
+
+> Me neither, but nor do I see a better way, so [[applied|done]]. --[[Joey]]
diff --git a/doc/todo/smarter_sorting.mdwn b/doc/todo/smarter_sorting.mdwn
new file mode 100644
index 000000000..901e143a7
--- /dev/null
+++ b/doc/todo/smarter_sorting.mdwn
@@ -0,0 +1,141 @@
+I benchmarked a build of a large wiki (my home wiki), and it was spending
+quite a lot of time sorting; `CORE::sort` was called only 1138 times, but
+still flagged as the #1 time sink. (I'm not sure I trust NYTProf fully
+about that FWIW, since it also said 27238263 calls to `cmp_age` were
+the #3 timesink, and I suspect it may not entirely accurately measure
+the overhead of so many short function calls.)
+
+`pagespec_match_list` currently always sorts *all* pages first, and then
+finds the top M that match the pagespec. That's inefficient when M is
+small (as for example in a typical blog, where only 20 posts are shown,
+out of maybe thousands).
+
+As [[smcv]] noted, it could be flipped, so the pagespec is applied first,
+and then sort the smaller matching set. But, checking pagespecs is likely
+more expensive than sorting. (Also, influence calculation complicates
+doing that.)
+
+Another option, when there is a limit on M pages to return, might be to
+cull the M top pages without sorting the rest.
+
+> The patch below implements this.
+>
+> But, I have not thought enough about influence calculation.
+> I need to figure out which pagespec matches influences need to be
+> accumulated for in order to determine all possible influences of a
+> pagespec are known.
+>
+> The old code accumulates influences from matching all successful pages
+> up to the num cutoff, as well as influences from an arbitrary (sometimes
+> zero) number of failed matches. New code does not accumulate influences
+> from all the top successful matches, only an arbitrary group of
+> successes and some failures.
+>
+> Also, by the time I finished this, it was not measurably faster than
+> the old method. At least not with a few thousand pages; it
+> might be worth revisiting this sometime for many more pages? [[done]]
+> --[[Joey]]
+
+<pre>
+diff --git a/IkiWiki.pm b/IkiWiki.pm
+index 1730e47..bc8b23d 100644
+--- a/IkiWiki.pm
++++ b/IkiWiki.pm
+@@ -2122,36 +2122,54 @@ sub pagespec_match_list ($$;@) {
+ my $num=$params{num};
+ delete @params{qw{num deptype reverse sort filter list}};
+
+- # when only the top matches will be returned, it's efficient to
+- # sort before matching to pagespec,
+- if (defined $num && defined $sort) {
+- @candidates=IkiWiki::SortSpec::sort_pages(
+- $sort, @candidates);
+- }
+-
++ # Find the first num matches (or all), before sorting.
+ my @matches;
+- my $firstfail;
+ my $count=0;
+ my $accum=IkiWiki::SuccessReason->new();
+- foreach my $p (@candidates) {
+- my $r=$sub->($p, %params, location => $page);
++ my $i;
++ for ($i=0; $i < @candidates; $i++) {
++ my $r=$sub->($candidates[$i], %params, location => $page);
+ error(sprintf(gettext("cannot match pages: %s"), $r))
+ if $r->isa("IkiWiki::ErrorReason");
+ $accum |= $r;
+ if ($r) {
+- push @matches, $p;
++ push @matches, $candidates[$i];
+ last if defined $num && ++$count == $num;
+ }
+ }
+
++	# We have num matches, but they may not be the best.
++ # Efficiently find and add the rest, without sorting the full list of
++ # candidates.
++ if (defined $num && defined $sort) {
++ @matches=IkiWiki::SortSpec::sort_pages($sort, @matches);
++
++ for ($i++; $i < @candidates; $i++) {
++ # Comparing candidate with lowest match is cheaper,
++ # so it's done before testing against pagespec.
++ if (IkiWiki::SortSpec::cmptwo($candidates[$i], $matches[-1], $sort) < 0 &&
++ $sub->($candidates[$i], %params, location => $page)
++ ) {
++ # this could be done less expensively
++ # using a binary search
++ for (my $j=0; $j < @matches; $j++) {
++ if (IkiWiki::SortSpec::cmptwo($candidates[$i], $matches[$j], $sort) < 0) {
++ splice @matches, $j, $#matches-$j+1, $candidates[$i],
++ @matches[$j..$#matches-1];
++ last;
++ }
++ }
++ }
++ }
++ }
++
+ # Add simple dependencies for accumulated influences.
+- my $i=$accum->influences;
+- foreach my $k (keys %$i) {
+- $depends_simple{$page}{lc $k} |= $i->{$k};
++ my $inf=$accum->influences;
++ foreach my $k (keys %$inf) {
++ $depends_simple{$page}{lc $k} |= $inf->{$k};
+ }
+
+- # when all matches will be returned, it's efficient to
+- # sort after matching
++ # Sort if we didn't already.
+ if (! defined $num && defined $sort) {
+ return IkiWiki::SortSpec::sort_pages(
+ $sort, @matches);
+@@ -2455,6 +2473,12 @@ sub sort_pages {
+ sort $f @_
+ }
+
++sub cmptwo {
++ $a=$_[0];
++ $b=$_[1];
++ $_[2]->();
++}
++
+ sub cmp_title {
+ IkiWiki::pagetitle(IkiWiki::basename($a))
+ cmp
+</pre>
+
+This would be bad when M is very large, and particularly, of course, when
+there is no limit and all pages are being matched on. (For example, an
+archive page shows all pages that match a pagespec specifying a creation
+date range.) Well, in this case, it *does* make sense to flip it: limit by
+pagespec first, and do a (quick)sort second. (No influence complications,
+either.)
+
+> Flipping when there's no limit implemented, and it knocked 1/3 off
+> the rebuild time of my blog's archive pages. --[[Joey]]
+
+Adding these special cases will be more complicated, but I think the best
+of both worlds. --[[Joey]]
diff --git a/doc/todo/structured_page_data.mdwn b/doc/todo/structured_page_data.mdwn
index 72bfd8dea..9f21fab7f 100644
--- a/doc/todo/structured_page_data.mdwn
+++ b/doc/todo/structured_page_data.mdwn
@@ -1,5 +1,7 @@
This is an idea from [[JoshTriplett]]. --[[Joey]]
+* See further discussion at [[forum/an_alternative_approach_to_structured_data]].
+
Some uses of ikiwiki, such as for a bug-tracking system (BTS), move a bit away from the wiki end
of the spectrum, and toward storing structured data about a page or instead
of a page.
@@ -251,6 +253,9 @@ in a large number of other cases.
> dependencies between bugs from arbitrary links.
>> This issue (the need for distinguished kinds of links) has also been brought up in other discussions: [[tracking_bugs_with_dependencies#another_kind_of_links]] (deps vs. links) and [[tag_pagespec_function]] (tags vs. links). --Ivan Z.
+>>> And multiple link types are now supported; plugins can set the link
+>>> type when registering a link, and pagespec functions can match on them. --[[Joey]]
+
----
#!/usr/bin/perl
diff --git a/doc/todo/support_includes_in_setup_files.mdwn b/doc/todo/support_includes_in_setup_files.mdwn
new file mode 100644
index 000000000..50afb2b6b
--- /dev/null
+++ b/doc/todo/support_includes_in_setup_files.mdwn
@@ -0,0 +1,10 @@
+I have a client server setup so I can edit/preview on my laptop/desktop and push to a server. I therefore have two almost identical setup files that I reasonably often let get out of sync. I'd like to be able to include the common parts in the two setup files. Currently the following works, but it relies on knowing the implementation of IkiWiki::Setup::Standard:
+
+    use IkiWiki::Setup::Standard { specific stuff };
+    require "/path/to/common_setup";
+
+where `common_setup` contains a call to `IkiWiki::Setup::merge`.
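+
+A sketch of what `common_setup` might contain (the settings are hypothetical;
+it just needs to hand the shared options to `IkiWiki::Setup::merge` and
+return true like any `require`d file):
+
+    IkiWiki::Setup::merge({
+        # settings shared between the laptop and the server
+        wikiname => "my wiki",
+        add_plugins => [qw{goodstuff}],
+    });
+    1;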
+
+To see that this is fragile, note that the `require` must come second, or ikiwiki will try to load a module called `IkiWiki::Setup::merge`.
+
+DavidBremner
diff --git a/doc/todo/support_link__40__.__41___in_pagespec.mdwn b/doc/todo/support_link__40__.__41___in_pagespec.mdwn
new file mode 100644
index 000000000..653db1ff2
--- /dev/null
+++ b/doc/todo/support_link__40__.__41___in_pagespec.mdwn
@@ -0,0 +1,21 @@
+[[!tag wishlist]]
+
+It would be nice to have pagespecs support "link(.)" as syntax.
+This would match pages that link to the page that invokes the pagespec.
+The use case is a blog with tags, and having a page for each tag
+which uses !inline to list all posts with the tag.
+
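+With the proposed syntax, each tag page could then carry something like
+this (a sketch; `link(.)` is the new bit):
+
+    \[[!inline pages="link(.) and !*/Discussion" show="10"]]
+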
+Joey said on IRC that "probably changing the derel() function in
+IkiWiki.pm is the best way to do it".
+
+> I implemented this suggestion in the simplest possible way, [[!taglink patch]] available [[here|http://git.oblomov.eu/ikiwiki/patch/f4a52de556436fdee00fd92ca9a3b46e876450fa]].
+> An alternative approach, very similar, would be to make the empty page parameter mean current page (e.g. `link()` would mean pages linking here). The patch would be very similar.
+> -- GB
+
+>> Thanks for this, and also for your recent spam-fighting.
+>> Huh, I was right about changing derel, didn't realize it would be
+>> so obvious a change. :) Oh well, I managed to complicate it
+>> some in the optimisation pass.. ;)
+>>
+>> Note that your git-daemon on git.oblomov.eu seems down.
+>> I pulled the patch from gitweb, [[done]] --[[Joey]]
diff --git a/doc/todo/svg.mdwn b/doc/todo/svg.mdwn
index 0a15af4cd..274ebf3e3 100644
--- a/doc/todo/svg.mdwn
+++ b/doc/todo/svg.mdwn
@@ -3,6 +3,7 @@ We should support SVG. In particular:
* We could support rendering SVGs to PNGs when compiling the wiki. Not all browsers support SVG yet.
* We could support editing SVGs via the web interface. SVG can contain unsafe content such as scripting, so we would need to whitelist safe markup.
+ * I am interested in seeing [svg-edit](http://code.google.com/p/svg-edit/) integrated -- [[EricDrechsel]]
--[[JoshTriplett]]
@@ -56,3 +57,21 @@ in the trunk if other people think it's useful.
[htmlscrubber.pm]:http://xbeta.org/gitweb/?p=xbeta/ikiwiki.git;a=blob;f=IkiWiki/Plugin/htmlscrubber.pm;h=3c0ddc8f25bd8cb863634a9d54b40e299e60f7df;hb=fe333c8e5b4a5f374a059596ee698dacd755182d
[diff]: http://xbeta.org/gitweb/?p=xbeta/ikiwiki.git;a=blobdiff;f=IkiWiki/Plugin/htmlscrubber.pm;h=3c0ddc8f25bd8cb863634a9d54b40e299e60f7df;hp=3bdaccea119ec0e1b289a0da2f6d90e2219b8d66;hb=fe333c8e5b4a5f374a059596ee698dacd755182d;hpb=be0b4f603f918444b906e42825908ddac78b7073
+
+> Unfortunately these links are broken. --[[Joey]]
+
+* * *
+
+Actually, there's a way to embed SVG into MarkDown sources using the [data: URI scheme][rfc2397], [like this](data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBzdGFuZGFsb25lPSJubyI/Pgo8c3ZnIHdpZHRoPSIxOTIiIGhlaWdodD0iMTkyIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KIDwhLS0gQ3JlYXRlZCB3aXRoIFNWRy1lZGl0IC0gaHR0cDovL3N2Zy1lZGl0Lmdvb2dsZWNvZGUuY29tLyAtLT4KIDx0aXRsZT5IZWxsbywgd29ybGQhPC90aXRsZT4KIDxnPgogIDx0aXRsZT5MYXllciAxPC90aXRsZT4KICA8ZyB0cmFuc2Zvcm09InJvdGF0ZSgtNDUsIDk3LjY3MTksIDk3LjY2OCkiIGlkPSJzdmdfNyI+CiAgIDxyZWN0IHN0cm9rZS13aWR0aD0iNSIgc3Ryb2tlPSIjMDAwMDAwIiBmaWxsPSIjRkYwMDAwIiBpZD0ic3ZnXzUiIGhlaWdodD0iNTYuMDAwMDAzIiB3aWR0aD0iMTc1IiB5PSI2OS42Njc5NjkiIHg9IjEwLjE3MTg3NSIvPgogICA8dGV4dCB4bWw6c3BhY2U9InByZXNlcnZlIiB0ZXh0LWFuY2hvcj0ibWlkZGxlIiBmb250LWZhbWlseT0ic2VyaWYiIGZvbnQtc2l6ZT0iMjQiIHN0cm9rZS13aWR0aD0iMCIgc3Ryb2tlPSIjMDAwMDAwIiBmaWxsPSIjZmZmZjAwIiBpZD0ic3ZnXzYiIHk9IjEwNS42NjgiIHg9Ijk5LjY3MTkiPkhlbGxvLCB3b3JsZCE8L3RleHQ+CiAgPC9nPgogPC9nPgo8L3N2Zz4=).
+Of course, this way to display an image one needs to click a link, but it may be considered a feature.
+&mdash;&nbsp;[[Ivan_Shmakov]], 2010-03-12Z.
+
+[rfc2397]: http://tools.ietf.org/html/rfc2397
+
+> You can do the same with img src actually.
+>
+> If svg markup allows unsafe elements (ie, javascript),
+> which it appears to,
+> then this is a security hole, and the htmlscrubber
+> needs to lock it down more. Darn, now I have to spend my afternoon making
+> security releases! --[[Joey]]
diff --git a/doc/todo/tagging_with_a_publication_date.mdwn b/doc/todo/tagging_with_a_publication_date.mdwn
index 80240ec5a..39fc4e220 100644
--- a/doc/todo/tagging_with_a_publication_date.mdwn
+++ b/doc/todo/tagging_with_a_publication_date.mdwn
@@ -38,3 +38,34 @@ on vacation".
> >
> > I no longer have the original wiki for which I wanted this feature, but I can
> > see using it on future ones. -- [[DonMarti]]
+
+>>> FWIW, for the case where one wants to update a site offline,
+>>> using an ikiwiki instance on a laptop, and include some deferred
+>>> posts in the push, the ad-hoc cron job type approach will be annoying.
+>>>
+>>> In modern ikiwiki, I guess the way to accomplish this would be to
+>>> add a pagespec that matches only pages posted in the present or past.
+>>> Then a page can have its post date set to the future, using meta date,
+>>> and only show up when its post date rolls around.
+>>>
+>>> Ikiwiki will need to somehow notice that a pagespec began matching
+>>> a page it did not match previously, despite said page not actually
+>>> changing. I'm not sure what the best way is.
+>>>
+>>> * One way could be to
+>>> use a needsbuild hook and some stored data about which pagespecs
+>>> exclude pages in the future. (But I'm not sure how evaluating the
+>>> pagespec could lead to that metadata and hook being set up.)
+>>> * Another way would be to use an explicit directive to delay a
+>>> page being posted. Then the directive stores the metadata and
+>>> sets up the needsbuild hook.
+>>> * Another way would be for ikiwiki to remember the last
+>>> time it ran. It could then easily find pages that have a post
+>>> date after that time, and treat them the same as it treats actually
+>>> modified files. Or a plugin could do this via a needsbuild hook,
+>>> probably. (Only downside to this is it would probably need to do
+>>> a O(n) walk of the list of pages -- but only running an integer
+>>> compare per page.)
+>>>
+>>> You'd still need a cron job to run ikiwiki -refresh every hour, or
+>>> whatever, so it can update. --[[Joey]]
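+
+A sketch of the pagespec approach from the discussion above (the function
+and its name are hypothetical, not an existing ikiwiki pagespec):
+
+    package IkiWiki::PageSpec;
+
+    sub match_postdated ($$;@) {
+        my $page=shift;
+        my $ctime=$IkiWiki::pagectime{$page};
+        if (defined $ctime && $ctime > time) {
+            return IkiWiki::SuccessReason->new("$page is post-dated");
+        }
+        return IkiWiki::FailReason->new("$page is already posted");
+    }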
diff --git a/doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn b/doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn
index 547c7a80a..07d2d383c 100644
--- a/doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn
+++ b/doc/todo/toc_plugin:_set_a_header_ceiling___40__opposite_of_levels__61____41__.mdwn
@@ -1,3 +1,48 @@
It would be nice if the [[plugins/toc]] plugin let you specify a header level "ceiling" above which (or above and including which) the headers would not be incorporated into the toc.
Currently, the levels=X parameter lets you tweak how deep it will go for small headers, but I'd like to chop off the h1's (as I use them for my page title) -- [[Jon]]
+
+> This change to toc.pm should do it. --[[KathrynAndersen]]
+
+> > The patch looks vaguely OK to me but it's hard to tell without
+> > context. It'd be much easier to review if you used unified diff
+> > (`diff -u`), which is what `git diff` defaults to - almost all
+> > projects prefer to receive changes as unified diffs (or as
+> > branches in their chosen VCS, which is [[git]] here). --[[smcv]]
+
+> > > Done. -- [[KathrynAndersen]]
+
+> > > > Looks like Joey has now [[merged|done]] this. Thanks! --[[smcv]]
+
+ --- /files/git/other/ikiwiki/IkiWiki/Plugin/toc.pm 2009-11-16 12:44:00.352050178 +1100
+ +++ toc.pm 2009-12-26 06:36:06.686512552 +1100
+ @@ -53,8 +53,8 @@
+ my $page="";
+ my $index="";
+ my %anchors;
+ - my $curlevel;
+ - my $startlevel=0;
+ + my $startlevel=($params{startlevel} ? $params{startlevel} : 0);
+ + my $curlevel=$startlevel-1;
+ my $liststarted=0;
+ my $indent=sub { "\t" x $curlevel };
+ $p->handler(start => sub {
+ @@ -67,10 +67,16 @@
+
+ # Take the first header level seen as the topmost level,
+ # even if there are higher levels seen later on.
+ + # unless we're given startlevel as a parameter
+ if (! $startlevel) {
+ $startlevel=$level;
+ $curlevel=$startlevel-1;
+ }
+ + elsif (defined $params{startlevel}
+ + and $level < $params{startlevel})
+ + {
+ + return;
+ + }
+ elsif ($level < $startlevel) {
+ $level=$startlevel;
+ }
+
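+With the patch, a page that uses h1 for its title could start the toc at h2
+(parameter name as in the patch above):
+
+    \[[!toc startlevel=2 levels=2]]
+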
+[[!tag patch]]
diff --git a/doc/todo/toplevel_index.mdwn b/doc/todo/toplevel_index.mdwn
index 77e315811..92cef99ac 100644
--- a/doc/todo/toplevel_index.mdwn
+++ b/doc/todo/toplevel_index.mdwn
@@ -1,7 +1,7 @@
Some inconsistences around the toplevel [[index]] page:
* [[ikiwiki]] is a separate page; links to [[ikiwiki]] should better go to
- the [[index]] though.
+ the index though.
> At least for this wiki, I turned out to have a use for [[ikiwiki]]
> pointing to a different page, though the general point might still
diff --git a/doc/todo/tracking_bugs_with_dependencies.mdwn b/doc/todo/tracking_bugs_with_dependencies.mdwn
index 5f3ece290..456dadad0 100644
--- a/doc/todo/tracking_bugs_with_dependencies.mdwn
+++ b/doc/todo/tracking_bugs_with_dependencies.mdwn
@@ -81,6 +81,9 @@ I like the idea of [[tips/integrated_issue_tracking_with_ikiwiki]], and I do so
>> I saw that this issue is targeted at by the work on [[structured page data#another_kind_of_links]]. --Ivan Z.
+>>> It's fixed now; links can have a type, such as "tag", or "dependency",
+>>> and pagespecs can match links of a given type. --[[Joey]]
+
Okie - I've had a quick attempt at this. Initial patch attached. This one doesn't quite work.
And there is still a lot of debugging stuff in there.
diff --git a/doc/todo/transient_pages.mdwn b/doc/todo/transient_pages.mdwn
new file mode 100644
index 000000000..fe2259b40
--- /dev/null
+++ b/doc/todo/transient_pages.mdwn
@@ -0,0 +1,318 @@
+On [[todo/auto-create_tag_pages_according_to_a_template]], [[chrysn]]
+suggests:
+
+> Instead of creating a file that gets checked in into the RCS, the
+> source files could be left out and the output files be written as
+> long as there is no physical source file (think of a virtual underlay).
+> Something similar would be required to implement alias directive,
+> which couldn't be easily done by writing to the RCS as the page's
+> contents can change depending on which other pages claim it as an alias.
+
+`add_autofile` could be adapted to do this, or a similar API could be
+added.
+
+This would also be useful for autoindex, as suggested on
+[[plugins/autoindex/discussion]] and [[!debbug 544322]]. I'd also like
+to use it for [[plugins/contrib/album]].
+
+It could also be used for an [[todo/alias_directive]].
+
+--[[smcv]]
+
+> All [[merged|done]] --[[Joey]]
+
+--------------------------
+
+[[!template id=gitbranch branch=smcv/ready/transient author="[[smcv]]"]]
+[[!tag patch]]
+
+Related branches:
+
+* `ready/tag-test`: an extra regression test for tags
+ > merged --[[Joey]]
+* either `transient-relative` or `transient-relative-api`: avoid using `Cwd`
+ on initialization
+ > merged the latter --[[Joey]]
+* `ready/transient-aggregate`: use for aggregate
+ > merged --[[Joey]]
+* `ready/transient-autoindex`: optionally use for autoindex,
+ which is [[!debbug 544322]] (includes autoindex-autofile from
+ [[todo/autoindex should use add__95__autofile]])
+ > merged. I do note that this interacts badly with ikiwiki-hosting's
+ > backup/restore/branch handling, since that does not back up the
+ > transientdir by default, and so autoindex will not recreate the
+ > "deleted" pages. I'll probably have to make it back up the transientdir
+ > too. --[[Joey]]
+* `ready/transient-recentchanges`: use for recentchanges
+ > merged --[[Joey]]
+* `ready/transient-tag`: optionally use for tag (includes tag-test)
+ > merged --[[Joey]]
+
+I think this branch is now enough to be useful. It adds the following:
+
+If the `transient` plugin is loaded, `$srcdir/.ikiwiki/transient` is added
+as an underlay. I'm not sure whether this should be a plugin or core, so
+I erred on the side of more plugins; I think it's "on the edge of the core",
+like goto.
+
+Pages in the transient underlay are automatically
+deleted if a page of the same name is created in the srcdir (or an underlay
+closer to the srcdir in stacking order).
+
+With the additional `ready/transient-tag` branch,
+`tag` enables `transient`, and if `tag_autocreate_commit` is set to 0
+(default 1), autocreated tags are written to the transient underlay.
+There is a regression test.
+
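+A setup-file sketch of opting in for tags (option names as described above):
+
+    tag_autocreate => 1,
+    tag_autocreate_commit => 0,    # keep autocreated tag pages out of the VCS
+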
+With the additional `transient-autoindex` branch,
+`autoindex` uses autofiles. It also enables `transient`, and if
+`autoindex_commit` is set to 0 (default 1), autoindexes are written to
+the transient underlay. There is a regression test. However, this branch
+is blocked by working out what the desired behaviour is, on
+[[todo/autoindex_should_use_add__95__autofile]].
+
+> I wonder why this needs to be configurable? I suppose that gets back to
+> whether it makes sense to check these files in or not. The benefits of
+> checking them in:
+>
+> * You can edit them from the VCS, don't have to go into the web
+> interface. Of course, files from the underlays have a similar issue,
+> but does it make sense to make that wart larger?
+> * You can know you can build the same site with nothing missing
+>   even if you don't enable autoindex (or whatever) there. (Edge case.)
+
+>> I'm not sure that that's a huge wart; you can always "edit by
+>> overwriting". If you're running a local clone of the wiki on your laptop
+>> or whatever, you have the underlays already, and can copy from there.
+>> Tag and autoindex pages have rather simple source code anyway. --s
+
+> The benefit of using transient pages seems to just be avoiding commit
+> clutter? For files that are never committed, transient pages are a clear
+> win, but I wonder if adding configuration clutter just to avoid some
+> commit clutter is really worth it.
+
+>> According to the last section of
+>> [[todo/auto-create_tag_pages_according_to_a_template]], [[chrysn]] and
+>> Eric both feel rather strongly that it should be possible to
+>> not commit any tags; in [[plugins/autoindex/discussion]],
+>> lollipopman and [[JoeRayhawk]] both requested the same for autoindex.
+>> I made it configurable because, as you point out,
+>> there are also reasons why it makes sense to check these
+>> automatically-created files in. I'm neutral on this, personally.
+>>
+>> If this is a point of contention, would you accept a branch that
+>> just adds `transient` and uses it for [[plugins/recentchanges]],
+>> which aren't checked in and never have been? I've split the
+>> branch up in the hope that *some* of it can get merged.
+>>
+>>> I will be happy to merge transient-recentchanges when it's ready.
+>>> I see no obstacle to merging transient-tag either, and am not
+>>> really against using it for autoindex or aggregate either
+>>> once they get completed.
+>>> I just wanted to think through why configurability is needed.
+>>> --[[Joey]]
+>>
+>> One potentially relevant point is that configuration clutter only
+>> affects the site admin whereas commit clutter is part of the whole
+>> wiki's history. --[[smcv]]
+
+> Anyway, the configurability
+> appears subtly broken; the default is only 1 if a new setup file is
+> generated. (Correction: It was not even the default then --[[Joey]])
+> With an existing setup file, the 'default' values in
+> `getsetup` don't take effect, so it will default to undef, which
+> is treated the same as 0. --[[Joey]]
+
+>> Fixed in the branches, hopefully. (How disruptive would it be to have
+>> defaults take effect whenever the setup file doesn't set a value, btw?
+>> It seems pretty astonishing to have them work as they do at the moment.) --s
+
+>>> Well, note that default is not actually a documented field in
+>>> getsetup hooks at all! (It is used in IkiWiki.pm's own `getsetup()`, and
+>>> the concept may have leaked out into one or two plugins (comments,
+>>> transient)).
+>>>
+>>> Running getsetup at plugin load time is something I have considered
+>>> doing. It would simplify some checkconfig hooks that just set hardcoded
+>>> defaults. Although since dying is part of the getsetup hook's API, it
+>>> could be problematic.
+>>> --[[Joey]]
+
+autoindex ignores pages in the transient underlay when deciding whether
+to generate an index.
+
+With the additional `ready/transient-recentchanges` branch, new recent
+changes go in the transient underlay; I tested this manually.
+
+Not done yet (in that branch, at least):
+
+* `remove` can't remove transient pages: this turns out to be harder than
+ I'd hoped, because I don't want to introduce a vulnerability in the
+ non-regular-file detection, so I'd rather defer that.
+
+ > Hmm, I'd at least want that to be dealt with before this was used
+ > by default for autoindex or tag. --[[Joey]]
+
+ >> I'll try to work out which of the checks are required for security
+ >> and which are just nice-to-have, but I'd appreciate any pointers
+ >> you could give. Note that my branch wasn't meant to enable either
+ >> by default, and now hopefully doesn't. --[[smcv]]
+
+ >>> Opened a new bug for this, [[bugs/removal_of_transient_pages]]
+ >>> --[[Joey]]
+
+* Transient tags that don't match any pages aren't deleted: I'm not sure
+ that that's a good idea anyway, though. Similarly, transient autoindexes
+ of directories that become empty aren't deleted.
+
+ > Doesn't seem necessary, or really desirable to do that. --[[Joey]]
+
+ >> Good, that was my inclination too. --s
+
+* In my `untested/transient` branch, new aggregated files go in the
+ transient underlay too (they'll naturally migrate over time). I haven't
+ tested this yet, it's just a proof-of-concept.
+
+ > Now renamed to `ready/transient-aggregate`; it does seem to work fine.
+ > --s
+
+> I can confirm that the behavior of autoindex, at least, is excellent.
+> Haven't tried tag. Joey, can you merge transient and autoindex? --JoeRayhawk
+
+>> Here are some other things I'd like to think about first: --[[Joey]]
+>>
+>> * There's a FIXME in autoindex.
+>>
+>> > Right, the extra logic for preventing autoindex pages from being
+>> > re-created. This is taking a while, so I'm going to leave out the
+>> > autoindex part for the moment. The FIXME is only relevant
+>> > because I tried to solve
+>> > [[todo/autoindex should use add__95__autofile]] first, but
+>> > strictly speaking, that's an orthogonal change. --s
+
+>> * Suggest making recentchanges unlink the transient page
+>> first, and only unlink from the old location if it wasn't
+>> in the transient location. Ok, it only saves 1 syscall :)
+>>
+>> > Is an unlink() really that expensive? But, OK, fixed in the
+>> > `ready/transient-recentchanges` branch. --s
+
+>> >> It's not, but it's easy. :) --[[Joey]]
+
+>> * Similarly it's a bit worrying for performance that it
+>> needs to pull in and use `Cwd` on every ikiwiki startup now.
+>> I really don't see the need; `wikistatedir` should
+>> mostly be absolute, and ikiwiki should not chdir in ways
+>> that break it anyway.
+>>
+>> > The reason to make it absolute is that relative underlays
+>> > are interpreted as relative to the base underlay directory,
+>> > not the cwd, by `add_underlay`.
+>> >
+>> > The updated `ready/transient-only` branch only loads `Cwd` if
+>> > the path is relative; an extra commit on branch
+>> > `smcv/transient-relative` goes behind `add_underlay`'s
+>> > back to allow use of a cwd-relative underlay. Which direction
+>> > would you prefer?
+>> >
+>> > I note in passing that [[plugins/autoindex]] and `IkiWiki::Render`
+>> > both need to use `Cwd` and `File::Find` on every refresh, so
+>> > there's only any point in avoiding `Cwd` for runs that don't
+>> > actually refresh, like simple uses of the CGI. --s
+
+>> >> Oh, right, I'd forgotten about the horrificness of File::Find
+>> >> that required a chdir for security. Ugh. Can we just avoid
+>> >> it for those simple cases then? (demand-calculate wikistatedir)
+>> >> --[[Joey]]
+
+>> >>> The reason that transientdir needs to be absolute is that it's
+>> >>> added as an underlay.
+>> >>>
+>> >>> We could avoid using `Cwd` by taking the extra commit from either
+>> >>> `smcv/transient-relative` or `smcv/transient-relative-api`;
+>> >>> your choice. I'd personally go for the latter.
+>> >>>
+>> >>> According to git grep, [[plugins/po]] already wants to look at
+>> >>> the underlaydirs in its checkconfig hook, so I don't think
+>> >>> delaying calculation of the underlaydir is viable. (I also noticed
+>> >>> a bug,
+>> >>> [[bugs/po:_might_not_add_translated_versions_of_all_underlays]].)
+>> >>>
+>> >>> `underlaydirs` certainly needs to have been calculated by the
+>> >>> time `refresh` hooks finish, so `find_src_files` can use it. --s
+
+>> * Unsure about the use of `default_pageext` in the `change`
+>> hook. Is everything in the transientdir really going
+>> to use that pageext? Would it be better to look up the
+>> complete source filename?
+>>
+>> > I've updated `ready/transient` to do a more thorough GC by
+>> > using File::Find on the transient directory. This does
+>> > require `File::Find` and `Cwd`, but only when pages change,
+>> > and `refresh` loads both of those in that situation anyway.
+>> >
+>> > At the moment everything in the transientdir will either
+>> > have the `default_pageext` or be internal, although I
+>> > did wonder whether to make [[plugins/contrib/album]]
+>> > viewer pages optionally be `html`, for better performance
+>> > when there's a very large number of photos. --s
+
+>> >> Oh, ugh, more File::Find... Couldn't it just assume that the
+>> >> transient page has the same extension as its replacement?
+>> >> --[[Joey]]
+
+>> >>> Good idea, that'll be true for web edits at least.
+>> >>> Commit added. --s
+
+--------------------------
+
+## An earlier version
+
+I had a look at implementing this. It turns out to be harder than I thought
+to have purely in-memory pages (several plugins want to be able to access the
+source file as a file), but I did get this proof-of-concept branch
+to write tag and autoindex pages into an underlay.
+
+This loses the ability to delete the auto-created pages (although they don't
+clutter up git this way, at least), and a lot of the code in autoindex is
+probably now redundant, so this is probably not quite ready for merge, but
+I'd welcome opinions.
+
+Usage: set `tag_underlay` and/or `autoindex_underlay` to an absolute path,
+which you must create beforehand. I suggest *srcdir* + `/.ikiwiki/transient`.
+
+Refinements that could be made if this approach seems reasonable:
+
+* make these options boolean, and have the path always be `.ikiwiki/transient`
+* improve the `remove` plugin so it also deletes from this special underlay
+
+>> Perhaps it should be something more generic, so that other plugins could use it (such as "album" mentioned above).
+>> The `.ikiwiki/transient` would suit this, but instead of saying "tag_underlay" or "autoindex_underlay" have "use_transient_underlay" or something like that?
+>> Or to make it more flexible, have just one option "transient_underlay" which is set to an absolute path, and if it is set, then one is using a transient-underlay.
+>> --[[KathrynAndersen]]
+
+>>> What I had in mind was more like `tag_autocreate_transient => 1` or
+>>> `autoindex_transient => 1`; you might conceivably want tags to be
+>>> checked in but autoindices to be transient, and it's fine for each
+>>> plugin to make its own decision. Going from that to one boolean
+>>> (or just always-transient if people don't think that's too
+>>> astonishing) would be trivial, though.
+>>>
+>>> I don't think relocating the transient underlay really makes sense,
+>>> except for prototyping: you only want one, and `.ikiwiki` is as good
+>>> a place as any (ikiwiki already needs to be able to write there).
+>>>
+>>> For [[plugins/contrib/album]] I think I'd just make the photo viewer
+>>> pages always-transient - you can always make a transient page
+>>> permanent by editing it, after all.
+>>>
+>>> Do you think this approach has enough potential that I should
+>>> continue to hack on it? Any thoughts on the implementation? --[[smcv]]
+
+>>>> Ah, now I understand what you're getting at. Yes, it makes sense to put transient pages under `.ikiwiki`.
+>>>> I haven't looked at the code, but I'd be interested in seeing whether it's generic enough to be used by other plugins (such as `album`) without too much fuss.
+>>>> The idea of a transient underlay gives us a desirable feature for free: that if someone edits the transient page, it is made permanent and added to the repository.
+>>>>
+>>>> I think the tricky thing with removing these transient underlay pages is the question of how to prevent whatever auto-generated the pages in the first place from generating them again - or, conversely, how to force whatever auto-generated those pages to regenerate them if you've changed your mind.
+>>>> I think you'd need something similar to `will_render` so that transient pages would be automatically removed if whatever auto-generated them is no longer around.
+>>>> -- [[KathrynAndersen]]
diff --git a/doc/todo/two-way_convert_of_wikis.mdwn b/doc/todo/two-way_convert_of_wikis.mdwn
new file mode 100644
index 000000000..61f02a30b
--- /dev/null
+++ b/doc/todo/two-way_convert_of_wikis.mdwn
@@ -0,0 +1,15 @@
+
+[[!tag wishlist]]
+
+Ok, the vision is this: some of you will know git-svn. I want something like
+git-svn, but for wikis. I want to be able to do the following:
+
+1. Convert a moinmoin (or whatever) wiki to a local ikiwiki on my laptop.
+2. Edit my local copy (offline).
+3. Preview the changes with my local ikiwiki installation + browser.
+4. Push the changes back to moinmoin (or whatever) wiki.
+
+I know, I know, ikiwiki wasn't designed for that, but it would be really cool
+and useful, and people ask for that kind of thing too.
+
+--[[David_Riebenbauer]]
diff --git a/doc/todo/untrusted_git_push_hooks.mdwn b/doc/todo/untrusted_git_push_hooks.mdwn
new file mode 100644
index 000000000..313078ce5
--- /dev/null
+++ b/doc/todo/untrusted_git_push_hooks.mdwn
@@ -0,0 +1,12 @@
+Re the canrename, canremove, and canedit hooks:
+
+Of the three, only canremove is currently checked during an untrusted
+git push (a normal git push is assumed to be from a trusted user and
+bypasses all checks).
+
+It would probably make sense to add the canedit hook to the checks done
+there. Calling the canrename hook is tricky, because after all, git does
+not record explicit file moves.
+
+The checkcontent hook is another hook not currently called there, that
+probably should be.
diff --git a/doc/todo/use_secure_cookies_for_ssl_logins.mdwn b/doc/todo/use_secure_cookies_for_ssl_logins.mdwn
new file mode 100644
index 000000000..194db2f36
--- /dev/null
+++ b/doc/todo/use_secure_cookies_for_ssl_logins.mdwn
@@ -0,0 +1,36 @@
+[[!template id=gitbranch branch=smcv/ready/sslcookie-auto author="[[smcv]]"]]
+[[!tag patch]]
+
+At the moment `sslcookie => 0` never creates secure cookies, so if you log in
+with SSL, your browser will send the session cookie even over plain HTTP.
+Meanwhile `sslcookie => 1` always creates secure cookies, so you can't
+usefully log in over plain http.
+
+This branch adds `sslcookie => 0, sslcookie_auto => 1` as an option; this
+uses the `HTTPS` environment variable, so if you log in over SSL you'll
+get a secure session cookie, but if you log in over HTTP, you won't.
+(The syntax for the setup file is pretty rubbish - any other suggestions?)
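+
+A minimal sketch of the logic (the real change lives in ikiwiki's CGI
+session handling; the cookie construction via `CGI::Cookie` is shown for
+illustration only):
+
+    # secure cookie if the admin forced it, or if this login came over SSL
+    my $secure = $config{sslcookie} || defined $ENV{HTTPS};
+    my $cookie = CGI::Cookie->new(
+        -name   => $session->name,
+        -value  => $session->id,
+        -secure => $secure,
+    );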
+
+> Does this need to be a configurable option at all? The behavior could
+> just be changed in the sslcookie = 0 case. It seems sorta reasonable
+> that, once I've logged in via https, I need to re-login if I then
+> switch to http.
+
+>> Even better. I've amended the branch to have this behaviour, which
+>> turns it into a one-line patch. --[[smcv]]
+
+> And, if your change is made, the sslcookie option could probably itself
+> be dropped too -- at least I don't see a real use case for it if ikiwiki
+> is more paranoid about cookies by default.
+
+>> I haven't done that; it might make sense to do so, but I think it'd be
+>> better to leave it in as a safety-catch (or in case someone's
+>> using a webserver that doesn't put `$HTTPS` in the environment). --s
+
+> Might be best to fix
+> [[todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both]]
+> first, so that dual https/http sites can better be set up. --[[Joey]]
+
+>> Thanks for merging that! :-) --s
+
+[[merged|done]] --[[Joey]]
diff --git a/doc/todo/use_templates_for_the_img_plugin.mdwn b/doc/todo/use_templates_for_the_img_plugin.mdwn
new file mode 100644
index 000000000..1cee1b535
--- /dev/null
+++ b/doc/todo/use_templates_for_the_img_plugin.mdwn
@@ -0,0 +1,29 @@
+[[!template id=gitbranch branch=jmtd/img_use_template author="[[Jon]]"]]
+
+Not finished! :-)
+
+The patches in <http://github.com/jmtd/ikiwiki/tree/img_use_template> convert the `img.pm` plugin to use a template (by default, `img.tmpl`, varied using a `template=` parameter) rather than hard-code the generated HTML.
+
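+Hypothetical usage, with an invented template name:
+
+    \[[!img photo.jpg size=200x template=img_frame]]
+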
+I originally thought of this to solve the problem outlined in [[bugs/can't mix template vars inside directives]], before I realised I could wrap the `img` call in my pages with a template to achieve the same thing. I therefore sat on it.
+
+However, I since thought of another use for this, and so started implementing it. (note to self: explain this other use)
+
+----
+
+Ok, I have managed to achieve what I wanted with stock ikiwiki, so this branch might not have any more life left in it (though it has proven an interesting experiment in how much logic can be moved from `img.pm` into a template relatively easily, even if the template is not terribly legible).
+
+My ikiwiki page has a picture on the front page. I've changed that picture just once, but I would like to change it again from time to time. I also want to keep a "gallery", or at least a list, of previous pictures, and perhaps include text alongside each picture, but not on the front page.
+
+I've achieved this as follows:
+
+ * each index picture gets a page under "indexpics".
+ * the "indexpics" page has a raw inline to include them all[1]
+ * the front page has more-or-less the same inline, with show=1
+ * each index picture page has a [[plugins/conditional]]:
+ * if you are being included, show the resized picture only, and link the picture to the relevant indexpic page
+ * else, show the picture with the default link to a full-size image, and include explanatory text.
+ * most of the boilerplate is hidden inside a template
+
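+A sketch of the per-picture conditional (page and file names invented;
+`included()` is the stock [[plugins/conditional]] test):
+
+    \[[!if test="included()"
+      then="[[!img mypic.jpg size=200x link=indexpics/mypic]]"
+      else="[[!img mypic.jpg]] Some explanatory text."]]
+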
+It is not quite as I envisaged it: the explanatory text would probably make sense on the indexpics "gallery" page, but since that includes the page, the wrong trouser-leg of the conditional is used. But it works quite well. Introducing a new index picture involves creating an appropriate page under indexpics and the rest happens automatically.
+
+[1] lie #1: the pagespec is a lot more complex as it has to exclude raw image filetypes
diff --git a/doc/todo/user-defined_templates_outside_the_wiki.mdwn b/doc/todo/user-defined_templates_outside_the_wiki.mdwn
new file mode 100644
index 000000000..1d72aa6a7
--- /dev/null
+++ b/doc/todo/user-defined_templates_outside_the_wiki.mdwn
@@ -0,0 +1,10 @@
+[[!tag wishlist]]
+
+The [[plugins/contrib/ftemplate]] plugin looks for templates inside the wiki
+source, but also looks in the system templates directory (the one with
+`page.tmpl`). This means the wiki admin can provide templates that can be
+invoked via `\[[!template]]`, but don't have to "work" as wiki pages in their
+own right. I think the normal [[plugins/template]] plugin could benefit from
+this functionality.
+
+[[done]] --[[Joey]]
diff --git a/doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn b/doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn
index 65b7cd96a..6ede7f91e 100644
--- a/doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn
+++ b/doc/todo/want_to_avoid_ikiwiki_using_http_or_https_in_urls_to_allow_serving_both.mdwn
@@ -1,3 +1,46 @@
+## current status
+
+[[done]] again! :)
+
+Actually, there are two places where the configured url is still hardcoded:
+
+1. When searching, all the links will use it. This is annoying to fix,
+ and we deem it not a problem.
+2. When ikiwiki dies with an error, the links on the error page will
+ use it. Too bad :)
+
+------
+
+## semi-old
+
+* CGI pages, with the exception of edit pages, set `<base>` to
+ `$config{url}`
+
+ I had to revert using `baseurl(undef)` for that, because it needs
+ to be a full url.
+
+ Ideally, baseurl would return an absolute url derived from the url
+ being used to access the cgi, but that needs access to the CGI object,
+ which it does not currently have. Similarly, `misctemplate`
+ does not have access to the CGI object, so it cannot use it to
+ generate a better baseurl. Not sure yet what to do; may have to thread
+ a cgi parameter through all the calls to misctemplate. --[[Joey]]
+
+ > Fixed, cgitemplate is used now. --[[Joey]]
+
+* Using `do=goto` to go to a comment or recentchanges item
+ will redirect to the `$config{url}`-based url, since the
+ permalinks are made to be absolute urls now.
+
+ Fixing this would seem to involve making meta force permalinks
+ to absolute urls when fulling out templates, while allowing them
+ to be left as partial urls internally, for use by goto. --[[Joey]]
+
+ > This reversion has now been fixed. --[[Joey]]
+
+## old attempt
+
It looks like all links in websites are absolute paths, this has some limitations:
* If connecting to website via https://... all links will take you back to http://
@@ -12,18 +55,322 @@ It would be good if relative paths could be used instead, so the transport metho
> "../../", and "../". The only absolute links are to CGIs and the w3c DTD.
> --[[Joey]]
->> The problem is within the CGI script. The links within the HTML page are all absolute, including links to the css file.
->> Having a http links within a HTML page retrieved using https upset most browsers (I think). Also if I push cancel on the edit page in https, I end up at at http page. -- Brian May
+>> The problem is within the CGI script. The links within the HTML page are all
+>> absolute, including links to the css file. Having http links within an HTML
+>> page retrieved using https upsets most browsers (I think). Also if I push cancel
+>> on the edit page in https, I end up at an http page. -- Brian May
>>> Ikiwiki does not hardcode http links anywhere. If you don't want
>>> it to use such links, change your configuration to use https
>>> consistently. --[[Joey]]
-Errr... That is not a solution, that is a work around. ikiwiki does not hard code the absolute paths, but absolute paths are hard coded in the configuration file. If you want to serve your website so that the majority of users can see it as http, including in rss feeds (this allows proxy caches to cache the contents and has reduced load requirements), but editing is done via https for increased security, it is not possible. I have some ideas how this can be implemented (as ikiwiki has the absolute path to the CGI script and the absolute path to the destination, it should be possible to generate a relative path from one to the other), although some minor issues still need to be resolved. -- Brian May
+Errr... That is not a solution, that is a workaround. ikiwiki does not hard
+code the absolute paths, but absolute paths are hard coded in the configuration
+file. If you want to serve your website so that the majority of users can see
+it as http, including in rss feeds (this allows proxy caches to cache the
+contents and has reduced load requirements), but editing is done via https for
+increased security, it is not possible. I have some ideas how this can be
+implemented (as ikiwiki has the absolute path to the CGI script and the
+absolute path to the destination, it should be possible to generate a relative
+path from one to the other), although some minor issues still need to be
+resolved. -- Brian May
-I noticed the links to the images on <http://ikiwiki.info/recentchanges/> are also absolute, that is <http://ikiwiki.info/wikiicons/diff.png>; this seems surprising, as the change.tmpl file uses &lt;TMPL_VAR BASEURL&gt;
-which seems to do the right thing in page.tmpl, but not for change.tmpl. Where is BASEURL set? -- Brian May
+I noticed the links to the images on <http://ikiwiki.info/recentchanges/> are
+also absolute, that is <http://ikiwiki.info/wikiicons/diff.png>; this seems
+surprising, as the change.tmpl file uses &lt;TMPL_VAR BASEURL&gt; which seems
+to do the right thing in page.tmpl, but not for change.tmpl. Where is BASEURL
+set? -- Brian May
> The use of an absolute baseurl in change.tmpl is a special case. --[[Joey]]
+So I'm facing this same issue. I have a wiki which needs to be accessed on
+three different URLs(!) and the hard coding of the URL from the setup file is
+becoming a problem for me. Is there anything I can do here? --[[Perry]]
+
+> I remain puzzled by the problem that Brian is discussing. I don't see
+> why you can't just set the cgiurl and url to a https url, and serve
+> the site using both http and https.
+>
+> Just for example, <https://kitenet.net/> is an ikiwiki, and it is accessible
+> via https or http, and if you use https, links will remain on https (except
+> for links using the cgi, which I could fix by changing the cgiurl to https).
+>
+> I think it's possible ikiwiki used to have some
+> absolute urls that have been fixed since Brian filed the bug. --[[Joey]]
+
[[wishlist]]
+
+----
+
+[[!toggle id="smcv-https" text="Some discussion of a rejected implementation, smcv/https."]]
+[[!toggleable id="smcv-https" text="""
+
+[[!template id=gitbranch branch=smcv/https author="[[smcv]]"]]
+
+For a while I've been using a configuration where each wiki has a HTTP and
+a HTTPS mirror, and updating one automatically updates the other, but
+that seems unnecessarily complicated. My `https` branch adds `https_url`
+and `https_cgiurl` config options which can be used to provide a HTTPS
+variant of an existing site; the CGI script automatically detects whether
+it was accessed over HTTPS and switches to the other one.
+
+This required some refactoring, which might be worth merging even if
+you don't like my approach:
+
+* change `IkiWiki::cgiurl` to return the equivalent of `$config{cgiurl}` if
+ called with no parameters, and change all plugins to indirect through it
+ (then I only need to change that one function for the HTTPS hack)
+
+* `IkiWiki::baseurl` already has similar behaviour, so change nearly all
+ references to the `$config{url}` to call `baseurl` (a couple of references
+ specifically wanted the top-level public URL for Google or Blogspam rather
+ than a URL for the user's browser, so I left those alone)
+
+--[[smcv]]
+
+> The justification for your patch seems to be wanting to use a different
+> domain, like secure.foo.com, for https? Can you really not just configure
+> both url and cgiurl to use `https://secure.foo.com/...` and rely on
+> relative links to keep users of `http://insecure.foo.com/` on http until
+> they need to use the cgi?
+
+>> My problem with that is that uses of the CGI aren't all equal (and that
+>> the CA model is broken). You could put CGI uses in two classes:
+>>
+>> - websetup and other "serious" things (for the sites I'm running, which
+>> aren't very wiki-like, editing pages is also in this class).
+>> I'd like to be able to let privileged users log in over
+>> https with httpauth (or possibly even a client certificate), and I don't
+>> mind teaching these few people how to do the necessary contortions to
+>> enable something like CACert.
+>>
+>> - Random users making limited use of the CGI: do=goto, do=404, and
+>> commenting with an OpenID. I don't think it's realistic to expect
+>> users to jump through all the CA hoops to get CACert installed for that,
+>> which leaves their browsers being actively obstructive, unless I either
+>> pay the CA tax (per subdomain) to get "real" certificates, or use plain
+>> http.
+>>
+>> On a more wiki-like wiki, the second group would include normal page edits.
+>>
+>>> I see your use case. It still seems to me that for the more common
+>>> case where CA tax has been paid (getting a cert that is valid for
+>>> multiple subdomains should be doable?), having anything going through the
+>>> cgiurl upgrade to https would be ok. In that case, http is just an
+>>> optimisation for low-value, high-aggregate-bandwidth type uses, so a
+>>> little extra https on the side is not a big deal. --[[Joey]]
+>>
+>> Perhaps I'm doing this backwards, and instead of having the master
+>> `url`/`cgiurl` be the HTTP version and providing tweakables to override
+>> these with HTTPS, I should be overriding particular uses to plain HTTP...
+>>
+>> --[[smcv]]
+>>>
+>>> Maybe, or I wonder if you could just use RewriteEngine for such selective
+>>> up/downgrading. Match on `do=(edit|create|prefs)`. --[[Joey]]
+
+> I'm unconvinced.
+>
+> `IkiWiki::baseurl()."foo"` just seems to be asking for trouble,
+> ie being accidentially written as `IkiWiki::baseurl("foo")`,
+> which will fail when foo is not a page, but some file.
+
+>> That's a good point. --s
+
+> I see multiple places (inline.pm, meta.pm, poll.pm, recentchanges.pm)
+> where it will now put the https url into a static page if the build
+> happens to be done by the cgi accessed via https, but not otherwise.
+> I would rather not have to audit for such problems going forward.
+
+>> Yes, that's a problem with this approach (either way round). Perhaps
+>> making it easier to run two mostly-synched copies like I was previously
+>> doing is the only solution... --s
+
+"""]]
+
+----
+
+[[!template id=gitbranch branch=smcv/ready/localurl author="[[smcv]]"]]
+[[!tag patch]]
+
+OK, here's an alternative approach, closer in spirit to what was initially
+requested. I included a regression test for `urlto`, `baseurl` and `cgiurl`,
+now that they have slightly more complex behaviour.
+
+The idea is that in the common case, the CGI and the pages will reside on the
+same server, so they can use "semi-absolute" URLs (`/ikiwiki.cgi`, `/style.css`,
+`/bugs/done`) to refer to each other. Most redirects, form actions, links etc.
+can safely use this form rather than the fully-absolute URL.
+
+The initial version of the branch had config options `local_url` and
+`local_cgiurl`, but they're now automatically computed by checking
+whether `url` and `cgiurl` are on the same server with the the same URL
+scheme. In theory you could use things like `//static.example.com/wiki/`
+and `//dynamic.example.com/ikiwiki.cgi` to preserve choice of http/https
+while switching server, but I don't know how consistently browsers
+support that.
+
+"local" here is short for "locally valid", because these URLs are neither
+fully relative nor fully absolute, and there doesn't seem to be a good name
+for them...
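+
+For example, with `url => 'http://example.com/'` and
+`cgiurl => 'http://example.com/ikiwiki.cgi'` (hypothetical values), the three
+kinds of URL for the same page would be:
+
+    bugs/done/                       # fully relative, from a sibling page
+    /bugs/done/                      # "local", valid anywhere on that server
+    http://example.com/bugs/done/    # fully absolute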
+
+I've tested this on a demo website with the CGI enabled, and it seemed to
+work nicely (there might be bugs in some plugins, I didn't try all of them).
+The branch at [[todo/use secure cookies for SSL logins]] goes well with
+this one.
+
+The `$config{url}` and `$config{cgiurl}` are both HTTP, but if I enable
+`httpauth`, set `cgiauthurl` to a HTTPS version of the same site and log
+in via that, links all end up in the HTTPS version.
+
+New API added by this branch:
+
+* `urlto(x, y, 'local')` uses `$local_url` instead of `$config{url}`
+
+ > Yikes. I see why you wanted to keep it to 3 parameters (4 is too many,
+ > and po overrides it), but I dislike overloading the third parameter
+ > like that.
+ >
+ > There are fairly few calls to `urlto($foo, $bar)`, so why not
+ > make that always return the semi-local url form, and leave the third
+ > parameter for the cases that need a true fully-qualified url.
+ > The new form for local urls will typically be only a little bit longer,
+ > except in the unusual case where the cgiurl is elsewhere. --[[Joey]]
+
+ >> So, have urlto(x, y) use `$local_url`? There are few calls, but IMO
+ >> they're for the most important things - wikilinks, img, map and
+ >> other ordinary hyperlinks. Using `$local_url` would be fine for
+ >> webserver-based use, but it does stop you browsing your wiki's
+ >> HTML over `file:///` (unless you set that as the base URL, but
+ >> then you can't move it around), and stops you moving simple
+ >> outputs (like the docwiki!) around.
+ >>
+ >> I personally think breaking the docwiki is enough to block that.
+ >>
+ >>> Well, the docwiki doesn't have an url configured at all, so I assumed
+ >>> it would need to fall back to current behavior in that case. I had
+ >>> not thought about browsing wiki's html files though, good point.
+ >>
+ >> How about this?
+ >>
+ >> * `urlto($link, $page)` with `$page` defined: relative
+ >> * `urlto($link, undef)`: local, starts with `/`
+ >> * `urlto($link)`: also local, as a side-effect
+ >> * `urlto($link, $anything, 1)` (but idiomatically, `$anything` is
+ >> normally undef): absolute, starts with `http[s]://`
+ >>
+ >> --[[smcv]]
+ >>
+ >>> That makes a great deal of sense, bravo for actually removing
+ >>> parameters in the common case while maintaining backwards
+ >>> compatability! --[[Joey]]
+ >>>
+ >>>> Done in my `localurl` branch; not tested in a whole-wiki way
+ >>>> yet, but I did add a regression test. I've used
+ >>>> `urlto(x, undef)` rather than `urlto(x)` so far, but I could
+ >>>> go back through the codebase using the short form if you'd
+ >>>> prefer. --[[smcv]]
+ >>>
+ >>> It does highlight that it would be better to have a
+ >>> `absolute_urlto($link)` (or maybe `absolute(urlto($link))` )
+ >>> rather than the 3 parameter form. --[[Joey]]
+ >>>
+ >>> Possibly. I haven't added this.
+
+* `IkiWiki::baseurl` has a new second argument which works like the
+ third argument of `urlto`
+
+ > I assume you have no objection to this --[[smcv]]
+
+ >> It's so little used that I don't really care if it's a bit ugly.
+ >> (But I assume changes to `urlto` will follow through here anyway.)
+ >> --[[Joey]]
+
+ >>> I had to use it a bit more, as a replacement for `$config{url}`
+ >>> when doing things like referencing stylesheets or redirecting to
+ >>> the top of the wiki.
+ >>>
+ >>> I ended up redoing this without the extra parameter. Previously,
+ >>> `baseurl(undef)` was the absolute URL; now, `baseurl(undef)` is
+ >>> the local path. I know you objected to me using `baseurl()` in
+ >>> an earlier branch, because `baseurl().$x` looks confusingly
+ >>> similar to `baseurl($x)` but has totally different semantics;
+ >>> I've generally written it `baseurl(undef)` now, to be more
+ >>> explicit. --[[smcv]]
+
+* `IkiWiki::cgiurl` uses `$local_cgiurl` if passed `local_cgiurl => 1`
+
+ > Now changed to always use the `$local_cgiurl`. --[[smcv]]
+
+* `IkiWiki::cgiurl` omits the trailing `?` if given no named parameters
+ except `cgiurl` and/or `local_cgiurl`
+
+ > I assume you have no objection to this --[[smcv]]
+ >
+ >> Nod, although I don't know of a use case. --[[Joey]]
+
+ >>> The use-case is that I can replace `$config{cgiurl}` with
+ >>> `IkiWiki::cgiurl()` for things like the action attribute of
+ >>> forms. --[[smcv]]
+
+Fixed bugs:
+
+* I don't think anything except `openid` calls `cgiurl` without also
+ passing in `local_cgiurl => 1`, so perhaps that should be the default;
+ `openid` uses the `cgiurl` named parameter anyway, so there doesn't even
+ necessarily need to be a way to force absolute URLs? Any other module
+ that really needs an absolute URL could use
+ `cgiurl(cgiurl => $config{cgiurl}, ...)`,
+ although that does look a bit strange
+
+ > I agree that makes sense. --[[Joey]]
+
+ >> I'm not completely sure whether you're agreeing with "perhaps do this"
+ >> or "that looks too strange", so please disambiguate:
+ >> would you accept a patch that makes `cgiurl` default to a local
+ >> (starts-with-`/`) result? If you would, that'd reduce the diff. --[[smcv]]
+
+ >>> Yes, I absolutely think it should default to local. (Note that
+ >>> if `absolute()` were implemented as suggested above, it could also
+ >>> be used with cgiurl if necessary.) --[[Joey]]
+
+ >>>> Done (minus `absolute()`). --[[smcv]]
+
+Potential future things:
+
+* It occurs to me that `IkiWiki::cgiurl` could probably benefit from being
+ exported? Perhaps also `IkiWiki::baseurl`?
+
+ > Possibly, see [[firm_up_plugin_interface]]. --[[Joey]]
+
+ >> Not really part of this branch, though, so wontfix (unless you ask me
+ >> to do so). --[[smcv]]
+
+* Or, to reduce use of the unexported `baseurl` function, it might make
+ sense to give `urlto` a special case that references the root of the wiki,
+ with a trailing slash ready to append stuff: perhaps `urlto('/')`,
+ with usage like this?
+
+	do_something(baseurl => urlto('/', undef, 'local'));
+ do_something_else(urlto('/').'style.css');
+ IkiWiki::redirect(urlto('/', undef, 1));
+
+ > AFACIS, `baseurl` is only called in 3 places so I don't think that's
+ > needed. --[[Joey]]
+
+ >> OK, wontfix. For what it's worth, my branch has 6 uses in IkiWiki
+ >> core code (IkiWiki, CGI, Render and the pseudo-core part of editpage)
+ >> and 5 in plugins, since I used it for things like redirection back
+ >> to the top of the wiki --[[smcv]]
+
+merged|done --[[Joey]] (But reopened, see above.)
+
+----
+
+Update: I had to revert part of 296e5cb2fd3690e998b3824d54d317933c595873,
+since it broke openid logins. The openid object requires a complete,
+not a relative cgiurl. I'm not sure if my changing that back to using
+`$config{cgiurl}` will force users back to eg, the non-https version of a
+site when logging in via openid.
+
+> Ok, changed it to use `CGI->url` to get the current absolute cgi url. --[[Joey]]
diff --git a/doc/todo/web_reversion.mdwn b/doc/todo/web_reversion.mdwn
new file mode 100644
index 000000000..736d674fe
--- /dev/null
+++ b/doc/todo/web_reversion.mdwn
@@ -0,0 +1,73 @@
+Goal: Web interface to allow reverting of changes.
+
+Interface:
+
+At least at first, it will be exposed via the recentchanges
+page, with revert icons next to each change. Later, we may want a
+dynamic per-page interface that goes back more than 100 changes.
+
+Limiting assumptions:
+
+* No support for resolving conflicts in reverts; such a revert would just
+ fail and not happen.
+* No support for reset-to-this-point; initially the interface would only
+  revert a single commit, and if several needed to go, the user would have
+  to revert them one at a time.
+
+Implementation plan:
+
+* `rcs_revert` hook that takes a revision to revert.
+* CGI: `do=revert&rev=foo`
+* recentchanges plugin adds above to recentchanges page
+* prompt user to confirm (to avoid spiders doing reverts),
+ check that user is allowed to make the change, commit reversion,
+ and refresh site.
+
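+A rough sketch of what the hook could look like in the git backend (not
+the actual implementation; error handling is simplified):
+
+    sub rcs_revert ($) {
+        # Try to revert the given rev, leaving the result staged;
+        # returns undef on success, an error message on failure.
+        # The commit itself is done separately via rcs_commit_staged,
+        # so the web user doing the revert gets the attribution.
+        my $rev = shift;
+        if (system('git', 'revert', '--no-commit', $rev) == 0) {
+            return undef;
+        }
+        else {
+            # conflict or other failure: discard the partial revert
+            system('git', 'reset', '--hard');
+            return "failed to revert commit $rev";
+        }
+    }
+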
+Peter Gammie has done an initial implementation of the above.
+[[!template id=gitbranch branch=peteg/revert author="[[peteg]]"]]
+
+>> It is on a separate branch now. --[[peteg]]
+
+> Review: --[[Joey]]
+>
+> The revert commit will not currently say what web user did the revert.
+> This could be fixed by doing a `--no-commit` revert first and then using
+> `rcs_commit_staged`.
+>> Fixed, I think. --[[peteg]]
+>
+> So I see one thing I completely forgot about is `check_canedit`. Preventing
+> users from using reverts to make changes they would normally not be allowed
+> to make is tricky. I guess an easy first pass would be to only let admins
+> do it. That would be enough to get the feature out there.
+>
+> I'm thinking about having a `rcs_preprevert`. It would take a rev and look
+> at what changes reverting it would entail, and return the same data
+> structure that `rcs_receive` does. This could be done by using `git revert
+> --no-commit`, and then examining the changes, and then `git reset` to drop
+> them.
+>> We can use the existing `git_commit_info` with the patch ID -- no need to
+>> touch the working directory. -- [[peteg]]
+>
+> Then the code that is currently in IkiWiki/Receive.pm, that calls
+> `check_canedit` and `check_canremove` to test the change, can be
+> straightforwardly refactored out, and used for checking reverts too.
+>> Wow, that was easy. :-) -- [[peteg]]
+>
+> (The data from `rcs_preprevert` could also be used for a confirmation
+> prompt -- it doesn't currently include enough info for diffs, but at
+> least could have a list of changed files.)
+>
+> Note that it's possible for a git repo to have commits that modify wiki
+> files in a subdir, and code files elsewhere. `rcs_preprevert` should
+> detect changes outside the wiki dir, and fail, like `rcs_receive` does.
+>> Taken care of by refactoring `rcs_receive` in `git.pm`.
+>> I've tested it lightly in my single-user setup. It's a little nasty that the `attachment` plugin
+>> gets used to check whether attachments are allowed -- there really should be a hook for that.
+>>> I agree, but have not figured out a way to make a hook work yet.
+>>> --[[Joey]]
+>>
+>> Please look it over and tell me what else needs fixing... -- [[peteg]]
+
+>>> I have made my own revert branch and put a few^Wseveral fixes in there.
+>>> All merged to master now! --[[Joey]]
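+
+For the record, a sketch of the `rcs_preprevert` shape discussed above
+(`git_commit_info` is the existing helper in `git.pm`; the rest is
+elided):
+
+    sub rcs_preprevert ($) {
+        my $rev = shift;
+        # Inspect the commit to be reverted without touching the
+        # working tree, and describe its changes in the same form
+        # that rcs_receive returns, so that the same check_canedit
+        # and check_canremove tests can be run against them.
+        my ($ci) = git_commit_info($rev, 1);
+        ...
+    }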
+
+[[done]]
diff --git a/doc/todo/wrapperuser.mdwn b/doc/todo/wrapperuser.mdwn
new file mode 100644
index 000000000..4c42b046f
--- /dev/null
+++ b/doc/todo/wrapperuser.mdwn
@@ -0,0 +1,7 @@
+ikiwiki's .setup file can specify wrappergroup, and ikiwiki will set the group
+of the wrappers accordingly. Having seen people run into difficulty when
+trying to do the same thing with users (for instance, making all wrappers
+6755 ikiwiki:ikiwiki), I think it would help to have "wrapperuser". This could
+only actually take effect if building the wrappers as root (not really the best
+plan), but ikiwiki could at least warn if wrapperuser does not match the user
+the wrapper will end up with.
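+
+A sketch of how that might look in a .setup file (`wrapperuser` being the
+proposed option, which does not exist yet):
+
+    wrappergroup => 'ikiwiki',  # already supported
+    wrapperuser => 'ikiwiki',   # proposed; could only take effect when
+                                # the wrappers are built as root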