Diffstat (limited to 'doc/todo')
47 files changed, 1863 insertions, 116 deletions
diff --git a/doc/todo/ACL.mdwn b/doc/todo/ACL.mdwn index d40701d60..dd9793233 100644 --- a/doc/todo/ACL.mdwn +++ b/doc/todo/ACL.mdwn @@ -21,6 +21,11 @@ something, that I think is very valuable. >>>> Which would rule out openid, or other fun forms of auth. And routing all access >>>> through the CGI sort of defeats the purpose of ikiwiki. --[[Ethan]] +>>>>> I think what Joey is suggesting is to use apache ACLs in conjunction +>>>>> with basic HTTP auth to control read access, and ikiwiki can use the +>>>>> information via the httpauth plugin for other ACLs (write, admin). But +>>>>> yes, that would rule out non-httpauth mechanisms. -- [[Jon]] + Also see [[!debbug 443346]]. > Just a few quick thoughts about this: @@ -76,3 +81,18 @@ Any idea when this is going to be finished? If you want, I am happy to beta tes > Example of use to only allow two users to edit the tipjar page: > locked_pages => 'tipjar and !(user(joey) or user(bob))', > --[[Joey]] + +> > Thank you for the hint but I am being still confused (read: dense)... What I am trying to do is this: + +> > * No anonymous access. +> > * Logged in users can edit and create pages. +> > * Users can set who can edit their pages. +> > * Some pages are only viewable by admins. + +> > Is it possible? If so how?... + +>>> I don't believe this is currently possible. What is missing is the concept +>>> of page 'ownership'. -- [[Jon]] + +>>>> GAH! That is really a shame... Any chance of adding that? No, I do not really expect it to be added, after all my requirements are pushing the boundary of what a wikiwiki + should be. Nonetheless, thanks for your help! diff --git a/doc/todo/Add_label_to_search_form_input_field.mdwn b/doc/todo/Add_label_to_search_form_input_field.mdwn index 51b34927d..514108fba 100644 --- a/doc/todo/Add_label_to_search_form_input_field.mdwn +++ b/doc/todo/Add_label_to_search_form_input_field.mdwn @@ -51,4 +51,6 @@ The patch below adds a label for the field to improve usability: > element. already works in eg, chromium. However, ikiwiki does not use > html5 yet. --[[Joey]] -[[!tag wishlist html5]] +>> [[Done]], placeholder added, in html5 mode only. + +[[!tag wishlist bugs/html5_support]] diff --git a/doc/todo/Fix_selflink_in_po_plugin.mdwn b/doc/todo/Fix_selflink_in_po_plugin.mdwn index ae59e14c2..87fa38911 100644 --- a/doc/todo/Fix_selflink_in_po_plugin.mdwn +++ b/doc/todo/Fix_selflink_in_po_plugin.mdwn @@ -2,3 +2,7 @@ Using the po plugin, a link to /bla is present in the sidebar. When viewing /bla in the default language, this link is detected as a selflink. When viewing a translation of /bla, it isn't. --[[intrigeri]] + +Fixed in my po branch. --[[intrigeri]] + +[[!tag patch]] diff --git a/doc/todo/Google_Sitemap_protocol.mdwn b/doc/todo/Google_Sitemap_protocol.mdwn index 057a88b72..ea8ee7f03 100644 --- a/doc/todo/Google_Sitemap_protocol.mdwn +++ b/doc/todo/Google_Sitemap_protocol.mdwn @@ -34,6 +34,9 @@ for an example. You will probably need to strip out the metadata variables I >>>[xtermin.us rather than localhost](http://xtermin.us/git/?p=website.git;a=blob;f=plugins/googlesitemap.pm) is 404 now. >>> -- weakish + +Although it is not able to read the meta-data from files, using google-sitemapgen [works well for me](http://bzed.de/posts/2010/06/creating_a_google_sitemap_for_ikiwiki/) to create a sitemap for my ikiwiki installation. -- [[bzed|BerndZeimetz]] + There is a [sitemap XML standard](http://www.sitemaps.org/protocol.php) that ikiwiki needs to generate for. 
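For reference, the sitemap protocol is essentially a list of `<url>` entries; a minimal sketch of the sort of output ikiwiki would need to emit might look like this (the URL and date are purely illustrative):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://example.com/posts/some_page/</loc>
        <lastmod>2010-06-01</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.5</priority>
      </url>
    </urlset>

Only `<loc>` is required; `<lastmod>`, `<changefreq>` and `<priority>` are optional hints.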
# Google Webmaster tools and RSS @@ -45,3 +48,13 @@ On [Google Webmaster tools](https://www.google.com/webmasters/tools) you can sub [Google should grok feeds as sitemaps.](http://www.google.com/support/webmasters/bin/answer.py?answer=34654) Or rather [[plugins/inline]] should be improved to support the [sitemap protocol](http://sitemaps.org/protocol.php) natively. -- [[Hendry]] + + +Took me a minute to figure this out so I figured I'd share the steps I took: + +* Added rss=>1 and allowrss=>1 to my setup file +* Created a new page where the RSS would be created with this content, replacing "first_page" with the page in my wiki with the earliest date: + +<pre> +\[[!inline pages="* and !*/Discussion and created_after(first_page)" archive="yes" rss="yes" ]] +</pre> diff --git a/doc/todo/More_flexible_po-plugin_for_translation.mdwn b/doc/todo/More_flexible_po-plugin_for_translation.mdwn new file mode 100644 index 000000000..3399f7834 --- /dev/null +++ b/doc/todo/More_flexible_po-plugin_for_translation.mdwn @@ -0,0 +1,5 @@ +I have a website with multi-language content, where some content is only in English, some in German, and some is available in both languages. + +The po-module currently has only one master-language, with slave languages, and a PageSpec should be considered. + +It would be nice to flag the content which should have a translation on a file-by-file basis (with some inline directive?) which could contain the information of the master-language for that file and the desired target-languages. diff --git a/doc/todo/Multiple_categorization_namespaces.mdwn b/doc/todo/Multiple_categorization_namespaces.mdwn new file mode 100644 index 000000000..3e9f8feaa --- /dev/null +++ b/doc/todo/Multiple_categorization_namespaces.mdwn @@ -0,0 +1,103 @@ +I came across this when working on converting my old blog into an ikiwiki, but I think it could be of more general use. + +The background: I have a (currently suspended, waiting to be converted) blog on the [il Cannocchiale](http://www.ilcannocchiale.it) hosting platform. Aside from the usual metatadata (title, author), il Cannocchiale also provides tags and two additional categorization namespaces: a blog-specific user-defind "column" (Rubrica) and a platform-wide "category" (Categoria). The latter is used to group and label a couple of platform-wide lists of latest posts, the former may be used in many different ways (e.g. multi-author blogs could have one column per author or so, or as a form of 'macro-tagging'). Columns are also a little more sophisticated than classical tags because you can assign them a subtitle too. + +When I started working on the conversion, my first idea was to convert Rubriche to subdirectories of an ikiwiki blog. However, this left me with a few annoying things: when rebuilding links from the import, I had to (programmatically) dive into each subdirectory to see where each post was; this would also be problematic for future posting, too. It also meant that moving a post from a Rubrica to the other would break all links (unless ikiwiki has a way to fix this automagically). And I wasn't too keen on the fact that the Rubrica would come up in the URL of the post. And finally, of course, I couldn't use this to preserve the Categoria metadata. 
+ +Another solution I thought about was to use special deeper tags for the Rubrica and Categoria (like: `\[[!tag "Rubrica/Some name"]]`), but this is horrible, clumsy, and makes special treatment of these tags a PITN (for example you wouldn't want the Rubrica to be displayed together with the other tags, and you would want it displayed somewhere else like next to the title of the post). This solution however looks to me as the proper path, as long as tags could support totally separate namespaces. I have a tentative implementation of this `tagtype` feature at [my git clone of ikiwiki](http://git.oblomov.eu/ikiwiki). + +The feature is currently implemented as follows: a `tagtypes` config options takes an array of strings: the tag types to be defined _aside from the usual tags_. Each tag type automatically provides a new directive which sets up tags that different from standard tags by having a different tagbase (the same as the tagtype) and link type (again, the same as the tagtype) (a TODO item for this would to make the directive, tagbase and link type customizable). For example, for my imported blog I would define + + tagtypes => [qw{Categoria Rubrica}] + +and then in the blog posts I would have stuff like + + \[[!Categoria "LAVORO/Vita da impiegato"]] + \[[!Rubrica "Il mio mondo"]] + \[[!meta title="Blah blah"]] + \[[!meta author="oblomov"]] + + The body of the article + + \[[!tag a bunch of tags]] + +and the tags would appear at the bottom of the post, the Rubrica next to the title, etc. All of this information would end up as categories in the feeds (although I would like to rework that code to make use of namespaces, terms and labels in a different way). + +> Note [[plugins/contrib/report/discussion]]. To quote myself from the latter page: +> *I find tags as they currently exist to be too limiting. I prefer something that can be used for Faceted Tagging http://en.wikipedia.org/wiki/Faceted_classification; that is, things like Author:Fred Nurk, Genre:Historical, Rating:Good, and so on. Of course, that doesn't mean that each tag is limited to only one value, either; just to take the above examples, something might have more than one author, or have multiple genres (such as Historical + Romance).* + +> So you aren't the only one who wants to do more with tags, but I don't think that adding a new directive for each tag type is the way to go; I think it would be simpler to just have one directive, and take advantage of the new [[matching different kinds of links]] functionality, and enhance the tag directive. +> Perhaps something like this: + + \[[!tag categorica="LAVORO/Vita da impiegato" rubrica="Il mio mondo"]] + +> Part of my thinking in this is to also combine tags with [[plugins/contrib/field]], so that the tags for a page could be queried and displayed; that way, one could put them wherever you wanted on the page, using any of [[plugins/contrib/getfield]], [[plugins/contrib/ftemplate]], or [[plugins/contrib/report]]. +> --[[KathrynAndersen]] + +>> A very generic metadata framework could cover all possible usages of fields, tags, and related metadata, but keeping its _user interface_ generic would only make it hard to use. Note that this is not an objection to the idea of collapsing the fields and tags functionality (at quick glance, I cannot see a real difference between single-valued custom tagtypes and fields, but see below), but more about the syntax. 
+ +>> I had thought about the `\[[!tag type1=value1 type2=value2]]` syntax myself, but ultimately decided against it for a number of reasons, most importantly the fact that (1) it's harder to type, (2) it's harder to spot errors in the tag types (so for example if one misspelled `categoria` as `categorica`, he might not notice it as quickly as seeing the un-parsed `\[[!categorica ]]` directive in the output html) and (3) it encourages collapsing possibly unrelated metadata together (for example, I would never consider putting the categoria information together with the rubrica one; of course with your syntax it's perfectly possible to keep them separate as well). + +>> Point (2) may be considered a downside as well as an upside, depending on perspective, of course. And it would be possible to have a set of predefined tag types to match against, like in my tagtype directive approach but with your syntax. + +>>> You seem to have answered your own objections already. -- K.A. + +>>Point (3) is of course entirely in the hands of the user, but that's exactly what syntax should be about. There is nothing functionally wrong with e.g. `\[[!meta tag=sometag author=someauthor title=sometitle rubrica=somecolumn]]`, but I honestly find it horrible. + +>>> So, really, point 3 comes down to differing aesthetics. -- K.A. + +>> A solution could be to allow both syntaxes, getting to have for example `\[[!sometagtype "blah"]]` as a shortcut for `\[[!tag sometagtype="blah"]]` (or, in the more general case, `\[[!somefieldname "blah"]]` as a shortcut for `\[[!meta fieldname="blah"]]`). + +>> I would like to point out however that there are some functional differences between categorization metadata vs other metadata that might suggest to keep fields and (my extended) tags separate. For examples, in feeds you'd want all categorization metadata to fall in one place, with some appropriate manipulation (which I still have to implement, by the way), while things like author or title would go to the corresponding feed item properties. Although it all would be possible with appropriate report or template juggling, having such default metadata handled natively looks like a bonus to me. + +>>> Whereas I prefer being able to control such things with templates, because it gives more flexibility AND control. - K.A. + +>>>> Flexibility and control is good for tuning and power-usage, but sensible defaults are a must for a platform to be usable out of the box without much intervention. Moreover, there's a possible problem with what kind of data must be passed over to templates. + +Aside from the name of the plugin (and thus of the main directive), which could be `tag`, `meta`, `field` or whatever (maybe extending `meta` would be the most sensible choice), the features we want are + +1. allow multiple values per type/attribute/field/whatever (fields currently only allows one) + * Agreed about multiple values; I've been considering whether I should add that to `field`. -- K.A. +2. allow both hidden and visible references (a la tag vs taglink) + * Hidden and visible references; that's fair enough too. My approach with `ymlfront` and `getfield` is that the YAML code is hidden, and the display is done with `getfield`, but there's no reason not to use additional approaches. -- K.A. +3. allow each type/attribute/field to be exposed under multiple queries (e.g. tags and categories; this is mostly important for backwards compatibility, not sure if it might have other uses too) + * I'm not sure what you mean here. -- K.A. 
+ * Typical example is tags: they are accessible both as `tags` and as `categories`, although the way they are presented changes a little -- G.B. +4. allow arbitrary types/attributes/fields/whatever (even 'undefined' ones) + * Are you saying that these must be typed, or are you saying that they can be user-defined? -- K.A. + * I am saying that the user should be able to define (e.g. in the config) some set of types/fields/attributes/whatever, following the specification illustrated below, but also be able to use something like `\[[!meta somefield="somevalue"]]` where `somefield` was never defined before. In this case `somefield` will have some default values for the properties described in the spec below. -- G.B. + +Each type/attribute/field/whatever (predefined, user-defined, arbitrary) would thus have the following parameters: + +* `directive` : the name of the directive that can be used to set the value as a hidden reference; we can discuss whether, for pre- or user-defined types, it being undef means no directive or a default directive matching the attribute name would be defined. + * I still want there to be able to be enough flexibility in the concept to enable plugins such as `yamlfront`, which sets the data using YAML format, rather than using directives. -- K.A. + * The possibility to use a directive does not preclude other ways of defining the field values. IOW, even if the directive `somefield` is defined, the user would still be able to use the syntax `\[[!meta somefield="somevalue"]]`, or any other syntax (such as YAML). -- G.B. +* `linkdirective` : the name of the directive that can be used for a visible reference; no such directive would be defined by default +* `linktype` : link type for (hidden and visible) references + * Is this the equivalent to "field name"? -- K.A. + * This would be such by default, but it could be set to something different. [[Typed links|matching_different_kinds_of_links]] is a very recent ikiwiki feature. -- G.B. +* `linkbase` : akin to the tagbase parameter + * Is this a field-name -> directory mapping? -- K.A. + * yes, with each directory having one page per value. It might not make sense for all fields, of course -- G.B. + * (nods) I've been working on something similar with my unreleased `tagger` module. In that, by default, the field-name maps to the closest wiki-page of the same name. Thus, if one had the field "genre=poetry" on the page fiction/stories/mary/lamb, then that would map to fiction/genre/poetry if fiction/genre existed. --K.A. + * that's the idea. In your case you could have the linkbase of genre be fiction/genre, and it would be created if it was missing. -- G.B. +* `queries` : list of template queries this type/attribute/field/whatever is exposed to + * I'm not sure what you mean here. -- K.A. + * as mentioned before, some fields may be made accessible through different template queries, in different form. This is the case already for tags, that also come up in the `categories` query (used by Atom and RSS feeds). -- G.B. + * Ah, do you mean that the input value is the same, but the output format is different? Like the difference between TMPL_VAR NAME="FOO" and TMPL_VAR NAME="raw_FOO"; one is htmlized, and the other is not. -- K.A. + * Actually this is about the same information appearing in different queries (e.g. NAME="FOO" and NAME="BAR"). Example: say that I defined a "Rubrica" field. 
I would want both tags and categories to appear in `categories` template query, but only tags would appear in the `tags` query, and only Rubrica values to appear in `rubrica` queries. The issue of different output formats was presented in the next paragraph instead. -- G.B. + +Where this approach is limiting is on the kind of data that is passed to (template) queries. The value of the metadata fields might need some massaging (e.g. compare how tags are passed to tags queries vs cateogires queries, or also see what is done with the fields in the current `meta` plugin). I have problems on picturing an easy way to make this possible user-side (i.e. via templates and not in Perl modules). Suggestions welcome. + +One possibility could be to have the `queries` configuration allow a hash mapping query names to functions that would transform the data. Lacking that possibility, we might have to leave some predefined fields to have custom Perl-side treatment and leave custom fields to be untransformable. + +----- + +I've now updated the [[plugins/contrib/field]] plugin to have: + +* arrays (multi-valued fields) +* the "linkbase" option as mentioned above (called field_tags), where the linktype is the field name. + +I've also updated [[plugins/contrib/ftemplate]] and [[plugins/contrib/report]] to be able to use multi-valued fields, and [[plugins/contrib/ymlfront]] to correctly return multi-valued fields when they are requested. + +--[[KathrynAndersen]] diff --git a/doc/todo/Separate_OpenIDs_and_usernames.mdwn b/doc/todo/Separate_OpenIDs_and_usernames.mdwn index 2cd52e8c4..a4940220a 100644 --- a/doc/todo/Separate_OpenIDs_and_usernames.mdwn +++ b/doc/todo/Separate_OpenIDs_and_usernames.mdwn @@ -6,6 +6,48 @@ I see this being implemented in one of two possible ways. The easiest seems like A slightly more complex next step would be to request sreg from the provider and, if provided, automatically set the identity's username and email address from the provided persona. If username login to accounts with blank passwords is disabled, then you have the best of both worlds. Passwordless signin, human-friendly attribution, automatic setting of preferences. +> Given that openids are a global user identifier, that can look as pretty +> as the user cares to make it look via delegation, I am not a fan of +> having a site-local identifier that layered on top of that. Perhaps +> partly because every site that I have seen that does that has openid +> implemented as a badly-done wart on the side of their regular login +> system. +> +> The openid plugin now attempts to get an email and a username, and stores +> them in the session database for later use (ie, when the user edits a +> page). +> +> I am considering displaying the userid or fullname, if available, +> instead of the munged openid url in recentchanges and comments. +> It would be nice for those nasty [[google_openids|forum/google_openid_broken?]]. +> But, I first have to find a way to encode the name in the VCS commit log, +> while still keeping the openid of the committer in there too. +> Perhaps something like this (for git): --[[Joey]] +> +> Author: Joey Hess <http://joey.kitenet.net/@web> +> +> Only problem with the above is that the openid will still be displayed +> by CIA. 
Other option is this, which solves that, but at the expense of +> having to munge the username to fit inside the email address, +> and generally seems backwards: --[[Joey]] +> +> Author: http://joey.kitenet.net/ <Joey_Hess@web> +> +> So, what needs to be done: +> +> * Change `rcs_commit` and `rcs_commit_staged` to take a session object, +> instead of just a userid. (For back-compat, if the parameter is +> not an object, it's a userid.) Bump ikiwiki plugin interface version. +> (done) +> * Modify all RCS plugins to include the session username somewhere +> in the commit, and parse it back out in `rcs_recentchanges`. +> (done for git only so far) +> * Modify recentchanges plugin to display the username instead of the +> `openiduser`. +> (done) +> * Modify comment plugin to put the session username in the comment +> template instead of the `openiduser`. (done) + Unfortunately I don't speak Perl, so hopefully someone thinks these suggestions are good enough to code up. I've hacked on openid code in Ruby before, so hopefully these changes aren't all that difficult to implement. Even if you don't get any data via sreg, you're no worse off than where you are now, so I don't think there'd need to be much in the way of error/sanity-checking of returned data. If it's null or not available then no big deal, typing in a username is no sweat. -[[!tag wishlist]] +[[!tag wishlist done]] diff --git a/doc/todo/abbreviation.mdwn b/doc/todo/abbreviation.mdwn index d24166710..f2880091c 100644 --- a/doc/todo/abbreviation.mdwn +++ b/doc/todo/abbreviation.mdwn @@ -2,4 +2,6 @@ We might want some kind of abbreviation and acronym plugin. --[[JoshTriplett]] * Not sure if this is what you mean, but I'd love a way to make works which match existing page names automatically like (eg. if there is a page called "MySQL" then any time the word MySQL is mentioned it should become a link to that page). -- [[AdamShand]] + * The python-markdown-extras package has support for [abbreviations](http://www.freewisdom.org/projects/python-markdown/Abbreviations), with the syntax that you just use the abbreviation in text (e.g. HTML) and then define the abbreviations at the end (like "footnote-style" links). For consistency, it might be good to use the same syntax, which apparently derives from [PHP-markdown-extra](http://michelf.com/projects/php-markdown/extra/#abbr). + [[wishlist]] diff --git a/doc/todo/allow_displaying_number_of_comments.mdwn b/doc/todo/allow_displaying_number_of_comments.mdwn new file mode 100644 index 000000000..02d55fc9b --- /dev/null +++ b/doc/todo/allow_displaying_number_of_comments.mdwn @@ -0,0 +1,30 @@ +My `numcomments` Git branch adds a `NUMCOMMENTS` `TMPL_VAR`, which is +useful to add to the `forumpage.tmpl` template to emulate (the nice +bits of) a more usual webforum. + +Please review... and pull :) + +-- [[intrigeri]] + +> How is having this variable for showing a count of the comments +> better (or more forum-ish) than the COMMENTSLINK variable which +> includes a count and a link to the comments, and is already displayed +> in inlinepage.tmpl? +> +> `num_comments` will never return undef. +> +> I see no need to add a second pagetemplate hook. +> The existing one can be added to. Probably inside its `if ($shown)` +> block. +> +> It may also be a good idea to either combine the calls to `num_comments` +> used for this and for the commentslink, +> or to memoize it. I'm thinking generally memoizing it may be a good idea +> since the comments for a page will typically be counted twice when it's +> inlined. 
+> --[[Joey]] + +[[patch]] + +>> Well, the COMMENTSLINK variable fits my needs. Sorry for +>> the disturbance. [[done]] --[[intrigeri]] diff --git a/doc/todo/allow_plugins_to_add_sorting_methods.mdwn b/doc/todo/allow_plugins_to_add_sorting_methods.mdwn new file mode 100644 index 000000000..b523cd19f --- /dev/null +++ b/doc/todo/allow_plugins_to_add_sorting_methods.mdwn @@ -0,0 +1,304 @@ +[[!tag patch]] + +The available [[ikiwiki/pagespec/sorting]] methods are currently hard-coded in +IkiWiki.pm, making it difficult to add any extra sorting mechanisms. I've +prepared a branch which adds 'sort' as a hook type and uses it to implement a +new `meta_title` sort type. + +Someone could use this hook to make `\[[!inline sort=title]]` prefer the meta +title over the page name, but for compatibility, I'm not going to (I do wonder +whether it would be worth making sort=name an alias for the current sort=title, +and changing the meaning of sort=title in 4.0, though). + +> What compatability concerns, exactly, are there that prevent making that +> change now? --[[Joey]] + +*[sort-hooks branch now withdrawn in favour of sort-package --s]* + +I briefly tried to turn *all* the current sort types into hook functions, and +have some of them pre-registered, but decided that probably wasn't a good idea. +That earlier version of the branch is also available for comparison: + +*[also withdrawn in favour of sort-package --s]* + +>> I wonder if IkiWiki would benefit from the concept of a "sortspec", like a [[ikiwiki/PageSpec]] but dedicated to sorting lists of pages rather than defining lists of pages? Rather than defining a sort-hook, define a SortSpec class, and enable people to add their own sort methods as functions defined inside that class, similarly to the way they can add their own pagespec definitions. --[[KathrynAndersen]] + +>>> [[!template id=gitbranch branch=smcv/ready/sort-package author="[[Simon_McVittie|smcv]]"]] +>>> I'd be inclined to think that's overkill, but it wasn't very hard to +>>> implement, and in a way is more elegant. I set it up so sort mechanisms +>>> share the `IkiWiki::PageSpec` package, but with a `cmp_` prefix. Gitweb: +>>> <http://git.pseudorandom.co.uk/smcv/ikiwiki.git?a=shortlog;h=refs/heads/sort-package> + +>>>> I agree it seems more elegant, so I have focused on it. +>>>> +>>>> I don't know about reusing `IkiWiki::PageSpec` for this. +>>>> --[[Joey]] + +>>>>> Fair enough, `IkiWiki::SortSpec::cmp_foo` would be just +>>>>> as easy, or `IkiWiki::Sorting::cmp_foo` if you don't like +>>>>> introducing "sort spec" in the API. I took a cue from +>>>>> [[ikiwiki/pagespec/sorting]] being a subpage of +>>>>> [[ikiwiki/pagespec]], and decided that yes, sorting is +>>>>> a bit like a pagespec :-) Which name would you prefer? --s + +>>>>>> `SortSpec` --[[Joey]] + +>>>>>>> [[Done]]. --s + +>>>> I would be inclined to drop the `check_` stuff. --[[Joey]] + +>>>>> It basically exists to support `title_natural`, to avoid +>>>>> firing up the whole import mechanism on every `cmp` +>>>>> (although I suppose that could just be a call to a +>>>>> memoized helper function). It also lets sort specs that +>>>>> *must* have a parameter, like +>>>>> [[field|plugins/contrib/field/discussion]], fail early +>>>>> (again, not so valuable). +>>>>> +>>>>>> AFAIK, `use foo` has very low overhead when the module is already +>>>>>> loaded. There could be some evalation overhead in `eval q{use foo}`, +>>>>>> if so it would be worth addressing across the whole codebase. 
+>>>>>> --[[Joey]] +>>>>>> +>>>>>>> check_cmp_foo now dropped. --s +>>>>> +>>>>> The former function could be achieved at a small +>>>>> compatibility cost by putting `title_natural` in a new +>>>>> `sortnatural` plugin (that fails to load if you don't +>>>>> have `title_natural`), if you'd prefer - that's what would +>>>>> have happened if `title_natural` was written after this +>>>>> code had been merged, I suspect. Would you prefer this? --s + +>>>>>> Yes! (Assuming it does not make sense to support +>>>>>> natural order sort of other keys than the title, at least..) +>>>>>> --[[Joey]] + +>>>>>>> Done. I added some NEWS.Debian for it, too. --s + +>>>> Wouldn't it make sense to have `meta(title)` instead +>>>> of `meta_title`? --[[Joey]] + +>>>>> Yes, you're right. I added parameters to support `field`, +>>>>> and didn't think about making `meta` use them too. +>>>>> However, `title` does need a special case to make it +>>>>> default to the basename instead of the empty string. +>>>>> +>>>>> Another special case for `title` is to use `titlesort` +>>>>> first (the name `titlesort` is derived from Ogg/FLAC +>>>>> tags, which can have `titlesort` and `artistsort`). +>>>>> I could easily extend that to other metas, though; +>>>>> in fact, for e.g. book lists it would be nice for +>>>>> `field(bookauthor)` to behave similarly, so you can +>>>>> display "Douglas Adams" but sort by "Adams, Douglas". +>>>>> +>>>>> `meta_title` is also meant to be a prototype of how +>>>>> `sort=title` could behave in 4.0 or something - sorting +>>>>> by page name (which usually sorts in approximately the +>>>>> same place as the meta-title, but occasionally not), while +>>>>> displaying meta-titles, does look quite odd. --s + +>>>>>> Agreed. --[[Joey]] + +>>>>>>> I've implemented meta(title). meta(author) also has the +>>>>>>> `sortas` special case; meta(updated) and meta(date) +>>>>>>> should also work how you'd expect them to (but they're +>>>>>>> earliest-first, unlike age). --s + +>>>> As I read the regexp in `cmpspec_translate`, the "command" +>>>> is required to have params. They should be optional, +>>>> to match the documentation and because most sort methods +>>>> do not need parameters. --[[Joey]] + +>>>>> No, `$2` is either `\w+\([^\)]*\)` or `[^\s]+` (with the +>>>>> latter causing an error later if it doesn't also match `\w+`). +>>>>> This branch doesn't add any parameterized sort methods, +>>>>> in fact, although I did provide one on +>>>>> [[field's_discussion_page|plugins/contrib/report/discussion]]. --s + +>>>> I wonder if it would make sense to add some combining keywords, so +>>>> a sortspec reads like `sort="age then ascending title"` +>>>> In a way, this reduces the amount of syntax that needs to be learned. +>>>> I like the "then" (and it could allow other operations than +>>>> simple combination, if any others make sense). Not so sure about the +>>>> "ascending", which could be "reverse" instead, but "descending age" and +>>>> "ascending age" both seem useful to be able to explicitly specify. +>>>> --[[Joey]] + +>>>>> Perhaps. I do like the simplicity of [[KathrynAndersen]]'s syntax +>>>>> from [[plugins/contrib/report]] (which I copied verbatim, except for +>>>>> turning sort-by-`field` into a parameterized spec). +>>>>> +>>>>> If we're getting into English-like (or at least SQL-like) queries, +>>>>> it might make sense to change the signature of the hook function +>>>>> so it's a function to return a key, e.g. +>>>>> `sub key_age { return -%pagemtime{$_[0]) }`. 
Then we could sort like +>>>>> this: +>>>>> +>>>>> field(artistsort) or field(artist) or constant(Various Artists) then meta(titlesort) or meta(title) or title +>>>>> +>>>>> with "or" binding more closely than "then". Does this seem valuable? +>>>>> I think the implementation would be somewhat more difficult. and +>>>>> it's probably getting too complicated to be worthwhile, though? +>>>>> (The keys that actually benefit from this could just +>>>>> have smarter cmp functions, I think.) +>>>>> +>>>>> If the hooks return keys rather than cmp results, then we could even +>>>>> have "lowercase" as an adjective used like "ascending"... maybe. +>>>>> However, there are two types of adjective here: "lowercase" +>>>>> really applies to the keys, whereas "ascending" applies to the "cmp" +>>>>> result. Again, I think this is getting too complex, and could just +>>>>> be solved with smarter cmp functions. +>>>>> +>>>>>> I agree. (Also, I think returning keys may make it harder to write +>>>>>> smarter cmp functions.) --[[Joey]] +>>>>> +>>>>> Unfortunately, `sort="ascending mtime"` actually sorts by *descending* +>>>>> timestamp (but`sort=age` is fine, because `age` could be defined as +>>>>> now minus `ctime`). `sort=freshness` isn't right either, because +>>>>> "sort by freshness" seems as though it ought to mean freshest first, +>>>>> but "sort by ascending freshness" means put the least fresh first. If +>>>>> we have ascending and descending keywords which are optional, I don't +>>>>> think we really want different sort types to have different default +>>>>> directions - it seems clearer to have `ascending` always be a no-op, +>>>>> and `descending` always negate. +>>>>> +>>>>>> I think you've convinced me that ascending/descending impose too +>>>>>> much semantics on it, so "-" is better. --[[Joey]] + +>>>>>>> I've kept the semantics from `report` as-is, then: +>>>>>>> e.g. `sort="age -title"`. --s + +>>>>> Perhaps we could borrow from `meta updated` and use `update_age`? +>>>>> `updateage` would perhaps be a more normal IkiWiki style - but that +>>>>> makes me think that updateage is a quantity analagous to tonnage or +>>>>> voltage, with more or less recently updated pages being said to have +>>>>> more or less updateage. I don't know whether that's good or bad :-) +>>>>> +>>>>> I'm sure there's a much better word, but I can't see it. Do you have +>>>>> a better idea? --s + +[Regarding the `meta title=foo sort=bar` special case] + +> I feel it sould be clearer to call that "sortas", since "sort=" is used +> to specify a sort method in other directives. --[[Joey]] +>> Done. --[[smcv]] + +## speed + +I notice the implementation does not use the magic `$a` and `$b` globals. +That nasty perl optimisation is still worthwhile: + + perl -e 'use warnings; use strict; use Benchmark; sub a { $a <=> $b } sub b ($$) { $_[0] <=> $_[1] }; my @list=reverse(1..9999); timethese(10000, {a => sub {my @f=sort a @list}, b => sub {my @f=sort b @list}, c => => sub {my @f=sort { b($a,$b) } @list}})' + Benchmark: timing 10000 iterations of a, b, c... + a: 80 wallclock secs (76.74 usr + 0.05 sys = 76.79 CPU) @ 130.23/s (n=10000) + b: 112 wallclock secs (106.14 usr + 0.20 sys = 106.34 CPU) @ 94.04/s (n=10000) + c: 330 wallclock secs (320.25 usr + 0.17 sys = 320.42 CPU) @ 31.21/s (n=10000) + +Unfortunatly, I think that c is closest to the new implementation. +--[[Joey]] + +> Unfortunately, `$a` isn't always `$main::a` - it's `$Package::a` where +> `Package` is the call site of the sort call. 
This was a showstopper when +> `sort` was a hook implemented in many packages, but now that it's a +> `SortSpec`, I may be able to fix this by putting a `sort` wrapper in the +> `SortSpec` namespace, so it's like this: +> +> sub sort ($@) +> { +> my $cmp = shift; +> return sort $cmp @_; +> } +> +> which would mean that the comparison used `$IkiWiki::SortSpec::a`. +> --s + +>> I've now done this. On a wiki with many [[plugins/contrib/album]]s +>> (a full rebuild takes half an hour!), I tested a refresh after +>> `touch tags/*.mdwn` (my tag pages contain inlines of the form +>> `tagged(foo)` sorted by date, so they exercise sorting). +>> I also tried removing sorting from `pagespec_match_list` +>> altogether, as an upper bound for how fast we can possibly make it. +>> +>> * `master` at branch point: 63.72user 0.29system +>> * `master` at branch point: 63.91user 0.37system +>> * my branch, with `@_`: 65.28user 0.29system +>> * my branch, with `@_`: 65.21user 0.28system +>> * my branch, with `$a`: 64.09user 0.28system +>> * my branch, with `$a`: 63.83user 0.36system +>> * not sorted at all: 58.99user 0.29system +>> * not sorted at all: 58.92user 0.29system +>> +>> --s + +> I do notice that `pagespec_match_list` performs the sort before the +> filter by pagespec. Is this a deliberate design choice, or +> coincidence? I can see that when `limit` is used, this could be +> used to only run the pagespec match function until `limit` pages +> have been selected, but the cost is that every page in the wiki +> is sorted. Or, it might be useful to do the filtering first, then +> sort the sub-list thus produced, then finally apply the limit? --s + +>> Yes, it was deliberate, pagespec matching can be expensive enough that +>> needing to sort a lot of pages seems likely to be less work. (I don't +>> remember what benchmarking was done though.) --[[Joey]] + +>>> We discussed this on IRC and Joey pointed out that this also affects +>>> dependency calculation, so I'm not going to get into this now... --s + +Joey pointed out on IRC that the `titlesort` feature duplicates all the +meta titles. I did that in order to sort by the unescaped version, but +I've now changed the branch to only store that if it makes a difference. +--s + +## Documentation from sort-package branch + +### advanced sort orders (conditionally added to [[ikiwiki/pagespec/sorting]]) + +* `title_natural` - Orders by title, but numbers in the title are treated + as such, ("1 2 9 10 20" instead of "1 10 2 20 9") +* `meta(title)` - Order according to the `\[[!meta title="foo" sortas="bar"]]` + or `\[[!meta title="foo"]]` [[ikiwiki/directive]], or the page name if no + full title was set. `meta(author)`, `meta(date)`, `meta(updated)`, etc. + also work. + +### Multiple sort orders (added to [[ikiwiki/pagespec/sorting]]) + +In addition, you can combine several sort orders and/or reverse the order of +sorting, with a string like `age -title` (which would sort by age, then by +title in reverse order if two pages have the same age). 
+ +### meta sortas parameter (added to [[ikiwiki/directive/meta]]) + +[in title] + +An optional `sort` parameter will be used preferentially when +[[ikiwiki/pagespec/sorting]] by `meta(title)`: + + \[[!meta title="The Beatles" sort="Beatles, The"]] + + \[[!meta title="David Bowie" sort="Bowie, David"]] + +[in author] + + An optional `sortas` parameter will be used preferentially when + [[ikiwiki/pagespec/sorting]] by `meta(author)`: + + \[[!meta author="Joey Hess" sortas="Hess, Joey"]] + +### Sorting plugins (added to [[plugins/write]]) + +Similarly, it's possible to write plugins that add new functions as +[[ikiwiki/pagespec/sorting]] methods. To achieve this, add a function to +the IkiWiki::SortSpec package named `cmp_foo`, which will be used when sorting +by `foo` or `foo(...)` is requested. + +The names of pages to be compared are in the global variables `$a` and `$b` +in the IkiWiki::SortSpec package. The function should return the same thing +as Perl's `cmp` and `<=>` operators: negative if `$a` is less than `$b`, +positive if `$a` is greater, or zero if they are considered equal. It may +also raise an error using `error`, for instance if it needs a parameter but +one isn't provided. + +The function will also be passed one or more parameters. The first is +`undef` if invoked as `foo`, or the parameter `"bar"` if invoked as `foo(bar)`; +it may also be passed additional, named parameters. diff --git a/doc/todo/allow_site-wide_meta_definitions.mdwn b/doc/todo/allow_site-wide_meta_definitions.mdwn index 99a9cf1e2..82670250e 100644 --- a/doc/todo/allow_site-wide_meta_definitions.mdwn +++ b/doc/todo/allow_site-wide_meta_definitions.mdwn @@ -36,6 +36,30 @@ definitions essentially. >> I've made may not be acceptable, though -- I'd appreciate someone providing >> some feedback on that hunk! +>>> Well, re that hunk, taint checking is currently disabled, but +>>> if the perl bug that disallows it is fixed and it is turned back on, +>>> the hash values will remain tainted, which will probably lead to +>>> problems. +>>> +>>> I'm also leery of using such a complex data structure in config. +>>> The websetup plugin would be hard pressed to provide a UI for such a +>>> data structure. (It lacks even UI for a single hash ref yet, let alone +>>> a list.) +>>> +>>> Also, it seems sorta wrong to have two so very different syntaxes to +>>> represent the same meta data. A user without a lot of experience will +>>> be hard pressed to map from a directive to this in the setup file. +>>> +>>> All of which leads me to think the setup file could just contain +>>> a text that could hold meta directives. Which generalizes really to +>>> a text that contains any directives, and is, perhaps appended to the +>>> top of every page. Which nearly generalizes to the sidebar plugin, +>>> or perhaps something more general than that... +>>> +>>> However, excessive generalization is the root of all evil, so +>>> I'm not necessarily saying that's a good idea. Indeed, my memory +>>> concerns below invalidate this idea pretty well. --[[Joey]] + diff --git a/IkiWiki/Plugin/meta.pm b/IkiWiki/Plugin/meta.pm index 6fe9cda..2f8c098 100644 --- a/IkiWiki/Plugin/meta.pm @@ -125,6 +149,10 @@ definitions essentially. >> are only relevant to defined fields that you wouldn't want to specify a >> global default for anyway. >> +>>> I generally agree with this. It is *possible* that meta would have a new +>>> field added, that takes parameters and make sense to use globally. 
+>>> --[[Joey]] +>> >> Due to this, and the added complexity of the second patch (having to adjust >> `IkiWiki/Setup.pm`), I think the first patch makes more sense. I've thus >> reverted to it here. @@ -183,3 +211,36 @@ definitions essentially. >>> ikiwiki for the break, and now I've returned to watching recentchanges. >>> Hopefully I'll be back in the mix soon, too. In the meantime, Joey, have >>> you had a chance to look at this yet? -- [[Jon]] + +>>>> Ping :) Hi. [[Joey]], would you consider this patch for the next +>>>> ikiwiki release? -- [[Jon]] + +>>> For this to work with websetup and --dumpsetup, it needs to define the +>>> `meta_*` settings in the getsetup function. +>>>> +>>>> I think this will be problematic with the current implementation of this +>>>> patch. The datatype here is an array of hash references, with each hash +>>>> having a variable (and arbitrary) number of key/value pairs. I can't +>>>> think of an intuitive way of implementing a way of editing such a +>>>> datatype in the web interface, let alone registering the option in +>>>> getsetup. +>>>> +>>>> Perhaps a limited set of defined meta values could be exposed via +>>>> websetup (the obvious ones: author, copyright, license, etc.) -- [[Jon]] +>>> +>>> I also have some concerns about both these patches, since both throw +>>> a lot of redundant data at meta, which then stores it in a very redundant +>>> way. Specifically, meta populates a per-page `%metaheaders` hash +>>> as well as storing per-page metadata in `%pagestate`. So, if you have +>>> a wiki with 10 thousand pages, and you add a 1k site-wide license text, +>>> that will bloat the memory usage of ikiwiki by in excess of 2 +>>> megabytes. It will also cause ikiwiki to write a similar amount more data +>>> to its state file which has to be loaded back in each +>>> run. +>>> +>>> Seems that this could be managed much more efficiently by having +>>> meta special-case the site-wide settings, not store them in these +>>> per-page data structures, and just make them be used if no per-page +>>> metadata of the given type is present. --[[Joey]] +>>>> +>>>> that should be easy enough to do. I will work on a patch. -- [[Jon]] diff --git a/doc/todo/anon_push_of_comments.mdwn b/doc/todo/anon_push_of_comments.mdwn new file mode 100644 index 000000000..b472ea13f --- /dev/null +++ b/doc/todo/anon_push_of_comments.mdwn @@ -0,0 +1,14 @@ +It should be possible to use anonymous git push to post comments +(created, say, by a ikiwiki-comment program). Currently, that is not +allowed, because users cannot edit, or create internal page files. +But, comments in allowed locations are an exception to that rule, and +that exception should be communicated somehow to `IkiWiki::Receive`. +--[[Joey]] + +> Complications include: +> +> * Hard to see a way to prevent users from committing a comment that +> claims to be written by someone else. +> * `checkcontent` hooks need to be run, but can't accept a comment +> for later moderation, since it's coming in as part of a commit. +> Best they could do is reject the commit. diff --git a/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn b/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn index f1d33114f..7eb404910 100644 --- a/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn +++ b/doc/todo/auto-create_tag_pages_according_to_a_template.mdwn @@ -4,7 +4,7 @@ Tags are mainly specific to the object to which they’re stuck. 
However, I ofte Also see: <http://madduck.net/blog/2008.01.06:new-blog/> and <http://users.itk.ppke.hu/~cstamas/code/ikiwiki/autocreatetagpage/> -[[!tag wishlist plugins/tag patch]] +[[!tag wishlist plugins/tag patch patch/core]] I would love to see this as well. -- dato @@ -15,88 +15,9 @@ A new setting is used to enable or disable auto-create tag pages, `tag_autocreat The new tag file is created during the preprocess phase. The new tag file is then complied during the change phase. -_tag.pm from version 3.01_ - - - --- tag.pm 2009-02-06 10:26:03.000000000 -0700 - +++ tag_new.pm 2009-02-06 12:17:19.000000000 -0700 - @@ -14,6 +14,7 @@ - hook(type => "preprocess", id => "tag", call => \&preprocess_tag, scan => 1); - hook(type => "preprocess", id => "taglink", call => \&preprocess_taglink, scan => 1); - hook(type => "pagetemplate", id => "tag", call => \&pagetemplate); - + hook(type => "change", id => "tag", call => \&change); - } - - sub getopt () { - @@ -36,6 +37,36 @@ - safe => 1, - rebuild => 1, - }, - + tag_autocreate => { - + type => "boolean", - + example => 0, - + description => "Auto-create the new tag pages, uses autotagpage.tmpl ", - + safe => 1, - + rebulid => 1, - + }, - +} - + - +my $autocreated_page = 0; - + - +sub gen_tag_page($) { - + my $tag=shift; - + - + my $tag_file=$tag.'.'.$config{default_pageext}; - + return if (-f $config{srcdir}.$tag_file); - + - + my $template=template("autotagpage.tmpl"); - + $template->param(tag => $tag); - + writefile($tag_file, $config{srcdir}, $template->output); - + $autocreated_page = 1; - + - + if ($config{rcs}) { - + IkiWiki::disable_commit_hook(); - + IkiWiki::rcs_add($tag_file); - + IkiWiki::rcs_commit_staged( - + gettext("Automatic tag page generation"), - + undef, undef); - + IkiWiki::enable_commit_hook(); - + } - } - - sub tagpage ($) { - @@ -47,6 +78,10 @@ - $tag=~y#/#/#s; # squash dups - } - - + if (defined $config{tag_autocreate} && $config{tag_autocreate} ) { - + gen_tag_page($tag); - + } - + - return $tag; - } - - @@ -125,4 +160,18 @@ - } - } - - +sub change(@) { - + return unless($autocreated_page); - + $autocreated_page = 0; - + - + # This refresh/saveindex is to complie the autocreated tag pages - + IkiWiki::refresh(); - + IkiWiki::saveindex(); - + - + # This refresh/saveindex is to fix the Tags link - + # With out this additional refresh/saveindex the tag link displays ?tag - + IkiWiki::refresh(); - + IkiWiki::saveindex(); - +} - + +*see git history of this page if you want the patch --[[smcv]]* - -This uses a [[template|wikitemplates]] called `autotagpage.tmpl`, here is my template file: +This uses a [[template|templates]] called `autotagpage.tmpl`, here is my template file: \[[!inline pages="link(<TMPL_VAR TAG>)" archive="yes"]] @@ -123,3 +44,208 @@ On the second extra pass, it doesn't notice that it has to update the "?"-link. } is not satisfied for the newly created tag page. I shall put debug msgs into Render.pm to find out better how it works. --Ivan Z. + +--- + +I've made another attempt at fixing this + +The current progress can be found at my [git repository][gitweb] on branch +`autotag`: + + git://git.liegesta.at/git/ikiwiki + +[gitweb]: http://git.liegesta.at/?p=ikiwiki.git;a=shortlog;h=refs/heads/autotag (gitweb for branch autotag) + +It's not entirely finished yet, but already quite usable. Testing and comments +on code quality, implementation details, as well as other patches would be +appreciated. + +Here's what it does right now: + +* enabled by setting `tag_autocreate=1` in the configuration. 
+* Tag pages will be created in `tagbase` from the template `autotag.tmpl`. +* Will correctly render all links, and dependencies. Well, AFAIK. +* When a tag page is deleted it will automatically recreated from template. (I +consider this a feature, not a bug) +* Requires a rebuild on first use. +* Adds a function `add_autofile()` to the plugin API, to do all this. + +Todo/Bugs: + +* Will still create a page even if there's a page other than `$tag` under +`tagbase` satisfying the tag link. (details? --[[Joey]]) +* Call from `IkiWiki.pm` to `Render.pm`, which adds a module dependency in the +wrong direction. (fixed --[[Joey]] ) +* Add files to RCS. +* Unit tests. +* Proper documentation. (fixed (mostly) --[[Joey]]) + +--[[David_Riebenbauer]] + +> Starting review of this. Some of your commits are to very delicate, +> optimised, and security-sensitive ground, so I have to look at them very +> carefully. --[[Joey]] + +>> First of, sorry that it took me so damn long to answer. I didn't lose +>> interest but it took a while for me to find the time and motivation +>> to address you suggestions. --[[David_Riebenbauer]] + +> * In the refactoring in [f3abeac919c4736429bd3362af6edf51ede8e7fe][], +> you introduced at least 2 bugs, one a possible security hole. +> Now one part of the code tests `if ($file)` and the other +> caller tests `if ($f)`. These two tests both tested `if (! defined $f)` +> before. Notice that the variable needs to be the untainted variable +> for both. Also notice that `if ($f)` fails if `$f` contains `0`, +> which is a very common perl gotcha. +> * Your refactored code changes `-l $_ || -d _` to `-l $file || -d $file`. +> The latter makes one more stat system call; note the use of a +> bare `_` in the first to make perl reuse the stat buffer. +> * (As a matter of style, could you put a space after the commas in your +> perl?) + +>> The first two points should be addressed in +>> [da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0][]. And sure, I can add the +>> spaces. --[[David_Riebenbauer]] + +> I'd like to cherry-pick the above commit, once it's in shape, before +> looking at the rest in detail. So just a few other things that stood out. +> +> * Commit [4af4d26582f0c2b915d7102fb4a604b176385748][] seems unnecessary. +> `srcfile($file, 1)` already is documented to return undef if the +> file does not exist. (But without the second parameter, it throws +> an error.) + +>> You're right. I must have been some confused by some other promplem I +>> introduced then. Reverted. --[[David_Riebenbauer]] + +> * Commit [f58f3e1bec41ccf9316f37b014ce0b373c8e49e1][] adds a line +> that is intented by a space, not a tab. + +>> Sorry, That one was reverted anyway. --[[David_Riebenbauer]] + +> * Commit [f58f3e1bec41ccf9316f37b014ce0b373c8e49e1][] says that auto-added +> files will be recreated if the user deletes them. That seems bad. +> `autoindex` goes to some trouble to not recreate deleted files. + +>> I reverted the commit and addressed the issue in +>> [a358d74bef51dae31332ff27e897fe04834571e6][] and +>> [981400177d68a279f485727be3f013e68f0bf691][]. + --[[David_Riebenbauer]] + +>>> This doesn't seem to have all the refinements that autoindex has: +>>> +>>> * `autoindex` attaches the record of deletions to the `index` page, which +>>> is (nearly) guaranteed to exist; this one attaches the record of +>>> deletions to the deleted page's page state. Won't that tend to result +>>> in losing the record along with the deleted page? 
+ +>>>> This is probably on of the harder things to do, 'cause there are (most of the +>>>> time) several pages that are responsible for the creation of a single tag page. +>>>> Of course I could attach the info to all of them. + +>>>> With current behaviour I think the information in `%pagestate` is kept around +>>>> regardless whether the corresponding page exists or not. +>>>> --[[David_Riebenbauer]] + +>>>>> Sorry, I'll try to be clearer: `autoindex` hard-codes that the [[index]] page +>>>>> of the entire wiki is the one responsible for storing the page state. That +>>>>> page isn't responsible for the creation of the tag page, it's just an +>>>>> arbitrary page that's (more or less) guaranteed to exist. --[[smcv]] + +>>>>> I don't like that [[plugins/autoindex]] has to do that, +>>>>> but `%pagestate` values are only stored for pages that exist, +>>>>> so it was necessary. (Another way to look at this is that +>>>>> `%pagestate` is not the ideal data structure.) --[[Joey]] + +>>>>>> Aha! Having looked at [[plugins/write]] again, it turns out that what this +>>>>>> feature should really use is `%wikistate`, I think? :-) --[[smcv]] + +>>>>>>> Ah, indeed, that came after I wrote autoindex. I've fixed autoindex to +>>>>>>> use it. --[[Joey]] + +>>>>> Ok, now I know what you mean. --[[David_Riebenbauer]] + +>>> * `autoindex` forgets that a page was deleted when that page is +>>> re-created + +>>>> Yes, I forgot about that and that is a bug. I'll fix that. +>>>> --[[David_Riebenbauer]] + +>>>>> In my branch, it keeps a list of autofiles that were created, +>>>>> not deleted. And I think that turns out to be necessary, really. +>>>>> However, I see no way to clean out that list on deletion and +>>>>> manual recreation -- it still needs to remember it was once an autofile, +>>>>> in order to avoid recreating it if it's deleted yet again. --[[Joey]] + +>>>>>> Are these really the semantics we want? It seems strange to me +>>>>>> that this: +>>>>>> +>>>>>> * tag a page as foo +>>>>>> * tags/foo automatically appears +>>>>>> * delete tags/foo +>>>>>> * create tags/foo manually +>>>>>> * delete tags/foo again +>>>>>> * tags/foo isn't automatically created +>>>>>> +>>>>>> isn't the same as this: +>>>>>> +>>>>>> * create tags/foo +>>>>>> * delete tags/foo +>>>>>> * tag a page as foo +>>>>>> * tags/foo automatically appears +>>>>>> +>>>>>> or even this: +>>>>>> +>>>>>> * create tags/foo +>>>>>> * tag a page as foo +>>>>>> * delete tags/foo +>>>>>> * tags/foo automatically appears (?) +>>>>>> +>>>>>> --[[smcv]] + +>>>>>>> I agree that the last of these is not desired. It could be avoided +>>>>>>> by extending the list of autofiles to include those that were not +>>>>>>> created due to the file/page already existing. +>>>>>>> +>>>>>>> Hmm, that would fix the previous scenario too. --[[Joey]] + +>>> * `autoindex` forgets that a page was deleted when it's no longer needed +>>> anyway (this may be harder for `autotag`?) + +>>>> I don't think so. AFAIK ikiwiki can detect whether there are taglinks to a page +>>>> anyway, so it should be quite easy. I'll try to implement that too. +>>>> --[[David_Riebenbauer]] + +>>> It'd probably be an interesting test of the core change to port +>>> `autoindex` to use it? (Adding the file to the RCS would be +>>> necessary to get parity with `autoindex`.) --[[smcv]] + +>>>> Good suggestion. Adding the files to RCS is on my todo list anyway. 
+>>>> --[[David_Riebenbauer]] + +>>>>> I think it may be better to allow the `add_autofile` caller +>>>>> to specify if it is added to RCS. In my branch, it can do +>>>>> so by just making the callback it registers call `rcs_add`; +>>>>> and I have tag do this. Other plugins might want autofiles +>>>>> that do not get checked in, conceivably. +>>>>> --[[Joey]] + +> Regarding the call from `IkiWiki.pm` to `Render.pm`, wouldn't this be +> quite easy to solve by moving `verify_src_file` to IkiWiki.pm? --[[smcv]] + +>> True. I'll do that. --[[David_Riebenbauer]] +>> Fixed in my branch --[[Joey]] + +[[!template id=gitbranch branch=origin/autotag author="[[Joey]]"]] +I've pushed an autotag branch of my own, which refactors +things a bit and fixes bugs around deletion/recreation. +I've tested it fairly thouroughly. --[[Joey]] + +[f3abeac919c4736429bd3362af6edf51ede8e7fe]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=f3abeac919c4736429bd3362af6edf51ede8e7fe (commitdiff for f3abeac919c4736429bd3362af6edf51ede8e7fe) +[4af4d26582f0c2b915d7102fb4a604b176385748]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=4af4d26582f0c2b915d7102fb4a604b176385748 (commitdiff for 4af4d26582f0c2b915d7102fb4a604b176385748) +[f58f3e1bec41ccf9316f37b014ce0b373c8e49e1]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=f58f3e1bec41ccf9316f37b014ce0b373c8e49e1 (commitdiff for f58f3e1bec41ccf9316f37b014ce0b373c8e49e1) +[da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0 (commitdiff for da5d29f95f6e693e8c14be1b896cf25cf4fdb3c0) +[a358d74bef51dae31332ff27e897fe04834571e6]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=a358d74bef51dae31332ff27e897fe04834571e6 (commitdiff for a358d74bef51dae31332ff27e897fe04834571e6) +[981400177d68a279f485727be3f013e68f0bf691]: http://git.liegesta.at/?p=ikiwiki.git;a=commitdiff;h=981400177d68a279f485727be3f013e68f0bf691 (commitdiff for 981400177d68a279f485727be3f013e68f0bf691) + +[[!tag done]] diff --git a/doc/todo/auto_getctime_on_fresh_build.mdwn b/doc/todo/auto_getctime_on_fresh_build.mdwn new file mode 100644 index 000000000..760c56fa1 --- /dev/null +++ b/doc/todo/auto_getctime_on_fresh_build.mdwn @@ -0,0 +1,13 @@ +[[!tag wishlist]] + +It might be a good idea to enable --gettime when `.ikiwiki` does not +exist. This way a new checkout of a `srcdir` would automatically get +ctimes right. (Running --gettime whenever a rebuild is done would be too +slow.) --[[Joey]] + +Could this be too annoying in some cases, eg, checking out a large wiki +that needs to get set up right away? --[[Joey]] + +> Not for git with the new, optimised --getctime. For other VCS.. well, +> pity they're not as fast as git ;), but it is a one-time expense... +> [[done]] --[[Joey]] diff --git a/doc/todo/auto_publish_expire.mdwn b/doc/todo/auto_publish_expire.mdwn new file mode 100644 index 000000000..7a5a17517 --- /dev/null +++ b/doc/todo/auto_publish_expire.mdwn @@ -0,0 +1,33 @@ +It could be nice to mark some page such that: + +* the page is automatically published on some date (i.e. build, linked, syndicated, inlined/mapped, etc.) +* the page is automatically unpublished at some other date (i.e. removed) + +I know that ikiwiki is a wiki compiler so that something has to refresh the wiki periodically to enforce the rules (a cronjob for instance). It seems to me that the calendar plugin rely on something similar. 
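For what it's worth, the periodic refresh mentioned above usually amounts to a single crontab entry along these lines (the setup file path is illustrative):

    # re-run ikiwiki hourly so that date-based publish/expire rules
    # would be re-evaluated
    0 * * * *    ikiwiki --setup ~/ikiwiki.setup --refresh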
+ +The date for publishing and expiring could be set be using some new directives; an alternative could be to expand the [[plugin/meta]] plugin with [<span/>[!meta date="auto publish date"]] and [<span/>[!meta expires="auto expire date"]]. + +--[[JeanPrivat]] + +> This is a duplicate, and expansion, of +> [[todo/tagging_with_a_publication_date]]. +> There, I suggest using a branch to develop +> prepublication versions of a site, and merge from it +> when the thing is published. +> +> Another approach I've seen used is to keep such pages in a pending/ +> directory, and move them via cron job when their publication time comes. +> But that requires some familiarity with, and access to, cron. +> +> On [[todo/tagging_with_a_publication_date]], I also suggested using meta +> date to set a page's date into the future, +> and adding a pagespec that matches only pages with dates in the past, +> which would allow filtering out the unpublished ones. +> Sounds like you are thinking along these lines, but possibly using +> something other than the page's creation or modification date to do it. +> +> I do think the general problem with that approach is that you have to be +> careful to prevent the unpublished pages from leaking out in any +> inlines, maps, etc. --[[Joey]] + +[[!tag wishlist]] diff --git a/doc/todo/auto_rebuild_on_template_change.mdwn b/doc/todo/auto_rebuild_on_template_change.mdwn new file mode 100644 index 000000000..ea990b877 --- /dev/null +++ b/doc/todo/auto_rebuild_on_template_change.mdwn @@ -0,0 +1,78 @@ +If `page.tmpl` is changed, it would be nice if ikiwiki automatically +noticed, and rebuilt all pages. If `inlinepage.tmpl` is changed, a rebuild +of all pages using it in an inline would be stellar. + +This would allow setting: + + templatedir => "$srcdir/templates", + +.. and then the [[templates]] are managed like other wiki files; and +like other wiki files, a change to them automatically updates dependent +pages. + +Originally, it made good sense not to have the templatedir inside the wiki. +Those templates can be used to bypass the htmlscrubber, and you don't want +just anyone to edit them. But the same can be said of `style.css` and +`ikiwiki.js`, which *are* in the wiki. We rely on `allowed_attachments` +being set to secure those to prevent users uploading replacements. And we +assume that users who can directly (non-anon) commit *can* edit them, and +that's ok. + +So, perhaps the easiest way to solve this [[wishlist]] would be to +make templatedir *default* to "$srcdir/templates/, and make ikiwiki +register dependencies on `page.tmpl`, `inlinepage.tmpl`, etc, as they're +used. Although, having every page declare an explicit dep on `page.tmpl` +is perhaps a bit much; might be better to implement a special case for that +one. Also, having the templates be copied to `destdir` is not desirable. +In a sense, these template would be like internal pages, except not wiki +pages, but raw files. + +The risk is that a site might have `allowed_attachments` set to +`templates/*` or `*.tmpl` something like that. I think such a configuration +is the *only* risk, and it's unlikely enough that a NEWS warning should +suffice. + +(This would also help to clear up the tricky disctinction between +wikitemplates and in-wiki templates.) + +Note also that when using templates from "$srcdir/templates/", `no_includes` +needs to be set. Currently this is done by the two plugins that use +such templates, while includes are allowed in `templatedir`. + +Have started working on this. 
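+
+For illustration, the `no_includes` handling mentioned above boils down to
+passing the corresponding HTML::Template option whenever the template file
+lives inside the srcdir -- a sketch, not necessarily what the branch does:
+
+<pre>
+# sketch only: load a template, refusing TMPL_INCLUDE directives when the
+# file comes from the (user-editable) srcdir rather than a trusted templatedir
+my $filename = "$config{srcdir}/templates/page.tmpl";   # example path
+my $template = HTML::Template->new(
+	filename => $filename,
+	no_includes => 1,        # HTML::Template option: disallow TMPL_INCLUDE
+	die_on_bad_params => 0,
+);
+</pre>
+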
+[[!template id=gitbranch branch=origin/templatemove author="[[Joey]]"]] + +> But would this require that templates be parseable as wiki pages? Because that would be a nuisance. --[[KathrynAndersen]] + +>> It would be better for them not to be rendered separately at all. +>> --[[Joey]] + +>>> I don't follow you. --[[KathrynAndersen]] + +>>>> If they don't render to output files, they clearly don't +>>>> need to be treated as wiki pages. (They need to be treated +>>>> as raw files anyway, because you don't want random users editing them +>>>> in the online editor.) --[[Joey]] + +>>>>> Just to be clear, the raw files would not be copied across to the output +>>>>> directory? -- [[Jon]] + +>>>>>> Without modifying ikiwiki, they'd be copied to the output directory as +>>>>>> (e.g.) http://ikiwiki.info/templates/inlinepage.tmpl; to not copy them, +>>>>>> it'd either be necessary to make them be internal pages +>>>>>> (templates/inlinepage._tmpl) or special-case them in some other way. +>>>>>> --[[smcv]] + +>>>>>>> In my branch, I left in support for the templatedir, and also +>>>>>>> /usr/share/ikiwiki/templates. So, users do not have to put their +>>>>>>> custom templates in templates/ in the wiki. If they do, +>>>>>>> the templates are copied to the destdir like other non-wiki page files +>>>>>>> are. The templates are not wiki pages, except those used by a few +>>>>>>> things like the [[plugins/template]] plugin. +>>>>>>> +>>>>>>> That seems acceptable, since users probably don't need to modify +>>>>>>> many templates, so the clutter is small. (Especially when +>>>>>>> compared to the other clutter the basewiki always puts in destdir.) +>>>>>>> This could be revisted later. --[[Joey]] + +[[done]] diff --git a/doc/todo/avatar.mdwn b/doc/todo/avatar.mdwn index b8aa2327f..f0599e4ed 100644 --- a/doc/todo/avatar.mdwn +++ b/doc/todo/avatar.mdwn @@ -1,38 +1,19 @@ [[!tag wishlist]] It would be nice if ikiwiki, particularly [[plugins/comments]] -supported user avatar icons. I was considering adding a directive for this, -as designed below. +(but also, ideally, recentchanges) supported user avatar icons. -However, there is no *good* service for mapping openids to avatars -- -openavatar has many issues, including not supporting delegated openids, and -after trying it, I don't trust it to push users toward. -Perhaps instead ikiwiki could get the email address from the openid -provider, though I think the perl openid modules don't support the openid -2.x feature that allows that. - -At the moment, working on this doesn't feel like a good use of my time. ---[[Joey]] - -Hmm.. unless is just always used a single provider (gravatar) and hashed -the openid. Then wavatars could be used to get a unique avatar per openid -at least. --[[Joey]] - ----- - -The directive displays a small avatar image for a user. Pass it the -email address, openid, or wiki username of the user. +Idea is to add a directive that displays a small avatar image for a user. +Pass it a user's the email address, openid, username, or the md5 hash +of their email address: \[[!avatar user@example.com]] \[[!avatar http://joey.kitenet.net/]] \[[!avatar user]] + \[[!avatar hash]] -The avatars are provided by various sites. For email addresses, it uses a -[gravatar](http://gravatar.com/). For openid, -[openavatar](http://www.openvatar.com/) is used. For a wiki username, the -user's email address is looked up and the gravatar for that user is -displayed. 
(Of course, the user has to have filled in their email address -on their Preferences page for that to work.) +These directives can then be hand-inserted onto pages, or more likely, +included in eg, a comment post via a template. An optional second parameter can be included, containing additional options to pass in the @@ -45,3 +26,35 @@ not have a gravatar, uses a cute auto-generated "wavatar" avatar. The `gravitar_options` setting in the setup file can be used to specify additional options to pass. So for example if you want to use wavatars everywhere, set it to "default=wavatar". + +The avatars are provided by various sites. For email addresses, it uses a +[gravatar](http://gravatar.com/). For a wiki username, the +user's email address is looked up and the gravatar for that user is +displayed. (Of course, the user has to have filled in their email address +on their Preferences page for that to work. Also, when the user changes +their email address in Preferences, the gravatar won't change until the +wiki is rebuilt.) + +For openid, openavatar sucked and is now dead. So we need to use an email +address instead, I guess. Problem is that the email address of a given +openid is only known when that user is logged in and making a change. +And we don't want to leak an openid user's email into a page either. +Hmm. Suppose the gravatar hash could be calculated from the email address +and embedded instead of the openid? That would work for comments, +but not if the directive were used elsewhere. + +Or, for openid, could use <http://paulisageek.com/openidavatar>. Which +works fine, but users are not likely to figure out what they need to do to +get an avatar associated with their openid. + +--- + +Alternative, not overdesigned approach: + +Modify comments plugin to have an option to display avatars. + +When posting a comment, fill in the avatarhash field in the template. +The hash is calculated from the user's email address. If the user's email +is not known, skip it. + +End. :P diff --git a/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn b/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn index fb942a495..fdaa09f26 100644 --- a/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn +++ b/doc/todo/beef_up_sidebar_to_allow_for_multiple_sidebars.mdwn @@ -13,5 +13,113 @@ those contents instead. > In mine I just copied sidebar out and made some extra "sidebars", but they go elsewhere. Ugly hack, but it works. --[[simonraven]] +>> Here a simple [[patch]] for multiple sidebars. Not too fancy but better than having multiple copies of the sidebar plugin. --[[jeanprivat]] + +>>> I made a [[git]] branch for it [[!template id=gitbranch branch="privat/multiple_sidebars" author="[[jeanprivat]]"]] --[[jeanprivat]] + +>>>> Ping for [[Joey]]. Do you have any comment? I could improve it if there is things you do not like. I prefer to have such a feature integrated upstream. --[[JeanPrivat]] + +>>>>> The code is fine. +>>>>> +>>>>> I did think about having it examine +>>>>> the `page.tmpl` for parameters with names like `FOO_SIDEBAR` +>>>>> and automatically enable page `foo` as a sidebar in that case, +>>>>> instead of using the setup file to enable. But I'm not sure about +>>>>> that idea.. +>>>>> +>>>>> The full compliment of sidebars would be a header, a footer, +>>>>> a left, and a right sidebar. It would make sense to go ahead +>>>>> and add the parameters to `page.tmpl` so enabling each just works, +>>>>> and add whatever basic CSS makes sense. 
Although I don't know +>>>>> if I want to try to get a 3 column CSS going, so perhaps leave the +>>>>> left sidebar out of that. + +------------------- + +<pre> +--- /usr/share/perl5/IkiWiki/Plugin/sidebar.pm 2010-02-11 22:53:17.000000000 -0500 ++++ plugins/IkiWiki/Plugin/sidebar.pm 2010-02-27 09:54:12.524412391 -0500 +@@ -19,12 +19,20 @@ + safe => 1, + rebuild => 1, + }, ++ active_sidebars => { ++ type => "string", ++ example => qw(sidebar banner footer), ++ description => "Which sidebars must be activated and processed.", ++ safe => 1, ++ rebuild => 1 ++ }, + } + +-sub sidebar_content ($) { ++sub sidebar_content ($$) { + my $page=shift; ++ my $sidebar=shift; + +- my $sidebar_page=bestlink($page, "sidebar") || return; ++ my $sidebar_page=bestlink($page, $sidebar) || return; + my $sidebar_file=$pagesources{$sidebar_page} || return; + my $sidebar_type=pagetype($sidebar_file); + +@@ -49,11 +57,17 @@ + + my $page=$params{page}; + my $template=$params{template}; +- +- if ($template->query(name => "sidebar")) { +- my $content=sidebar_content($page); +- if (defined $content && length $content) { +- $template->param(sidebar => $content); ++ ++ my @sidebars; ++ if (defined $config{active_sidebars} && length $config{active_sidebars}) { @sidebars = @{$config{active_sidebars}}; } ++ else { @sidebars = qw(sidebar); } ++ ++ foreach my $sidebar (@sidebars) { ++ if ($template->query(name => $sidebar)) { ++ my $content=sidebar_content($page, $sidebar); ++ if (defined $content && length $content) { ++ $template->param($sidebar => $content); ++ } + } + } + } +</pre> + +---------------------------------------- +## Further thoughts about this + +(since the indentation level was getting rather high.) + +What about using pagespecs in the config to map pages and sidebar pages together? Something like this: + +<pre> + sidebar_pagespec => { + "foo/*" => 'sidebars/foo_sidebar', + "bar/* and !bar/*/*' => 'bar/bar_top_sidebar', + "* and !foo/* and !bar/*" => 'sidebars/general_sidebar', + }, +</pre> + +One could do something similar for *pageheader*, *pagefooter* and *rightbar* if desired. + +Another thing which I find compelling - but probably because I am using [[plugins/contrib/field]] - is to be able to treat the included page as if it were *part* of the page it was included into, rather than as an included page. I mean things like \[[!if ...]] would test against the page name of the page it's included into rather than the name of the sidebar/header/footer page. It's even more powerful if one combines this with field/getfield/ftemplate/report, since one could make "generic" headers and footers that could apply to a whole set of pages. + +Header example: +<pre> +#{{$title}} +\[[!ftemplate id="nice_data_table"]] +</pre> + +Footer example: +<pre> +------------ +\[[!report template="footer_trail" trail="trailpage" here_only=1]] +</pre> + +(Yes, I am already doing something like this on my own site. It's like the PmWiki concept of GroupHeader/GroupFooter) + +-- [[KathrynAndersen]] [[!tag wishlist]] diff --git a/doc/todo/cdate_and_mdate_available_for_templates.mdwn b/doc/todo/cdate_and_mdate_available_for_templates.mdwn new file mode 100644 index 000000000..70d8fc8c9 --- /dev/null +++ b/doc/todo/cdate_and_mdate_available_for_templates.mdwn @@ -0,0 +1,15 @@ +[[!tag wishlist]] + +`CDATE_3339`, `CDATE_822`, `MDATE_3339` and `MDATE_822` template variables would be useful for evey page, at least for my templates with Dublin Core metadata. 
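+
+(The missing subs asked for below are essentially `POSIX::strftime` calls; a
+rough, untested sketch modelled on what [[inline|plugins/inline]] does, minus
+its locale handling:)
+
+<pre>
+use POSIX ();
+
+sub date_822 ($) {
+	my $time=shift;
+	return POSIX::strftime("%a, %d %b %Y %H:%M:%S %z", localtime($time));
+}
+
+sub date_3339 ($) {
+	my $time=shift;
+	return POSIX::strftime("%Y-%m-%dT%H:%M:%SZ", gmtime($time));
+}
+
+# then, from a pagetemplate hook, something like:
+# $template->param(cdate_822 => date_822($IkiWiki::pagectime{$page}))
+#	if $template->query(name => 'cdate_822');
+</pre>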
+ +I tried to pick the relevant lines of the [[inline|plugins/inline]] plugin and hack it into a custom plugin, but it failed miserably because of my obvious lack of perl litteracy... + +Anyway, I'm sure this is almost nothing... + +* `sub date_822 ($) {}` +* `sub date_3339 ($) {}` +* and something like `$template->param('cdate_822' => date_822($IkiWiki::pagectime{$page}));` + +Anyone can fill the missing lines? + +-- [[nil]] diff --git a/doc/todo/comment_moderation_feed.mdwn b/doc/todo/comment_moderation_feed.mdwn new file mode 100644 index 000000000..267706b1b --- /dev/null +++ b/doc/todo/comment_moderation_feed.mdwn @@ -0,0 +1,9 @@ +There should be a way to generate a feed that is updated whenever a new +comment needs moderation. Otherwise, it can be hard to remember to check +sites, which may rarely get comments. + +The feed should not include the comment subject or body, but could mention +the author. It would be especially handy if it was generated statically. +One way would be to generate internal pages corresponding to each comment +that needs moderation; then the feed could be constructed via a usual +inline. diff --git a/doc/todo/conflict_free_comment_merges.mdwn b/doc/todo/conflict_free_comment_merges.mdwn index 2cef0ee8c..e84400c17 100644 --- a/doc/todo/conflict_free_comment_merges.mdwn +++ b/doc/todo/conflict_free_comment_merges.mdwn @@ -20,4 +20,4 @@ What do you think [[smcv]]? --[[Joey]] > are quite low since it modifies the input text and adds a date stamp to > it. > -> Anyway, I think it's good, [[[done]] --[[Joey]] +> Anyway, I think it's good, [[done]] --[[Joey]] diff --git a/doc/todo/double-click_protection_for_form_buttons.mdwn b/doc/todo/double-click_protection_for_form_buttons.mdwn new file mode 100644 index 000000000..501be4498 --- /dev/null +++ b/doc/todo/double-click_protection_for_form_buttons.mdwn @@ -0,0 +1,5 @@ +A small piece of JS to prevent double-submitting forms would be quite nice. I seem to have developed a habit of doing this and having to resolve a merge conflict for two initial commits. -- [[Jon]] + +> By the time you see that merge conflict, the first commit has +> already successfully happened, so you can just hit cancel +> and throw away the second submit. --[[Joey]] diff --git a/doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn b/doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn new file mode 100644 index 000000000..4bc10e432 --- /dev/null +++ b/doc/todo/edittemplate_should_look_in_templates_directory_by_default.mdwn @@ -0,0 +1,8 @@ +[[plugins/edittemplate]] looks for the specified template relative to the +page the directive appears on. Which can be handy, eg, make a +blog/mytemplate and put the directive on blog, and it will find +"mytemplate". However, it can also be confusing, since other templates +always are looked for in `templates/`. + +I think it should probably fall back to looking for `templates/$foo`. +--[[Joey]] diff --git a/doc/todo/enable-htaccess-files.mdwn b/doc/todo/enable-htaccess-files.mdwn index 412cb5eba..3b9721d50 100644 --- a/doc/todo/enable-htaccess-files.mdwn +++ b/doc/todo/enable-htaccess-files.mdwn @@ -12,6 +12,13 @@ qr/(^|\/).svn\//, qr/.arch-ids\//, qr/{arch}\//], wiki_link_regexp => qr/\[\[(?:([^\]\|]+)\|)?([^\s\]#]+)(?:#([^\s\]]+))?\]\]/, +> Note that the above patch is **completely broken**. +> It removes the crucial excludes of all files starting with a dot. 
+> The negative regexps for htaccess have no effect, so the whole +> thing only "works" because it allows *any* file starting with a dot. +> If you applied this patch to your ikiwiki, you opened a huge security +> hole. --[[Joey]] + [[!tag patch patch/core]] This lets the site administrator have a `.htaccess` file in their underlay @@ -57,7 +64,17 @@ It should be off by default of course. --Max --- +1 I want `.htaccess` so I can rewrite some old Wordpress URLs to make feeds work again. --[[hendry]] +> Unless you cannot modify apache's configuration, you do not need htaccess +> to do that. Apache's documentation recommends against using htaccess +> unless you're a user who cannot modify the main server configuration. +> --[[Joey]] + --- +1 for various purposes (but sometimes the filename isn't `.htaccess`, so please make it configurable) --[[schmonz]] > I've described a workaround for one use case at the [[plugins/rsync]] [[plugins/rsync/discussion]] page. --[[schmonz]] + +--- + +[[done]], you can use the `include` setting to override the default +excludes now. Please use extreme caution when doing so. --[[Joey]] diff --git a/doc/todo/finer_control_over___60__object___47____62__s.mdwn b/doc/todo/finer_control_over___60__object___47____62__s.mdwn new file mode 100644 index 000000000..50c4d43bf --- /dev/null +++ b/doc/todo/finer_control_over___60__object___47____62__s.mdwn @@ -0,0 +1,98 @@ +IIUC, the current version of [HTML::Scrubber][] allows for the `object` tags to be either enabled or disabled entirely. However, while `object` can be used to add *code* (which is indeed a potential security hole) to a document, reading [Objects, Images, and Applets in HTML documents][objects-html] reveals that the “dangerous” are not all the `object`s, but rather those having the following attributes: + + classid %URI; #IMPLIED -- identifies an implementation -- + codebase %URI; #IMPLIED -- base URI for classid, data, archive-- + codetype %ContentType; #IMPLIED -- content type for code -- + archive CDATA #IMPLIED -- space-separated list of URIs -- + +It seems that the following attributes are, OTOH, safe: + + declare (declare) #IMPLIED -- declare but don't instantiate flag -- + data %URI; #IMPLIED -- reference to object's data -- + type %ContentType; #IMPLIED -- content type for data -- + standby %Text; #IMPLIED -- message to show while loading -- + height %Length; #IMPLIED -- override height -- + width %Length; #IMPLIED -- override width -- + usemap %URI; #IMPLIED -- use client-side image map -- + name CDATA #IMPLIED -- submit as part of form -- + tabindex NUMBER #IMPLIED -- position in tabbing order -- + +Should the former attributes be *scrubbed* while the latter left intact, the use of the `object` tag would seemingly become safe. + +Note also that allowing `object` (either restricted in such a way or not) automatically solves the [[/todo/svg]] issue. + +For Ikiwiki, it may be nice to be able to restrict [URI's][URI] (as required by the `data` and `usemap` attributes) to, say, relative and `data:` (as per [RFC 2397][]) ones as well, though it requires some more consideration. + +— [[Ivan_Shmakov]], 2010-03-12Z. + +[[wishlist]] + +> SVG can contain embedded javascript. + +>> Indeed. + +>> So, a more general tool (`XML::Scrubber`?) will be necessary to +>> refine both [XHTML][] and SVG. + +>> … And to leave [MathML][] as is (?.) + +>> — [[Ivan_Shmakov]], 2010-03-12Z. + +> The spec that you link to contains +> examples of objects that contain python scripts, Microsoft OLE +> objects, and Java. 
And then there's flash. I don't think ikiwiki can +> assume all the possibilities are handled securely, particularly WRT XSS +> attacks. +> --[[Joey]] + +>> I've scanned over all the `object` examples in the specification and +>> all of those that hold references to code (as opposed to data) have a +>> distinguishing `classid` attribute. + +>> While I won't assert that it's impossible to reference code with +>> `data` (and, thanks to `text/xhtml+xml` and `image/svg+xml`, it is +>> *not* impossible), throwing away any of the “insecure” +>> attributes listed above together with limiting the possible URI's +>> (i. e., only *local* and certain `data:` ones for `data` and +>> `usemap`) should make `object` almost as harmless as, say, `img`. + +>>> But with local data, one could not embed youtube videos, which surely +>>> is the most obvious use case? + +>>>> Allowing a “remote” object to render on one's page is a + security issue by itself. + Though, of course, having an explicit whitelist of URI's may make + this issue more tolerable. + — [[Ivan_Shmakov]], 2010-03-12Z. + +>>> Note that youtube embedding uses an +>>> object element with no classid. The swf file is provided via an +>>> enclosed param element. --[[Joey]] + +>>>> I've just checked a random video on YouTube and I see that the + `.swf` file is provided via an enclosed `embed` element. Whether + to allow those or not is a different issue. + — [[Ivan_Shmakov]], 2010-03-12Z. + +>> (Though it certainly won't solve the [[SVG_problem|/todo/SVG]] being +>> restricted in such a way.) + +>> Of the remaining issues I could only think of recursive +>> `object` — the one that references its container document. + +>> — [[Ivan_Shmakov]], 2010-03-12Z. + +## See also + +* [Objects, Images, and Applets in HTML documents][objects-html] +* [[plugins/htmlscrubber|/plugins/htmlscrubber]] +* [[todo/svg|/todo/svg]] +* [RFC 2397: The “data” URL scheme. L. Masinter. August 1998.][RFC 2397] +* [Uniform Resource Identifier — the free encyclopedia][URI] + +[HTML::Scrubber]: http://search.cpan.org/~podmaster/HTML-Scrubber-0.08/Scrubber.pm +[MathML]: http://en.wikipedia.org/wiki/MathML +[objects-html]: http://www.w3.org/TR/1999/REC-html401-19991224/struct/objects.html +[RFC 2397]: http://tools.ietf.org/html/rfc2397 +[URI]: http://en.wikipedia.org/wiki/Uniform_Resource_Identifier +[XHTML]: http://en.wikipedia.org/wiki/XHTML diff --git a/doc/todo/git_attribution/discussion.mdwn b/doc/todo/git_attribution/discussion.mdwn index dfb490bc2..6905d9b4b 100644 --- a/doc/todo/git_attribution/discussion.mdwn +++ b/doc/todo/git_attribution/discussion.mdwn @@ -72,7 +72,7 @@ no determination of uniqueness) > GIT_AUTHOR_EMAIL can also be set. > > There is one thing yet to be solved, and that is how to tell the -> difference between a web commit by 'Joey Hess <joey@kitenet.net>', +> difference between a web commit by 'Joey Hess <joey\@kitenet.net>', > and a git commit by the same. I think we do want to differentiate these, > and the best way to do it seems to be to add a line to the end of the > commit message. Something like: "\n\nWeb-commit: true" @@ -94,5 +94,5 @@ no determination of uniqueness) > * github pushes to twitter ;-) > > So while I tried that way at first, I'm now leaning toward encoding the -> username in the email address. Like "user <user@web>", or -> "joey <http://joey.kitenet.net/@web>". +> username in the email address. Like "user <user\@web>", or +> "joey <http://joey.kitenet.net/\@web>". 
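+
+To make the last idea concrete, the web-commit path would then set git's
+author environment variables to something like this (a sketch; the session
+accessor and the exact `user@web` convention are assumptions, not final code):
+
+<pre>
+# sketch: label a web commit with the wiki username, encoded in the email
+my $u = $session->param("name");
+$u = "anonymous" unless defined $u && length $u;
+local $ENV{GIT_AUTHOR_NAME}  = $u;
+local $ENV{GIT_AUTHOR_EMAIL} = $u."\@web";
+system("git", "commit", "-q", "-m", $message);
+</pre>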
diff --git a/doc/todo/html.mdwn b/doc/todo/html.mdwn index 44f20c876..4f4542be2 100644 --- a/doc/todo/html.mdwn +++ b/doc/todo/html.mdwn @@ -1,6 +1,6 @@ Create some nice(r) stylesheets. Should be doable w/o touching a single line of code, just -editing the [[wikitemplates]] and/or editing [[style.css]]. +editing the [[templates]] and/or editing [[style.css]]. [[done]] ([[css_market]] ..) diff --git a/doc/todo/htpasswd_mirror_of_the_userdb.mdwn b/doc/todo/htpasswd_mirror_of_the_userdb.mdwn new file mode 100644 index 000000000..e4a411780 --- /dev/null +++ b/doc/todo/htpasswd_mirror_of_the_userdb.mdwn @@ -0,0 +1,29 @@ +[[!tag wishlist]] + +Ikiwiki is static, so access control for viewing the wiki must be +implemented on the web server side. Managing wiki users and access +together, we can currently + +* use [[httpauth|plugins/httpauth/]], but some [[passwordauth|plugins/passwordauth]] functionnality [[is missing|todo/httpauth_feature_parity_with_passwordauth/]]; +* use [[passwordauth|plugins/passwordauth]] plus [[an Apache `mod_perl` authentication mechanism|plugins/passwordauth/discussion/]], but this is Apache-centric and enabling `mod_perl` just for auth seems overkill. + +Moreover, when ikiwiki is just a part of a wider web project, we may want +to use the same userdb for the other parts of this project. + +I think an ikiwiki plugin which would (re)generate an htpasswd version of +the user/passwd base (better, two htpasswd files, one with only the wiki +admins and one with everyone) each time an user is added or modified would +solve this problem: + +* access control can be managed from the web server +* user management is handled by the passwordauth plugin +* htpasswd format is understood by various servers (Apache, lighttpd, nginx, ...) and languages commonly used for web development (perl, python, ruby) +* htpasswd files can be mirrored on other machines when the web site is distributed + +-- [[nil]] + +> I think this is a good idea. Although unless the password hashes that +> are stored in the userdb are compatible with htpasswd hashes, +> the htpasswd hashes will need to be stored in the userdb too. Then +> any userdb change can just regenerate the htpasswd file, dumping out +> the right kind of hashes. --[[Joey]] diff --git a/doc/todo/http_bl_support.mdwn b/doc/todo/http_bl_support.mdwn new file mode 100644 index 000000000..f7a46ee6c --- /dev/null +++ b/doc/todo/http_bl_support.mdwn @@ -0,0 +1,67 @@ +[Project Honeypot](http://projecthoneypot.org/) has an HTTP:BL API available to subscribed (it's free, accept donations) people/orgs. There's a basic perl package someone wrote, I'm including a copy here. + +[from here](http://projecthoneypot.org/board/read.php?f=10&i=112&t=112) + +> The [[plugins/blogspam]] service already checks urls against +> the surbl, and has its own IP blacklist. The best way to +> support the HTTP:BL may be to add a plugin +> [there](http://blogspam.repository.steve.org.uk/file/cc858e497cae/server/plugins/). 
+> --[[Joey]] + +<pre> +package Honeypot; + +use Socket qw/inet_ntoa/; + +my $dns = 'dnsbl.httpbl.org'; +my %types = ( +0 => 'Search Engine', +1 => 'Suspicious', +2 => 'Harvester', +4 => 'Comment Spammer' +); +sub query { +my $key = shift || die 'You need a key for this, you get one at http://www.projecthoneypot.org'; +my $ip = shift || do { +warn 'no IP for request in Honeypot::query().'; +return; +}; + +my @parts = reverse split /\./, $ip; +my $lookup_name = join'.', $key, @parts, $dns; + +my $answer = gethostbyname ($lookup_name); +return unless $answer; +$answer = inet_ntoa($answer); +my(undef, $days, $threat, $type) = split /\./, $answer; +my @types; +while(my($bit, $typename) = each %types) { +push @types, $typename if $bit & $type; +} +return { +days => $days, +threat => $threat, +type => join ',', @types +}; + +} +1; +</pre> + +From the page: + +> The usage is simple: + +> use Honeypot; +> my $key = 'XXXXXXX'; # your key +> my $ip = '....'; the IP you want to check +> my $q = Honeypot::query($key, $ip); + +> use Data::Dumper; +> print Dumper $q; + +Any chance of having this as a plugin? + +I could give it a go, too. Would be fun to try my hand at Perl. --[[simonraven]] + +[[!tag wishlist]] diff --git a/doc/todo/link_plugin_perhaps_too_general__63__.mdwn b/doc/todo/link_plugin_perhaps_too_general__63__.mdwn new file mode 100644 index 000000000..8a5fd50eb --- /dev/null +++ b/doc/todo/link_plugin_perhaps_too_general__63__.mdwn @@ -0,0 +1,25 @@ +[[!tag wishlist blue-sky]] +(This isn't important to me - I don't use MediaWiki or Creole syntax myself - +but just thinking out loud...) + +The [[ikiwiki/wikilink]] syntax IkiWiki uses sometimes conflicts with page +languages' syntax (notably, [[plugins/contrib/MediaWiki]] and [[plugins/Creole]] +want their wikilinks the other way round, like +`\[[plugins/write|how to write a plugin]]`). It would be nice if there was +some way for page language plugins to opt in/out of the normal wiki link +processing - then MediaWiki and Creole could have their own `linkify` hook +that was only active for *their* page types, and used the appropriate +syntax. + +In [[todo/matching_different_kinds_of_links]] I wondered about adding a +`\[[!typedlink to="foo" type="bar"]]` directive. This made me wonder whether +a core `\[[!link]]` directive would be useful; this could be a fallback for +page types where a normal wikilink can't be done for whatever reason, and +could also provide extension points more easily than WikiLinks' special +syntax with extra punctuation, which doesn't really scale? + +Straw-man: + + \[[!link to="ikiwiki/wikilink" desc="WikiLinks"]] + +--[[smcv]] diff --git a/doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn b/doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn new file mode 100644 index 000000000..2b2b0242e --- /dev/null +++ b/doc/todo/mark_edit_as_trivial__44___identify__47__filter_on_trivial_changes.mdwn @@ -0,0 +1,11 @@ +One feature of mediawiki which I quite like is the ability to mark a change as 'minor', or 'trivial'. This can then be used to filter the 'recentchanges' page, to only show substantial edits. + +The utility of this depends entirely on whether the editors use it properly. + +I currently use an inline on the front page of my personal homepage to show the most recent pages (by creation date) within a subsection of my site (a blog). 
Blog posts are rarely modified much after they are 'created' (or published - I bodge the creation time via meta when I publish a post. It might sit in draft form indefinitely), so this effectively shows only non-trivial changes. + +I would like to have a short list of the most recent modifications to the site on the front page. I therefore want to sort by modified time rather than creation time, but exclude edits that I self-identify as minor. I also only want to take a short number of items, the top 5, and display only their titles (which may be derived from filename, or set via meta again). + +I'm still thinking through how this might be achieved in an ikiwiki-suitable fashion, but I think I need a scheme to identify certain edits as trivial. This would have to work via web edits (easier: could add a check box to the edit form) and plain changes in the VCS (harder: scan for keywords in a commit message? in a VCS-agnostic fashion?) + +[[!tag wishlist]] diff --git a/doc/todo/matching_different_kinds_of_links.mdwn b/doc/todo/matching_different_kinds_of_links.mdwn index 26c5a072b..da3ea49f6 100644 --- a/doc/todo/matching_different_kinds_of_links.mdwn +++ b/doc/todo/matching_different_kinds_of_links.mdwn @@ -36,6 +36,11 @@ Besides pagespecs, the `rel=` attribute could be used for styles. --Ivan Z. > normal links.) Might be better to go ahead and add the variable to > core though. --[[Joey]] +>> I've implemented this with the data structure you suggested, except that +>> I called it `%typedlinks` instead of `%linktype` (it seemed to make more +>> sense that way). I also ported `tag` to it, and added a `tagged_is_strict` +>> config option. See below! --[[smcv]] + I saw somewhere else here some suggestions for the wiki-syntax for specifying the relation name of a link. One more suggestion---[the syntax used in Semantic MediaWiki](http://en.wikipedia.org/wiki/Semantic_MediaWiki#Basic_usage), like this: <pre> @@ -45,3 +50,147 @@ I saw somewhere else here some suggestions for the wiki-syntax for specifying th So a part of the effect of [[`\[[!taglink TAG\]\]`|plugins/tag]] could be represented as something like `\[[tag::TAG]]` or (more understandable relation name in what concerns the direction) `\[[tagged::TAG]]`. I don't have any opinion on this syntax (whether it's good or not)...--Ivan Z. + +------- + +>> [[!template id=gitbranch author="[[Simon_McVittie|smcv]]" branch=smcv/ready/link-types]] +>> [[!tag patch]] + +## Documentation for smcv's branch + +### added to [[ikiwiki/pagespec]] + +* "`typedlink(type glob)`" - matches pages that link to a given page (or glob) + with a given link type. Plugins can create links with a specific type: + for instance, the tag plugin creates links of type `tag`. + +### added to [[plugins/tag]] + +If the `tagged_is_strict` config option is set, `tagged()` will only match +tags explicitly set with [[ikiwiki/directive/tag]] or +[[ikiwiki/directive/taglink]]; if not (the default), it will also match +any other [[WikiLinks|ikiwiki/WikiLink]] to the tag page. + +### added to [[plugins/write]] + +#### `%typedlinks` + +The `%typedlinks` hash records links of specific types. Do not modify this +hash directly; call `add_link()`. The keys are page names, and the values +are hash references. 
In each page's hash reference, the keys are link types +defined by plugins, and the values are hash references with link targets +as keys, and 1 as a dummy value, something like this: + + $typedlinks{"foo"} = { + tag => { short_word => 1, metasyntactic_variable => 1 }, + next_page => { bar => 1 }, + }; + +Ordinary [[WikiLinks|ikiwiki/WikiLink]] appear in `%links`, but not in +`%typedlinks`. + +#### `add_link($$;$)` + + This adds a link to `%links`, ensuring that duplicate links are not + added. Pass it the page that contains the link, and the link text. + +An optional third parameter sets the link type (`undef` produces an ordinary +[[ikiwiki/WikiLink]]). + +## Review + +Some code refers to `oldtypedlinks`, and other to `oldlinktypes`. --[[Joey]] + +> Oops, I'll fix that. That must mean missing test coverage, too :-( +> --s + +>> A test suite for the dependency resolver *would* be nice. --[[Joey]] + +>>> Bug fixed, I think. A test suite for the dependency resolver seems +>>> more ambitious than I want to get into right now, but I added a +>>> unit test for this part of it... --s + +I'm curious what your reasoning was for adding a new variable +rather than using `pagestate`. Was it only because you needed +the `old` version to detect change, or was there other complexity? +--J + +> You seemed to be more in favour of adding it to the core in +> your proposal above, so I assumed that'd be more likely to be +> accepted :-) I don't mind one way or the other - `%typedlinks` +> costs one core variable, but saves one level of hash nesting. If +> you're not sure either, then I think the decision should come down +> to which one is easier to document clearly - I'm still unhappy with +> my docs for `%typedlinks`, so I'll try to write docs for it as +> `pagestate` and see if they work any better. --s + +>> On reflection, I don't think it's any better as a pagestate, and +>> the contents of pagestates (so far) aren't documented for other +>> plugins' consumption, so I'm inclined to leave it as-is, unless +>> you want to veto that. Loose rationale: it needs special handling +>> in the core to be a dependency type (I re-used the existing link +>> type), it's API beyond a single plugin, and it's really part of +>> the core parallel to pagestate rather than being tied to a +>> specific plugin. Also, I'd need to special-case it to have +>> ikiwiki not delete it from the index, unless I introduced a +>> dummy typedlinks plugin (or just hook) that did nothing... --s + +I have not convinced myself this is a real problem, but.. +If a page has a typed link, there seems to be no way to tell +if it also has a separate, regular link. `add_link` will add +to `@links` when adding a typed, or untyped link. If only untyped +links were recorded there, one could tell the difference. But then +typed links would not show up at all in eg, a linkmap, +unless it was changed to check for typed links too. +(Or, regular links could be recorded in typedlinks too, +with a empty type. (Bloaty.)) --J + +> I think I like the semantics as-is - I can't think of any +> reason why you'd want to ask the question "does A link to B, +> not counting tags and other typed links?". A typed link is +> still a link, in my mind at least. --s + +>> Me neither, let's not worry about it. --[[Joey]] + +I suspect we could get away without having `tagged_is_strict` +without too much transitional trouble. --[[Joey]] + +> If you think so, I can delete about 5 LoC. 
I don't particularly +> care either way; [[Jon]] expressed concern about people relying +> on the current semantics, on one of the pages requesting this +> change. --s + +>> Removed in a newer version of the branch. --s + +I might have been wrong to introduce `typedlink(tag foo)`. It's not +very user-friendly, and is more useful as a backend for other plugins +that as a feature in its own right - any plugin introducing a link +type will probably also want to have its own preprocessor directive +to set that link type, and its own pagespec function to match it. +I wonder whether to make a `typedlink` plugin that has the typedlink +pagespec match function and a new `\[[!typedlink to="foo" type="bar"]]` +though... --[[smcv]] + +> I agree, per-type matchers are more friendly and I'm not enamored of the +> multi-parameter pagespec syntax. --[[Joey]] + +>> Removed in a newer version of the branch. I re-introduced it as a +>> plugin in `smcv/typedlink`, but I don't think we really need it. --s + +---- + +I am ready to merge this, but I noticed one problem -- since `match_tagged` +now only matches pages with the tag linktype, a wiki will need to be +rebuilt on upgrade in order to get the linktype of existing tags in it +recorded. So there needs to be a NEWS item about this and +the postinst modified to force the rebuild. + +> Done, although you'll need to plug in an appropriate version number when +> you release it. Is there a distinctive reminder string you grep for +> during releases? I've used `UNRELEASED` for now. --[[smcv]] + +Also, the ready branch adds `typedlink()` to [[ikiwiki/pagespec]], +but you removed that feature as documented above. +--[[Joey]] + +> [[Done]]. --s diff --git a/doc/todo/mercurial.mdwn b/doc/todo/mercurial.mdwn index e71c8106a..de1f148e5 100644 --- a/doc/todo/mercurial.mdwn +++ b/doc/todo/mercurial.mdwn @@ -119,3 +119,11 @@ I have a few notes on mercurial usage after trying it out for a while: >> I think the ideal solution would be to build `$destdir/recentchanges/*` directly from the output of `hg log`. --[[buo]] >>>> That would be 100 times as slow, so I chose not to do that. --[[Joey]] + +>>>> Since this is confusing people, allow me to clarify: Ikiwiki's +>>>> recentchanges generation pulls log information directly out of the VCS as +>>>> needed. It caches it in recentchanges/* in the `scrdir`. These cache +>>>> files need not be preserved, should never be checked into VCS, and if +>>>> you want to you can configure your VCSignore file to ignore them, +>>>> just as you can configure it to ignore the `.ikiwiki` directory in the +>>>> `scrdir`. --[[Joey]] diff --git a/doc/todo/more_flexible_inline_postform.mdwn b/doc/todo/more_flexible_inline_postform.mdwn index bc8bc0809..414476bd7 100644 --- a/doc/todo/more_flexible_inline_postform.mdwn +++ b/doc/todo/more_flexible_inline_postform.mdwn @@ -16,3 +16,8 @@ logical first step towards doing comment-like things with inlined pages). > Perhaps what we need is a `postform` plugin/directive that inline depends > on (automatically enables); its preprocess method could automatically be > invoked from preprocess_inline when needed. --[[smcv]] + +>> I've been looking at this stuff again. I think you are right, this would +>> be the right approach. The comments plugin could use it similarly, allowing +>> sites which desire it to have an inline comment submission form on all +>> pages with comments enabled. I'm going to take a look. 
-- [[Jon]] diff --git a/doc/todo/multiple_template_directories.mdwn b/doc/todo/multiple_template_directories.mdwn index c09a9595f..6a474b4f3 100644 --- a/doc/todo/multiple_template_directories.mdwn +++ b/doc/todo/multiple_template_directories.mdwn @@ -11,3 +11,63 @@ ought to do the trick. > global dir when it cannot find a template. For me, this is good enough. > And it is even documented in the man page. Sigh. I guess this could be > considered [[done]]. + +I have a use case for this, a site composed of blogs and wikis, templates divided in three categories : common, blog and wiki. The only solution I found is maintaining hard links, being able to have multiple template dirs would obviously be better. -- Changaco + +> [[plugins/underlay]] used to allow adding extra templatedirs, but Joey +> removed that functionality when he made templates search the wiki's +> own `templates` directory. +> +> You can get a 3-level hierarchy like this: +> +> * instance-specific overrides: $srcdir/templates +> * common to the entire site: a directory that is the value of all +> instances' `templatedir` parameters +> * common to every ikiwiki in the world: /usr/share/ikiwiki/templates +> (implicitly searched) +> +> (by "instance" I mean an instance of ikiwiki - a .setup file, basically.) +> +> For a more complex hierarchy you'd need the old [[plugins/underlay]] +> functionality, i.e. you'd need to (ask Joey to) revert the patch that +> removed it. For instance, if anyone has a hierarchy like this, then +> they need the old functionality back in order to split the template +> search path for the things marked `(???)`: +> +> every ikiwiki in the world (/usr/share/ikiwiki/templates) +> \--- your site (???) +> \--- your blogs (???) +> \--- travel blog ($srcdir/templates) +> \--- code blog ($srcdir/templates) +> \--- your wikis (???) +> \--- travel wiki ($srcdir/templates) +> \--- code wiki ($srcdir/templates) +> +> This looks pretty hypothetical to me, though... +> --[[smcv]] + +>> The reason I removed it is because the same functionality of having +>> multiple template directories is still present. Just put them in +>> the templates/ subdirectory of multiple underlay directories instead. +>> --[[Joey]] + +>>>Thanks, I didn't realize this was possible. Problem solved. -- Changaco + +>>>> We can consider this [[done]], then. 
For reference, the solution +>>>> to the hierarchy I mentioned above would be: +>>>> +>>>> all your sites have $your_underlay as an underlay +>>>> +>>>> the blogs and wikis all have $blog_underlay or $wiki_underlay +>>>> (as appropriate) as a higher priority underlay +>>>> +>>>> every ikiwiki in the world (/usr/share/ikiwiki/templates) +>>>> \--- your site ($your_underlay/templates, or templatedir) +>>>> \--- your blogs ($blog_underlay/templates) +>>>> \--- travel blog ($srcdir/templates) +>>>> \--- code blog ($srcdir/templates) +>>>> \--- your wikis ($wiki_underlay/templates) +>>>> \--- travel wiki ($srcdir/templates) +>>>> \--- code wiki ($srcdir/templates) +>>>> +>>>> --[[smcv]] diff --git a/doc/todo/multiple_templates.mdwn b/doc/todo/multiple_templates.mdwn index 72783c556..30fb8d6ee 100644 --- a/doc/todo/multiple_templates.mdwn +++ b/doc/todo/multiple_templates.mdwn @@ -1,4 +1,4 @@ -> Another useful feature might be to be able to choose a different [[template|wikitemplates]] +> Another useful feature might be to be able to choose a different [[template|templates]] > file for some pages; [[blog]] pages would use a template different from the > home page, even if both are managed in the same repository, etc. diff --git a/doc/todo/openid_user_filtering.mdwn b/doc/todo/openid_user_filtering.mdwn index 8b2d0082e..6a318c4c0 100644 --- a/doc/todo/openid_user_filtering.mdwn +++ b/doc/todo/openid_user_filtering.mdwn @@ -7,3 +7,7 @@ So I suggest an ikiwiki configuration like: users => ["*.webvm.net"], Would only allow edits from openIDs of that form. + +> This kind of thing can be [[done]] now: --[[Joey]] +> +> locked_pages => "* and !user(http://*.webvm.net/)" diff --git a/doc/todo/optional_underlaydir_prefix.mdwn b/doc/todo/optional_underlaydir_prefix.mdwn new file mode 100644 index 000000000..06900a904 --- /dev/null +++ b/doc/todo/optional_underlaydir_prefix.mdwn @@ -0,0 +1,46 @@ +For security reasons, symlinks are disabled in IkiWiki. That's fair enough, but that means that some problems, which one could otherwise solve by using a symlink, cannot be solved. The specfic problem in this case is that all underlays are placed at the root of the wiki, when it could be more convenient to place some underlays in specific sub-directories. + +Use-case 1 (to keep things tidy): + +Currently IkiWiki has some javascript files in `underlays/javascript`; that directory is given as one of the underlay directories. Thus, all the javascript files appear in the root of the generated site. But it would be tidier if one could say "put the contents of *this* underlaydir under the `js` directory". + +> Of course, this could be accomplished, if we wanted to, by moving the +> files to `underlays/javascript/js`. --[[Joey]] + +Use-case 2 (a read-only external dir): + +Suppose I want to include a subset of `/usr/local/share/docs` on my wiki, say the docs about `foo`. But I want them to be under the `docs/foo` sub-directory on the generated site. Currently I can't do that. If I give `/usr/local/share/docs/foo` as an underlaydir, then the contents of that will be in the root of the site, rather than under `docs/foo`. And if I give `/usr/local/share/docs` as an underlaydir, then the contents of the `foo` dir will be under `foo`, but it will also include every other thing in `/usr/local/share/docs`. + +Since we can't use symlinks in an underlay dir to link to these directories, then perhaps one could give a specific underlay dir a specific prefix, which defines the sub-directory that the underlay should appear in. 
+ +I'm not sure how this would be implemented, but I guess it could be configured something like this: + + prefixed_underlay => { + 'js' => '/usr/local/share/ikiwiki/javascript', + 'docs/foo' => '/usr/local/share/docs/foo', + } + +> So, let me review why symlinks are an issue. For normal, non-underlay +> pages, users who do not have filesystem access to the server may have +> commit access, and so could commit eg, a symlink to `/etc/passwd` (or +> to `/` !). The guards are there to prevent ikiwiki either exposing the +> symlink target's contents, or potentially overwriting it. +> +> Is this a concern for underlays? Most of the time, certianly not; +> the underlay tends to be something only the site admin controls. +> Not all the security checks that are done on the srcdir are done +> on the underlays, either. Most checks done on files in the underlay +> are only done because the same code handles srcdir files. The one +> exception is the test that skips processing symlinks in the underlay dir. +> (But note that the underlay directory can be a symlinkt to elsewhere +> which the srcdir, by default, cannot.) +> +> So, one way to approach this is to make ikiwiki follow directory symlinks +> inside the underlay directory. Just a matter of passing `follow => 1` to +> find. (This would still not allow individual files to be symlinks, because +> `readfile` does not allow reading symlinks. But I don't see much need +> for that.) --[[Joey]] + +>> If you think that enabling symlinks in underlay directories wouldn't be a security issue, then I'm all for it! That would be much simpler to implement, I'm sure. --[[KathrynAndersen]] + +[[!taglink wishlist]] diff --git a/doc/todo/rewrite_ikiwiki_in_haskell.mdwn b/doc/todo/rewrite_ikiwiki_in_haskell.mdwn index 204c48cd7..48ed744b1 100644 --- a/doc/todo/rewrite_ikiwiki_in_haskell.mdwn +++ b/doc/todo/rewrite_ikiwiki_in_haskell.mdwn @@ -29,6 +29,7 @@ It's appealing for a lot of reasons, including: edit in html editors currently. - This would be a chance to make WikiLinks with link texts read "the right way round" (ie, vaguely wiki creole compatably). + *[See also [[todo/link_plugin_perhaps_too_general?]] --[[smcv]]]* - The data structures would probably be quite different. - I might want to drop a lot of the command-line flags, either requiring a setup file be used for those things, or leaving the diff --git a/doc/todo/salmon_protocol_for_comment_sharing.mdwn b/doc/todo/salmon_protocol_for_comment_sharing.mdwn new file mode 100644 index 000000000..1e56b0a8b --- /dev/null +++ b/doc/todo/salmon_protocol_for_comment_sharing.mdwn @@ -0,0 +1,21 @@ +The <a href="http://www.salmon-protocol.org/home">Salmon protocol</a> +provides for aggregating comments across sites. If a site that syndicates +a feed receives a comment on an item in that feed, it can re-post the +comment to the original source. + +> Ikiwiki does not allow comments to be posted on items it aggregates. +> So salmon protocol support would only need to handle the comment +> receiving side of the protocol. +> +> The current draft protocol document confuses me when it starts talking +> about using OAuth in the abuse prevention section, since their example +> does not show use of OAuth, and it's not at all clear to me where the +> OAuth relationship between aggregator and original source is supposed +> to come from. 
+> +> Their security model, which goes on to include Webfinger, +> thirdparty validation services, XRD, and Magic Signatures, looks sorta +> like they kept throwing technology, at it, hoping something will stick. :-P +> --[[Joey]] + +[[!tag wishlist]] diff --git a/doc/todo/smarter_sorting.mdwn b/doc/todo/smarter_sorting.mdwn new file mode 100644 index 000000000..901e143a7 --- /dev/null +++ b/doc/todo/smarter_sorting.mdwn @@ -0,0 +1,141 @@ +I benchmarked a build of a large wiki (my home wiki), and it was spending +quite a lot of time sorting; `CORE::sort` was called only 1138 times, but +still flagged as the #1 time sink. (I'm not sure I trust NYTProf fully +about that FWIW, since it also said 27238263 calls to `cmp_age` were +the #3 timesink, and I suspect it may not entirely accurately measure +the overhead of so many short function calls.) + +`pagespec_match_list` currently always sorts *all* pages first, and then +finds the top M that match the pagespec. That's innefficient when M is +small (as for example in a typical blog, where only 20 posts are shown, +out of maybe thousands). + +As [[smcv]] noted, It could be flipped, so the pagespec is applied first, +and then sort the smaller matching set. But, checking pagespecs is likely +more expensive than sorting. (Also, influence calculation complicates +doing that.) + +Another option, when there is a limit on M pages to return, might be to +cull the M top pages without sorting the rest. + +> The patch below implements this. +> +> But, I have not thought enough about influence calculation. +> I need to figure out which pagespec matches influences need to be +> accumulated for in order to determine all possible influences of a +> pagespec are known. +> +> The old code accumulates influences from matching all successful pages +> up to the num cutoff, as well as influences from an arbitrary (sometimes +> zero) number of failed matches. New code does not accumulate influences +> from all the top successful matches, only an arbitrary group of +> successes and some failures. +> +> Also, by the time I finished this, it was not measuarably faster than +> the old method. At least not with a few thousand pages; it +> might be worth revisiting this sometime for many more pages? [[done]] +> --[[Joey]] + +<pre> +diff --git a/IkiWiki.pm b/IkiWiki.pm +index 1730e47..bc8b23d 100644 +--- a/IkiWiki.pm ++++ b/IkiWiki.pm +@@ -2122,36 +2122,54 @@ sub pagespec_match_list ($$;@) { + my $num=$params{num}; + delete @params{qw{num deptype reverse sort filter list}}; + +- # when only the top matches will be returned, it's efficient to +- # sort before matching to pagespec, +- if (defined $num && defined $sort) { +- @candidates=IkiWiki::SortSpec::sort_pages( +- $sort, @candidates); +- } +- ++ # Find the first num matches (or all), before sorting. + my @matches; +- my $firstfail; + my $count=0; + my $accum=IkiWiki::SuccessReason->new(); +- foreach my $p (@candidates) { +- my $r=$sub->($p, %params, location => $page); ++ my $i; ++ for ($i=0; $i < @candidates; $i++) { ++ my $r=$sub->($candidates[$i], %params, location => $page); + error(sprintf(gettext("cannot match pages: %s"), $r)) + if $r->isa("IkiWiki::ErrorReason"); + $accum |= $r; + if ($r) { +- push @matches, $p; ++ push @matches, $candidates[$i]; + last if defined $num && ++$count == $num; + } + } + ++ # We have num natches, but they may not be the best. ++ # Efficiently find and add the rest, without sorting the full list of ++ # candidates. 
++ if (defined $num && defined $sort) { ++ @matches=IkiWiki::SortSpec::sort_pages($sort, @matches); ++ ++ for ($i++; $i < @candidates; $i++) { ++ # Comparing candidate with lowest match is cheaper, ++ # so it's done before testing against pagespec. ++ if (IkiWiki::SortSpec::cmptwo($candidates[$i], $matches[-1], $sort) < 0 && ++ $sub->($candidates[$i], %params, location => $page) ++ ) { ++ # this could be done less expensively ++ # using a binary search ++ for (my $j=0; $j < @matches; $j++) { ++ if (IkiWiki::SortSpec::cmptwo($candidates[$i], $matches[$j], $sort) < 0) { ++ splice @matches, $j, $#matches-$j+1, $candidates[$i], ++ @matches[$j..$#matches-1]; ++ last; ++ } ++ } ++ } ++ } ++ } ++ + # Add simple dependencies for accumulated influences. +- my $i=$accum->influences; +- foreach my $k (keys %$i) { +- $depends_simple{$page}{lc $k} |= $i->{$k}; ++ my $inf=$accum->influences; ++ foreach my $k (keys %$inf) { ++ $depends_simple{$page}{lc $k} |= $inf->{$k}; + } + +- # when all matches will be returned, it's efficient to +- # sort after matching ++ # Sort if we didn't already. + if (! defined $num && defined $sort) { + return IkiWiki::SortSpec::sort_pages( + $sort, @matches); +@@ -2455,6 +2473,12 @@ sub sort_pages { + sort $f @_ + } + ++sub cmptwo { ++ $a=$_[0]; ++ $b=$_[1]; ++ $_[2]->(); ++} ++ + sub cmp_title { + IkiWiki::pagetitle(IkiWiki::basename($a)) + cmp +</pre> + +This would be bad when M is very large, and particularly, of course, when +there is no limit and all pages are being matched on. (For example, an +archive page shows all pages that match a pagespec specifying a creation +date range.) Well, in this case, it *does* make sense to flip it, limit by +pagespe first, and do a (quick)sort second. (No influence complications, +either.) + +> Flipping when there's no limit implemented, and it knocked 1/3 off +> the rebuild time of my blog's archive pages. --[[Joey]] + +Adding these special cases will be more complicated, but I think the best +of both worlds. --[[Joey]] diff --git a/doc/todo/structured_page_data.mdwn b/doc/todo/structured_page_data.mdwn index da9da9663..9f21fab7f 100644 --- a/doc/todo/structured_page_data.mdwn +++ b/doc/todo/structured_page_data.mdwn @@ -253,6 +253,9 @@ in a large number of other cases. > dependencies between bugs from arbitrary links. >> This issue (the need for distinguished kinds of links) has also been brought up in other discussions: [[tracking_bugs_with_dependencies#another_kind_of_links]] (deps vs. links) and [[tag_pagespec_function]] (tags vs. links). --Ivan Z. +>>> And multiple link types are now supported; plugins can set the link +>>> type when registering a link, and pagespec functions can match on them. --[[Joey]] + ---- #!/usr/bin/perl diff --git a/doc/todo/svg.mdwn b/doc/todo/svg.mdwn index 0a15af4cd..274ebf3e3 100644 --- a/doc/todo/svg.mdwn +++ b/doc/todo/svg.mdwn @@ -3,6 +3,7 @@ We should support SVG. In particular: * We could support rendering SVGs to PNGs when compiling the wiki. Not all browsers support SVG yet. * We could support editing SVGs via the web interface. SVG can contain unsafe content such as scripting, so we would need to whitelist safe markup. + * I am interested in seeing [svg-edit](http://code.google.com/p/svg-edit/) integrated -- [[EricDrechsel]] --[[JoshTriplett]] @@ -56,3 +57,21 @@ in the trunk if other people think it's useful. 
[htmlscrubber.pm]:http://xbeta.org/gitweb/?p=xbeta/ikiwiki.git;a=blob;f=IkiWiki/Plugin/htmlscrubber.pm;h=3c0ddc8f25bd8cb863634a9d54b40e299e60f7df;hb=fe333c8e5b4a5f374a059596ee698dacd755182d [diff]: http://xbeta.org/gitweb/?p=xbeta/ikiwiki.git;a=blobdiff;f=IkiWiki/Plugin/htmlscrubber.pm;h=3c0ddc8f25bd8cb863634a9d54b40e299e60f7df;hp=3bdaccea119ec0e1b289a0da2f6d90e2219b8d66;hb=fe333c8e5b4a5f374a059596ee698dacd755182d;hpb=be0b4f603f918444b906e42825908ddac78b7073 + +> Unfortunately these links are broken. --[[Joey]] + +* * * + +Actually, there's a way to embed SVG into Markdown sources using the [data: URI scheme][rfc2397], [like this](data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBzdGFuZGFsb25lPSJubyI/Pgo8c3ZnIHdpZHRoPSIxOTIiIGhlaWdodD0iMTkyIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KIDwhLS0gQ3JlYXRlZCB3aXRoIFNWRy1lZGl0IC0gaHR0cDovL3N2Zy1lZGl0Lmdvb2dsZWNvZGUuY29tLyAtLT4KIDx0aXRsZT5IZWxsbywgd29ybGQhPC90aXRsZT4KIDxnPgogIDx0aXRsZT5MYXllciAxPC90aXRsZT4KICA8ZyB0cmFuc2Zvcm09InJvdGF0ZSgtNDUsIDk3LjY3MTksIDk3LjY2OCkiIGlkPSJzdmdfNyI+CiAgIDxyZWN0IHN0cm9rZS13aWR0aD0iNSIgc3Ryb2tlPSIjMDAwMDAwIiBmaWxsPSIjRkYwMDAwIiBpZD0ic3ZnXzUiIGhlaWdodD0iNTYuMDAwMDAzIiB3aWR0aD0iMTc1IiB5PSI2OS42Njc5NjkiIHg9IjEwLjE3MTg3NSIvPgogICA8dGV4dCB4bWw6c3BhY2U9InByZXNlcnZlIiB0ZXh0LWFuY2hvcj0ibWlkZGxlIiBmb250LWZhbWlseT0ic2VyaWYiIGZvbnQtc2l6ZT0iMjQiIHN0cm9rZS13aWR0aD0iMCIgc3Ryb2tlPSIjMDAwMDAwIiBmaWxsPSIjZmZmZjAwIiBpZD0ic3ZnXzYiIHk9IjEwNS42NjgiIHg9Ijk5LjY3MTkiPkhlbGxvLCB3b3JsZCE8L3RleHQ+CiAgPC9nPgogPC9nPgo8L3N2Zz4=). +Of course, this way one needs to click a link to display the image, but that may be considered a feature. +— [[Ivan_Shmakov]], 2010-03-12Z. + +[rfc2397]: http://tools.ietf.org/html/rfc2397 + +> You can do the same with img src actually. +> +> If svg markup allows unsafe elements (ie, javascript), +> which it appears to, +> then this is a security hole, and the htmlscrubber +> needs to lock it down more. Darn, now I have to spend my afternoon making +> security releases! --[[Joey]] diff --git a/doc/todo/tracking_bugs_with_dependencies.mdwn b/doc/todo/tracking_bugs_with_dependencies.mdwn index 5f3ece290..456dadad0 100644 --- a/doc/todo/tracking_bugs_with_dependencies.mdwn +++ b/doc/todo/tracking_bugs_with_dependencies.mdwn @@ -81,6 +81,9 @@ I like the idea of [[tips/integrated_issue_tracking_with_ikiwiki]], and I do so >> I saw that this issue is targeted at by the work on [[structured page data#another_kind_of_links]]. --Ivan Z. +>>> It's fixed now; links can have a type, such as "tag" or "dependency", +>>> and pagespecs can match links of a given type. --[[Joey]] + Okie - I've had a quick attempt at this. Initial patch attached. This one doesn't quite work. And there is still a lot of debugging stuff in there. diff --git a/doc/todo/two-way_convert_of_wikis.mdwn b/doc/todo/two-way_convert_of_wikis.mdwn new file mode 100644 index 000000000..61f02a30b --- /dev/null +++ b/doc/todo/two-way_convert_of_wikis.mdwn @@ -0,0 +1,15 @@ + +[[!tag wishlist]] + +Ok, the vision is this: Some of you will know git-svn. I want something like +git-svn, but for wikis. I want to be able to do the following: + +1. Convert a moinmoin (or whatever) wiki to a local ikiwiki on my laptop. +2. Edit my local copy (offline). +3. Preview the changes with my local ikiwiki installation + browser. +4. Push the changes back to the moinmoin (or whatever) wiki. + +I know, I know, ikiwiki wasn't designed for that, but it would be really cool +and useful, and people ask for that kind of thing too.
+ +--[[David_Riebenbauer]] diff --git a/doc/todo/user-defined_templates_outside_the_wiki.mdwn b/doc/todo/user-defined_templates_outside_the_wiki.mdwn new file mode 100644 index 000000000..1d72aa6a7 --- /dev/null +++ b/doc/todo/user-defined_templates_outside_the_wiki.mdwn @@ -0,0 +1,10 @@ +[[!tag wishlist]] + +The [[plugins/contrib/ftemplate]] plugin looks for templates inside the wiki +source, but also looks in the system templates directory (the one with +`page.tmpl`). This means the wiki admin can provide templates that can be +invoked via `\[[!template]]`, but don't have to "work" as wiki pages in their +own right. I think the normal [[plugins/template]] plugin could benefit from +this functionality. + +[[done]] --[[Joey]] diff --git a/doc/todo/wrapperuser.mdwn b/doc/todo/wrapperuser.mdwn new file mode 100644 index 000000000..4c42b046f --- /dev/null +++ b/doc/todo/wrapperuser.mdwn @@ -0,0 +1,7 @@ +ikiwiki's .setup file can specify wrappergroup, and ikiwiki will set the group +of the wrappers accordingly. Having seen people encounter difficulty before +when trying to do the same thing with users (for instance, making all wrappers +6755 ikiwiki:ikiwiki), I think it would help to have "wrapperuser". This could +only actually take effect when building the wrappers as root (not really the best +plan), but ikiwiki could at least warn if wrapperuser does not match the user +the wrapper will end up with.
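A minimal sketch of how such a warning might look, assuming a hypothetical `wrapperuser` setup key; this is illustrative only, not ikiwiki's actual wrapper-generation code:

<pre>
#!/usr/bin/perl
# Sketch only: $config{wrapperuser} is the *proposed* setup key described
# above, not an existing ikiwiki option, and this is not IkiWiki/Wrapper.pm.
use warnings;
use strict;

sub check_wrapperuser {
	my %config=@_;
	return unless defined $config{wrapperuser};

	my $wanted_uid=getpwnam($config{wrapperuser});
	if (! defined $wanted_uid) {
		warn "wrapperuser '$config{wrapperuser}' does not exist\n";
		return;
	}

	# chown to another user only works for root, so unless the build runs
	# as root (or already as that user), the wrapper will end up owned by
	# whoever ran ikiwiki.
	if ($> != 0 && $> != $wanted_uid) {
		warn "not building as root: wrapper will be owned by uid $>, ".
			"not by wrapperuser '$config{wrapperuser}'\n";
	}
}

check_wrapperuser(wrapperuser => "ikiwiki");
</pre>

Warning rather than failing would keep non-root builds working, which matches the behaviour suggested above.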