Diffstat (limited to 'doc/tips')
-rw-r--r--  doc/tips/DreamHost/discussion.mdwn  13
-rw-r--r--  doc/tips/Importing_posts_from_Wordpress.mdwn  91
-rw-r--r--  doc/tips/add_chatterbox_to_blog.mdwn  3
-rw-r--r--  doc/tips/comments_feed.mdwn  11
-rw-r--r--  doc/tips/convert_mediawiki_to_ikiwiki.mdwn  163
-rw-r--r--  doc/tips/convert_mediawiki_to_ikiwiki/discussion.mdwn  63
-rw-r--r--  doc/tips/dot_cgi.mdwn  4
-rw-r--r--  doc/tips/dot_cgi/discussion.mdwn  10
-rw-r--r--  doc/tips/follow_wikilinks_from_inside_vim.mdwn  47
-rw-r--r--  doc/tips/github.mdwn  2
-rw-r--r--  doc/tips/howto_limit_to_admin_users.mdwn  9
-rw-r--r--  doc/tips/htaccess_file.mdwn  27
-rw-r--r--  doc/tips/html5.mdwn  26
-rw-r--r--  doc/tips/ikiwiki_as_a_requirements_management_tool.mdwn  95
-rw-r--r--  doc/tips/ikiwiki_as_a_requirements_management_tool/discussion.mdwn  18
-rw-r--r--  doc/tips/importing_posts_from_typo.mdwn  1
-rw-r--r--  doc/tips/inside_dot_ikiwiki.mdwn  7
-rw-r--r--  doc/tips/inside_dot_ikiwiki/discussion.mdwn  7
-rw-r--r--  doc/tips/laptop_wiki_with_git.mdwn  20
-rw-r--r--  doc/tips/laptop_wiki_with_git_extended.mdwn  4
-rw-r--r--  doc/tips/mathopd_permissions.mdwn  15
-rw-r--r--  doc/tips/nearlyfreespeech/discussion.mdwn  11
-rw-r--r--  doc/tips/optimising_ikiwiki.mdwn  188
-rw-r--r--  doc/tips/parentlinks_style.mdwn  21
-rw-r--r--  doc/tips/spam_and_softwaresites.mdwn  87
-rw-r--r--  doc/tips/switching_to_usedirs.mdwn  4
-rw-r--r--  doc/tips/untrusted_git_push.mdwn  2
-rw-r--r--  doc/tips/upgrade_to_3.0.mdwn  2
-rw-r--r--  doc/tips/vim_syntax_highlighting.mdwn  17
-rw-r--r--  doc/tips/yaml_setup_files.mdwn  12
30 files changed, 949 insertions, 31 deletions
diff --git a/doc/tips/DreamHost/discussion.mdwn b/doc/tips/DreamHost/discussion.mdwn
index 74f48938e..258d385ae 100644
--- a/doc/tips/DreamHost/discussion.mdwn
+++ b/doc/tips/DreamHost/discussion.mdwn
@@ -3,3 +3,16 @@ I managed to install ikiwiki on eggplant farms, with most basic features except
I think ikiwiki is more suitable for VPS/dedicated server. Shared hosting doesn't fit.
I just (2009/04/27) installed ikiwiki on DreamHost and the CPAN instructions here are unnecessarily complicated. I used "cpan" instead of "perl -MCPAN -e shell" and had no trouble with that portion of the install. --[[schmonz]]
+
+After tiring of managing things by hand, I've switched to using
+pkgsrc as an unprivileged user. This uses a bit more disk for my
+own copies of perl, python, etc., but in exchange I can `cd
+.../pkgsrc/www/ikiwiki && make install` and everything just works.
+Plus I get all the benefits of a package system, like easy uninstalling
+and being notified of outdated or insecure software.
+
+The only catch: sometimes the package dependency tree gets too deep
+for DreamHost's user process limit, resulting in build death. I
+work around this by resuming the build partway down the tree, then
+trying again from whatever I was actually trying to install.
+--[[schmonz]]
diff --git a/doc/tips/Importing_posts_from_Wordpress.mdwn b/doc/tips/Importing_posts_from_Wordpress.mdwn
index 59330caa4..1ea82b862 100644
--- a/doc/tips/Importing_posts_from_Wordpress.mdwn
+++ b/doc/tips/Importing_posts_from_Wordpress.mdwn
@@ -1,9 +1,13 @@
Use case: You want to move away from Wordpress to Ikiwiki as your blogging/website platform, but you want to retain your old posts.
-[This](http://git.chris-lamb.co.uk/?p=ikiwiki-wordpress-import.git) is a simple tool that generates [git-fast-import](http://www.kernel.org/pub/software/scm/git/docs/git-fast-import.html)-compatible data from a WordPress export XML file. It retains creation time of each post, so you can use Ikiwiki's <tt>--getctime</tt> to get the preserve creation times on checkout.
+[This](http://git.chris-lamb.co.uk/?p=ikiwiki-wordpress-import.git) is a simple tool that generates [git-fast-import](http://www.kernel.org/pub/software/scm/git/docs/git-fast-import.html)-compatible data from a WordPress export XML file.
WordPress categories are mapped onto Ikiwiki tags. The ability to import comments is planned.
+The script uses the [BeautifulSoup][] module.
+
+[BeautifulSoup]: http://www.crummy.com/software/BeautifulSoup/
+
-----
I include a modified version of this script. This version includes the ability to write \[[!tag foo]] directives, which the original intended, but didn't actually do.
@@ -11,3 +15,88 @@ I include a modified version of this script. This version includes the ability t
-- [[users/simonraven]]
[[ikiwiki-wordpress-import]]
+
+-----
+
+Perhaps slightly insane, but here's an XSLT style sheet that handles my pages. It's basic, but sufficient to get started.
+Note that I had to break up the ikiwiki meta strings to post this.
+
+-- JasonRiedy
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <xsl:stylesheet version="2.0"
+ xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
+ xmlns:content="http://purl.org/rss/1.0/modules/content/"
+ xmlns:wp="http://wordpress.org/export/1.0/">
+
+ <xsl:output method="text"/>
+ <xsl:output method="text" name="txt"/>
+
+ <xsl:variable name='newline'><xsl:text>
+ </xsl:text></xsl:variable>
+
+ <xsl:template match="channel">
+ <xsl:apply-templates select="item[wp:post_type = 'post']"/>
+ </xsl:template>
+
+ <xsl:template match="item">
+ <xsl:variable name="idnum" select="format-number(wp:post_id,'0000')" />
+ <xsl:variable name="basename"
+ select="concat('wp-posts/post-',$idnum)" />
+ <xsl:variable name="filename"
+ select="concat($basename, '.html')" />
+ <xsl:text>Creating </xsl:text>
+ <xsl:value-of select="concat($filename, $newline)" />
+ <xsl:result-document href="{$filename}" format="txt">
+ <xsl:text>[[</xsl:text><xsl:text>meta title="</xsl:text>
+ <xsl:value-of select="replace(title, '&quot;', '&amp;ldquo;')"/>
+ <xsl:text>"]]</xsl:text><xsl:value-of select="$newline"/>
+ <xsl:text>[[</xsl:text><xsl:text>meta date="</xsl:text>
+ <xsl:value-of select="pubDate"/>
+ <xsl:text>"]]</xsl:text><xsl:value-of select="$newline"/>
+ <xsl:text>[[</xsl:text><xsl:text>meta updated="</xsl:text>
+ <xsl:value-of select="pubDate"/>
+ <xsl:text>"]]</xsl:text> <xsl:value-of select="$newline"/>
+ <xsl:value-of select="$newline"/>
+ <xsl:value-of select="content:encoded"/>
+ <xsl:text>
+
+ </xsl:text>
+ <xsl:apply-templates select="category[@domain='tag' and not(@nicename)]">
+       <xsl:sort select="."/>
+ </xsl:apply-templates>
+ </xsl:result-document>
+ <xsl:apply-templates select="wp:comment">
+      <xsl:sort select="wp:comment_date"/>
+      <xsl:with-param name="basename" select="$basename"/>
+ </xsl:apply-templates>
+ </xsl:template>
+
+ <xsl:template match="wp:comment">
+ <xsl:param name="basename"/>
+ <xsl:variable name="cnum" select="format-number(wp:comment_id, '000')" />
+ <xsl:variable name="filename" select="concat($basename, '/comment_', $cnum, '._comment')"/>
+ <xsl:variable name="nickname" select="concat(' nickname=&quot;', wp:comment_author, '&quot;')" />
+ <xsl:variable name="username" select="concat(' username=&quot;', wp:comment_author_url, '&quot;')" />
+ <xsl:variable name="ip" select="concat(' ip=&quot;', wp:comment_author_IP, '&quot;')" />
+ <xsl:variable name="date" select="concat(' date=&quot;', wp:comment_date_gmt, '&quot;')" />
+ <xsl:result-document href="{$filename}" format="txt">
+ <xsl:text>[[</xsl:text><xsl:text>comment format=html</xsl:text><xsl:value-of select="$newline"/>
+ <xsl:value-of select="$nickname"/>
+ <xsl:value-of select="$username"/>
+ <xsl:value-of select="$ip"/>
+ <xsl:value-of select="$date"/>
+ <xsl:text>subject=""</xsl:text><xsl:value-of select="$newline"/>
+ <xsl:text>content="""</xsl:text><xsl:value-of select="$newline"/>
+ <xsl:value-of select="wp:comment_content"/>
+ <xsl:value-of select="$newline"/>
+ <xsl:text>"""]]</xsl:text><xsl:value-of select="$newline"/>
+ </xsl:result-document>
+ </xsl:template>
+
+ <xsl:template match="category">
+ <xsl:text>[</xsl:text><xsl:text>[</xsl:text><xsl:text>!tag "</xsl:text><xsl:value-of select="."/><xsl:text>"]]</xsl:text>
+ <xsl:value-of select="$newline"/>
+ </xsl:template>
+
+ </xsl:stylesheet>
diff --git a/doc/tips/add_chatterbox_to_blog.mdwn b/doc/tips/add_chatterbox_to_blog.mdwn
index aa35b9331..e07e36b07 100644
--- a/doc/tips/add_chatterbox_to_blog.mdwn
+++ b/doc/tips/add_chatterbox_to_blog.mdwn
@@ -18,4 +18,7 @@ from there, like I have on [my blog](http://kitenet.net/~joey/blog/)
show=5 feeds=no]]
"""]]
+* To filter out `@-replies`, append "and !*@*" to the [[ikiwiki/PageSpec]].
+ The same technique can be used for other filtering.
+
Note: Works best with ikiwiki 3.10 or better.
diff --git a/doc/tips/comments_feed.mdwn b/doc/tips/comments_feed.mdwn
index 6f8137256..3d6a8c449 100644
--- a/doc/tips/comments_feed.mdwn
+++ b/doc/tips/comments_feed.mdwn
@@ -3,8 +3,15 @@ blog can have comments added to them. Pages with comments even have special
feeds that can be used to subscribe to those comments. But you'd like to
add a feed that contains all the comments posted to any page. Here's how:
- \[[!inline pages="internal(*/comment_*)" template=comment]]
+ \[[!inline pages="comment(*)" template=comment]]
The special [[ikiwiki/PageSpec]] matches all comments. The
-[[template|wikitemplates]] causes the comments to be displayed formatted
+[[template|templates]] causes the comments to be displayed formatted
nicely.
+
+---
+
+It's also possible to make a feed of comments that are held pending
+moderation.
+
+ \[[!inline pages="comment_pending(*)" template=comment]]
diff --git a/doc/tips/convert_mediawiki_to_ikiwiki.mdwn b/doc/tips/convert_mediawiki_to_ikiwiki.mdwn
index f03703b46..38de01109 100644
--- a/doc/tips/convert_mediawiki_to_ikiwiki.mdwn
+++ b/doc/tips/convert_mediawiki_to_ikiwiki.mdwn
@@ -1,4 +1,159 @@
-[[sabr]] explains how to [import MediaWiki content into
-git](http://u32.net/Mediawiki_Conversion/index.html?updated), including
-full edit hostory. The [[plugins/contrib/mediawiki]] plugin can then be
-used by ikiwiki to build the wiki.
+[[!toc levels=2]]
+
+Mediawiki is a dynamically-generated wiki which stores its data in a
+relational database. Pages are marked up using a proprietary markup. It is
+possible to import the contents of a Mediawiki site into an ikiwiki,
+converting some of the Mediawiki conventions into Ikiwiki ones.
+
+The following instructions describe ways of obtaining the current version of
+the wiki. We do not yet cover importing the history of edits.
+
+Another set of instructions and conversion tools (which imports the full history)
+can be found at <http://github.com/mithro/media2iki>
+
+## Step 1: Getting a list of pages
+
+The first bit of information you require is a list of pages in the Mediawiki.
+There are several different ways of obtaining these.
+
+### Parsing the output of `Special:Allpages`
+
+Mediawikis have a special page called `Special:Allpages` which lists all the
+pages for a given namespace on the wiki.
+
+If you fetch the output of this page to a local file with something like
+
+ wget -q -O tmpfile 'http://your-mediawiki/wiki/Special:Allpages'
+
+you can extract the list of page names using the following Python script. Note
+that this script is sensitive to the specific markup used on the page, so if
+you have tweaked your mediawiki theme a lot from the original, you will need
+to adjust this script too:
+
+ import sys
+ from xml.dom.minidom import parse, parseString
+
+ dom = parse(sys.argv[1])
+ tables = dom.getElementsByTagName("table")
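+    # assume the page list is in the last table on the page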
+ pagetable = tables[-1]
+ anchors = pagetable.getElementsByTagName("a")
+ for a in anchors:
+ print a.firstChild.toxml().\
+ replace('&amp;','&').\
+ replace('&lt;','<').\
+ replace('&gt;','>')
+
+Also, if you have pages with titles that need to be encoded to be represented
+in HTML, you may need to add further processing to the last line.
+
+Note that by default, `Special:Allpages` will only list pages in the main
+namespace. You need to add a `&namespace=XX` argument to get pages in a
+different namespace. (See below for the default list of namespaces)
+
+Note that the page names obtained this way will not include any namespace
+specific prefix: e.g. `Category:` will be stripped off.
+
+### Querying the database
+
+If you have access to the relational database in which your mediawiki data is
+stored, it is possible to derive a list of page names from this. With mediawiki's
+MySQL backend, the page table is, appropriately enough, called `page`:
+
+ SELECT page_namespace, page_title FROM page;
+
+As with the previous method, you will need to do some filtering based on the
+namespace, e.g. restrict the query to `page_namespace = 0` for the main namespace.
+
+### namespaces
+
+The list of default namespaces in mediawiki is available from <http://www.mediawiki.org/wiki/Manual:Namespace#Built-in_namespaces>. Reproduced here are the ones you are most likely to encounter if you are running a small mediawiki install for your own purposes:
+
+[[!table data="""
+Index | Name | Example
+0 | Main | Foo
+1 | Talk | Talk:Foo
+2 | User | User:Jon
+3 | User talk | User_talk:Jon
+6 | File | File:Barack_Obama_signature.svg
+10 | Template | Template:Prettytable
+14 | Category | Category:Pages_needing_review
+"""]]
+
+## Step 2: fetching the page data
+
+Once you have a list of page names, you can fetch the data for each page.
+
+### Method 1: via HTTP and `action=raw`
+
+You need to create two derived strings from the page titles: the
+destination path for the page and the source URL. Assuming `$pagename`
+contains a pagename obtained above, and `$wiki` contains the URL to your
+mediawiki's `index.php` file:
+
+ src=`echo "$pagename" | tr ' ' _ | sed 's,&,&amp;,g'`
+    dest=`echo "$pagename" | tr ' ' _ | sed 's,&,__38__,g'`
+
+ mkdir -p `dirname "$dest"`
+ wget -q "$wiki?title=$src&action=raw" -O "$dest"
+
+You may need to add more conversions here depending on the precise page titles
+used in your wiki.
+
+If you are trying to fetch pages from a different namespace to the default,
+you will need to prefix the page title with the relevant prefix, e.g.
+`Category:` for category pages. You probably don't want to prefix it to the
+output page, but you may want to vary the destination path (i.e. insert an
+extra directory component corresponding to your ikiwiki's `tagbase`).
+
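+Putting the above together, here is a rough Python 2 sketch of the whole
+fetch loop (Python 2 to match the other scripts on this page; it assumes a
+file with one page title per line, and you must adjust the `wiki` URL and
+the escaping to suit your page names):
+
+    import os, sys, urllib
+
+    wiki = 'http://your-mediawiki/index.php'   # adjust to your wiki
+    for line in open(sys.argv[1]):             # e.g. titles.txt
+        title = line.strip().replace(' ', '_')
+        src = urllib.quote(title, safe='')     # URL-escape &, ? and friends
+        dest = title.replace('&', '__38__')    # ikiwiki-safe destination
+        destdir = os.path.dirname(dest)
+        if destdir and not os.path.isdir(destdir):
+            os.makedirs(destdir)
+        urllib.urlretrieve('%s?title=%s&action=raw' % (wiki, src), dest)
+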
+### Method 2: via HTTP and `Special:Export`
+
+Mediawiki also has a special page `Special:Export` which can be used to obtain
+the source of the page and other metadata such as the last contributor, or the
+full history, etc.
+
+You need to send a `POST` request to the `Special:Export` page. See the source
+of the page fetched via `GET` to determine the correct arguments.
+
+You will then need to write an XML parser to extract the data you need from
+the result.
+
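+As a starting point, here is a rough Python 2 sketch; the `pages`,
+`curonly` and `action` field names are assumptions taken from the export
+form, so verify them against the form source as described above:
+
+    import urllib, urllib2
+    from xml.dom.minidom import parseString
+
+    wiki = 'http://your-mediawiki/index.php'   # adjust to your wiki
+    post = urllib.urlencode({'pages': 'Main Page', 'curonly': 1,
+                             'action': 'submit'})
+    xml = urllib2.urlopen(wiki + '?title=Special:Export', post).read()
+    dom = parseString(xml)
+    for page in dom.getElementsByTagName('page'):
+        title = page.getElementsByTagName('title')[0].firstChild.data
+        text = page.getElementsByTagName('text')[0].firstChild.data
+        # title and text now hold the page name and its raw mediawiki markup
+        print title.encode('utf-8')
+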
+### Method 3: via the database
+
+It is possible to extract the page data from the database with some
+well-crafted queries.
+
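+For example, assuming mediawiki's stock MySQL schema (`page`, `revision`
+and `text` tables, no table prefix) and made-up connection details, a
+Python sketch like this dumps the current text of every page in the main
+namespace:
+
+    import MySQLdb   # Debian package: python-mysqldb
+
+    db = MySQLdb.connect(user='wiki', passwd='secret', db='wikidb')
+    c = db.cursor()
+    # page_latest points at the current revision; rev_text_id at its text
+    c.execute("""SELECT page_title, old_text FROM page
+                 JOIN revision ON page_latest = rev_id
+                 JOIN text ON rev_text_id = old_id
+                 WHERE page_namespace = 0""")
+    for title, text in c.fetchall():
+        # page_title already uses underscores; titles containing '/'
+        # need their directories created first
+        open(title, 'w').write(text)
+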
+## Step 3: format conversion
+
+The next step is to convert Mediawiki conventions into Ikiwiki ones.
+
+### categories
+
+Mediawiki uses a special page name prefix to define "Categories", which
+otherwise behave like ikiwiki tags. You can convert every Mediawiki category
+into an ikiwiki tag name using a script such as
+
+ import sys, re
+ pattern = r'\[\[Category:([^\]]+)\]\]'
+
+ def manglecat(mo):
+ return '\[[!tag %s]]' % mo.group(1).strip().replace(' ','_')
+
+ for line in sys.stdin.readlines():
+        res = re.search(pattern, line)
+ if res:
+ sys.stdout.write(re.sub(pattern, manglecat, line))
+ else: sys.stdout.write(line)
+
+## Step 4: Mediawiki plugin
+
+The [[plugins/contrib/mediawiki]] plugin can be used by ikiwiki to interpret
+most of the Mediawiki syntax.
+
+## External links
+
+[[sabr]] used to explain how to [import MediaWiki content into
+git](http://u32.net/Mediawiki_Conversion/index.html?updated), including full
+edit history, but as of 2009/10/16 that site is not available. A copy of the
+information found on this website is stored at <http://github.com/mithro/media2iki>
+
+
diff --git a/doc/tips/convert_mediawiki_to_ikiwiki/discussion.mdwn b/doc/tips/convert_mediawiki_to_ikiwiki/discussion.mdwn
index b3fe9f86c..d67a9131b 100644
--- a/doc/tips/convert_mediawiki_to_ikiwiki/discussion.mdwn
+++ b/doc/tips/convert_mediawiki_to_ikiwiki/discussion.mdwn
@@ -1,3 +1,11 @@
+20100428 - I just wrote a simple ruby script which will connect to a mysql server and then recreate the pages and their revision histories with Grit. It also does one simple conversion of equals-sign headings to pound-sign headings. Enjoy!
+
+<http://github.com/docunext/mediawiki2gitikiwiki>
+
+-- [[users/Albert]]
+
+----
+
The u32 page is excellent, but I wonder if documenting the procedure here
would be worthwhile. Who knows, the remote site might disappear. But also
there are some variations on the approach that might be useful:
@@ -13,9 +21,31 @@ Also, some detail on converting mediawiki transclusion to ikiwiki inlines...
-- [[users/Jon]]
+----
+
> "Who knows, the remote site might disappear.". Right now, it appears to
> have done just that. -- [[users/Jon]]
+I have managed to recover most of the site using the Internet Archive. What
+I was unable to retrieve I have rewritten. You can find a copy of the code
+at <http://github.com/mithro/media2iki>
+
+> This is excellent news. However, I'm still keen on there being a
+> comprehensive and up-to-date set of instructions on *this* site. I wouldn't
+> suggest importing that material into ikiwiki like-for-like (not least for
+> [[licensing|freesoftware]] reasons), but it's excellent to have it available
+> for reference, especially since it (currently) is the only set of
+> instructions that gives you the whole history.
+>
+> The `mediawiki.pm` that was at u32.net is licensed GPL-2. I'd like to see it
+> cleaned up and added to IkiWiki proper (although I haven't requested this
+> yet, I suspect the way it (ab)uses linkify would disqualify it at present).
+>
+> I've imported Scott's initial `mediawiki.pm` into a repository at
+> <http://github.com/jmtd/mediawiki.pm> as a start.
+> -- [[Jon]]
+
+----
The iki-fast-load ruby script from the u32 page is given below:
@@ -286,7 +316,7 @@ Mediawiki.pm - A plugin which supports mediawiki format.
}
- # Called to handle bookmarks like [[#heading]] or <span class="createlink"><a href="http://u32.net/cgi-bin/ikiwiki.cgi?page=%20text%20&amp;from=Mediawiki_Plugin%2Fmediawiki&amp;do=create" rel="nofollow">?</a>#a</span>
+ # Called to handle bookmarks like \[[#heading]] or <span class="createlink"><a href="http://u32.net/cgi-bin/ikiwiki.cgi?page=%20text%20&amp;from=Mediawiki_Plugin%2Fmediawiki&amp;do=create" rel="nofollow">?</a>#a</span>
sub generate_fragment_link
{
my $url = shift;
@@ -316,10 +346,10 @@ Mediawiki.pm - A plugin which supports mediawiki format.
# Ikiwiki's link link plugin wrecks this line when displaying on the site.
# Until the code highlighter plugin can turn off link finding,
- # always escape double brackets in double quotes: [[
+ # always escape double brackets in double quotes: \[[
if($inlink eq '..') {
- # Mediawiki doesn't touch links like [[..#hi|ho]].
- return "[[" . $inlink . ($anchor?"#$anchor":"") .
+ # Mediawiki doesn't touch links like \[[..#hi|ho]].
+ return "\[[" . $inlink . ($anchor?"#$anchor":"") .
($title?"|$title":"") . "]]" . $trailing;
}
@@ -380,7 +410,7 @@ Mediawiki.pm - A plugin which supports mediawiki format.
add_depends($page, $redir_page);
my $link=bestlink($page, underscorize(translate_path($page,$redir_page)));
if (! length $link) {
- return "<b>Redirect Error:</b> <nowiki>[[$redir_page]] not found.</nowiki>";
+ return "<b>Redirect Error:</b> <nowiki>\[[$redir_page]] not found.</nowiki>";
}
$value=urlto($link, $page);
@@ -393,7 +423,7 @@ Mediawiki.pm - A plugin which supports mediawiki format.
my %seen;
while (exists $pagestate{$at}{mediawiki}{redir}) {
if ($seen{$at}) {
- return "<b>Redirect Error:</b> cycle found on <nowiki>[[$at]]</nowiki>";
+ return "<b>Redirect Error:</b> cycle found on <nowiki>\[[$at]]</nowiki>";
}
$seen{$at}=1;
$at=$pagestate{$at}{mediawiki}{redir};
@@ -612,3 +642,24 @@ Mediawiki.pm - A plugin which supports mediawiki format.
}
1
+
+----
+
+Hello. Got ikiwiki running and I'm planning to convert my personal
+Mediawiki wiki to ikiwiki so I can take offline copies around. If anyone
+has an old copy of the instructions, or any advice on where to start I'd be
+glad to hear it. Otherwise I'm just going to chronicle my journey on the
+page.--[[users/Chadius]]
+
+> Today I saw that someone is working to import wikipedia into git.
+> <http://www.gossamer-threads.com/lists/wiki/foundation/181163>
+> Since wikipedia uses mediawiki, perhaps his importer will work
+> on mediawiki in general. It seems to produce output that could be
+> used by the [[plugins/contrib/mediawiki]] plugin, if the filenames
+> were fixed to use the right extension. --[[Joey]]
+
>> Here's another one I found while browsing around, starting from the link you gave, Joey<br />
+>> <http://github.com/scy/levitation><br />
+>> As I don't run mediawiki anymore, but I still have my xz/gzip-compressed XML dumps,
+>> it's certainly easier for me to do it this way; also a file or a set of files is easier to lug
+>> around on some medium than a full mysqld or postgres master and relevant databases.
diff --git a/doc/tips/dot_cgi.mdwn b/doc/tips/dot_cgi.mdwn
index 4532c84cd..da55c1f1c 100644
--- a/doc/tips/dot_cgi.mdwn
+++ b/doc/tips/dot_cgi.mdwn
@@ -56,6 +56,10 @@ rule that allow `ikiwiki.cgi` to be executed.
server (offline). I am not sure of how secure this approach is.
If you have any thought about it, feel free to let me know.
+## nginx
+
+* To run CGI under nginx, just use a FastCGI wrapper like [this one](http://technotes.1000lines.net/?p=23). The wrapper must be started somehow just like any other FastCGI program. I use launchd on OSX.
+
## boa
Edit /etc/boa/boa.conf and make sure the following line is not commented:
diff --git a/doc/tips/dot_cgi/discussion.mdwn b/doc/tips/dot_cgi/discussion.mdwn
index 124b9edff..a8854565c 100644
--- a/doc/tips/dot_cgi/discussion.mdwn
+++ b/doc/tips/dot_cgi/discussion.mdwn
@@ -34,3 +34,13 @@ there), and so I need to choose the more secure solution. --Ivan Z.
>> The easiest way though is probably
>> to add your ssh key to the special user's `.ssh/authorized_keys`
>> and push that way. --[[Joey]]
+
+## apache2 - run from userdir
+Followed the instructions but couldn't get it to run from a user dir (running Ubuntu Jaunty).
+Finally got it working once I symlinked as follows (and restarted apache):
+\# ln -s ../mods-available/userdir.load .
+\# ln -s ../mods-available/userdir.conf .
+\# pwd
+/etc/apache2/mods-enabled
+
+
diff --git a/doc/tips/follow_wikilinks_from_inside_vim.mdwn b/doc/tips/follow_wikilinks_from_inside_vim.mdwn
new file mode 100644
index 000000000..015a4ecee
--- /dev/null
+++ b/doc/tips/follow_wikilinks_from_inside_vim.mdwn
@@ -0,0 +1,47 @@
+The [ikiwiki-nav](http://www.vim.org/scripts/script.php?script_id=2968) plugin
+for vim eases the editing of IkiWiki wikis, by letting you "follow" the
+wikilinks on your file (page), by loading the file associated with a given
+wikilink in vim. The plugin takes care of following the ikiwiki linking rules
+to figure out which file a wikilink points to.
+
+The plugin also includes commands (and mappings) to make the cursor jump to the
+previous/next wikilink in the current file.
+
+## Jumping to pages
+
+To open the file associated to a wikilink, place the cursor over its text, and
+hit Enter (`<CR>`). This functionality is also available through the
+`:IkiJumpToPage` command.
+
+## Moving to next/previous wikilink in current file
+
+`Ctrl-j` will move the cursor to the next wikilink. `Ctrl-k` will move it to the
+previous wikilink. This functionality is also available through the
+`:IkiNextWikiLink` command. This command takes one argument, the direction to
+move in:
+
+ * `:IkiNextWikiLink 0` will look forward for the wikilink
+ * `:IkiNextWikiLink 1` will look backwards for the wikilink
+
+## Installation
+
+Copy the `ikiwiki_nav.vim` file to your `.vim/ftplugin` directory.
+
+## Current issues:
+
+ * The plugin only works for wikilinks contained in a single text line;
+ multiline wikilinks are not (yet) seen as such
+
+## Notes
+
+The official releases of the plugin are on the
+[vim.org script page](http://www.vim.org/scripts/script.php?script_id=2968).
+
+The latest version of this script can be found at the following location:
+
+<http://git.devnull.li/cgi-bin/gitweb.cgi?p=ikiwiki-nav.git;a=blob;f=ftplugin/ikiwiki_nav.vim;hb=HEAD>
+
+Any feedback you can provide is appreciated; the contact details can be found
+inside the plugin.
+
+[[!tag vim]]
diff --git a/doc/tips/github.mdwn b/doc/tips/github.mdwn
index c3fdab734..9bdf15751 100644
--- a/doc/tips/github.mdwn
+++ b/doc/tips/github.mdwn
@@ -24,7 +24,7 @@ for more space, or you can migrate your site elsewhere.
## Local Setup
* On your laptop, create two empty git repositories to correspond to the github repositories: <br />
- `YOU = your github username here` <br />
+ `YOU=your github username here` <br />
`mkdir ~/$YOU.github.com` <br />
`cd ~/$YOU.github.com` <br />
`git init` <br />
diff --git a/doc/tips/howto_limit_to_admin_users.mdwn b/doc/tips/howto_limit_to_admin_users.mdwn
new file mode 100644
index 000000000..4d579327a
--- /dev/null
+++ b/doc/tips/howto_limit_to_admin_users.mdwn
@@ -0,0 +1,9 @@
+Enable [[plugins/lockedit]] in your setup file.
+
+For example:
+
+ add_plugins => [qw{goodstuff table rawhtml template embed typography sidebar img remove lockedit}],
+
+And to only allow admin users to edit pages, simply specify a pagespec matching everything in the .setup:
+
+ locked_pages => '*',
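+
+To lock only part of the site, a more specific [[ikiwiki/PageSpec]] works too, e.g. `locked_pages => '* and !sandbox'`.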
diff --git a/doc/tips/htaccess_file.mdwn b/doc/tips/htaccess_file.mdwn
new file mode 100644
index 000000000..6964cf24e
--- /dev/null
+++ b/doc/tips/htaccess_file.mdwn
@@ -0,0 +1,27 @@
+If you try to include a `.htaccess` file in your wiki's source, in order to
+configure the web server, you'll find that ikiwiki excludes it from
+processing. In fact, ikiwiki excludes any file starting with a dot, as well
+as a lot of other files, for good security reasons.
+
+You can tell ikiwiki not to exclude the .htaccess file by adding this to
+your setup file:
+
+ include => '^\.htaccess$',
+
+Caution! Before you do that, please think for a minute about who can edit
+your wiki. Are attachment uploads enabled? Can users commit changes
+directly to the version control system? Do you trust everyone who can
+make a change to not do Bad Things with the htaccess file? Do you trust
+everyone who *might* be able to make a change in the future? Note that a
+determined attacker who can write to the htaccess file can probably get a
+shell on your web server.
+
+If any of these questions have given you pause, I suggest you find a
+different way to configure the web server. One way is to not put the
+`.htaccess` file under ikiwiki's control, and just manually install it
+in the destdir. --[[Joey]]
+
+[Apache's documentation](http://httpd.apache.org/docs/2.2/howto/htaccess.html)
+says:
+> In general, you should never use .htaccess files unless you don't have
+> access to the main server configuration file.
diff --git a/doc/tips/html5.mdwn b/doc/tips/html5.mdwn
new file mode 100644
index 000000000..945efc4bc
--- /dev/null
+++ b/doc/tips/html5.mdwn
@@ -0,0 +1,26 @@
+First, if you just want to embed videos using the html5 `<video>` tag,
+you can do that without switching anything else to html5.
+However, if you want to fully enter the brave new world of html5, read on..
+
+Currently, ikiwiki does not use html5 by default. There is a `html5`
+setting that can be turned on, in your setup file. Rebuild with it set, and
+lots of fancy new semantic tags will be used all over the place.
+
+You may need to adapt your CSS for html5. While all the class and id names
+are the same, some of the `div` elements are changed to other things.
+Ikiwiki's default CSS will work in both modes.
+
+The html5 support is still experimental, and may break in some browsers.
+No care is taken to add backwards compatibility hacks for browsers that
+are not html5 aware (like MSIE). If you want to include the javascript with
+those hacks, you can edit `page.tmpl` to do so.
+[Dive Into HTML5](http://diveintohtml5.org/) is a good reference for
+current compatibility issues and workarounds with html5.
+
+---
+
+Known ikiwiki-specific issues:
+
+* [[plugins/htmltidy]] uses `tidy`, which is not html5 aware, so if you
+ have that enabled, it will mangle it back to html4.
+* [[plugins/toc]] does not understand the html5 outline algorithm.
diff --git a/doc/tips/ikiwiki_as_a_requirements_management_tool.mdwn b/doc/tips/ikiwiki_as_a_requirements_management_tool.mdwn
new file mode 100644
index 000000000..6bef2619e
--- /dev/null
+++ b/doc/tips/ikiwiki_as_a_requirements_management_tool.mdwn
@@ -0,0 +1,95 @@
+[[!template id=note text="**Table of contents** [[!toc ]]"]]
+
+Introduction
+------------
+At work, textual requirements and traceability are everyday terms, often used as contracts with clients or among stakeholders, but at the moment the only way we specify requirements is via a word processor, and traceability is managed manually (ouch!) unless we use a commercial UML (Unified Modeling Language) tool that handles office files and also allows traceability from design, code and test artifacts. But the functionality of that tool is less than basic for requirements.
+
+We are considering the use of a specific requirements management tool, but the problem, and something that gets me really frustrated, is the extremely high price of the licenses of the "de facto" commercial tools we should use. One floating license for both tools (requirements management and modeling) can go beyond $20,000. Of course we can purchase cheaper ones, but I'm tired of this licensing nightmare of worrying about how many licenses are being used, praying not to exceed the limit or restarting a dead license server. We pay companies to not trust us. Taking a look at the FLOSS world doesn't seem to offer any reasonable alternative.
+
+These are the raw high level features of the tool I'd like to use:
+
+ * Requirements editing
+ * Requirements attributes editing
+ * Traceability: editing, coverage analysis and navigation
+ * External traceability: from requirements in one document/module to requirements in another (e.g. software requirements tracing to system requirements). Note: a set of requirements will be called a "module" hereafter in this page.
+ * Requirements identifier management
+ * Requirements history and diff/blame
+ * Team work
+ * Easy integration with other software lifecycle tools: modeling (e.g. BOUML), project management (e.g. Trac)
+ * Support for other formats such as HTML, office...
+ * Filtering and searching
+ * Export facilities to create standards-compliant documentation.
+
+The initial idea was to develop a simple web solution using XHTML files. These files would be created in a web browser with existing WYSIWIM editors, and all the stuff would be stored in Subversion. All requirements would be stored at the same level (no hierarchies among requirements of the same module) and atomically accessible via a simple web browser. No server side programming would be needed to read requirements. Also, special XHTML files (let's call them "views") would be necessary to group requirements hierarchically in a requirements document fashion, using xinclude.
+
+When I first played with ikiwiki I was so happy that many of the ideas I worked on were already in use in this marvelous piece of software, especially the decision to use well-known RCS software to manage history instead of reinventing the wheel, which also opens up one interesting feature: off-line editing. Another similarity was the absence of special processing for read-only navigation.
+
+So, let's now take all the features above and describe how to make them real using ikiwiki and some simple conventions. Some features would need new functionality and improvements; I'd really appreciate additional ideas on how to better get to the point.
+
+Requirements editing
+--------------------
+Suppose that all requirements reside under one folder. We will call it "reqs", and under "reqs" we add as many folders as requirements modules we want to use for a system called "foo" (e.g. "foo_sss" for system requirements, "foo_srs" for software requirements...). The index file for each document shall be a page summarizing the module: number of requirements, basic coverage information... Other similar pages under a "views" folder could be used in order to have different sets of requirements including additional stuff: introduction, document identification, etc... The rest of the files - the actual requirements - shall be markdown files. So editing a requirement would be as simple as adding a page to the wiki.
+
+To create the summary and views, the [[ikiwiki/directive/inline]] and [[ikiwiki/directive/pagecount]] directives alone could provide nice pages (e.g. `\[[!pagecount pages="reqs/foo_srs/*"]]` to count a module's requirements). The uncomfortable part is having to use many [[pagespecs|ikiwiki/PageSpec]] to create the whole views, but it should actually work. One possible workaround would be an external tool to handle this and create the directives automatically or graphically.
+
+Requirements attributes
+-----------------------
+There are lots of useful data to associate to a requirement. Eg:
+
+ * If it is traceable or not
+ * Its criticality level
+ * Its priority
+ * If it is functional or not
+
+How to implement this? Using [[ikiwiki/directive/meta]] could be a solution; I have not tried it yet, as I'd rather keep the requirements content alone. Storing this information in SVN is easy: the requirement itself is the content of the file, and the attributes are stored as key-value pairs in the file's properties (e.g. `svn propset priority high SRS_FOO_0001.mdwn`); although ikiwiki does not provide a way to use them, supporting that would take really little effort. AFAIK this feature is only available in SVN, though git has something similar (gitattributes), albeit path-based; but anyway, whichever RCS is used, a ".properties" file could always be created when a requirement file is created.
+
+Traceability: editing, coverage analysis and navigation
+--------------------------------------------------------
+This is the most important feature of a requirements engineering tool. How to do this with ikiwiki? There are some ways, from extremely simple ones to more sophisticated:
+
+ * One simple solution: links. Just link from one requirement to another one to create a directional, traceable connection
+ * A harder one: file attributes (see the section about requirement attributes just above)
+
+For coverage analysis, using [[ikiwiki/directive/pagecount]] is the perfect solution to summarize and show covered and uncovered requirements. We could add several pages per module - probably using template pages - with ready-made coverage analysis reports... Wow!!! The [[ikiwiki/directive/linkmap]] directive can show traceability information graphically.
+
+Navigating among requirements needs... nothing!!! Just follow the links and the lists of referring pages that ikiwiki adds by default.
+
+External traceability
+---------------------
+Being just different folders under the wiki, external traceability is as easy as internal.
+
+Requirements identifier management
+-----------------------------------
+Another useful convention: the requirement identifier shall be the name of the requirement file. In ikiwiki the page title is then the same as the requirement id. No trouble, it works. I personally prefer to keep the title as the page title and to create short auto-incrementing numeric codes with prefixes and/or suffixes as file names (e.g. SRS_FOO_0001, SSS_FOO_002); I hope to have something running soon.
+
+Requirements history and diff/blame
+-----------------------------------
+Out of the box! And really much more useful than the average diffing components of requirements management tools. There are plenty of online front ends to use, and for offline work tools like meld are awesome.
+
+Team work
+---------
+Again, no need to do anything; the RCS software does it all. For experienced users, merging and conflict resolution can provide much more practical solutions (most requirements management tools work by locking instead).
+
+Easy integration with other software lifecycle tools
+----------------------------------------------------
+Modeling tools: as a general rule, store model elements as their most atomic parts: classes, enums, actors, use cases... and again use file attributes to store traceability information. Another way is transforming the files representing these atomic model parts into independent mdwn files under, for example, an "mdl" folder.
+
+Trac integration is so simple... As simple as for any PM tool that accesses the same RCS as ikiwiki does. Diffing, blaming, even navigating directly to ikiwiki-generated pages. Integration of a ticketing system will give awesome power to the whole team.
+
+Support for other formats
+-------------------------
+Out of the box, at least for wiki and mathematical formats, and creating additional ones shouldn't be so difficult.
+
+Filtering and searching
+-----------------------
+See that box in the top right corner?
+
+Export facilities
+-----------------
+Views with custom styles and html conversion tools would be enough for most purposes.
+
+That's all!
+
+One funny thing: our "de facto future" requirements management tool, after years of research, included some years ago a really nice feature: a Discussion tag for each requirement... See this in ikiwiki? Again, out of the box!!!
+
+Comments are really welcome!!!
diff --git a/doc/tips/ikiwiki_as_a_requirements_management_tool/discussion.mdwn b/doc/tips/ikiwiki_as_a_requirements_management_tool/discussion.mdwn
new file mode 100644
index 000000000..94f0f8b4b
--- /dev/null
+++ b/doc/tips/ikiwiki_as_a_requirements_management_tool/discussion.mdwn
@@ -0,0 +1,18 @@
+How about using tags/links to associate attributes with requirements?
+This could be as simple as adding a link, for example:
+
+ * If it is traceable or not
+ + \[[attributes/traceable]]
+ + \[[attributes/untraceable]]
+ * Its criticality level
+ + \[[attributes/level/critical]]
+ + \[[attributes/level/important]]
+ + etc.
+ * Its priority
+ + \[[attributes/priority/low]]
+ + \[[attributes/priority/high]]
+ * If it is functional or not
+ + \[[attributes/functional]]
+ + \[[attributes/non-functional]]
+
+You just have to create pages for each attribute you want, and then pagespecs can be used to filter requirements by attributes (for example, an inline with `pages="reqs/* and link(attributes/priority/high)"`). I think something similar is used to track bugs with ikiwiki (linking to a \[[done]] page, etc.).
diff --git a/doc/tips/importing_posts_from_typo.mdwn b/doc/tips/importing_posts_from_typo.mdwn
new file mode 100644
index 000000000..1b87e7dae
--- /dev/null
+++ b/doc/tips/importing_posts_from_typo.mdwn
@@ -0,0 +1 @@
+[Here](http://blog.spang.cc/posts/migrating_from_typo_to_ikiwiki/) is a blog post that gives instructions and a script for importing posts from [Typo](http://typosphere.org/), a Ruby-on-Rails based blogging engine.
diff --git a/doc/tips/inside_dot_ikiwiki.mdwn b/doc/tips/inside_dot_ikiwiki.mdwn
index b81ffae8d..a74d00f47 100644
--- a/doc/tips/inside_dot_ikiwiki.mdwn
+++ b/doc/tips/inside_dot_ikiwiki.mdwn
@@ -6,9 +6,10 @@ you need/want to.
## the index
-`.ikiwiki/indexdb` contains a cache of information about pages, as well
-as all persisitant state about pages. It used to be a (semi) human-readable
-text file, but is not anymore.
+`.ikiwiki/indexdb` contains a cache of information about pages.
+This information can always be recalculated by rebuilding the wiki.
+(So the file is safe to delete and need not be backed up.)
+It used to be a (semi) human-readable text file, but is not anymore.
To dump the contents of the file, enter a perl command like this.
diff --git a/doc/tips/inside_dot_ikiwiki/discussion.mdwn b/doc/tips/inside_dot_ikiwiki/discussion.mdwn
index 34d5b9252..69df369ec 100644
--- a/doc/tips/inside_dot_ikiwiki/discussion.mdwn
+++ b/doc/tips/inside_dot_ikiwiki/discussion.mdwn
@@ -6,14 +6,15 @@ My database appears corrupted:
No idea how this happened. I've blown it away and recreated it but, for future reference, is there any less violent way to recover from this situation? I miss having the correct created and last edited times. --[[sabr]]
> update: fixed ctimes and mtimes using [these instructions](http://u32.net/Mediawiki_Conversion/Git_Import/#Correct%20Creation%20and%20Last%20Edited%20time) --[[sabr]]
-> That's overly complex. Just run `ikiwiki -setup your.setup -getctime`.
+> That's overly complex. Just run `ikiwiki -setup your.setup -gettime`.
> BTW, I'd be interested in examining such a corrupt storable file to try
> to see what happened to it. --[[Joey]]
->> --getctime appears to only set the last edited date. It's not supposed to set the creation date, is it? The only place that info is stored is in the git repo.
+>> --gettime appears to only set the last edited date. It's not supposed to set the creation date, is it? The only place that info is stored is in the git repo.
>>> Pulling the page creation date out of the git history is exactly what
->>> --getctime does. --[[Joey]]
+>>> --gettime does. (It used to be called --getctime, and only do that; now
+>>> it also pulls out the last modified date). --[[Joey]]
>> Alas, I seem to have lost the bad index file to periodic /tmp wiping; I'll send it to you if it happens again. --[[sabr]]
diff --git a/doc/tips/laptop_wiki_with_git.mdwn b/doc/tips/laptop_wiki_with_git.mdwn
index 9758beb80..cfa565d1a 100644
--- a/doc/tips/laptop_wiki_with_git.mdwn
+++ b/doc/tips/laptop_wiki_with_git.mdwn
@@ -1,3 +1,5 @@
+[[!toc]]
+
Using ikiwiki with the [[rcs/git]] backend, some interesting things can be done
with creating mirrors (or, really, branches) of a wiki. In this tip, I'll
assume your wiki is located on a server, and you want to take a copy with
@@ -8,6 +10,8 @@ version on the laptop, perhaps while offline. You can browse and edit the
wiki using a local web server. When you're ready, you can manually push the
changes to the main wiki on the server.
+## simple clone approach
+
First, set up the wiki on the server, if it isn't already. Nothing special
needs to be done here, just follow the regular instructions in [[setup]]
for setting up ikiwiki with git.
@@ -49,3 +53,19 @@ update the wiki, with a command such as `ikiwiki -setup wiki.setup -refresh`.
If you'd like it to automatically update when changes are merged in, you
can simply make a symlink `post-merge` hook pointing at the `post-update`
hook ikiwiki created.
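+For example, something like `cd .git/hooks && ln -s post-update post-merge`
+in the srcdir should do it.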
+
+## bare mirror approach
+
+As above, set up a normal ikiwiki on the server, with the usual bare repository.
+
+Next, `git clone --mirror server:/path/to/bare/repository`
+
+This will be used as the $REPOSITORY on the laptop. Then you can follow
+the instructions in [[setup by hand|/setup/byhand]] as per a normal ikiwiki
+installation. This means that you can clone from the local bare repository
+as many times as you want (thus being able to have a repository which is
+used by the ikiwiki CGI, and another which you can use for updating via
+git).
+
+Use standard git commands, run in the laptop's bare git repository,
+to handle pulling from and pushing to the server.
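+For example, `git fetch` in the mirror pulls down the server's changes,
+and `git push` publishes yours (a `--mirror` clone sets up the `origin`
+remote so that both operate on all branches).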
diff --git a/doc/tips/laptop_wiki_with_git_extended.mdwn b/doc/tips/laptop_wiki_with_git_extended.mdwn
index 620370218..0666da450 100644
--- a/doc/tips/laptop_wiki_with_git_extended.mdwn
+++ b/doc/tips/laptop_wiki_with_git_extended.mdwn
@@ -10,7 +10,7 @@ a bare repo on `gitserver`, and clone that to a workingdir on gitserver.
Next create a setup file for the laptop with
gitorigin_branch=> "",
- wrapper => "/working/dir/.git/hooks/post-commit",
+ git_wrapper => "/working/dir/.git/hooks/post-commit",
At this point, assuming you followed page above, and not my hasty summary,
@@ -21,7 +21,7 @@ a bare repo on `gitserver`, and clone that to a workingdir on gitserver.
2. Now create a setup file for the server (I call it server.setup).
gitorigin_branch=> "origin",
- wrapper => "/repo/wiki.git/hooks/post-update.ikiwiki"
+ git_wrapper => "/repo/wiki.git/hooks/post-update.ikiwiki"
Note the non-standard and bizarre name of the hook.
diff --git a/doc/tips/mathopd_permissions.mdwn b/doc/tips/mathopd_permissions.mdwn
new file mode 100644
index 000000000..c0425b9ca
--- /dev/null
+++ b/doc/tips/mathopd_permissions.mdwn
@@ -0,0 +1,15 @@
+When using [mathopd](http://www.mathopd.org) to serve ikiwiki, be careful of your Umask settings in the mathopd.conf.
+
+With `Umask 026` in mathopd.conf, editing pages resulted in the following errors and a 404 page when the wiki tried to take me to the updated page.
+
+ append_indexes: cannot open .../[destdir]/[outputfile].html
+ open: Permission denied
+
+With `Umask 022` in mathopd.conf, editing pages works.
+
+Hopefully this prevents someone else from spending ~2 hours figuring out why this wouldn't work. ;)
+
+> More generally, if your web server uses a nonstandard umask
+> or you're getting permissions related problems in the cgi log
+> when using ikiwiki, you can force ikiwiki to use a sane umask
+> via the `umask` setting in ikiwiki's own setup file. --[[Joey]]
diff --git a/doc/tips/nearlyfreespeech/discussion.mdwn b/doc/tips/nearlyfreespeech/discussion.mdwn
new file mode 100644
index 000000000..a003760b9
--- /dev/null
+++ b/doc/tips/nearlyfreespeech/discussion.mdwn
@@ -0,0 +1,11 @@
+with version 3.141592 I get
+<pre>
+HOME=/home/me /usr/bin/perl -Iblib/lib ikiwiki.out -libdir . -dumpsetup ikiwiki.setup
+Failed to load plugin IkiWiki::Plugin::inline: Can't use global $_ in "my" at IkiWiki/Plugin/inline.pm line 198, near "my $_"
+Compilation failed in require at (eval 19) line 2.
+BEGIN failed--compilation aborted at (eval 19) line 2.
+</pre>
+
+perl is 5.8.9
+
+> This is fixed in 3.1415926. --[[Joey]]
diff --git a/doc/tips/optimising_ikiwiki.mdwn b/doc/tips/optimising_ikiwiki.mdwn
new file mode 100644
index 000000000..caed75ba6
--- /dev/null
+++ b/doc/tips/optimising_ikiwiki.mdwn
@@ -0,0 +1,188 @@
+Ikiwiki is a wiki compiler, which means that, unlike a traditional wiki,
+all the work needed to display your wiki is done up front. Where you can
+see it and get annoyed at it. In some ways, this is better than a wiki
+where a page view means running a program to generate the page on the fly.
+
+But enough excuses. If ikiwiki is taking too long to build your wiki,
+let's fix that. Read on for some common problems that can be avoided to
+make ikiwiki run quick.
+
+[[!toc]]
+
+(And if none of that helps, file a [[bug|bugs]]. One other great thing about
+ikiwiki being a wiki compiler is that it's easy to provide a test case when
+it's slow, and get the problem fixed!)
+
+## rebuild vs refresh
+
+Are you building your wiki by running a command like this?
+
+ ikiwiki -setup my.setup
+
+If so, you're always telling ikiwiki to rebuild the entire site, from
+scratch. But, ikiwiki is smart, it can incrementally update a site,
+building only things affected by the changes you make. You just have to let
+it do so:
+
+ ikiwiki -setup my.setup -refresh
+
+Ikiwiki automatically uses an incremental refresh like this when handling
+a web edit, or when run from a [[rcs]] post-commit hook. (If you've
+configured the hook in the usual way.) Most people who have run into this
+problem got in the habit of running `ikiwiki -setup my.setup` by hand
+when their wiki was small, and found it got slower as they added pages.
+
+## use the latest version
+
+If your version of ikiwiki is not [[!version]], try upgrading. New
+optimisations are frequently added to ikiwiki, some of them yielding
+*enormous* speed increases.
+
+## run ikiwiki in verbose mode
+
+Try changing a page, and run ikiwiki with `-v` so it will tell you
+everything it does to deal with that changed page. Take note of
+which other pages are rebuilt, and which parts of the build take a long
+time. This can help you zero in on individual pages that contain some of
+the expensive things listed below.
+
+## expensive inlines
+
+Do you have an archive page for your blog that shows all posts,
+using an inline that looks like this?
+
+ \[[!inline pages="blog/*" show=0]]
+
+Or maybe you have some tag pages for your blog that show all tagged posts,
+something like this?
+
+ \[[!inline pages="blog/* and tagged(foo)" show=0]]
+
+These are expensive, because they have to be updated whenever you modify a
+matching page. And, if there are a lot of pages, it generates a large html
+file, which is a lot of work. And also large RSS/Atom files, which is even
+more work!
+
+To optimise the inline, consider enabling quick archive mode. Then the
+inline will only need to be updated when new pages are added; no RSS
+or Atom feeds will be built, and the generated html file will be much
+smaller.
+
+ \[[!inline pages="blog/*" show=0 archive=yes quick=yes]]
+
+ \[[!inline pages="blog/* and link(tag)" show=0 archive=yes quick=yes]]
+
+Only downsides: This won't show titles set by the [[ikiwiki/directive/meta]]
+directive. And there's no RSS feed for users to use -- but if this page
+is only for the archives or tags of your blog, users should be subscribing
+to the blog's main page's RSS feed instead.
+
+For the main blog page, the inline should only show the latest N posts,
+which won't be a performance problem:
+
+ \[[!inline pages="blog/*" show=30]]
+
+## expensive maps
+
+Do you have a sitemap type page, that uses a map directive like this?
+
+ \[[!map pages="*" show=title]]
+
+This is expensive because it has to be updated whenever a page is modified.
+The resulting html file might get big and expensive to generate as you
+keep adding pages.
+
+First, consider removing the "show=title". Then the map will not show page
+titles set by the [[ikiwiki/directive/meta]] directive -- but will also
+only need to be generated when pages are added or removed, not for every
+page change.
+
+Consider limiting the map to only show the toplevel pages of your site,
+like this:
+
+ \[[!map pages="* and !*/*" show=title]]
+
+Or, alternatively, to drop from the map parts of the site that accumulate
+lots of pages, like individual blog posts:
+
+ \[[!map pages="* and !blog/*" show=title]]
+
+## sidebar issues
+
+If you enable the [[plugins/sidebar]] plugin, be careful of what you put in
+your sidebar. Any change that affects what is displayed by the sidebar
+will require an update of *every* page in the wiki, since all pages include
+the sidebar.
+
+Putting an expensive map or inline in the sidebar is the most common cause
+of problems. At its worst, it can result in any change to any page in the
+wiki requiring every page to be rebuilt.
+
+## avoid htmltidy
+
+A few plugins do neat stuff, but slowly. Such plugins are tagged
+[[plugins/type/slow]].
+
+The worst offender is possibly [[plugins/htmltidy]]. This runs an external
+`tidy` program on each page that is built, which is necessarily slow. So don't
+use it unless you really need it; consider using the faster
+[[plugins/htmlbalance]] instead.
+
+## be careful of large linkmaps
+
+[[plugins/Linkmap]] generates a cool map of links between pages, but
+it does it using the `graphviz` program. And any changes to links between
+pages on the map require an update. So, avoid using this to map a large number
+of pages with frequently changing links. For example, using it to map
+all the pages on a traditional, highly WikiLinked wiki, is asking for things
+to be slow. But using it to map a few related pages is probably fine.
+
+This site's own [[plugins/linkmap]] rarely slows it down, because it
+only shows the [[index]] page, and the small set of pages that link to it.
+That is accomplished as follows:
+
+ \[[!linkmap pages="index or (backlink(index)"]]
+
+## overhead of the search plugin
+
+Be aware that the [[plugins/search]] plugin has to update the search index
+whenever any page is changed. This can slow things down somewhat.
+
+## profiling
+
+If you have a repeatable change that ikiwiki takes a long time to build,
+and none of the above help, the next thing to consider is profiling
+ikiwiki.
+
+The best way to do it is:
+
+* Install [[!cpan Devel::NYTProf]]
+* `PERL5OPT=-d:NYTProf`
+* `export PERL5OPT`
+* Now run ikiwiki as usual, and it will generate a `nytprof.out` file.
+* Run `nytprofhtml` to generate html files.
+* Those can be examined to see what parts of ikiwiki are being slow.
+
+## scaling to large numbers of pages
+
+Finally, let's think about how a huge number of pages can affect ikiwiki.
+
+* Every time it's run, ikiwiki has to scan your `srcdir` to find
+ new and changed pages. This is similar in speed to running the `find`
+ command. Obviously, more files will make it take longer.
+
+* Also, to see what pages match a [[ikiwiki/PageSpec]] like "blog/*", it has
+ to check if every page in the wiki matches. These checks are done quite
+ quickly, but still, lots more pages will make PageSpecs more expensive.
+
+* The backlinks calculation has to consider every link on every page
+ in the wiki. (In practice, most pages only link to at most a few dozen
+  other pages, so this is not `O(N^2)`, but closer to `O(N)`.)
+
+* Ikiwiki also reads and writes an `index` file, which contains information
+ about each page, and so if you have a lot of pages, this file gets large,
+ and more time is spent on it. For a wiki with 2000 pages, this file
+ will run about 500 kb.
+
+If your wiki will have 100 thousand files in it, you might start seeing
+the above contribute to ikiwiki running slowly.
diff --git a/doc/tips/parentlinks_style.mdwn b/doc/tips/parentlinks_style.mdwn
index 5294e5452..f9dfa8f55 100644
--- a/doc/tips/parentlinks_style.mdwn
+++ b/doc/tips/parentlinks_style.mdwn
@@ -6,7 +6,7 @@ a subset of a page's parents. It also provides a few bonus
possibilities, such as styling the parent links depending on their
place in the path.
-[[!toc ]]
+[[!toc levels=2]]
Content
=======
@@ -77,10 +77,29 @@ following lines in `page.tmpl`:
<a href="<TMPL_VAR NAME="URL">" class="height<TMPL_VAR NAME="HEIGHT">">
<TMPL_VAR NAME="PAGE">
</a> /
+ </TMPL_IF>
</TMPL_LOOP>
Then write the appropriate CSS bits for `a.height1`, etc.
+Avoid showing title of toplevel index page
+------------------------------------------
+
+If you don't like having "index" appear on the top page of the wiki,
+but you do want to see the name of the page otherwise, you can use a
+special `HAS_PARENTLINKS` template variable that the plugin provides.
+It is true for every page *except* the toplevel index.
+
+Here is an example of using it to hide the title of the toplevel index
+page:
+
+ <TMPL_LOOP NAME="PARENTLINKS">
+ <a href="<TMPL_VAR NAME=URL>"><TMPL_VAR NAME=PAGE></a>/
+ </TMPL_LOOP>
+ <TMPL_IF HAS_PARENTLINKS>
+ <TMPL_VAR TITLE>
+ </TMPL_IF>
+
Full-blown example
------------------
diff --git a/doc/tips/spam_and_softwaresites.mdwn b/doc/tips/spam_and_softwaresites.mdwn
new file mode 100644
index 000000000..a07889e6b
--- /dev/null
+++ b/doc/tips/spam_and_softwaresites.mdwn
@@ -0,0 +1,87 @@
+Any wiki with a form of web-editing enabled will have to deal with
+spam. (See the [[plugins/blogspam]] plugin for one defensive tool you
+can deploy).
+
+If:
+
+ * you are using ikiwiki to manage the website for a [[examples/softwaresite]]
+ * you allow web-based commits, to let people correct documentation, or report
+ bugs, etc.
+ * the documentation is stored in the same revision control repository as your
+ software
+
+then it is undesirable to have your software's VCS history tainted by spam and spam
+clean-up commits. Here is one approach you can use to prevent this. This
+example is for the [[git]] version control system, but the principles should
+apply to others.
+
+## Isolate web commits to a specific branch
+
+Create a separate branch to contain web-originated edits (named `doc` in this
+example):
+
+ $ git checkout -b doc
+
+Adjust your setup file accordingly:
+
+ gitmaster_branch => 'doc',
+
+## merging good web commits into the master branch
+
+You will want to periodically merge legitimate web-based commits back into
+your master branch. Ensure that there is no spam in the documentation
+branch. If there is, see 'erase spam from the commit history', below, first.
+
+Once you are confident it's clean:
+
+ # ensure you are on the master branch
+ $ git branch
+ doc
+ * master
+ $ git merge --ff doc
+
+## removing spam
+
+### short term
+
+In the short term, just revert the spammy commit.
+
+If the spammy commit was the top-most:
+
+ $ git revert HEAD
+
+This will clean the spam out of the files, but it will leave both the spam
+commit and the revert commit in the history.
+
+### erase spam from the commit history
+
+Git allows you to rewrite your commit history. We will take advantage of this
+to eradicate spam from the history of the doc branch.
+
+This is a useful tool, but it is considered bad practice to rewrite the
+history of public repositories. If your software's repository is public, you
+should make it clear that the history of the `doc` branch in your repository
+is unstable.
+
+Once you have been spammed, use `git rebase` to remove the spam commits from
+the history. Assuming that your `doc` branch was split off from a branch
+called `master`:
+
+ # ensure you are on the doc branch
+ $ git branch
+ * doc
+ master
+ $ git rebase --interactive master
+
+In your editor session, you will see a series of lines for each commit made to
+the `doc` branch since it was branched from `master` (or since the last merge
+back into `master`). Delete the lines corresponding to spammy commits, then
+save and exit your editor.
+
+Caveat: if there are no commits you want to keep (i.e. all the commits since
+the last merge into master are either spam or spam reverts) then `git rebase`
+will abort. Therefore, this approach only works if you have at least one
+non-spam commit to the documentation since the last merge into `master`. For
+this reason, it's best to wait until you have at least one
+commit you want merged back into the main history before doing a rebase,
+and until then, tackle spam with reverts.
diff --git a/doc/tips/switching_to_usedirs.mdwn b/doc/tips/switching_to_usedirs.mdwn
index 183ce00ac..92871439f 100644
--- a/doc/tips/switching_to_usedirs.mdwn
+++ b/doc/tips/switching_to_usedirs.mdwn
@@ -8,9 +8,7 @@ to usedirs, or edit your setup file and turn usedirs back off.
or manually.
* Since usedirs is enabled, ikiwiki will have created a bunch of new
html files. Where before ikiwiki generated a `dest/foo.html`, now it will
- generate `dest/foo/index.html`. But, the old html files will still be
- present too. Remove them:
- find dest -name \*.html -not -name index.html -exec rm {} \;
+ generate `dest/foo/index.html`. The old html files will be removed.
* If you have a blog that is aggregated on a Planet or similar, all the
items in the RSS or atom feed will seem like new posts, since their URLs
have changed. See [[howto_avoid_flooding_aggregators]] for a workaround.
diff --git a/doc/tips/untrusted_git_push.mdwn b/doc/tips/untrusted_git_push.mdwn
index aef67a3db..3573a0ddf 100644
--- a/doc/tips/untrusted_git_push.mdwn
+++ b/doc/tips/untrusted_git_push.mdwn
@@ -74,7 +74,7 @@ Once you're done modifying the setup file, don't forget to run
You'll need to arrange the permissions on your bare git repository so that
user anon can write to it. One way to do it is to create a group, and put
-both anon and your regular user in that group. Then make make the bare git
+both anon and your regular user in that group. Then make the bare git
repository owned and writable by the group. See [[rcs/git]] for some more
tips on setting up a git repository with multiple committers.
diff --git a/doc/tips/upgrade_to_3.0.mdwn b/doc/tips/upgrade_to_3.0.mdwn
index d22813bf2..05b6d6fbd 100644
--- a/doc/tips/upgrade_to_3.0.mdwn
+++ b/doc/tips/upgrade_to_3.0.mdwn
@@ -45,7 +45,7 @@ contain the above, then run `ikiwiki-transition prefix_directives your.setup`
## GlobLists
In 3.0, the old "GlobList" syntax for [[PageSpecs|ikiwiki/PageSpec]] is no
-longer supported. A GlobList contains multiple term, but does not separate
+longer supported. A GlobList contains multiple terms, but does not separate
them with "and" or "or":
sandbox !*/Discussion
diff --git a/doc/tips/vim_syntax_highlighting.mdwn b/doc/tips/vim_syntax_highlighting.mdwn
index 172b763c3..bf7104aec 100644
--- a/doc/tips/vim_syntax_highlighting.mdwn
+++ b/doc/tips/vim_syntax_highlighting.mdwn
@@ -1,4 +1,15 @@
-[[ikiwiki.vim]] is a vim syntax highlighting file for ikiwiki
-[[ikiwiki/markdown]] files.
+[ikiwiki-syntax](http://www.vim.org/scripts/script.php?script_id=3156) is a vim
+syntax highlighting file for ikiwiki [[ikiwiki/markdown]] files. It highlights
+directives and wikilinks. It only supports prefixed directives, i.e.,
+\[[!directive foo=bar baz]], not the old format with spaces.
-Installation instructions are at the top of the file.
+See also: [[follow_wikilinks_from_inside_vim]]
+
+------
+
+The previous syntax definition for ikiwiki links is at [[ikiwiki.vim]]; however,
+it seems to not be [[maintained
+anymore|forum/navigation_of_wiki_pages_on_local_filesystem_with_vim#syn-maintenance]],
+and it has some [[issues|forum/ikiwiki_vim_syntaxfile]].
+
+[[!tag vim]]
diff --git a/doc/tips/yaml_setup_files.mdwn b/doc/tips/yaml_setup_files.mdwn
new file mode 100644
index 000000000..e8ab4f144
--- /dev/null
+++ b/doc/tips/yaml_setup_files.mdwn
@@ -0,0 +1,12 @@
+Here's how to convert your existing standard format ikiwiki setup file into
+the new YAML format recently added to ikiwiki.
+
+1. First, make sure you have the [[!cpan YAML]] perl module installed.
+ (Run: `apt-get install libyaml-perl`)
+2. Run: `ikiwiki -setup my.setup -dumpsetup my.setup --set setuptype=Yaml`
+
+The format of the YAML setup file should be fairly self-explanatory.
+
+(To convert the other way, use "setuptype=Standard" instead.)
+
+--[[Joey]]