Age | Commit message | Author |
|
|
Turns out duplicate index files do not need to be stored when usedirs is in
use, just when it's not. Ikiwiki is quite consistent about using page/ when
usedirs is in use. (The only exception is the search plugin, which needs
fixing.)
This also includes significant code cleanup, removal of an incorrect special
case for empty files, and addition of a workaround for a bug in the Amazon
Perl module.
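For context, the usedirs setting controls the page-to-output mapping roughly
like this (a simplified sketch, ignoring the configurable html extension; not
ikiwiki's actual htmlpage code):

    sub output_path {
        my $page = shift;
        # with usedirs, "foo/bar" is written as "foo/bar/index.html" and
        # linked to as "foo/bar/"; without it, as "foo/bar.html"
        return $config{usedirs} ? "$page/index.html" : "$page.html";
    }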
|
|
number of system calls in half. (Still room for improvement.)
|
|
stuck on shared hosting without cron. (Sheesh.) Enabled via the `aggregate_webtrigger` configuration option.
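A hedged sketch of how this is typically wired up: turn the option on in the
setup file (a Perl file), then have whatever scheduler is available fetch the
trigger URL; the wget line and hostname below are illustrative.

    # in ikiwiki.setup; merge with your existing settings
    use IkiWiki::Setup::Standard {
        aggregate_webtrigger => 1,
    };

    # then have something fetch the trigger now and then, e.g.:
    #   wget -q -O /dev/null 'http://example.com/ikiwiki.cgi?do=aggregate_webtrigger'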
|
|
srcfile now has an optional second parameter to avoid it throwing an error
if the source file does not exist.
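A minimal usage sketch, assuming the flag makes srcfile return undef instead
of erroring when the file is missing:

    my $path = srcfile($file, 1);    # second argument: don't error out
    if (defined $path) {
        my $content = readfile($path);
        # ...
    }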
|
|
anonymous users to edit only matching pages. Closes: #478892
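Presumably this is driven by a pagespec option on the anonymous-editing
plugin; a speculative setup sketch (the plugin name, option name, and pagespec
below are assumptions, not taken from this commit):

    use IkiWiki::Setup::Standard {
        add_plugins => [qw{anonok}],
        # let anonymous users edit only pages matching this pagespec
        anonok_pagespec => '*/discussion',
    };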
|
|
This allows the toc to be displayed when previewing an edit. It also prevents headers in the page template from showing up in the toc.
|
|
Also, simplified finding the url to the top of the site.
|
|
tag 473987 +patch
thanks
Hi,
The issue is that we need to convert relative links to absolute
ones for atom and rss feeds -- but there are two types of
relative links. The first kind, relative to the current
document ( href="some/path") is handled correctly. The second
kind of relative url is relative to the http server
base (href="/semi-abs/path"), and that broke.
It broke because we just prepended the url of the current
document to the href (http://host/path/to/this-doc/ + link),
which gave us, in the first place:
http://host/path/to/this-doc/some/path [correct], and
http://host/path/to/this-doc//semi-abs/path [wrong]
The fix is to calculate the base for the http server (the base of
the wiki does not help, since the base of the wiki can be
different from the base of the http server -- I have, for example,
"url => http://host.name.mine/blog/manoj/"), and prepend that to
the relative references that start with a /.
This has been tested.
Signed-off-by: Manoj Srivastava <srivasta@debian.org>
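A rough sketch of the approach described above, not the exact patch: derive
the server base from the configured url, and use it only for hrefs that start
with a slash (the helper name is illustrative):

    use URI;

    my $baseurl = URI->new($config{url});    # e.g. http://host.name.mine/blog/manoj/
    my $urltop  = $baseurl->scheme."://".$baseurl->authority;    # http://host.name.mine

    sub absolutify {
        my ($href, $pageurl) = @_;
        return $href          if $href =~ m{^[a-z][a-z0-9+.-]*:}i;    # already absolute
        return $urltop.$href  if $href =~ m{^/};                      # server-relative
        return $pageurl.$href;                                        # document-relative
    }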
|
|
lacking one.
|
|
destpage does not normally need to be taken into account when creating other
files as part of rendering a page. Using destpage results in inlined pages
creating two copies of such files. Not using destpage works in this case
because the inlining page depends on the source page, so if the source page
is modified or deleted, the inlining page will be updated.
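A hedged illustration of the pattern in a preprocess hook; the file name,
helper, and return value are made up, the point being to key the generated
file on page rather than destpage:

    sub preprocess {
        my %params = @_;
        my $data = generate_image(\%params);    # hypothetical helper
        # key the generated file on the page being processed, not on
        # destpage, or each inlining page would write its own copy
        my $file = $params{page}."/graph.png";
        will_render($params{page}, $file);
        writefile($file, $config{destdir}, $data, 1);
        return "generated $file";
    }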
|
|
for "show".
|
|
Instead of using the XML-RPC v2 extension <nil/>, which Perl's
XML::RPC::Parser does not (yet) support (Joey's patch is pending), we
agreed on a sentinel: {'null':''}, that is, a hash with a single key
"null" pointing to the empty string.
The Python proxy automatically converts None appropriately and raises an
exception if a hook function should, by weird coincidence, attempt to
return {'null':''}.
Signed-off-by: martin f. krafft <madduck@madduck.net>
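A minimal sketch of what handling the sentinel amounts to on the Perl side
(the function names are illustrative, not the plugin's):

    sub decode_null {    # sentinel -> undef
        my $value = shift;
        if (ref $value eq 'HASH' && keys(%$value) == 1 &&
            defined $value->{null} && $value->{null} eq '') {
            return undef;
        }
        return $value;
    }

    sub encode_null {    # undef -> sentinel
        my $value = shift;
        return defined $value ? $value : { null => '' };
    }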
|
|
Markdown is slow. Especially if it has to process an enormous page. The
most common enormous page is currently the recentchanges page, which gets
processed a lot, and contains very little actual markdown. Most of it is a
big <div>, which markdown skips ... slowly.
This is a rather sick optimisation to work around markdown's speed issues.
Now inline inserts a small, dummy div, allows markdown to quickly render
the actual page content, then replaces the dummy with the actual inlined
pages later.
Results: Rendering just a recentchanges page, with diffs included, dropped
from 4.5 seconds to 2.7 seconds on my laptop. Building the entire wiki
dropped from 46.6 seconds to 39.5 seconds.
(It would be better if inline were a *post*-processor directive.)
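A rough sketch of the trick; the details differ from the real plugin code:

    my @inline;    # rendered inline content, indexed by placeholder id

    sub placeholder {    # emitted instead of the inlined pages' html
        my $html = shift;
        push @inline, $html;
        # tiny, so markdown has almost nothing to (slowly) skip over
        return '<div class="inline" id="'.$#inline.'"></div>';
    }

    sub replace_placeholders {    # run on the page after it is htmlized
        my $content = shift;
        $content =~ s{<div class="inline" id="(\d+)"></div>}{$inline[$1]}g;
        return $content;
    }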
|
|
I had to move it to sanitize so that all the markup is htmlized and it can
scan for <pre> and <code>.
|
|
I'm surprised that the second m//g didn't seem to clobber @-, but I don't
want to rely on that, so preserve it beforehand.
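An illustration of the precaution, assuming the usual shape of scanning with
one global match and probing the surrounding text with another:

    my ($content, $pattern) = @_;    # the text and regexp being scanned
    while ($content =~ m/$pattern/g) {
        my $start = $-[0];           # copy before any further match runs
        my $end   = pos($content);
        # a second m//g resets @- as soon as it succeeds, so only the
        # saved $start and $end are relied on from here on
        next if substr($content, 0, $start) =~ m/<(pre|code)\b[^>]*>[^<]*$/isg;
        # ... handle the match between $start and $end ...
    }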
|
|
This makes the CGI about .2 seconds faster when editing pages etc.
|
|
Same fix as in d7f1292c3134fd9464ca4005f48b9274be861c10
|
|
for consistency with getargv, which returns one
|
|
It was incorrectly setting the value to the number of items in @_, i.e.,
always 1.
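For reference, the Perl trap behind this:

    sub example {
        my $count   = @_;    # scalar context: the number of arguments (1 here)
        my ($value) = @_;    # list context: the first argument, as intended
        return $value;
    }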
|
|
XML-RPC only allows functions to return a single value, not a list. So getargv
needs to return a list reference, which means that the caller will see an
XML-RPC array.
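A minimal sketch of what that means in practice (the sub body is
illustrative):

    sub getargv {
        # a reference is a single value, so it travels as one XML-RPC array
        return \@ARGV;
    }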
|
|
@ARGV.
|
|
It is used in several subs, not all of which load it on demand, so this seems simpler.
|
|
Closes: #470530
|
|
This works around a Perl crasher bug, and also avoids bloating pages
with enormous diffs.
rcs_recentchanges was modified to return a list in list context.
|
|
set to the destination page. This avoids the need for hacks to munge the urls
in preview mode, which fixes several bugs.
* Several destpage fixes in plugins.
|
|
This was a silly typo, sorry. <meta ...> takes a `content` attribute, not
`value`.
Signed-off-by: martin f. krafft <madduck@madduck.net>
|
|
This causes meta.openid to also generate the openid2 headers.
Signed-off-by: martin f. krafft <madduck@madduck.net>
|
|
Adds an optional xrds-location parameter to the openid meta handler,
which allows for XRDS delegation.
A good document on XRDS is
http://www.windley.com/archives/2007/05/using_xrds.shtml
Signed-off-by: martin f. krafft <madduck@madduck.net>
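Usage would presumably look something like this on the delegating page
(parameter names as described above; the URLs are placeholders):

    [[meta openid="http://example.myopenid.com/"
           xrds-location="http://example.com/yadis.xml"]]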
|
|
Markdown is such a splintered mess... The current Debian package provides
only Text::Markdown::Markdown, while all versions of Text::Markdown support
Text::Markdown::markdown, and old versions also support the capitalised
version, while new ones don't.
It's getting to the point where `grep /markdown/i %symbol_table` is the only
sane way to figure out what function to call...
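A hedged sketch of the kind of detection this ends up requiring (not the
plugin's exact code):

    my $markdown_sub;
    eval q{use Text::Markdown};
    if (! $@) {
        if (defined &Text::Markdown::markdown) {
            $markdown_sub = \&Text::Markdown::markdown;    # all versions
        }
        else {
            $markdown_sub = \&Text::Markdown::Markdown;    # old capitalised form
        }
    }
    else {
        eval q{use Markdown};
        $markdown_sub = \&Markdown::Markdown unless $@;
    }
    # later: $html = $markdown_sub->($text) if defined $markdown_sub;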
|
|
no longer supports Text::Markdown::Markdown. All old versions of
Text::Markdown also support the lower-case version.
|