This should be more efficient than pagespec_match_list since it short-circuits
after the first match is found.
The other problem with using pagespec_match_list here is that it may throw an
error if a bad or failing pagespec somehow got into the dependencies.
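As a rough sketch of the pattern described above (the loop and variable names
are illustrative, not ikiwiki's actual internals), checking each stored
dependency with pagespec_match lets the scan stop at the first hit, and, per
the message above, a single bad pagespec does not abort the whole refresh:

    use IkiWiki 3.00;

    # $depspecs is an array ref of pagespecs recorded for $page;
    # @changed is the list of changed source pages.
    sub page_needs_rebuild {
        my ($page, $depspecs, @changed) = @_;
        foreach my $spec (@$depspecs) {
            foreach my $changed (@changed) {
                # pagespec_match returns a true result object on a match,
                # so we can short-circuit as soon as one is found.
                return 1 if pagespec_match($changed, $spec, location => $page);
            }
        }
        return 0;   # nothing matched; no rebuild needed
    }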
|
|
This reverts commit e4cd168ebedd95585290c97ff42234344bfed46c.
There was no benefit to this change.
|
|
The new dependency handling works better (eliminates more duplicates) if
dependencies are split up. On the same wiki mentioned in the previous
commit, this saves about a second (i.e. 4%) on the same test.
|
|
On a large wiki you can spend a lot of time reading through large lists
of dependencies to see whether files need to be rebuilt (album, with its
one-page-per-photo arrangement, suffers particularly badly from this).
The dependency list is currently a single pagespec, but it's not used like
a normal pagespec - in practice, it's a list of pagespecs joined with the
"or" operator.
Accordingly, change it to be stored as a list of pagespecs. On a wiki
with many tagged photo albums, this reduces the time to refresh after
`touch tags/*.mdwn` from about 31 to 25 seconds.
Getting the benefit of this change on an existing wiki requires a rebuild.
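A minimal sketch of the storage change, using illustrative names rather than
ikiwiki's real data structures: instead of accumulating one ever-growing
pagespec joined with "or", each dependency is kept as its own entry, which
makes duplicates trivial to drop and lets each spec be tested on its own:

    # Old shape: one joined pagespec string per page.
    my %depends_joined;     # page => 'spec1 or spec2 or ...'
    sub add_dep_joined {
        my ($page, $spec) = @_;
        $depends_joined{$page} = defined $depends_joined{$page}
            ? "$depends_joined{$page} or $spec"
            : $spec;
    }

    # New shape: a list of pagespecs per page, with duplicates skipped.
    my %depends_split;      # page => [ 'spec1', 'spec2', ... ]
    sub add_dep_split {
        my ($page, $spec) = @_;
        push @{$depends_split{$page}}, $spec
            unless grep { $_ eq $spec } @{$depends_split{$page} || []};
    }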
|
|
can evaluate them, check them in the wrapper right off the bat.
This doesn't prevent the deadlock in web commits that need to cvs
add directories, but I'm committing so Joey can take a look if he
wants.
|
|
assumption that uploading an entire site is efficient.
|
|
case with a getopt hook directly in my plugin. If the wrapper change
is safe, we won't need a wrapper wrapper.
|
|
This is both faster, and propagates any error in processing the feedpages
pagespec out to display on the page. That may have been why I didn't use
it before, but currently it seems like a good thing to do, since it explains
why your feeds are empty.
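Roughly what this looks like in the inline plugin, hedged since the exact
pagespec_match_list interface has varied between releases; the point is that
an error raised while evaluating the feedpages pagespec now propagates out and
is displayed on the page instead of silently producing an empty feed:

    # Assumed shape: candidate pages (already matched by "pages"),
    # the feedpages pagespec, and the page the directive lives on.
    my @feedlist = pagespec_match_list(
        \@list,
        $params{feedpages},
        location => $params{page},
    );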
|
|
If a page is taken from the underlay, and one of the specified languages
does not have po files in the underlay, it would create a broken link
to the translated version of the page for that language.
With this change, there's no broken link.
|
|
I think the N/A was not intended to be visible, but it can show up as the
percentage translated into a language. This happens if the page is located in
an underlay and is not translated into that language in any other underlay.
|
|
Previously, [[!meta redir="foo"]] on bar, where bar/foo exists, would
depend on "foo" (which matches nothing, probably) rather than "bar/foo".
(cherry picked from commit f27ec09b72f886415e63fe394e18d9c3cb3913bf)
|
|
Previously, [[!img bar.jpg]] on foo, where foo/bar.jpg exists, would
get a dependency equivalent to "glob(bar.jpg)" (which might not match
anything), rather than the correct "glob(foo/bar.jpg)".
(cherry picked from commit 85b2ec49ecd12dd23e5c432933457a72744ce7cb)
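Both of these fixes follow the same pattern; here is a sketch with illustrative
variable names, not the plugins' exact code. bestlink() resolves a name the way
an ordinary [[link]] would, so "bar.jpg" used on page "foo" becomes
"foo/bar.jpg" when that file exists, and the dependency is registered on the
resolved name:

    use IkiWiki 3.00;

    my $best = bestlink($params{page}, $target);   # $target: e.g. "bar.jpg"
    # Depend on the resolved page/file if there is one; otherwise fall
    # back to the bare name so a dependency is still recorded.
    add_depends($params{page}, length $best ? $best : $target);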
|
|
During backlink calculation, all links are examined and broken links can
be detected for free, so store a list of broken links and have brokenlinks
use it.
Exposing the %brokenlinks structure is a bit ugly, but the speedup seems
worth it: around 1 second for wikis the size of the doc wiki that use
brokenlinks.
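A sketch of the idea (assuming IkiWiki's exported %links and bestlink(); the
rest of the names are illustrative): while walking %links to build the
backlinks index, a link whose bestlink() resolves to nothing is broken, so it
can be recorded in passing rather than rediscovered by a separate scan in the
brokenlinks plugin:

    my %brokenlinks;    # link text => [ pages containing the broken link ]
    foreach my $page (keys %links) {
        foreach my $link (@{$links{$page}}) {
            my $best = bestlink($page, $link);
            if (length $best) {
                # normal backlink bookkeeping happens here
            }
            else {
                push @{$brokenlinks{$link}}, $page;
            }
        }
    }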
|
|
This plugin was building essentially the same data that is built to handle
backlinks, so reuse that as an optimisation.
|
|
This was tricky: $links{"$page/discussion"} must be checked, with the
"discussion" part in lowercase.
|
|
By adding this setting, we get both more configurability and a minor
optimisation, since gettext does not need to be called repeatedly
to get the Discussion value.
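The message doesn't name the setting, so this sketch uses a placeholder option
name; the optimisation is simply that the localised name is looked up once and
reused instead of calling gettext() every time a discussion link is checked:

    # "discussion_name" is a placeholder, not a claim about the real option.
    $config{discussion_name} = gettext("Discussion")
        unless defined $config{discussion_name};
    # Lowercased when used as a %links key, per the entry above.
    my $discussionlink = lc $config{discussion_name};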
|
|
version, but continue. Closes: #541205
|
|
When first editing a page that was in the underlay, avoid losing
the translation by copying the po file over from the underlay.
|
|
Paranoia; I was thinking about XSS attacks specifically.
|
|
Conflicts:
debian/changelog
|
|
This was impressively broken. add_depends was being called with its params
backwards, and one parameter was set to the name of the generated
file, which isn't in the source.
Now updates to images will update the page that contains them, thus
updating them. This is unnecessary for fullsize images, so that case is skipped.
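For reference, a sketch of the corrected call (argument names are
illustrative): add_depends takes the depending page first and a pagespec
second, and the dependency has to name the source image, not the scaled file
the plugin writes to the destdir:

    # wrong: arguments swapped, and depending on a generated file
    # add_depends($thumbnail_output, $params{page});

    # right: the embedding page depends on the source image
    add_depends($params{page}, $image_source);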
|
|
Many variables and functions are exported.
|
|
Serving up images etc. as text/plain; charset=utf-8 is unlikely to work
very well, and there's no point in having this CGI action for attachments
(since they're copied into the output as-is anyway).
|
|
Also restructure so we return early on missing pages.
|
|
match the default
IkiWiki mostly assumes that pages are in UTF-8; anyone for whom this doesn't
work can override it in the setup file.
|
|
As I suggested when reviewing Will's code, calling loadindex() should be
sufficient.
|
|
Signed-off-by: Simon McVittie <smcv@ http://smcv.pseudorandom.co.uk/>