Age | Commit message | Author |
|
It exports gettext and other symbols by default, which conflicts with IkiWiki's
own exports.
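The usual Perl idiom for this kind of fix is to load the module with an empty
import list and call its functions fully qualified. A minimal sketch, assuming
the module in question is Locale::gettext (the commit subject naming the actual
module is not shown in this log):

    use strict;
    use warnings;

    # Empty import list: nothing is exported, so IkiWiki's own gettext()
    # (and its other exports) are left alone.
    use Locale::gettext ();

    # Call the module's functions fully qualified instead.
    my $translated = Locale::gettext::gettext("Discussion");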
|
Conflicts:
debian/changelog
|
On various sites I have two IkiWiki instances running from the same
repository: one accessible via http and only accepting openid logins,
and one accessible via authenticated https and only accepting httpauth.
The https version should still pretty-print OpenIDs seen in git history,
even though it does not itself accept OpenID logins.
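For illustration, the paired configuration implied here might differ in little
more than the plugin lists. A sketch of the https side in the classic Perl
setup-file format, with every path and value invented for the example:

    # wiki-https.setup (illustrative): httpauth only, sharing the same
    # srcdir/repository as the http/openid instance.
    use IkiWiki::Setup::Standard {
        wikiname        => "examplewiki",
        srcdir          => "/srv/wiki/src",        # shared with the http instance
        destdir         => "/var/www/wiki-https",
        add_plugins     => [qw{httpauth}],
        disable_plugins => [qw{openid passwordauth}],
    };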
|
openiduser previously used a constructor that no longer works in 2.x.
However, all we actually want is the (undocumented) DisplayOfURL function
that is invoked by the display method, so try to use that.
(cherry picked from commit c3dd0ff5c7c10743107f203a5b456fdcd1b171df)
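A hedged sketch of that fallback, not the exact IkiWiki code: call the
DisplayOfURL function named above when the installed
Net::OpenID::VerifiedIdentity provides it, and go through the old constructor
otherwise.

    # Turn an OpenID URL into a short display name (for RecentChanges etc).
    # Assumes Net::OpenID::VerifiedIdentity is installed; returns the raw
    # URL unchanged if it is not.
    sub openid_display {
        my $url = shift;
        return $url unless eval { require Net::OpenID::VerifiedIdentity; 1 };
        if (Net::OpenID::VerifiedIdentity->can("DisplayOfURL")) {
            # 2.x: use the (undocumented) helper that display() wraps.
            return Net::OpenID::VerifiedIdentity::DisplayOfURL($url);
        }
        # Older versions: construct an object and use its display method.
        my $oid = Net::OpenID::VerifiedIdentity->new(identity => $url);
        return $oid->display;
    }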
|
If given instead of pages, this is interpreted as a space-separated
list of links to pages (with the same LinkingRules as in a WikiLink),
and they are inlined in exactly the order given. The sort and pages
parameters cannot be used in conjunction with this one.
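A sketch of how inline's preprocess function might consume such a parameter;
the parameter name (pagenames here) and the surrounding variables are
assumptions, while linkpage() and bestlink() are IkiWiki's usual
link-resolution helpers.

    # Inside the inline preprocess handler (IkiWiki plugin context assumed):
    # resolve each space-separated entry the way a WikiLink would be
    # resolved, keeping the pages in exactly the order they were given.
    my @inlined;
    if (exists $params{pagenames}) {
        @inlined = map { bestlink($params{page}, linkpage($_)) }
                   split ' ', $params{pagenames};
    }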
|
Besides being wrong to do, this could lead to the wrong item
being expired, as follows: If B is added and at the same time
A is changed, then A's ctime may be set to the current time,
while B's is set to its creation time. Thus the new item, B,
is incorrectly removed as older.
(This interacted especially badly with the bug fixed by
90b4d079605b72bb50d1da41402d994960e10937.)
|
The aggregate state merge code neglected to merge changes to the md5
field of an item. Therefore, if an item's md5 changed after initial
aggregation, it would be updated, and rewritten, each time thereafter.
This was wasteful and indirectly led to some expire problems.
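In sketch form, the fix amounts to copying the md5 field across when merging
the state written by the other process. The %guids layout below is an
assumption for illustration, not the real aggregate.pm data structure:

    # When merging state saved by a concurrent aggregation run, carry each
    # item's md5 across too, so a changed item is not treated as changed
    # (and rewritten) on every later run.
    foreach my $guid (keys %theirguids) {
        next unless exists $guids{$guid};
        $guids{$guid}->{md5} = $theirguids{$guid}->{md5}
            if defined $theirguids{$guid}->{md5};
    }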
|
Conflicts:
debian/changelog
debian/control
Signed-off-by: intrigeri <intrigeri@boum.org>
|
support several cases, including Mercurial's long user names on the RecentChanges page, and URLs with spaces being handled by the 404 plugin.
|
Another benefit is that consistently using gettext("Discussion")
eliminates the need to translate one string.
|
Signed-off-by: intrigeri <intrigeri@boum.org>
|
and document the comment directive syntax.
Rationale: Comments need to be user-editable so that they can be posted
via git commit etc.
The _comment directive is still supported, for back-compat.
|
See #530654
|
Setting up a new highlighter object is slightly expensive since it
reads and parses the langfile each time. So cache them.
This also speeds up ext2langfile by avoiding the need to check for the
existence of a language file in some cases.
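A minimal sketch of that cache, keyed by language definition file;
make_highlighter() is a hypothetical stand-in for the real highlight-binding
setup, which is not shown here:

    # One highlighter per langfile, created lazily, so each language
    # definition is read and parsed at most once per run.
    my %highlighters;

    sub highlighter_for {
        my $langfile = shift;
        $highlighters{$langfile} ||= make_highlighter($langfile);
        return $highlighters{$langfile};
    }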
|
format: Provide a htmlizefallback hook that other plugins can use to
handle formats that are not suitable for general-purpose htmlize hooks.
highlight: Use the hook to allow formatting of any language/extension,
without it needing to be enabled for standalone source files.
highlight: If the highlight perl binding is not available, fall back
safely to a passthrough mode.
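A hedged sketch of registering such a hook from the highlight plugin; hook()
is IkiWiki's normal registration interface, but the callback arguments and the
helper names below are assumptions:

    hook(type => "htmlizefallback", id => "highlight",
         call => \&htmlizefallback);

    sub htmlizefallback {
        my ($format, $content) = @_;
        # Decline formats we do not recognise, so other handlers (or the
        # passthrough mode) can take over.
        return undef unless known_language($format);     # hypothetical check
        return highlight_to_html($format, $content);     # hypothetical formatter
    }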
|
* debian/control: Add suggests for libhighlight-perl, although
that package is not yet created by Debian's highlight source package.
(See #529869)
|
directive starting with _ is likewise internal.
|
Also, sort the list of page types.
|
Conflicts:
debian/changelog
|
We build an array of [ plugin name, long name ] pairs, where long name
is an optional argument to hook(). So, a syntax plugin could define
long "friendly" name, such as "Markdown" instead of mdwn, and we would
then pass this array to formbuilder to populate the drop-down on the
edit page.
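Both halves of that in sketch form; the hook() argument name (longname) and
the %hooks layout used below are assumptions about the interface being
described:

    # A syntax plugin declaring a friendly name for its format:
    hook(type => "htmlize", id => "mdwn", call => \&htmlize,
         longname => "Markdown");

    # Building the [ plugin name, long name ] pairs for the edit page's
    # drop-down, falling back to the short name when none was given:
    my @page_types;
    foreach my $id (keys %{$hooks{htmlize}}) {
        push @page_types, [ $id, $hooks{htmlize}{$id}{longname} || $id ];
    }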
|
Not needed since it returns a list of pages, not a fail/success object.
|
This is sorta an optimisation, and sorta a bug fix. In one
test case I have available, it can speed a page build up from 3
minutes to 3 seconds.
The root of the problem is that $links{$page} contains arrays of
links, rather than hashes of links. And when a link is found,
it is just pushed onto the array, without checking for dups.
Now, the array is emptied before scanning a page, so there
should not be much opportunity for duplicate links
to pile up in it. But, in some cases, they can, and if there
are hundreds of duplicate links in the array, then scanning it
for matching links, as match_link and some other code does,
becomes much more expensive than it needs to be.
Perhaps the real right fix would be to change the data structure
to a hash. But the list of links is never accessed like that;
you always want to iterate through it.
I also looked at deduping the list in saveindex, but that does
a lot of unnecessary work, and doesn't completely solve the problem.
So, finally, I decided to add an add_link function that handles deduping,
and make ikiwiki-transition remove the old dup links.
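A minimal sketch of an add_link along the lines described above, assuming
IkiWiki's global %links hash; the real function may differ in detail:

    # Record a link from $page to $link, skipping exact duplicates so that
    # $links{$page} cannot fill up with hundreds of copies of one link.
    sub add_link ($$) {
        my ($page, $link) = @_;
        push @{$links{$page}}, $link
            unless grep { $_ eq $link } @{$links{$page}};
    }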
|
When finding the pageurl, it was calling bestlink unnecessarily.
Since at this point $page contains the full name of the page that
is being inlined, there is no need to do bestlink's scan
for it.
This is only a minor optimisation, since bestlink is only called
once per displayed, inlined page.
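In sketch form the change is a one-liner; urlto() and bestlink() are IkiWiki's
usual helpers, and $params{destpage} is assumed from the inline plugin's
context:

    # Before (sketch): resolved $page again even though it is already the
    # full name of the page being inlined:
    #   urlto(bestlink($params{page}, $page), $params{destpage})
    # After: link to it directly.
    my $pageurl = urlto($page, $params{destpage});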
|
This reverts commit 2f96c49bd1826ecb213ae025ad456a714aa04863.
I forgot about internal pages. We don't want * matching them!
I left the optimisation in pagecount, where it used to live.
Internal pages probably don't matter when they're just being
counted.
|
I forgot to check if it was called from preprocess, and it is
not; it's called by a format hook. If an error is thrown from
a format hook, the wiki build fails, so we don't want that.
|
Add an optimisation for the semi-common case of a "*" pagespec. This
avoids doing any real processing in that case.
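A hedged sketch of that shortcut; pagespec_match() is IkiWiki's matcher, while
the surrounding variables are illustrative. (Note the revert further up in
this log: a shortcut over all of %pagesources also matches internal pages,
which is why it was later backed out everywhere except pagecount.)

    my @matching;
    if ($params{pages} eq "*") {
        # Every page matches a bare "*", so skip pagespec evaluation.
        @matching = keys %pagesources;
    }
    else {
        @matching = grep {
            pagespec_match($_, $params{pages}, location => $params{page})
        } keys %pagesources;
    }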
|