Need to handle the case where url is not set.
from JasonBlevins
Having an always-current relative date on recentchanges is very, very nice.
relative, in a very nice way, if I say so myself.
* Add an underlay for javascript, and add ikiwiki.js containing some utility
code.
* toggle: Stop embedding the full toggle code on each page using it, and
move it to toggle.js in the javascript underlay.
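Roughly, the plugin side of this looks like the sketch below (add_underlay
is the IkiWiki function for registering an underlay directory; the hook
choice and the rest are illustrative, not the plugin's exact code):

    sub checkconfig () {
            # make the shipped javascript underlay (ikiwiki.js,
            # toggle.js) available for copying into the wiki
            add_underlay("javascript");
    }

Pages that use a toggle then just link to toggle.js with an ordinary
<script> tag instead of embedding the code inline.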
in the future.
plugins. Closes: #502047
Google has a nice feature, sitesearch, that allows anyone to
limit search results to a specific site. Obviously, this feature can be
used to provide a search engine for the local ikiwiki site without the
need to install any additional software. Just enable the 'google' plugin
and make sure that --url uses the proper hostname. Thanks to Joey for
helping to get the Perl implementation right.
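A hedged sketch of the mechanism, not the plugin's exact code: the CGI
search request is simply redirected to Google with sitesearch set to the
wiki's hostname, taken from --url (hence the need for a proper hostname);
$cgi and $query here stand in for the request and the search terms:

    use URI;
    use URI::Escape;

    my $host=URI->new($config{url})->host;
    IkiWiki::redirect($cgi, "http://www.google.com/search?sitesearch=".
            uri_escape_utf8($host)."&q=".uri_escape_utf8($query));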
for. This supports most of the ACL-type things users have been wanting.
Closes: #443346
(It does not control who can read a page, but that's out of scope for
ikiwiki.)
These were probably not currently buggy, but let's avoid bugs being
introduced by called functions clobbering $_.
This avoids another one of those $_ scoping issues where a deep call to a
function that changes $_ clobbers the array that is being looped over.
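The class of bug in miniature (deep() stands for any function that ends
up modifying $_ somewhere down its call chain):

    # Buggy: foreach aliases $_ to each element, so if anything in
    # deep()'s call chain assigns to $_ (a bare while (<$fh>) is
    # enough), it overwrites the element being looped over.
    foreach (@pages) {
            deep($_);
    }

    # Safe: a lexical loop variable cannot be clobbered by callees.
    foreach my $page (@pages) {
            deep($page);
    }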
Whenever the edit form is submitted but not saved, the page location
select should reduce to the currently selected value. Previously this
was only done when previewing, but it is also needed to support adding
an attachment to a page that is just being created.
Before this change, the attachment plugin would get a weird value in
$form->field("page") that did not reflect the actual page location.
It makes sense to use bestlink to determine which page rootpage refers to,
but if no page matches, just use the raw value.
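As a sketch, that logic is roughly (bestlink is the real IkiWiki function;
the surrounding parameter handling is illustrative):

    my $rootpage=bestlink($params{page}, $params{rootpage});
    # bestlink returns "" when no page matches; fall back to the raw value
    $rootpage=$params{rootpage} unless length $rootpage;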
If indexpages is enabled, then foo/index.mdwn will look like a subpage
of foo, so an additional check is needed to avoid trying to rename it
twice.
newpagefile.
Note that newpagefile is not used here (or in recentchanges) because
the internal-use pages they generate are transient and unlikely to
benefit from each being put in its own subdir.
Note that the page filename code used here and in editpage is identical.
Initial draft; may need to factor the new page filename code out into a
helper function if other plugins need to do the same.
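If factored out, such a helper might look roughly like this (a sketch;
the name and the indexpages logic are assumptions based on the
surrounding commits):

    sub newpagefile ($$) {
            my ($page, $type)=@_;
            # with indexpages, new pages are written as foo/index.mdwn
            # instead of foo.mdwn
            return $config{indexpages} ? "$page/index.$type" : "$page.$type";
    }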
I noticed that ikiwiki/formatting was being rebuilt when any page changed.
This turned out to be because it contained a complex conditional
"enabled(foo) or enabled(bar)", and the conditional plugin did not notice
that this consisted only of enabled() tests, and copied it unchanged into
add_depends. Thus, the page's dependencies were satisfied by any page
change.
The fix is to beef up the parser so that it can handle that and more
complex conditionals, and detect if they consist only of such tests.
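For example, a conditional of this shape (the then clause is elided here)
now only depends on which plugins are enabled, not on page content:

    [[!if test="enabled(foo) or enabled(bar)" then="..."]]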
To handle this, avoid populating %renderedfiles in preview, and in
expiry, check whether the file is in %renderedfiles; if it is, do not
delete it, since it was saved.
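A hedged sketch of the expiry-side check (%renderedfiles is IkiWiki's
real state hash, mapping each page to the files it rendered, and prune
is its file-removal helper; @expired and the loop are illustrative):

    # flatten %renderedfiles into a set of saved files
    my %saved=map { $_ => 1 } map { @{$renderedfiles{$_}} } keys %renderedfiles;
    foreach my $file (@expired) {
            # files rendered and saved during this run must not be expired
            next if $saved{$file};
            prune($config{destdir}."/".$file);
    }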
files rendered during page preview.
Upgrades to the new index format should be transparent.
The version field is 3, because 1 was the old textual index, 2 was the
pre-versioned format.
This also includes some efficiency improvements to index loading, by
using a reference instead of copying a hash.
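The efficiency point in miniature (the $index structure here is
illustrative): copying a hash duplicates every key and value, while
taking a reference does not:

    # copies the whole entry out of the loaded index
    my %pagedata=%{$index->{page}{$page}};

    # just points at it; no per-key copying, however large the entry is
    my $pagedata=$index->{page}{$page};
    my $ctime=$pagedata->{ctime};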
toplevel templates directory.
* htmltidy: Avoid returning undef if tidy fails. Also avoid returning the
untidied content if tidy crashes. In either case, it seems best to tidy
the content to nothing.
* htmltidy: Avoid spewing tidy errors to stderr.
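A hedged sketch of that failure handling (the tidy flags are real; the
plumbing is illustrative, not the plugin's exact code):

    use IPC::Open2;

    my $pid=open2(\*TIDYOUT, \*TIDYIN, 'tidy -quiet -asxhtml 2>/dev/null');
    print TIDYIN $content;
    close TIDYIN;
    my $ret=join('', <TIDYOUT>);
    close TIDYOUT;
    waitpid $pid, 0;
    # if tidy crashed or produced nothing, tidy the content to nothing
    # rather than passing the untidied content through
    return '' if ! defined $ret || $ret eq '';
    return $ret;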
Conflicts:
	IkiWiki/Plugin/recentchanges.pm
Note that smcv's approach of using urlto also gets the url right when
redirecting to a non-html file, which is a better approach than my
recent fix to recentchanges.
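The shape of that approach, as a sketch (urlto and redirect are existing
IkiWiki functions; urlto's third argument requests an absolute url, and
$cgi and $link stand in for the request and the target page):

    IkiWiki::redirect($cgi, urlto($link, undef, 1));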
acting on a set of pages.
tagbase, when it's set.
Seems that the problem is that once the \nnn coming from git is converted
to a single character, decode_utf8 decides that this is a standalone
character, and not part of a multibyte utf-8 sequence, and so does nothing.
I tried playing with the utf-8 flag, but that didn't work. Instead, use
decode("utf8"), which doesn't have the same qualms, and successfully
decodes the octets into a utf-8 character.
Rant:
Think for a minute about the fact that any and every program that parses git-log,
or git-show, etc output to figure out what files were in a commit needs to
contain this snippet of code, to convert from git-log's wacky output to a
regular character set:
    if ($file =~ m/^"(.*)"$/) {
            # convert \nnn octal escapes back into raw octets
            ($file=$1) =~ s/\\([0-7]{1,3})/chr(oct($1))/eg;
    }
(And it's only that "simple" if you don't care about filenames with
embedded \n or \t or other control characters.)
Does that strike anyone else as putting the parsing and conversion in the
wrong place (ie, in gitweb, ikiwiki, etc, etc)? Doesn't anyone who actually
uses git with utf-8 filenames get a bit pissed off at seeing \xxx\xxx
instead of the utf-8 in git-commit and other output?
I saw this in the wild: apparently a page was not present on disk, but
was in the aggregate db, and was not marked as expired either. Not sure
how that happened, but such pages should get marked as expired, since
they have an effectively zero ctime.
most edge cases seem handled too