People seem to expect to be able to enter www.foo.com and get away with it.
The resulting my.wiki/www.foo.com link was not ideal.
To fix it, use URI::Heuristic to expand such things into a real url. It
even looks up hostnames in the DNS if necessary.
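A minimal sketch of the kind of expansion URI::Heuristic provides (the input value here is just an example; the actual call site and option handling in ikiwiki may differ):

```perl
#!/usr/bin/perl
# Illustrative only: turn a bare "www.foo.com" into a usable URL so the
# resulting wikilink does not end up pointing at my.wiki/www.foo.com.
use strict;
use warnings;
use URI::Heuristic qw(uf_uristr);

my $input = "www.foo.com";
my $url   = uf_uristr($input);   # guesses a scheme, e.g. "http://www.foo.com"
print "$url\n";
```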
sorting in inlines.
aggregation is run, even if the usual time has not passed. Closes: #508622 (Michael Gold)
rather than closing the pipe, which it dislikes. (thm)
titlepage normally escapes, but so does formbuilder.
Had forgotten to include it in the option list.
The old method failed for '[' x 3.
time).
Used the contrib version of the plugin page since it seemed better than the
other one.
Since ikiwiki uses open :utf8, perl assumes that files contain valid utf-8.
If a file turns out to be malformed, ikiwiki may later crash while processing
strings read from it, with 'Malformed UTF-8 character (fatal)'.
As at least a quick fix, use utf8::valid as soon as data is read, and if
it's not valid, call encode_utf8 on the string, thus clearing the utf-8
flag. This may cause follow-on encoding problems, but will avoid this
crash, and the input file was broken anyway, so GIGO is a reasonable
response. (I looked at calling decode_utf8 after, but it seemed to cause
more trouble than it was worth. BTW, use open ':encoding(utf8)' avoids
this problem, but the corrupted data later causes Storable to crash when
writing the index.)
This is a quick fix, clearly imperfect:
- It might be better to explicitly call decode_utf8 when reading files,
rather than using the IO layer.
- Data read other than by readfile() can still sneak in bad utf-8. While
ikiwiki does very little file input not using it, stdin for the CGI
would be one way.
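A rough sketch of the quick fix described above, using an illustrative readfile()-style helper rather than ikiwiki's actual code:

```perl
#!/usr/bin/perl
# Illustrative only: read a file through the :utf8 IO layer, then verify
# the result with utf8::valid(); if it is malformed, encode_utf8() clears
# the utf-8 flag so later string processing cannot die with
# "Malformed UTF-8 character (fatal)".
use strict;
use warnings;
use Encode;

sub readfile_checked {
    my $file = shift;
    open(my $in, "<:utf8", $file) or die "cannot read $file: $!";
    local $/;                        # slurp the whole file
    my $ret = <$in>;
    close $in;
    if (defined $ret && ! utf8::valid($ret)) {
        $ret = encode_utf8($ret);    # GIGO: keep going with the raw bytes
    }
    return $ret;
}
```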
This is necessary so that things that fork to the background,
like pinger, and inline ping, don't block other cgis from running.
Note that websetup also calls unlockwiki, before refreshing / rebuilding
the wiki. It makes perfect sense for that not to block other cgis.
* Stop busy-waiting in lockwiki, as this could delay ikiwiki's wakeup by up
to one second. The bailout code is no longer needed.
* Remove support for unused optional wait parameter from lockwiki.
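A minimal sketch of the blocking-lock approach (the lockfile path and sub names are illustrative; ikiwiki's real lockwiki()/unlockwiki() differ in detail):

```perl
#!/usr/bin/perl
# Illustrative only: take the wiki lock with one blocking flock() call
# instead of polling in a sleep loop, and release it by closing the
# handle, which is what lets forked background work (pinger, websetup's
# rebuild) stop blocking other CGIs.
use strict;
use warnings;
use Fcntl qw(:flock);

my $lockfh;

sub lockwiki {
    open($lockfh, ">", ".ikiwiki/lockfile")
        or die "cannot write lockfile: $!";
    # No LOCK_NB: flock() blocks until the lock is free and the kernel
    # wakes the process immediately, so no busy-wait is needed.
    flock($lockfh, LOCK_EX) or die "failed to get lock: $!";
}

sub unlockwiki {
    close $lockfh if $lockfh;
    undef $lockfh;
}
```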
earlier added to edit links.
This fixes a problem exposed by the recent change to tags
(a2839de9362187b67b0e3a564461e272e64fd9b4). That recorded tag links as
absolute by including a leading slash in the link. The same could also be
done with an absolute wikilink.
In either case, link() would not match such links, unless the leading slash
was included in the link to match. But that's not right, because pagespecs
match absolute by default. So strip the leading slash.
Note that to keep any existing `link(/foo)` pagespecs working after this
change, the leading slash is removed from there, too.
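A sketch of the normalisation this describes (illustrative names only; the real change is in how links are recorded and how link() pagespecs are matched):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative only: strip any leading slash before recording a link,
# since pagespecs already match link() targets absolutely, so
# "/sandbox/foo" and "sandbox/foo" should match the same way.
sub normalise_link {
    my $link = shift;
    $link =~ s{^/+}{};
    return $link;
}

print normalise_link("/sandbox/foo"), "\n";   # prints "sandbox/foo"
```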
parsing of any directives on the page.
links. Since this needs the just-released XML::Feed 0.3, as well as a not yet
released XML::RSS, it will fall back to the old method if no xml:base info is
available.
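For context, a hedged illustration of resolving a feed item's relative link against a base URL (the URLs are made up; the real aggregate code gets the base from the feed's xml:base via XML::Feed when available, and falls back to the old method otherwise):

```perl
#!/usr/bin/perl
# Illustrative only: make a possibly-relative link absolute against a
# base URL such as one taken from xml:base.
use strict;
use warnings;
use URI;

my $base = "http://example.com/blog/";
my $abs  = URI->new_abs("../images/photo.png", $base);
print "$abs\n";    # http://example.com/images/photo.png
```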
(ie, otl inside a mdwn page, or syntax highlighted code inside a page).
In this mode, rebuild mode should not be on.
The syslog value from the setup file is purposefully ignored when doing
ikiwiki -setup, so that it will output to stdout (while generating wrappers
that do use the syslog). But that caused -dumpsetup to not preserve
the syslog value from the setup file.
Move shortcut processing back to checkconfig, and avoid it failing if the
srcdir is not defined.
This speeds up web commits by 1/4th of a second or so, since perl does
not have to start up for the post commit hook.
perl's locking is completely FUBAR, since it's impossible to tell what perl
flock() really does, and thus difficult to write code in other languages
that interoperates with perl's locking. (Let alone interoperating with
existing fcntl locking from perl...)
In this particular case, I think I was able to find a way to avoid the
insanity, mostly. The C code does a true flock(2), and if perl is using an
incompatible lock method that does not use the same locking primitive at
the kernel level, then the C code's test will fail, and it will go ahead
and run the perl code. Then the perl code's test will test the right thing.
On Debian, at least lately, perl's flock() does a true flock(2), so the
optimisation does work.
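The probe the C wrapper is described as doing amounts to roughly the following, rendered in Perl here for consistency with the other sketches (the real wrapper does it in C with flock(2) before perl ever starts; the lockfile path is illustrative):

```perl
#!/usr/bin/perl
# Illustrative only: try to take the lock non-blockingly. If that fails,
# a real flock(2) holder (a web commit in progress) already has it, so
# the post-commit hook has nothing to do and can exit without the
# expensive startup. Otherwise fall through to the full code, whose own
# test then checks the right thing.
use strict;
use warnings;
use Fcntl qw(:flock);

if (open(my $fh, ">>", ".ikiwiki/lockfile")) {
    if (! flock($fh, LOCK_EX | LOCK_NB)) {
        exit 0;    # lock held via flock(2); skip the real work
    }
    close $fh;     # lock was free (or held incompatibly); run the real code
}
# ... normal post-commit processing would continue here ...
```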
Still need to investigate possible races, and test some more.