Since ikiwiki uses open :utf8, perl assumes that files contain valid utf-8.
If a file turns out to be malformed, perl may later crash while processing
strings read from it, with 'Malformed UTF-8 character (fatal)'.
As at least a quick fix, use utf8::valid as soon as data is read, and if
it's not valid, call encode_utf8 on the string, thus clearing the utf-8
flag. This may cause follow-on encoding problems, but it avoids the
crash, and the input file was broken anyway, so GIGO is a reasonable
response. (I looked at calling decode_utf8 afterwards, but it seemed to
cause more trouble than it was worth. BTW, using open ':encoding(utf8)'
avoids this problem, but the corrupted data later causes Storable to crash
when writing the index.)
This is a quick fix, clearly imperfect:
- It might be better to explicitly call decode_utf8 when reading files,
  rather than using the IO layer.
- Bad utf-8 can still sneak in via data read other than by readfile().
  While ikiwiki does very little file input without it, stdin for the CGI
  would be one way.
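
A minimal sketch of the quick fix (the helper below is illustrative, not
ikiwiki's actual readfile()):

    use Encode;

    sub readfile_utf8 {
        my $file = shift;
        open(my $in, "<:utf8", $file) or die "cannot read $file: $!";
        local $/;                   # slurp the whole file
        my $content = <$in>;
        close $in;
        # If the data is not valid utf-8, clear the utf-8 flag so later
        # string handling cannot die with 'Malformed UTF-8 character'.
        $content = encode_utf8($content) unless utf8::valid($content);
        return $content;
    }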
This is necessary so that things that fork to the background, like pinger
and inline's pinging, don't block other cgis from running.
Note that websetup also calls unlockwiki before refreshing or rebuilding
the wiki. It makes perfect sense for that not to block other cgis.
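
Roughly, the pattern is as follows (unlockwiki is ikiwiki's real function;
do_pings stands in for whatever slow work gets backgrounded):

    # Release the wiki lock before forking, so the backgrounded child
    # cannot keep other cgis waiting on it.
    unlockwiki();
    defined(my $pid = fork()) or die "fork failed: $!";
    if (! $pid) {
        do_pings();     # child: slow network work happens here
        exit 0;
    }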
Signed-off-by: intrigeri <intrigeri@boum.org>
* Stop busy-waiting in lockwiki, as this could delay ikiwiki's wakeup by
  up to one second. The bailout code is no longer needed.
* Remove support for the unused optional wait parameter from lockwiki.
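
A sketch of the blocking version, assuming the usual flock-on-a-lockfile
approach (the path comes from ikiwiki's %config; the surrounding code is
illustrative):

    use Fcntl qw(:flock);

    sub lockwiki {
        open(my $fh, '>', "$config{wikistatedir}/lockfile")
            or die "cannot write to lockfile: $!";
        # A plain blocking flock wakes as soon as the lock is free:
        # no polling loop, no one-second sleeps, no bailout timeout.
        flock($fh, LOCK_EX) or die "failed to get lock";
        return $fh;
    }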
Fixed by making the cgi wrapper wait on a cgilock.
If you had to set apache's MaxClients low to avoid ikiwiki thrashing your
server, you can now turn it back up to a high value.
The downside is that even cgi calls that don't need to call lockwiki (for
example, do=search) are serialised, so only one can run at a time. There
are few such calls, all of them call loadindex, and each still eats gobs
of memory, so serialising them still seems ok.
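
In Perl terms, what the wrapper does amounts to this (the real wrapper is
generated C; the path again assumes ikiwiki's %config):

    use Fcntl qw(:flock);

    open(my $cgilock, '>', "$config{wikistatedir}/cgilock")
        or die "cannot write to cgilock: $!";
    # Each cgi queues here and proceeds one at a time, so a burst of
    # requests no longer spawns many memory-hungry processes at once.
    flock($cgilock, LOCK_EX) or die "failed to get cgilock";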
... that I previously completely missed.
Signed-off-by: intrigeri <intrigeri@boum.org>
Signed-off-by: intrigeri <intrigeri@boum.org>
This is not needed now that tagpage returns a page name starting with a
slash.
(This also fixes a minor bug where the edit links started with double
slashes, due to the hack.)
Yahoo! has been pounding on ikiwiki.cgi again. While I'd prefer ikiwiki to
generate sites that avoid robots hitting ikiwiki.cgi in other ways, I'm
adding a robots.txt, at least temporarily.
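
A plausible minimal robots.txt for this (the file actually shipped may
differ):

    User-agent: *
    Disallow: /ikiwiki.cgi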
earlier added to edit links.
ikiwiki/markdown is a basewiki page and shouldn't link to pages in tips.
Instead, make the tips link to it, so backlinks will point back to them.
While I'm at it, move the info about the emacs mode to a tip.
There was already a tip about it; move the plasticboy version there.
This fixes a problem exposed by the recent change to tags
(a2839de9362187b67b0e3a564461e272e64fd9b4). That change recorded tag links
as absolute, by including a leading slash in the link. The same could also
be done with an absolute wikilink.
In either case, link() would not match such links unless the leading slash
was also included in the link() pagespec. But that's not right, because
pagespecs match absolute by default. So strip the leading slash.
Note that to keep any existing `link(/foo)` pagespecs working after this
change, the leading slash is stripped there, too.
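
The normalization boils down to stripping the slash on both sides before
matching; a minimal illustration:

    # Strip any leading slash from a recorded link, since pagespecs
    # match absolute by default anyway.
    my $link = "/sandbox/page";
    $link =~ s{^/+}{};          # now "sandbox/page"

    # The same stripping keeps existing link(/foo) pagespecs working:
    my $spec_arg = "/foo";
    $spec_arg =~ s{^/+}{};      # matches the normalized links above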
parsing of any directives on the page.
links. Since this needs the just-released XML::Feed 0.3, as well as a
not-yet-released XML::RSS, it will fall back to the old method if no
xml:base info is available.
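
A hedged sketch of the fallback, assuming XML::Feed 0.3's per-entry base()
accessor ($entry and $feedurl are illustrative):

    # Prefer xml:base info when the parsed feed entry provides it ...
    my $base = $entry->can("base") ? $entry->base : undef;
    # ... otherwise fall back to the old method of resolving links
    # against the feed's own url.
    $base ||= $feedurl;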