mode, use time tag.
I forgot CGI::FormBuilder's horrible interface, which needs template
parameters instead of a constructed object.
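
For the record, the interface in question looks roughly like this (a
sketch; field and file names are made up):

    use CGI::FormBuilder;

    # CGI::FormBuilder wants the template described by construction
    # parameters, rather than accepting an already-built
    # HTML::Template object.
    my $form = CGI::FormBuilder->new(
        fields   => [qw(page editcontent)],
        template => {
            type     => 'HTML',          # i.e. HTML::Template
            filename => 'editpage.tmpl', # hypothetical template file
        },
    );
    print $form->render;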
Remove wikitemplates page; fold its contents into templates page.
Update all backlinks. Document new ability to put templates inside srcdir.
This entailed changing template_params; it no longer takes the template
filename as its first parameter.
Add template_depends to api and replace calls to template() with
template_depends() in appropriate places, where a dependency should be
added on the template.
Other plugins don't use template(), so they will need further work.
Also, includes are disabled for security. Enabling includes only when using
templates from the templatedir would be nice, but would add a lot of
complexity to the implementation.
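
A sketch of what a converted call site looks like (illustrative, not
a real diff from the tree):

    # Before: no dependency on the template file was recorded.
    #   my $template = template("inlinepage.tmpl", blind_cache => 1);

    # After: template_depends() also records that the page depends on
    # the template, so a change to the template rebuilds the page.
    my $template = template_depends("inlinepage.tmpl", $params{page},
        blind_cache => 1);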
* Automatically run --gettime the first time ikiwiki is run on
a given srcdir.
* Optimise --gettime for git, so it's appropriately screamingly
fast, as sketched below. (This could be done for other backends too.)
* However, --gettime for git no longer follows renames.
* Use the above to fix up timestamps on docwiki, as well as to ensure
that timestamps on basewiki files shipped in the deb are sane.
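
The git speedup comes from a single pass over the whole history,
instead of one git invocation per file. Roughly (a sketch of the
idea, not the actual implementation; it ignores renames, which is
why --gettime no longer follows them):

    open(my $log, "-|", "git", "log",
        "--pretty=format:%ct", "--name-only")
        or die "git log: $!";
    my (%ctime, %mtime, $time);
    while (my $line = <$log>) {
        chomp $line;
        if ($line =~ /^\d+$/) {
            # a commit timestamp; git log is newest-first
            $time = $line;
        }
        elsif (length $line) {
            # a file touched by that commit: the first time seen is
            # the newest (mtime), the last is the oldest (ctime)
            $mtime{$line} = $time unless exists $mtime{$line};
            $ctime{$line} = $time;
        }
    }
    close $log;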
* Rename --getctime to --gettime. (The old name still works for
backwards compatibility.)
* --gettime now also looks up last modification time.
* Add rcs_getmtime to plugin API; currently only implemented
for git. (A sketch of the hook follows below.)
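
The registration follows the existing rcs_getctime pattern; a
skeleton for a backend (body elided):

    hook(type => "rcs", id => "rcs_getmtime", call => \&rcs_getmtime);

    # Return the last-modification time of $file according to the
    # RCS, as seconds since the epoch.
    sub rcs_getmtime ($) {
        my $file = shift;
        # backend-specific history lookup goes here
    }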
There's a gotcha when pagespec_match_list is used with a dependency
type that is not a full content dependency: ikiwiki then does not know
that a content change to a page that sorted too low to match needs to
trigger a rebuild, since the page's sort order may have changed.
Inline is mostly ok in this regard, as it does use content
dependencies, except in the case of raw mode. But then, page metadata
is documented to not be loaded in raw mode, so it doesn't make sense
to use sortspecs that depend on metadata. I hope. :)
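
For instance (an illustrative call), a caller that sorts on metadata
can sidestep the gotcha by asking for a full content dependency:

    # deptype("content") requests the full dependency, so a content
    # change to any candidate page, even one that sorted too low to
    # match, will trigger a rebuild.
    my @posts = pagespec_match_list($params{page}, "blog/*",
        deptype => deptype("content"),
        sort    => "meta(title)",
        num     => 10,
    );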
Conflicts:
debian/NEWS
Also rename cmpspec_translate (internal function) to sortspec_translate
for consistency.
As I was adding ngettext support, I realized I could optimize the gettext
functions by memoizing the creation of the gettext object. Note that
the object creation is still deferred until a gettext function is called,
to avoid unnecessary startup penalties on code paths that do not need
gettext.
A side benefit is that separate stub functions are no longer needed to
handle the C language case.
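
The memoization is roughly this shape (a sketch; _gettext_obj is a
made-up helper name):

    my $gettext_obj;

    sub _gettext_obj {
        # Memoized: the Locale::gettext object is created once, but
        # only when a gettext function is first called, so startup
        # paths that never translate anything pay no penalty.
        unless (defined $gettext_obj) {
            $gettext_obj = eval {
                require Locale::gettext;
                Locale::gettext->domain('ikiwiki');
            };
        }
        return $gettext_obj;
    }

    sub gettext ($) {
        my $obj = _gettext_obj();
        # No object (e.g. the C locale): fall back to the input
        # string, so no separate stub functions are needed.
        return defined $obj ? $obj->get($_[0]) : $_[0];
    }

    sub ngettext ($$$) {
        my $obj = _gettext_obj();
        return defined $obj ? $obj->nget(@_)
                            : ($_[2] == 1 ? $_[0] : $_[1]);
    }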
plugins from the setup file.
template is filled out. This improves the search plugin's indexing,
since it will not include navigational elements from the page
template or sidebar.
I made match_* functions whose influences can vary depending on the
page being matched set a special "" influence to indicate this.
add_depends can then try just one page and, if only static influences
are found, stop there.
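
Schematically, and hypothetically (match_example is invented; the
influence list follows the SuccessReason convention of
page => dependency-type pairs):

    # A match_* function whose influences vary with the page being
    # matched reports its real influence plus the special "" marker:
    sub match_example ($$;@) {
        my $page = shift;
        my $arg  = shift;
        return IkiWiki::SuccessReason->new(
            "$page matches $arg",
            $page => $IkiWiki::DEPEND_CONTENT, # varies per page
            ""    => 1,                        # "influences vary" flag
        );
    }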
To avoid breaking plugins, also support the old pagespec_match_list
calling convention, with a deprecation warning.
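
Both calling conventions, side by side (a sketch; pagespec and
parameters are illustrative):

    # Old convention, deprecated: the caller supplies the candidate
    # pages as an array reference and handles dependencies itself.
    my @old = pagespec_match_list([keys %pagesources], "blog/*",
        location => $page);

    # New convention: the first argument is the page the match is
    # for, letting pagespec_match_list record the dependency (and
    # its influences) on that page's behalf.
    my @new = pagespec_match_list($page, "blog/*",
        deptype => deptype("presence"));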
Taking advantage of every single one of its features, of course.
Even had to add one more...
Also update docs, test suite.
Conflicts:
debian/changelog
This is sorta an optimisation, and sorta a bug fix. In one
test case I have available, it can speed a page build up from 3
minutes to 3 seconds.
The root of the problem is that $links{$page} contains arrays of
links, rather than hashes of links. And when a link is found,
it is just pushed onto the array, without checking for dups.
Now, the array is emptied before scanning a page, so there
should not be a lot of opportunity for lots of duplicate links
to pile up in it. But, in some cases, they can, and if there
are hundreds of duplicate links in the array, then scanning it
for matching links, as match_link and some other code does,
becomes much more expensive than it needs to be.
Perhaps the truly right fix would be to change the data structure
to a hash. But the list of links is never accessed like that; you
always want to iterate through it.
I also looked at deduping the list in saveindex, but that does
a lot of unnecessary work, and doesn't completely solve the problem.
So, finally, I decided to add an add_link function that handles deduping,
and make ikiwiki-transition remove the old dup links.
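
The new function is essentially this (a minimal sketch of the
deduping push):

    # Record a link from $page to $link, skipping exact duplicates,
    # so $links{$page} stays duplicate-free however many times a
    # page links to the same place.
    sub add_link ($$) {
        my $page = shift;
        my $link = shift;
        push @{$links{$page}}, $link
            unless grep { $_ eq $link } @{$links{$page}};
    }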