A few bits about the RCS backends
[[toc ]]
## Terminology
``web-edit'' means that a page is edited by using the web (CGI) interface,
as opposed to using an editor and the RCS interface.
## [[Subversion]]
Subversion was the first RCS to be supported by ikiwiki.
### How does it work internally?
* Master repository M.
* RCS commits from the outside are installed into M.
* There is a working copy of M (a checkout of M): W.
* HTML is generated from W. `rcs_update()` will update from M to W.
* CGI operates on W. `rcs_commit()` will commit from W to M. (A rough
  sketch of these two operations follows below.)
For all the gory details of how ikiwiki handles this behind the scenes,
see [[commit-internals]].
* You browse and web-edit the wiki on W.
* W "belongs" to ikiwiki and should not be edited directly.
## darcs (not yet included)
Support for using darcs as a backend is being worked on by Thomas
Schwinge, although development is currently on hold.
There is a patch in [[todo/darcs]].
### How will it work internally?
* ``Master'' repository R1.
* RCS commits from the outside are installed into R1.
* HTML is generated from R1. HTML is automatically generated (by using a
  ``post-hook'') each time a new change is installed into R1; see the
  sketch after this list. It follows that `rcs_update()` is not needed.
* There is a working copy of R1: R2.
* CGI operates on R2. `rcs_commit()` will push from R2 to R1.
* You browse the wiki on R1 and web-edit it on R2. This means, for
  example, that R2 needs to be updated from R1 before you web-edit a
  page, as the user might otherwise be irritated...
* How do changes get from R1 to R2? Currently only internally in
  `rcs_commit()`. Is `rcs_prepedit()` suitable?
* It follows that the HTML rendering and the CGI handling can be
  completely separate parts of ikiwiki.
* What repository should [[RecentChanges]] and [[History]] work on? R1?
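A sketch of how the ``post-hook'' might be wired up: darcs supports
per-command posthooks, so R1 could carry defaults entries that rebuild
the HTML after every `darcs apply` (the setup path is an example, and
the exact defaults spelling may vary between darcs versions):

    # In R1: rebuild the wiki whenever a patch is applied.
    echo 'apply posthook ikiwiki --setup /path/to/ikiwiki.setup' >> _darcs/prefs/defaults
    echo 'apply run-posthook' >> _darcs/prefs/defaults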
### Rationale for doing it differently than in the Subversion case
darcs is a distributed RCS, which means that every checkout of a
repository is equal to the repository it was checked out from. There is
no forced hierarchy.
R1 is nevertheless called the master repository. It's used for
collecting all the changes and publishing them: on the one hand via the
rendered HTML and on the other via the standard darcs RCS interface.
R2, the repository the CGI operates on, is just a checkout of R1 and
doesn't really differ from the other checkouts that people will branch
off from R1.
(To be continued.)
### Another possible approach
Here's what I (tuomov) think would be a “cleaner” approach:
- Upon starting to edit, Ikiwiki gets a copy of the page, and
  `darcs changes --context`. This context and the present version of
  the page are stored as the “version” of the page in a hidden control
  of the HTML. Thus the HTML includes all that is needed to generate a
  patch against the state of the repository at the time the edit was
  started. This is of course all that darcs needs.
- Once the user is done with editing, Ikiwiki generates a patch bundle
  for darcs. This should be easy with the existing `Text::Diff` or some
  such module, as the web edits only concern single files. The reason
  the old version of the page is stored in the HTML (possibly
  compressed) is so that this diff can be generated.
- Now this patch bundle is applied with `darcs apply`, or sent by email
  for moderation… there are many possibilities.
This approach avoids some of the problems with concurrent edits that the
previous one may have, although there may be conflicts, which may or may
not propagate to the displayed web page. (Unfortunately `darcs apply`
has no option to generate some sort of ‘conflict resolution bundle’.)
Also, only one repository is needed, as it is never directly modified by
Ikiwiki.
This approach might be applicable to other distributed VCSs as well, although they're not as oriented
towards transmitting changes with standalone patch bundles (often by email) as darcs is.
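In command-line terms, the edit/save cycle described above might look
something like this (a sketch only; the file names are hypothetical, and
wrapping the diff into a real darcs patch bundle is exactly the part the
plugin would have to implement, e.g. with `Text::Diff`):

    # When the edit starts: record the repository context and the old page.
    darcs changes --context > edit.context
    cp repo/page.mdwn page.orig

    # When the user saves: diff old vs. new, wrap the diff and edit.context
    # into a patch bundle (the plugin's job), then apply it to the single
    # repository, or mail it off for moderation.
    diff -u page.orig repo/page.mdwn > page.diff
    darcs apply --repodir repo page.bundle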
The mercurial plugin seems to just use one repo and edit it directly - is
there some reason that's okay there but not for darcs? I agree with tuomov
that having just the one repo would be preferable; the point of a dvcs is
that there's no difference between one repo and another. I've got a
darcs.pm based on mercurial.pm, that's almost usable... --bma
IMHO it comes down to whatever works well for a given RCS. Seems like
the darcs approach could be done with most any distributed system, but
it might be overkill for some (or all?). While there is the incomplete
darcs plugin in [[todo/darcs]], if you submit one that's complete, I
will probably accept it into ikiwiki.. --[[Joey]]
## [[Git]]
Regarding the Git support, Recai says:
I have been testing it for the past few days and it seems satisfactory.
I haven't observed any race condition regarding concurrent blog commits,
and it handles merge conflicts gracefully as far as I can see.

As you may notice from the patch size, Git support is not so trivial to
implement (for me, at least). Being a fairly fresh code base, it has
some bugs. It also has some drawbacks (especially wrt merging, which was
the hard part). Git doesn't have functionality similar to `svn merge
-rOLD:NEW FILE` (please see the relevant comment in mergepast for more
details), so I had to invent an ugly hack just for this purpose.
By design, the Git backend uses a "master-clone" repository pair, in
contrast to the single-repository approach (here, the clone may be
considered the working copy of a fictitious web user). Even though a
single-repository implementation is possible, it somewhat increases the
code complexity of the backend (I couldn't figure out a uniform method
which doesn't depend on the preferred repository model, yet).
By exploiting the fact that the master repo and the web user's repo
(`srcdir`) are both on the same local machine, I suggest creating the
latter with the `git clone -l -s` command to save disk space.
Note that, as a rule of thumb, you should always put the RCS wrapper
(`post-update`) into the master repository (`.git/hooks/`), as can be
seen in the Git wrappers of the sample [[ikiwiki.setup]].
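Concretely, that setup might be created along these lines (a sketch;
the paths and the wrapper file name are examples):

    # Share objects between the master repository and the web user's clone.
    git clone -l -s /srv/git/wiki /srv/ikiwiki/srcdir

    # The compiled rcs wrapper goes into the *master* repository's hooks.
    cp ikiwiki-post-update /srv/git/wiki/.git/hooks/post-update
    chmod +x /srv/git/wiki/.git/hooks/post-update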
## [[Mercurial]]
The Mercurial backend is still in an early phase, so it may not be
mature enough, but it should be simple to understand and use.
As Mercurial is a distributed RCS, it lacks the distinction between a
repository and a working copy (every working copy is a repo).

This means that the Mercurial backend uses the repository directly as
the working copy (the master M and the working copy W described in the
svn example are the same thing).

You only need to specify 'srcdir' (the repository M) and 'destdir'
(where the HTML will be generated).
* Master repository M.
* RCS commits from the outside are installed into M.
* M is directly used as the working copy (M is also W).
* HTML is generated from the working copy in M. `rcs_update()` will
  update to the last committed revision in M (the same as 'hg update').
* If you use an 'update' hook, you can automatically generate the HTML
  in the destination directory each time 'hg update' is called; see the
  sketch after this list.
* CGI operates on M. `rcs_commit()` will commit directly in M.
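Such an 'update' hook might be wired up like this (a sketch; 'update' is
a standard Mercurial hook, the setup path is an example):

    # In M: regenerate the HTML after every 'hg update'.
    cat >> .hg/hgrc <<'EOF'
    [hooks]
    update = ikiwiki --setup /path/to/ikiwiki.setup
    EOF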
If you have any questions or suggestions about the Mercurial backend,
please refer to Emanuele.
## [[tla]]

## [[Monotone]]

There is an unfinished patch in [[bugs/Monotone_rcs_support]].
In normal use, monotone has a local database as well as a workspace/working copy.
In ikiwiki terms, the local database takes the role of the master repository, and
the srcdir is the workspace. As all monotone workspaces point to a default
database, there is no need to tell ikiwiki explicitly about the "master" database. It
will know. (BTW - this is also true of subversion. It might be possible to simplify the svn config?)
The patch currently supports normal committing and getting the history of the page.
To understand the parallel commit approach, you need to understand monotone's
approach to conflicts:
Monotone allows multiple micro-branches in the database. There is a
command, `mtn merge`, that takes the heads of all these branches and
merges them back together (turning the tree of branches into a DAG).
Conflicts in monotone (at the time of writing) need to be resolved
interactively during this merge process.

It is important to note that having multiple heads is not an error
condition in a monotone database; this condition will occur in normal
use. In this case 'update' will choose a head if it can, or complain and
tell the user to merge.
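A quick illustration of that behaviour, using standard mtn commands:

    mtn update   # picks a head if it can; with multiple heads it refuses
                 # and tells the user to merge
    mtn heads    # lists the current heads of the branch
    mtn merge    # interactively merges the heads back into a single one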
The monotone ikiwiki plugin borrows some ideas from the svn ikiwiki
plugin. On `rcs_prepedit()` we record the revision that this change is
based on (I'll refer to this as the prepedit revision). When the web
user saves the page, we check whether that is still the current
revision. If it is, then we commit. If it isn't, then we check whether
anyone else changed the file we're editing while we've been editing (a
diff between the prepedit revision and the current revision). If there
were no changes to the file we're editing, then we commit as normal.
All of this should work with the current patch.
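In shell terms, the save-time check amounts to roughly this (a sketch;
the page name is an example, and the real plugin would go through
monotone's automate interface rather than parsing command output):

    prepedit_rev=...                  # recorded by rcs_prepedit()
    head_rev=$(mtn automate heads)    # current head at save time

    if [ "$prepedit_rev" = "$head_rev" ]; then
        mtn commit -m "web commit" page.mdwn   # nothing changed meanwhile
    else
        # Did anyone touch *this* page since the edit began?
        mtn diff -r "$prepedit_rev" -r "$head_rev" page.mdwn
        # empty diff for this page -> commit as normal
        # non-empty diff -> commit as a branch from $prepedit_rev
    fi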
It is only if there have been parallel changes to the file we're trying to commit that
things get hairy. In this case the current (implemented but untested) approach is to
commit the web changes as a branch from the prepedit revision. This
will leave the repository with multiple heads. At this stage, all data is saved, but there
is no way to resolve the potential conflict using the web interface.
In the specific case of a branch caused by a web edit, it may be
possible to resolve the conflict through the current web interface. This
may be possible because we know that merging the two revisions we have
(the new branch and the prepedit revision) involves at most one
conflicted file.

We could use `mtn explicit_merge` to merge the revisions. If that
succeeds without conflicts, then good. If it fails, then we could use a
special lua merge hook to spit out the conflict-marked file, hand it
back to the web interface, and then abort the merge. At the same time,
we'd have to modify the prepedit data to include both parents, so that
when the user saves again we know we're in this case.
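The merge attempt itself might look like this (`explicit_merge` is a
real mtn command; the conflict-dumping lua hook is hypothetical):

    # Try to merge the web edit's head with the other head.
    mtn explicit_merge "$web_head" "$other_head" "$branch"

    # On conflict, a custom lua merge hook (loaded with --rcfile) could
    # write out the conflict-marked file for the web interface and then
    # fail the merge, leaving both heads in place.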
If you get a commit and your prepedit data includes two revids, then we
form a commit manually using the automate interface, the same way we
currently build the micro-branch. However, while the conflicts were
being resolved, someone could have come along and introduced another
one. So after forming this merge revision, you need to go back and check
whether the workspace revision has changed, and possibly go through the
whole process again. This repeats until you're merged.
The end result of all of this is a system that can resolve all web conflicts without race
conditions. (And because of the way monotone works it saves all data, including
both sides of the merge, before the merge. You can go back later and check that
the merge was reasonable.) It still doesn't provide a web-based way of merging multiple
heads that come in through non-web interaction with monotone.