bug in get_extra commuting patch

This bug is triggered if, for whatever reason, Darcs finds itself in a position where a patch shared by both repositories appears to depend on a patch that is local to only one of them. Such a dependency would be absurd. We can divide this error into three known cases:

  1. Darcs thinks a patch is shared, but it’s actually local to one of the repositories
  2. The contrary: Darcs mistakenly believes a shared patch is local
  3. There is a real dependency and it indicates a deep Darcs bug (see issue1014, where, due to Darcs 2 semantics with respect to duplicate patches, a common patch really can depend on two different local patches)

So what do you do when you see an error message like this? Contact the darcs team so that we can try to develop a workaround together. Also try to determine whether you have been bitten by one of the known cases above, or whether you may have found yet another way to trigger this error :-(

darcs convert fails!

The problem: suppose you have a plain darcs 1 repository, want to convert it to the darcs 2 format, and get an error message of the following type:

darcs failed:  Error applying hunk to file foo

What are possible problems, and their solutions?

One possible cause is that you have some “corrupt hunks” within some of your patches. The old-fashioned repository format used by Darcs 1 was prone to this sort of corruption. Darcs 2 uses hashed repositories that avoid the problem. See DarcsTwo for more details.

An example: suppose the file foo contains only the line ‘abc’.

A patch that deletes the first line ‘abc’ and replaces it with ‘def’ (created e.g. with diff -u) might contain something like the following:

@@ -1 +1 @@
-abc
+def

The problem now arises if you have a patch that tries to delete the first line ‘ghi’ and replace it with ‘def’. This will not work, because the first line actually reads ‘abc’, not ‘ghi’. Such a patch might look like this:

@@ -1 +1 @@
-ghi
+def

The original, plain darcs 1 repositories can (with some ingenuity…) be brought into a state where you have such corrupt hunks that fail to apply. This can become obvious when you try darcs convert, because the new formats try to prevent this kind of corruption.

How can you fix it?

  1. Create a new repository with darcs init
  2. Pull your patches from the dubious repository, one by one, until you identify a patch that fails. Write down the timestamp associated with the patch.
  3. Figure out why the patch fails. Create a patch file manually: darcs diff -p 'thefailingpatchname' filename > fail.patch, where ‘thefailingpatchname’ is a pattern that identifies the patch, and filename is the name of the file for which the hunk fails to apply.
  4. Try to apply the patch manually:

    patch filename fail.patch

    and see what happens. It also helps to look at the patch and the file it is trying to modify; you may, for instance, see that the patch wants to change a line that does not exist in the file.
  5. Try to find a way to make the patch apply. Use darcs revert to revert the file to the original state, and then edit the patch by hand. For example, if the problem is that the patch should be replacing the line ‘abc’ with ‘def’ instead of trying to replace ‘ghi’ with ‘def’, replace ‘ghi’ with ‘abc’.
  6. Once you have a version of the patch that applies,

    1. make a temporary copy of the dubious repository (e.g. with cp -R),
    2. and change the corresponding patch in the temporary repository: The patches are in _darcs/patches, and filenames start with a number that indicates the timestamp (see step 2). 20091001104115…. is a patch that was made on the 1st of October 2009, at 10:41 and 15 seconds (GMT). To edit the patches you need to gunzip them, make the changes, and then gzip them.
  7. Repeat steps 1 and 2, pulling from the temporary copy of the dubious repository, and see whether you can now pull the fixed patch.

Repeat for any remaining failing patches.

  8. Once you have a repository with no more remaining failing patches, you can run darcs convert.
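The edit cycle from steps 6–7 can be sketched as a shell session. The patch file name, its contents, and the tmp-repo directory below are hypothetical stand-ins for illustration only, not real output of darcs:

```shell
set -e
# Stand-in for the temporary copy of the dubious repository (step 6 would use cp -R).
mkdir -p tmp-repo/_darcs/patches
# A dummy gzipped patch; the name prefix 20091001104115 is the timestamp noted in step 2.
printf 'hunk ./foo 1\n-ghi\n+def\n' > tmp-repo/_darcs/patches/20091001104115-dummy
gzip tmp-repo/_darcs/patches/20091001104115-dummy

# Step 6b: gunzip the patch, fix the wrong context line ('ghi' should be 'abc'), re-gzip.
gunzip tmp-repo/_darcs/patches/20091001104115-dummy.gz
sed -i 's/^-ghi$/-abc/' tmp-repo/_darcs/patches/20091001104115-dummy
gzip tmp-repo/_darcs/patches/20091001104115-dummy

# The fixed patch now deletes 'abc' instead of the non-existent 'ghi'.
gzip -dc tmp-repo/_darcs/patches/20091001104115-dummy.gz
```

After this, pulling from tmp-repo (step 7) should accept the fixed patch.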

Darcs send

Very large patch bundles

Check to see if the remote repository has tagged recently.

If they have not tagged in a while, it might be a good time for them to do so. In general, a good time to tag would be when you make a new release.

If they have tagged recently and the bundles are still large, have a look at your inventory file (_darcs/hashed_inventory). Do you see a long inventory with tag(s) in it? Try darcs optimize reorder.

What may have happened is that you pulled the tag on top of some local patches. The optimize reorder command rearranges your repository to give you “clean” tags (tags that only follow patches they depend on), which results in much smaller bundles.

If none of this makes any sense, give us a shout on the mailing list or on the IRC channel.

Problems applying patches sent by email

This is due to an incorrect MIME parser (issue26) in Darcs. The current workaround is to open your mailbox in an alternative client such as mutt, or to save the raw message to a file and open that file with mutt -f.

Darcs push (ssh stdout problems)


$ darcs push user@remote:/path/to/repo

 "darcs failed:  Not a repository: user@remote:/path/to/repo ((scp)
 failed to fetch: user@remote:/path/to/repo/_darcs/inventory."


Customized rc files (.bashrc, .zshrc) on the remote machine that print to stdout when sourced interfere with the communication between darcs and the remote repository. If you use darcs push with the verbose flag:

$ darcs push -v user@remote:/path/to/repo

you should see the remote computer’s stdout interleaved with darcs’s own output.


Removing informative statements that print to stdout on login, such as an echo of the date, should fix the issue. You could probably also script around the problem, for example by printing only when the shell is interactive.
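A minimal sketch of such a guard for the remote ~/.bashrc (the greeting text is a hypothetical example): the case on $- prints only when the shell is interactive, so non-interactive ssh sessions started by darcs stay silent.

```shell
# In ~/.bashrc on the remote machine: only print in interactive shells.
case $- in
  *i*) echo "Welcome! Today is $(date)" ;;  # interactive: a human is logged in
  *)   : ;;  # non-interactive (e.g. darcs/scp over ssh): print nothing
esac
```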

HTTP problems

The official darcs statically linked binary doesn’t honor the http_proxy environment variable

Trying to use darcs clone over HTTP proxy results with the following error:

No proxy support for HTTP package yet (try libcurl)!

Please use the dynamically linked binary instead.

If you see something like:

darcs: error while loading shared libraries: cannot open shared object file: No such file or directory

a quick fix like creating a proper symlink in /usr/lib might help.

The dynamically linked binary can fetch data over HTTP proxies because it makes use of libcurl.

Fail: : hGetChar: end of file

This can happen with darcs record.

It can be caused by piping data into darcs when it expects to work interactively, for example when trying to pipe in a list of files to record:

$ find . -name '*.cgi' | grep www | xargs darcs record

Instead, pass the file names as arguments, which achieves the same thing while keeping record interactive:

$ darcs record `find . -name '*.cgi' | grep www`
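The failure mode can be illustrated without darcs. In this sketch, a shell function stands in for an interactive prompt; piping file names into it means the "prompt" reads a file name instead of your reply, and once the pipe is exhausted it hits end of input, just as darcs hits hGetChar: end of file:

```shell
# A stand-in for an interactive prompt such as the one `darcs record` shows.
ask() {
  if read -r answer; then
    echo "answer was: $answer"
  else
    echo "end of input"   # analogous to darcs's hGetChar: end of file
  fi
}
# Piping file names into the "prompt" makes it read a file name, not your reply:
printf 'foo.cgi\nbar.cgi\n' | ask
```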

Waiting for lock ..

This is usually followed by “Couldn’t get lock”; see below.

Couldn’t get lock …/lock

This means that darcs tried to access a repository that is locked, i.e. marked as being currently accessed by a different copy of darcs.

If you are confident the repo shouldn’t be locked (there’s no other copy of darcs running), you can unlock it manually by deleting the lock file _darcs/lock. Then, run darcs check.

Heap exhausted; Current maximum heap size is 268435456 bytes (256 Mb)

I got this error on win32 pushing a 394 MB local repo with text and large binary files to an empty local repo. The error is followed by: use `+RTS -M<size>' to increase it. This does not completely describe the command line needed to fix it, since you also need -RTS to mark where darcs’s own arguments begin. A complete command would read something like darcs +RTS -M512M -RTS push <YOUR_REPO>

darcs: getCurrentDirectory: resource exhausted (Too many open files)

This may happen when attempting to pull many patches at once.

By default, Mac OS X only allows each process to have 256 files open. For performance reasons, darcs keeps a lot of files open when pulling patches and may exceed this limit. In most cases, the solution is to increase the limit. This can be done by using the ulimit bash command:

$ ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) 6144
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m) unlimited
open files                    (-n) 256
pipe size          (512 bytes, -p) 1
stack size            (kbytes, -s) 8192
cpu time             (seconds, -t) unlimited
max user processes            (-u) 266
virtual memory        (kbytes, -v) unlimited
$ ulimit -n unlimited
$ ulimit -n 256

Sometimes, calling ulimit will result in the following error:

-bash: ulimit: open files: cannot modify limit: Operation not permitted

If this happens, start a new shell and try it in the new shell. (There should be a more convenient way to do it but I don’t know of one.)
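One quick way to check whether ulimit takes effect in a fresh shell is to run it in a child bash, so the change cannot affect your current session (256 here just echoes the default Mac OS X value; lowering the soft limit is always permitted, while raising it may need the new shell described above):

```shell
# Change and query the open-files limit inside a child shell only.
bash -c 'ulimit -n 256; ulimit -n'
```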

If this doesn’t work for some reason, the issue can usually be worked around by pulling fewer patches.