Subversion - Troubleshoot Problems
My repository seems to get
stuck all the time, giving me errors about needing recovery (DB_RUNRECOVERY).
What could be the cause?
The Berkeley DB database
in your repository is sensitive to interruptions. If a process accessing the
database exits without "cleanly" closing the environment, then the
database is left in an inconsistent state. Common causes of this include:
· the process exiting when it hits a permission problem
· the process crashing or segfaulting
· the process being forcibly killed
· the system running out of disk space
For most of these cases,
you should run "svnadmin recover", which rewinds the repository back
to a consistent state; see this question for details. Note that
running out of disk space, combined with frequent checkouts or updates, can
cause the repository to crash in a way where recovery is not possible (so keep
backups).
Segfaults, forced
killings, and running out of disk space are pretty rare. Permission problems
are far more common: one process accesses the repository and accidentally
changes ownership or permissions, then another process tries to access and
chokes on the permissions.
The best way to prevent
this is to get your repository permissions and ownership set up correctly.
See here for our recommendations.
Every time I try to access
my repository, the process just hangs. Is my repository corrupt?
Your repository is not corrupt, nor is your data lost. If your
process accesses the repository directly (mod_dav_svn, svnlook, svnadmin, or if
you access a `file://' URL), then it's using Berkeley DB to access your data.
Berkeley DB is a journaling system, meaning that it logs everything it is about
to do before it does so. If your process is interrupted (Control-C, or
segfault), then a lockfile is left behind, along with a logfile describing
unfinished business. Any other process that attempts to access the database
will just hang, waiting for the lockfile to disappear. To awaken your
repository, you need to ask Berkeley DB to either finish the work, or rewind
the database to a previous state that is known to be consistent.
WARNING: you can seriously
corrupt your repository if you run recover and another process accesses the
repository.
Make absolutely sure you disable all access to the repository
before doing this (by shutting down Apache, removing executable permissions
from 'svn'). Make sure you run this command as the user that owns and manages
the database, not as root; otherwise recovery will leave root-owned files in
the db directory which cannot be opened by the non-root user that manages the
database (typically either you or your Apache process). Also be sure to have
the correct umask set when you run recover, since failing to do so will lock
out users in the group allowed to access the repository.
Simply run:
svnadmin recover /path/to/repos
Once the command has completed, check the permissions in
the db directory of the repository.
Sometimes "svnadmin recover" doesn't work. You may
see it give errors like this:
Repository lock acquired.
Please wait; recovering the repository may take some time...
svnadmin: DB_RUNRECOVERY: Fatal error, run database recovery
svnadmin: bdb: Recovery function for LSN 175 7066018 failed on backward pass
svnadmin: bdb: PANIC: No such file or directory
svnadmin: bdb: PANIC: fatal region error detected; run recovery
or like this:
Repository lock acquired.
Please wait; recovering the repository may take some time...
svn: DB_RUNRECOVERY: Fatal error, run database recovery
svn: bdb: DB_ENV->log_flush: LSN of 115/802071 past current end-of-log of 115/731460
svn: bdb: Database environment corrupt; the wrong log files may have been removed or incompatible database files imported from another environment
[...]
svn: bdb: changes: unable to flush page: 0
svn: bdb: txn_checkpoint: failed to flush the buffer cache Invalid argument
svn: bdb: PANIC: Invalid argument
svn: bdb: PANIC: fatal region error detected; run recovery
svn: bdb: PANIC: fatal region error detected; run recovery
[...]
In that case, try Berkeley DB's native db_recover utility
(see the db_recover documentation). It usually lives in
a "bin/" subdirectory of the Berkeley DB installation. For example, if
you installed Berkeley DB from source, it might
be /usr/local/BerkeleyDB.4.2/bin/db_recover; on systems where Berkeley
DB comes prepackaged it might just be /usr/bin/db_recover. If you have multiple
versions of Berkeley DB installed, make sure that the version of db_recover you
use matches the version of Berkeley DB with which your repository was created.
Run db_recover with the "-c" ("catastrophic
recovery") flag. You can also add "-v" for verbosity, and
"-h" with an argument telling it what db environment to recover (so
you don't have to cd into that directory). Thus:
db_recover -c -v -h /path/to/repos/db
Run this command as the same user that owns the repository, and
again, make absolutely sure that no other processes are accessing the
repository while you do this (e.g., shut down svnserve or Apache).
My repository keeps giving
errors saying "Cannot allocate memory". What should I do?
If you're using http:// access, "Cannot allocate memory"
errors show up in the httpd error log and look something like this:
[Wed Apr 07 04:26:10 2004] [error] [client 212.151.130.227] (20014) Error string not specified yet: Berkeley DB error while opening 'strings' table for filesystem /usr/local/svn/repositories/svn/db: Cannot allocate memory
[Wed Apr 07 04:26:10 2004] [error] [client 212.151.130.227] Could not fetch resource information. [500, #0]
[Wed Apr 07 04:26:10 2004] [error] [client 212.151.130.227] Could not open the requested SVN filesystem [500, #160029]
[Wed Apr 07 04:26:10 2004] [error] [client 212.151.130.227] (17) File exists: Could not open the requested SVN filesystem [500, #160029]
It usually means that a Berkeley DB repository has run out of
database locks (this does not happen with FSFS repositories). It shouldn't
happen in the course of normal operations, but if it does, the solution is to
run database recovery as described here. If it happens often, you probably need
to raise the default lock parameters (set_lk_max_locks, set_lk_max_lockers,
and set_lk_max_objects) in the db/DB_CONFIG file. When changing DB_CONFIG
in an existing repository, remember to run recovery afterwards.
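As a sketch, a DB_CONFIG with raised lock limits might look like the following. The numeric values are illustrative starting points only, not tuned recommendations, and the repos/db path here is a local stand-in for a real repository's db/ directory:

```shell
# Create a stand-in for the repository's db/ directory and write an
# example DB_CONFIG raising the three lock limits.  The values are
# illustrative; after editing DB_CONFIG in a real repository, run
# 'svnadmin recover' on it as described above.
mkdir -p repos/db
cat > repos/db/DB_CONFIG <<'EOF'
set_lk_max_locks   4000
set_lk_max_lockers 4000
set_lk_max_objects 4000
EOF
cat repos/db/DB_CONFIG
```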
Every time I try to run an
svn command, it says my working copy is locked. Is my working copy corrupt?
Your working copy is not corrupt, nor is your data lost.
Subversion's working copy is a journaling system, meaning that it logs
everything it is about to do before it does so. If the svn client program is
interrupted violently (segfault or killed, not with Control-C), then one or
more lockfiles are left behind, along with logfiles describing unfinished
business. (The `svn status' command will show an 'L' next to locked directories.)
Any other process that attempts to access the working copy will fail when it
sees the locks. To awaken your working copy, you need to tell the svn client to
finish the work. Simply run:
svn cleanup working-copy
I'm trying to commit, but
Subversion says my working copy is out of date?
Three kinds of situations can cause this:
1. Debris from a failed commit is littering your working copy.
You may have had a commit that
went sour between the time the new revision was added in the server and the
time your client performed its post-commit admin tasks (including refreshing
your local text-base copy). This might happen for various reasons including
(rarely) problems in the database back end or (more commonly) network dropouts
at exactly the wrong time.
If this happens, it's possible
that you have already committed the very changes you are trying now to commit.
You can use 'svn log -rHEAD' to see if your supposed-failed commit actually
succeeded. If it did, run 'svn revert' to revert your local changes, then run
'svn update' to get your own changes back from the server. (Note that only 'svn
update' brings your local copies up-to-date; revert doesn't do that.)
2. Mixed revisions.
When Subversion commits, the
client only bumps the revision numbers of the nodes the commit touches, not all
nodes in the working copy. This means that in a single working copy, the files
and subdirectories might be at different revisions, depending on when you last
committed them. In certain operations (for example, directory property modifications),
if the repository has a more recent version of the node, the commit will be
rejected, to prevent data loss. See Mixed revisions have limitations in
the Version
Control with Subversion book for details.
You can fix the problem by running
'svn update' in the working copy.
3. You might be genuinely out of date: that is, you're
trying to commit a change to a file that has been changed by someone else since
you last updated your copy of that file. Again, 'svn update' is the way to fix
this.
I've contributed a patch
to a project and the patch added a new file. Now svn update does not
work.
In order to include your new file in the patch you likely ran
the svn add command so that the svn diff command would
include the new file in the patch. If your patch is committed to the code base
and you run an svn update, then you might receive an error message of:
"svn: Failed to add file 'my.new.file': object of the same name already
exists".
The reason that you received this error is that you still have
your local copy of the file in your working copy. The steps to correct this
problem are:
1. Run the svn revert command to remove the scheduled add
within Subversion.
2. Delete the file or move it to a location outside your working
copy.
3. Now you should be able to run the svn update command.
You might want to compare the new file from the repository with
your original file.
I just built the
distribution binary, and when I try to check out Subversion, I get an error
about an "Unrecognized URL scheme." What's up with that?
Subversion uses a plugin system to allow access to repositories.
Currently there are three of these plugins: ra_local allows access to a local
repository; ra_neon and ra_serf allow access to a repository via WebDAV;
and ra_svn allows local or remote access via the svnserve server. When you
attempt to perform an operation in Subversion, the program tries to dynamically
load a plugin based on the URL scheme. A `file://' URL will try to load
ra_local, and an `http://' URL will try to load ra_neon or ra_serf.
The error you are seeing means that the dynamic linker/loader
can't find the plugins to load. For `http://' access, this normally means that
you have not linked Subversion to neon or serf when compiling it (check the
configure script output and the config.log file for information about this). It
also happens when you build Subversion with shared libraries, then attempt to
run it without first running 'make install'. Another possible cause is that you
ran make install, but the libraries were installed in a location that the
dynamic linker/loader doesn't recognize. Under Linux, you can allow the linker/loader
to find the libraries by adding the library directory to /etc/ld.so.conf and
running ldconfig. If you don't wish to do this, or you don't have root access,
you can also specify the library directory in the LD_LIBRARY_PATH environment
variable.
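For example, assuming the libraries landed under /usr/local/lib (a placeholder; substitute wherever 'make install' actually put them), the per-session and system-wide fixes look roughly like this:

```shell
# Hypothetical install prefix; substitute the directory where
# 'make install' actually put the Subversion libraries.
SVN_LIB_DIR=/usr/local/lib

# Per-session fix: prepend the directory to the loader search path.
export LD_LIBRARY_PATH="$SVN_LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"

# System-wide alternative (requires root):
#   echo "$SVN_LIB_DIR" >> /etc/ld.so.conf
#   ldconfig
```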
How can I specify a
Windows drive letter in a file: URL?
Like this:
svn import file:///d:/some/path/to/repos/on/d/drive
See Subversion Repository URLs in the
Subversion Book for more details.
Why does SVN log say
"(no author)" for files committed or imported via Apache (ra_dav)?
If you allow anonymous write access to the repository via Apache,
the Apache server never challenges the SVN client for a username, and instead
permits the write operation without authentication. Since Subversion has no
idea who did the operation, this results in a log like this:
$ svn log
------------------------------------------------------------------------
rev 24: (no author) | 2003-07-29 19:28:35 +0200 (Tue, 29 Jul 2003)
See the Subversion book to learn about
configuring access restrictions in Apache.
I can see my repository in
a web browser, but 'svn checkout' gives me an error about "301 Moved
Permanently". What's wrong?
It means your httpd.conf is misconfigured. Usually this error
happens when you've defined the Subversion virtual "location" to
exist within two different scopes at the same time.
For example, if you've exported a repository as <Location
/www/foo>, but you've also set your DocumentRoot to be /www,
then you're in trouble. When the request comes in for /www/foo/bar, Apache
doesn't know whether to find a real file
named /foo/bar within your DocumentRoot, or whether to ask
mod_dav_svn to fetch a file /bar from the /www/foo repository.
Usually the former case wins, hence the "Moved Permanently"
error.
The solution is to make sure your
repository <Location> does not overlap or live
within any areas already exported as normal web shares.
It's also possible that you have an object in the web root which
has the same name as your repository URL. For example, imagine your web
server's document root is /var/www and your Subversion repository is
located at /home/svn/repo. You then configure Apache to serve the
repository at http://localhost/myrepo. If you then create the
directory /var/www/myrepo/, this will cause a 301 error to occur.
Why doesn't HTTP Digest
auth work?
This is probably due to a known bug in Apache HTTP Server
(versions 2.0.48 and earlier), for which a patch is available, see http://nagoya.apache.org/bugzilla/show_bug.cgi?id=25040.
You may also want to read over https://issues.apache.org/jira/browse/SVN-1608 to
see if the description there matches your symptoms.
I checked out a directory
non-recursively (with -N), and now I want to make certain subdirectories
"appear". But svn up subdir doesn't work.
See issue 695. The current implementation
of svn checkout -N is quite broken. It results in a working copy
which has missing entries, yet is ignorant of its "incompleteness".
Apparently a whole bunch of CVS users are fairly dependent on this paradigm,
but none of the Subversion developers were. For now, there's really no
workaround other than to change your process: try checking out separate
subdirectories of the repository and manually nesting your working copies.
Why aren't my repository
hooks working?
They're supposed to invoke external programs, but the invocations
never seem to happen.
Before Subversion calls a hook script, it removes all variables
-- including $PATH on Unix, and %PATH% on Windows -- from the environment.
Therefore, your script can only run another program if you spell out that
program's absolute name.
Make sure the hook script is named correctly: for example, the
post-commit hook should be named post-commit (without extension) on
Unix, and post-commit.bat or post-commit.exe on Windows.
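To make that concrete, here is a minimal sketch of a Unix post-commit hook exercised by hand. The repository path, revision number, and log file location are placeholders; the key point is that every external program is invoked by its absolute path, because Subversion runs hooks with an empty environment:

```shell
# Write a minimal post-commit hook.  Subversion passes the repository
# path and the new revision number as arguments, and runs the hook with
# an empty environment, so every program must be called by absolute path.
cat > post-commit <<'EOF'
#!/bin/sh
REPOS="$1"
REV="$2"
/bin/echo "revision $REV committed in $REPOS" >> /tmp/post-commit.log
EOF
chmod +x post-commit

# Exercise it by hand with an empty environment, as Subversion would:
env - ./post-commit /var/lib/svn-repos 1234
cat /tmp/post-commit.log
```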
Debugging tips:
If you're using Linux or Unix, try running the script "by
hand", by following these steps:
1. Use "su", "sudo", or something similar to
become the user who normally would run the script. This might
be httpd or www-data, for example, if you're using Apache; it
might be a user like svn if you're running svnserve and a special
Subversion user exists. This will make clear any permissions problems that the
script might have.
2. Invoke the script with an empty environment by using the
"env" program. Here's an example for the post-commit hook:
$ env - ./post-commit /var/lib/svn-repos 1234
Note the first argument to
"env" is a dash; that's what ensures the environment is empty.
3. Check your console for errors.
I can't hotbackup my
repository, svnadmin fails on files larger than 2Gb!
Early versions of APR on its 0.9 branch, which Apache 2.0.x and
Subversion 1.x use, have no support for copying large files (2Gb+). A fix which
solves the 'svnadmin hotcopy' problem has been applied and is included in APR
0.9.5+ and Apache 2.0.50+. The fix doesn't work on all platforms, but works on
Linux.
I cannot see the log entry
for the file I just committed. Why?
Assume you run 'svn checkout' on a repository and receive a
working copy at revision 7 (aka, r7) with one file in it called foo.c. You
spend some time modifying the file and then commit it successfully. Two things
happen:
· The repository moves to a new HEAD revision on the server. The
number of the new HEAD revision depends on how many other commits were made
since your working copy was checked out. For example, the new HEAD revision
might be r20.
· In your working copy, only the file foo.c moves to r20.
The rest of your working copy remains at r7.
You now have what is known as a mixed revision working copy. One file is
at r20, but all other files remain at r7 until they too are committed, or until
'svn update' is run.
$ svn -v status
                7        7 nesscg       .
               20       20 nesscg       foo.c
$
If you run the 'svn log' command without any arguments, it
prints the log information for the current directory (named '.' in the above
listing). Since the directory itself is still at r7, you do not see the log
information for r20.
To see the latest logs, do one of the following:
1. Run 'svn log -rHEAD'.
2. Run 'svn log URL', where URL is the repository URL. If
the current directory is a working copy you can abbreviate the URL to the
repository root as ^/ to save some typing. Note that on Windows the
"^" symbol is special and must be quoted.
E.g.: svn log "^/" --limit 10
3. Run 'svn log URL', where URL is the URL of the
subdirectory you want to see the log for, for
example: svn log ^/trunk
4. Ask for just that file's log information, by running
'svn log foo.c'.
5. Update your working copy so it's all at r20, then run
'svn log'.
After upgrading to
Berkeley DB 4.3 or later, I'm seeing repository errors.
Prior to Berkeley DB 4.3, svnadmin recover worked to
upgrade a Berkeley DB repository in-place. However, due to a change in the
behaviour of Berkeley DB in version 4.3, this now fails.
Use this procedure to upgrade your repository in-place to Berkeley
DB 4.3 or later:
· Make sure no process is accessing the repository (stop Apache and
svnserve, and restrict access via file://, svnlook, svnadmin, etc.)
· Using an older svnadmin binary (that is, one linked to an older
Berkeley DB):
1. Recover the repository:
'svnadmin recover /path/to/repository'
2. Make a backup of the repository.
3. Delete all unused log files. You can see them by running
'svnadmin list-unused-dblogs /path/to/repository'
4. Delete the shared-memory files. These are files in the
repository's db/ directory, of the form __db.00*
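The steps above can be sketched as a script. The svnadmin invocations are shown commented out (they require the older binary), and the final shared-memory cleanup is demonstrated against dummy __db.00* files created here under a placeholder path:

```shell
# Placeholder repository path with dummy shared-memory files, so the
# final cleanup step can be demonstrated without a real repository.
REPOS=repos-to-upgrade
mkdir -p "$REPOS/db"
touch "$REPOS/db/__db.001" "$REPOS/db/__db.002"

# 1. Recover with the OLD svnadmin:   svnadmin recover "$REPOS"
# 2. Back up the repository, e.g.:    cp -a "$REPOS" "$REPOS.bak"
# 3. Remove the unused log files listed by:
#      svnadmin list-unused-dblogs "$REPOS"
# 4. Delete the shared-memory files:
rm -f "$REPOS"/db/__db.00*
ls "$REPOS/db"
```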
The repository is now usable by Berkeley DB 4.3.
I can't add a directory
because Subversion says it's "already under version control".
The directory you're trying to add already contains
a .svn subdirectory — it is a working copy — but
it's from a different repository location than the directory to which you're
trying to add it. This probably happened because you used your operating
system's "copy" command (instead of svn copy) to copy a
subdirectory in this working copy, or to copy some other working copy into this
one.
The quick and dirty solution is to delete
all .svn directories contained in the directory you're trying to add;
this will let the "add" command complete. If you're using Unix, this
command will delete .svn directories under dir:
find dir -type d -name .svn -exec rm -rf {} \;
However, if the copy was from the same repository, you should
ideally delete or move aside the copy, and use svn copy to make a
proper copy, which will know its history and save space in the repository.
If it was from a different repository, you should ask yourself why you
made this copy; and you should ensure that by adding this directory, you won't
be making an unwanted copy of it in your repository.
Why doesn't svn
switch work in some cases?
In some cases where there are unversioned (and maybe ignored) items
in the working copy, svn switch can get an error. The switch stops,
leaving the working copy half-switched.
Unfortunately, if you take the wrong corrective action you can end
up with an unusable working copy. Sometimes in these situations the user is
directed to run svn cleanup, but the svn cleanup may also encounter an
error. See issue #2505.
The user can manually remove the directories or files causing the
problem, and then run svn cleanup, and continue the switch, to recover
from this situation.
Note that a switch from a pristine clean checkout
always works without error. There are three ways of working if you are
using svn switch as part of your development process:
1. Fully clean your working copy of unversioned (including ignored)
files before switching.
WARNING! This deletes all unversioned dirs/files. Be VERY sure that you do not need anything that will be removed.
# Check which unversioned files svn sees:
svn status --no-ignore | grep '^[I?]' | sed 's/^[I?]//'
# Then delete them:
svn status --no-ignore | grep '^[I?]' | sed 's/^[I?]//' | xargs rm -rf
2. Keep a pristine clean checkout. Update that, then
copy it, and switch the copy when a switch to another branch is desired.
3. Live dangerously :). Switch between branches without cleaning up,
BUT if you encounter a switch error, know that you have to recover from it
properly. Delete the unversioned files and the directory that the error was
reported on. Then run "svn cleanup" if needed, and resume the switch.
Unless you delete all unversioned files, you may have to
repeat this process multiple times.
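As a sketch of what the filtering step in option 1 selects, the same grep/sed pipeline can be run against canned 'svn status --no-ignore' output (the sample paths below are invented, and the sed pattern here also strips the column whitespace) to preview what would be handed to rm -rf:

```shell
# Canned 'svn status --no-ignore' output; the paths are invented.
printf '%s\n' \
  '?       build/tmp.o' \
  'I       .idea' \
  'M       src/main.c' > sample-status.txt

# Keep only unversioned (?) and ignored (I) entries and strip the
# status column, as the pipeline in option 1 does before piping the
# survivors into 'xargs rm -rf'.
grep '^[I?]' sample-status.txt | sed 's/^[I?][[:space:]]*//' > to-delete.txt
cat to-delete.txt
```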
Some examples are detailed here
in issue 2505. The problem is that the svn client plays it safe and
doesn't want to delete anything unversioned.
Two specific examples are detailed here to illustrate a problem
like this. There are also other svn switch errors, not covered here, which you
can avoid by switching only from a pristine checkout.
1. If any directory has been moved or renamed between the branches,
then anything unversioned will cause a problem. In this case, you'll see this
error:

wc/$ svn switch $SVNROOT/$project/branches/$ticket-xxx
svn: Won't delete locally modified directory '<dir>'
svn: Left locally modified or unversioned files

Removing all unversioned files, and continuing the switch, will recover from
this.
2. If a temporary build file has ever been added and removed, then a
switch in a working copy containing that unversioned file (likely after a
build) fails. You'll see the same error:

wc/$ svn switch $SVNROOT/$project/branches/$ticket-xxx
svn: Won't delete locally modified directory '<dir>'
svn: Left locally modified or unversioned files

In this case, just removing the unversioned items will not recover. A cleanup
fails, but "svn switch" directs you to run "svn cleanup".
wc/$ svn switch $SVNROOT/$project/branches/$ticket-xxx
svn: Directory '<dir>/.svn' containing working copy admin area is missing
wc/$ svn cleanup
svn: '<dir>' is not a working copy directory
wc/$ svn switch $SVNROOT/$project/branches/$ticket-xxx
svn: Working copy '.' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
Removing the directory (and all
other unversioned files, to prevent "switch" from breaking on a
similar error repeatedly), and continuing the switch will recover from this.
The TortoiseSVN cleanup error is a bit different. You might
encounter this:
Subversion reported an error while doing a cleanup!
<dir>/<anotherdir> is not a working copy directory
In each case here, the "svn switch" breaks, leaving you
with a half-switched working copy. "svn status" will show S for
switched items (different from the top directory), ! for directories with
problems, and ~ for the files that are the problem (and maybe L for
locked). Like this:
wc/$ svn status
!     .
!     <dir>
S     <switched_things>
~     <dir>/<thing_that_is_now_unversioned>
Why am I getting a tree
conflict upon update even though no one else has committed conflicting changes?
When you commit, only the files/directories that are actually
changed by the commit get their base revisions bumped to HEAD in the working
copy. The other files/directories (possibly including the directory you
committed from!) don't get their base revisions bumped, which means Subversion
still considers them to be based on outdated revisions. See also this question and this section of the Subversion book.
This can be confusing, in particular because of tree conflicts you
can inflict upon yourself. E.g. if you add a file to a directory and commit,
and then locally move that directory somewhere else, and then try to commit,
this second commit will fail with an out-of-date error since the directory
itself is still based on an out-of-date revision. When updating, a tree
conflict will be flagged.
Subversion has currently no way of knowing that you yourself just
committed the change which caused the directory to be out-of-date during the
second commit. And allowing an out-of-date directory to be committed may cause
certain tree conflicts not to be detected, so Subversion can't allow you to do
this.
To avoid this problem, make sure to update your entire working
copy before making structural changes such as deleting, adding, or moving files
or directories.
I get "Error
validating server certificate" error even though I configure the SSL
certificates correctly in the server.
This error occurs if the certificate issuer is not recognized as
'Trusted' by the SVN client. Subversion will ask you whether you trust the
certificate and if you want to store this certificate.
$ svn info https://mysite.com/svn/repo
Error validating server certificate for 'https://mysite.com:443':
- The certificate is not issued by a trusted authority. Use the
fingerprint to validate the certificate manually!
Certificate information:
- Hostname: mysite.com
- Valid: from Wed, 18 Jan 2012 00:00:00 GMT until Fri, 18 Jan 2013
23:59:59 GMT
- Issuer: Google Inc, US
- Fingerprint:
34:4b:90:e7:e3:36:81:0d:53:1f:10:c0:4c:98:66:90:4a:9e:05:c9
(R)eject, accept (t)emporarily or accept (p)ermanently?
In some cases, even if you accept the certificate by entering 'p', the
same error appears again the next time you access SVN. There can be multiple
reasons. One is that your ~/.subversion directory has the wrong permissions,
so that each time you try to permanently store the certificate, svn cannot
actually do so, and also doesn't inform you that it can't.
This can be solved either by fixing the permissions (chmod 644) of the files
in the ~/.subversion/auth/svn.ssl.server directory, or by deleting that
directory's contents. If deleted, the directory gets repopulated
automatically the next time you access the repository.
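Both fixes can be sketched against a throwaway copy of the auth area. The directory path and the cached-cert file name below are stand-ins; substitute your real ~/.subversion:

```shell
# Stand-in for ~/.subversion/auth/svn.ssl.server with one cached
# certificate entry whose permissions have gone wrong.
AUTH_DIR=./dot-subversion/auth/svn.ssl.server
mkdir -p "$AUTH_DIR"
touch "$AUTH_DIR/cached-cert"
chmod 000 "$AUTH_DIR/cached-cert"   # simulate the broken permissions

# Fix 1: make the cached entries readable again.
chmod 644 "$AUTH_DIR"/*
ls -l "$AUTH_DIR"

# Fix 2 (alternative): delete the cached entries instead; svn
# repopulates the directory the next time you accept a certificate.
#   rm -f "$AUTH_DIR"/*
```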
After importing files to
my repository, I don't see them in the repository directory. Where are they?
The files are in the repository; you can verify this by running
commands such as svn ls -R, or by trying to checkout a working copy from
the repository:
$ pwd
/var/srv/repositories/foo
$ ls -1
conf
db
format
hooks
locks
README.txt
$ svnlook youngest /var/srv/repositories/foo
1
$ svn ls file:///var/srv/repositories/foo
trunk/
tags/
branches/
The versioned files and directories are simply not stored on disk
in a tree format (the way CVS repositories were), but instead are stored in
database files. The BDB backend uses Berkeley DB databases, and the FSFS
backend uses a custom file format (and may in the future use SQLite
databases).
When does svn
copy create svn:mergeinfo properties?
In general, to avoid some kinds of spurious merge conflicts, the
following rules can be kept in mind:
· When copying/renaming a file or directory within the
trunk or a branch, perform the copy/rename in a working copy. For renames, the
working copy should not be a mixed-revision working copy.
· When copying/renaming an entire branch, perform the copy/rename in
the repository (i.e. via URLs).
During copies where the source is a URL, and the target is either
a URL or in a working copy, explicit mergeinfo is created on the copy target.
This is done so that when a branch is created with
svn copy ^/trunk ^/branches/mybranch
and later an ancestrally unrelated subtree is copied into the
branch using an invocation such as
svn copy ^/branches/another-branch/foo ^/branches/mybranch/bar
the directory /branches/mybranch/bar does not inherit
mergeinfo from its parent /branches/mybranch. Mergeinfo inherited from the
parent might not reflect the factually correct merge history of the new child.
During copies where both the source and the target are within a
working copy, no mergeinfo is created on the copy target (as of Subversion
1.5.5). This assumes the case where a new child is added on the trunk (or a
branch), and this addition is merged to another branch which is kept in sync
using periodic catch-up merges. In this case, the inherited mergeinfo of the
branch's new child is correct, and the creation of explicit mergeinfo could
cause spurious merge conflicts due to apparent, but factually inaccurate,
differences in the child's and parent's merge histories.