The "--no-patch" option, which can be used to get a high-level
overview without the actual line-by-line patch difference shown, of
the "range-diff" command was earlier broken, which has been
corrected.
* ab/range-diff-no-patch:
range-diff: make diff option behavior (e.g. --stat) consistent
range-diff: fix regression in passing along diff options
range-diff doc: add a section about output stability
"git merge" and "git pull" that merges into an unborn branch used
to completely ignore "--verify-signatures", which has been
corrected.
* jk/verify-sig-merge-into-void:
pull: handle --verify-signatures for unborn branch
merge: handle --verify-signatures for unborn branch
merge: extract verify_merge_signature() helper
Various functions have been audited for "-Wunused-parameter" warnings,
and the bugs found in them have been fixed.
* jk/unused-parameter-fixes:
midx: double-check large object write loop
assert NOARG/NONEG behavior of parse-options callbacks
parse-options: drop OPT_DATE()
apply: return -1 from option callback instead of calling exit(1)
cat-file: report an error on multiple --batch options
tag: mark "--message" option with NONEG
show-branch: mark --reflog option as NONEG
format-patch: mark "--no-numbered" option with NONEG
status: mark --find-renames option with NONEG
cat-file: mark batch options with NONEG
pack-objects: mark index-version option as NONEG
ls-files: mark exclude options as NONEG
am: handle --no-patch-format option
apply: mark include/exclude options as NONEG
The way the -lcurl library gets linked has been simplified by taking
advantage of the fact that we can simply ask the curl-config command how
to do it.
* jk/curl-ldflags:
build: link with curl-defined linker flags
Add a few tests for a topic already in 'master'.
* mg/gpg-fingerprint-test:
t/t7510-signed-commit.sh: add signing subkey to Eris Discordia key
t/t7510-signed-commit.sh: Add %GP to custom format checks
The codebase has been cleaned up to reduce "#ifndef NO_PTHREADS".
* nd/pthreads:
Clean up pthread_create() error handling
read-cache.c: initialize copy_len to shut up gcc 8
read-cache.c: reduce branching based on HAVE_THREADS
read-cache.c: remove #ifdef NO_PTHREADS
pack-objects: remove #ifdef NO_PTHREADS
preload-index.c: remove #ifdef NO_PTHREADS
grep: clean up num_threads handling
grep: remove #ifdef NO_PTHREADS
attr.c: remove #ifdef NO_PTHREADS
name-hash.c: remove #ifdef NO_PTHREADS
index-pack: remove #ifdef NO_PTHREADS
send-pack.c: move async's #ifdef NO_PTHREADS back to run-command.c
run-command.h: include thread-utils.h instead of pthread.h
thread-utils: macros to unconditionally compile pthreads API
The revision walker machinery learned to take advantage of the
commit generation numbers stored in the commit-graph file.
* ds/reachable-topo-order:
t6012: make rev-list tests more interesting
revision.c: generation-based topo-order algorithm
commit/revisions: bookkeeping before refactoring
revision.c: begin refactoring --topo-order logic
test-reach: add rev-list tests
test-reach: add run_three_modes method
prio-queue: add 'peek' operation
When writing a bundle to a file, the bundle code actually creates
"your.bundle.lock" using our lockfile interface. We feed that output
descriptor to a child git-pack-objects via run-command, which has the
quirk that it closes the output descriptor in the parent.
To avoid confusing the lockfile code (which still thinks the descriptor
is valid), we dup() it, and operate on the duplicate.
However, this has a confusing side effect: after the dup() but before we
call pack-objects, we have _two_ descriptors open to the lockfile. If we
call die() during that time, the lockfile code will try to clean up the
partially-written file. It knows to close() the file before unlinking,
since on some platforms (i.e., Windows) the open file would block the
deletion. But it doesn't know about the duplicate descriptor. On
Windows, triggering an error at the right part of the code will result
in the cleanup failing and the lockfile being left in the filesystem.
We can solve this by moving the dup() much closer to start_command(),
shrinking the window in which we have the second descriptor open. It's
easy to place this in such a way that no die() is possible. We could
still die due to a signal in the exact wrong moment, but we already
tolerate races there (e.g., a signal could come before we manage to put
the file on the cleanup list in the first place).
As a bonus, this shields create_bundle() itself from the duplicate-fd
trick, and we can simplify its error handling (note that the lock
rollback now happens unconditionally, but that's OK; it's a noop if we
didn't open the lock in the first place).
The included test uses an empty bundle to cause a failure at the right
spot in the code, because that's easy to trigger (the other likely
errors are write() problems like ENOSPC). Note that it would already
pass on non-Windows systems (because they are happy to unlink an
already-open file).
Based-on-a-patch-by: Gaël Lhez <gael.lhez@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Tested-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
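To make the pattern concrete, here is a minimal, self-contained POSIX sketch
of the idea (not the actual bundle.c code; the file names and error handling
are hypothetical, and a plain fork()/exec() stands in for Git's run-command
API): the dup() happens immediately before the spawn and the duplicate is
closed right after it, so the window with two descriptors open is only a few
lines wide.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/wait.h>

    int main(void)
    {
        const char *lockpath = "your.bundle.lock";      /* hypothetical */
        int lock_fd = open(lockpath, O_WRONLY | O_CREAT | O_EXCL, 0666);
        if (lock_fd < 0) {
            perror("open");
            return 1;
        }

        /* The header goes through the original descriptor. */
        const char *header = "# v2 git bundle\n";
        if (write(lock_fd, header, strlen(header)) < 0) {
            perror("write");
            unlink(lockpath);
            return 1;
        }

        /*
         * dup() as close to the spawn as possible: nothing between here
         * and the exec can die while both descriptors are open.
         */
        int out_fd = dup(lock_fd);
        if (out_fd < 0) {
            perror("dup");
            unlink(lockpath);
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {
            dup2(out_fd, STDOUT_FILENO);   /* child writes to the duplicate */
            close(out_fd);
            close(lock_fd);
            execlp("echo", "echo", "pack data would go here", (char *)NULL);
            _exit(127);
        }
        close(out_fd);                     /* parent is done with the duplicate */

        int status;
        waitpid(pid, &status, 0);
        close(lock_fd);
        return rename(lockpath, "your.bundle");    /* "commit" the lockfile */
    }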
bundle: dup() output descriptor closer to point-of-use
Based-on-a-patch-by: Gaël Lhez <gael.lhez@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This reverts the patch we carried for a long time, as it did not survive
contact with the Git mailing list. In preparation for applying Jeff
King's better version, let's revert our version.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This brings substantial performance wins because the FSCache is now
per-thread and is merged into the primary thread's cache only at the end,
so we do not have to lock (except while merging).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Now that the fscache is single-threaded, take advantage of the mem_pool as
the allocator to significantly reduce the cost of allocations and frees.
With the reduced cost of freeing, future patches can start freeing the
fscache at the end of commands instead of just leaking it.
Signed-off-by: Ben Peart <benpeart@microsoft.com>
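To illustrate why a pool allocator makes the frees so cheap, here is a tiny
self-contained memory-pool sketch (hypothetical names; Git's real mem_pool
API in mem-pool.h differs in its details): entries are carved out of large
blocks, and releasing the whole cache means freeing a handful of blocks
instead of every entry individually.

    #include <stdlib.h>

    struct pool_block {
        struct pool_block *next;
        size_t used, size;
        unsigned char data[];
    };

    struct mem_pool_sketch {
        struct pool_block *head;
        size_t block_size;              /* e.g. 64 kB */
    };

    /* Carve one entry off the current block, adding a new block if needed. */
    void *pool_alloc(struct mem_pool_sketch *pool, size_t len)
    {
        len = (len + 7) & ~(size_t)7;   /* naive 8-byte alignment */
        if (!pool->head || pool->head->used + len > pool->head->size) {
            size_t size = len > pool->block_size ? len : pool->block_size;
            struct pool_block *b = malloc(sizeof(*b) + size);
            if (!b)
                return NULL;
            b->next = pool->head;
            b->used = 0;
            b->size = size;
            pool->head = b;
        }
        void *p = pool->head->data + pool->head->used;
        pool->head->used += len;
        return p;
    }

    /* Discarding the cache is one walk over the blocks, not a free() per entry. */
    void pool_discard(struct mem_pool_sketch *pool)
    {
        struct pool_block *b = pool->head;
        while (b) {
            struct pool_block *next = b->next;
            free(b);
            b = next;
        }
        pool->head = NULL;
    }

With this shape, freeing the whole cache at the end of a command would be a
single pool_discard() call, which is why the leak can eventually be plugged
cheaply.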
The threading model for fscache has been to have a single, global cache.
This puts requirements on it to be thread safe so that callers like
preload-index can call it from multiple threads. This was implemented
with a single mutex and completion events which introduces contention
between the calling threads.
Simplify the threading model by making fscache thread specific. This allows
us to remove the global mutex and synchronization events entirely and instead
associate a fscache with every thread that requests one. This works well with
the current multi-threading which divides the cache entries into blocks with
a separate thread processing each block.
At the end of each worker thread, if there is a fscache on the primary
thread, merge the cached results from the worker into the primary thread
cache. This enables us to reuse the cache later especially when scanning for
untracked files.
In testing, this reduced the time spent in preload_index() by about 25% and
also reduced the CPU utilization significantly. On a repo with ~200K files,
it reduced overall status times by ~12%.
Signed-off-by: Ben Peart <benpeart@microsoft.com>
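The threading shape described above can be sketched with self-contained
pthreads code (hypothetical names; a linked list stands in for the real
fscache hashmap): each worker fills its own private cache with no locking at
all, and only the final splice into the primary cache takes a mutex.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct entry {
        long value;                     /* stand-in for a cached lstat() result */
        struct entry *next;
    };

    static struct entry *primary_cache;         /* the primary thread's cache */
    static pthread_mutex_t merge_lock = PTHREAD_MUTEX_INITIALIZER;

    struct worker {
        pthread_t tid;
        long lo, hi;                    /* block of cache entries to process */
        struct entry *cache;            /* thread-private: no locking needed */
    };

    static void *scan_block(void *arg)
    {
        struct worker *w = arg;

        for (long i = w->lo; i < w->hi; i++) {
            struct entry *e = malloc(sizeof(*e));
            e->value = i;
            e->next = w->cache;
            w->cache = e;
        }

        /*
         * At the end of the worker, merge into the primary cache; this
         * splice is the only place any lock is taken.
         */
        pthread_mutex_lock(&merge_lock);
        while (w->cache) {
            struct entry *e = w->cache;
            w->cache = e->next;
            e->next = primary_cache;
            primary_cache = e;
        }
        pthread_mutex_unlock(&merge_lock);
        return NULL;
    }

    int main(void)
    {
        struct worker workers[4];
        memset(workers, 0, sizeof(workers));

        for (int i = 0; i < 4; i++) {
            workers[i].lo = i * 1000;
            workers[i].hi = (i + 1) * 1000;
            pthread_create(&workers[i].tid, NULL, scan_block, &workers[i]);
        }
        for (int i = 0; i < 4; i++)
            pthread_join(workers[i].tid, NULL);

        long n = 0;
        for (struct entry *e = primary_cache; e; e = e->next)
            n++;
        printf("merged %ld cached entries\n", n);
        return 0;
    }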
Update enable_fscache() to take an optional initial size parameter which is
used to initialize the hashmap so that it can avoid having to rehash as
additional entries are added.
Add a separate disable_fscache() macro to make the code clearer and easier
to read.
Signed-off-by: Ben Peart <benpeart@microsoft.com>
Add a GIT_TEST_REBASE_USE_BUILTIN=false test mode which is equivalent
to running with rebase.useBuiltin=false. This is needed to spot that
we're not introducing any regressions in the legacy rebase version
while we're carrying both it and the new builtin version.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The rebase.useBuiltin variable introduced in 55071ea248 ("rebase:
start implementing it as a builtin", 2018-08-07) was turned on by
default in 5541bd5b8f ("rebase: default to using the builtin rebase",
2018-08-08), but had no documentation.
Let's document it so that users who run into any stability issues with
the C rewrite know there's an escape hatch[1], and make it clear that
needing to turn off builtin rebase means you've found a bug in git.
1. https://public-inbox.org/git/87y39w1wc2.fsf@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Let's use the "Hosted" pool: it seems to be under less load, i.e. it
will make our builds faster.
While at it, also disable chain linting and bin-wrappers in the Windows
phase, to save even more time.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This speeds up the tests by a bit on Windows, where running Unix shell
scripts (and spawning processes) is not exactly a cheap operation.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
When building Git with RUNTIME_PREFIX and starting a test helper from
t/helper/, it fails to detect the system prefix correctly.
This causes the warning

    RUNTIME_PREFIX requested, but prefix computation failed. [...]

to be printed.
In t0061, we did not expect that to happen, and it actually did not
happen in the normal case, because bin-wrappers/test-tool specifically
sets GIT_TEXTDOMAINDIR (and as a consequence, nothing in test-tool wants
to know about the runtime prefix).
However, with --with-dashes, bin-wrappers/test-tool is no longer called,
but t/helper/test-tool is called directly.
So let's just ignore the RUNTIME_PREFIX warning.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
We really need to be able to find the test helpers... Really. This
change was forgotten when we moved the test helpers into t/helper/.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
These patches will supersede the corresponding patches that are already
in the branch thicket, during the next merging-rebase.
That merging-rebase is scheduled for v2.20.0-rc0, which is planned for
later tonight. However, the plan is to hold off with pushing this to
`master` until the merging-rebase to v2.20.0-rc1.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The MSDN documentation has been superseded by Microsoft Docs (which is
backed by a repository on GitHub containing many, many files in Markdown
format).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
OSS-Fuzz requires C++-specific flags to link fuzzers. Passing these in
CFLAGS causes lots of build warnings. Using separate FUZZ_CXXFLAGS
avoids this.
Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All config extensions are described in technical/repository-version.txt.
I made the mistake of adding it to config.txt instead. This patch moves
it back to where it belongs.
Since repository-version.txt is not part of officially generated
documents (it's not even part of DOC_HTML target), it's only visible
to developers who read plain .txt files. Let's include it in
gitrepository-layout.5 for more visibility. Some minor asciidoc fixes
are required in repository-version.txt to make this happen.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The command 'git ls-remote --sort=authordate <remote>' segfaults when
run outside of a repository, ever since the introduction of its
'--sort' option in 1fb20dfd8e (ls-remote: create '--sort' option,
2018-04-09).
While in general the 'git ls-remote' command can be run outside of a
repository just fine, its '--sort=<key>' option with certain keys does
require access to the referenced objects. This sorting is implemented
using the generic ref-filter sorting facility, which already handles
missing objects gracefully with the appropriate 'missing object
deadbeef for HEAD' message. However, being generic means that it
checks replace refs while trying to retrieve an object, and while
doing so it accesses the 'git_replace_ref_base' variable, which has
not been initialized and is still a NULL pointer when outside of a
repository, thus causing the segfault.
Make ref-filter more careful upfront while parsing the format string,
and make it error out when encountering a format atom requiring object
access when we are not in a repository. Also add a test to ensure
that 'git ls-remote --sort' fails gracefully when executed outside of
a repository.
Reported-by: H.Merijn Brand <h.m.brand@xs4all.nl>
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This did happen at some stage, and was fixed relatively quickly. Make
sure that we detect it very quickly, too, should that happen again.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is a good idea to error out early upon seeing, say, `-Cbad`, rather
than starting the rebase only to have the `--am` backend complain later.
Let's do this.
The only options accepting parameters which we pass through to `git am`
(which may, or may not, forward them to `git apply`) are `-C` and
`--whitespace`. The other options we pass through do not accept
parameters, so we do not have to validate them here.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, we parse the options intended for `git am` as if we wanted to
handle them in `git rebase`, and then reconstruct them painstakingly to
define the `git_am_opt` variable.
However, there is a much better way (that I was unaware of, at the time
when I mentored Pratik to implement these options): OPT_PASSTHRU_ARGV.
It is intended for exactly this use case, where command-line options
want to be parsed into a separate `argv_array`.
Let's use this feature.
Incidentally, this also allows us to address a bug discovered by Phillip
Wood, where the built-in rebase failed to understand that the `-C`
option takes an optional argument.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
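For readers who have not seen it, an OPT_PASSTHRU_ARGV entry in the option
table looks roughly like this. This is only a sketch against Git's internal
parse-options API as it was around this series (argv_array, N_(),
PARSE_OPT_NONEG), abbreviated rather than runnable on its own; the surrounding
table in builtin/rebase.c is of course larger.

    #include "parse-options.h"
    #include "argv-array.h"

    static struct argv_array git_am_opts = ARGV_ARRAY_INIT;

    static struct option options[] = {
        /*
         * Each matching option, together with its argument, is appended
         * verbatim to git_am_opts instead of being parsed here; the
         * collected array is later handed to `git am`.
         */
        OPT_PASSTHRU_ARGV('C', NULL, &git_am_opts, N_("n"),
                          N_("passed to 'git am'"), PARSE_OPT_NONEG),
        OPT_PASSTHRU_ARGV(0, "whitespace", &git_am_opts, N_("action"),
                          N_("passed to 'git apply'"), 0),
        OPT_END()
    };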
With Windows 10 Build 14972 in Developer Mode, a new flag is supported
by CreateSymbolicLink() to create symbolic links even when running
outside of an elevated session (which was previously required).
This new flag is called SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE and
has the numeric value 0x02.
Previous Windows 10 versions will not understand that flag and return
ERROR_INVALID_PARAMETER; therefore we have to be careful to try passing
that flag only when the build number indicates that it is supported.
For more information about the new flag, see this blog post:
https://blogs.windows.com/buildingapps/2016/12/02/symlinks-windows-10/
This patch is loosely based on the patch submitted by Samuel D. Leslie
as https://github.com/git-for-windows/git/pull/1184.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
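A self-contained sketch of the call follows (the actual patch keys on the
Windows build number as described above; this simplified variant just retries
without the flag when it is rejected, and defines the flag value for older SDK
headers):

    #include <windows.h>
    #include <stdio.h>

    #ifndef SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE
    #define SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE 0x02
    #endif

    /* Try to create a file symlink without requiring an elevated session. */
    int create_symlink_unprivileged(const wchar_t *link, const wchar_t *target)
    {
        if (CreateSymbolicLinkW(link, target,
                                SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE))
            return 0;

        /*
         * Windows 10 before build 14972 (and older Windows versions)
         * reject the flag with ERROR_INVALID_PARAMETER; retry without
         * it, which again requires an elevated session.
         */
        if (GetLastError() == ERROR_INVALID_PARAMETER &&
            CreateSymbolicLinkW(link, target, 0))
            return 0;

        fprintf(stderr, "CreateSymbolicLinkW failed: %lu\n", GetLastError());
        return -1;
    }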
Symlinks on Windows have a flag that indicates whether the target is a file
or a directory. Symlinks of wrong type simply don't work. This even affects
core Win32 APIs (e.g. DeleteFile() refuses to delete directory symlinks).
However, CreateFile() with FILE_FLAG_BACKUP_SEMANTICS doesn't seem to care.
Check the target type by first creating a tentative file symlink, opening
it, and checking the type of the resulting handle. If it is a directory,
recreate the symlink with the directory flag set.
It is possible to create symlinks before the target exists (or in case of
symlinks to symlinks: before the target type is known). If this happens,
create a tentative file symlink and postpone the directory decision: keep
a list of phantom symlinks to be processed whenever a new directory is
created in mingw_mkdir().
Limitations: This algorithm may fail if a link target changes from file to
directory or vice versa, or if the target directory is created in another
process.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
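The probing step can be sketched in self-contained Win32 code (hypothetical
helper name; the phantom-symlink bookkeeping for targets that do not exist yet
is omitted): create the link as a file symlink, open it so that the link is
followed, and recreate it with the directory flag if the target turns out to
be a directory.

    #include <windows.h>

    int create_typed_symlink(const wchar_t *link, const wchar_t *target)
    {
        /* Start with a tentative file symlink. */
        if (!CreateSymbolicLinkW(link, target, 0))
            return -1;

        /* FILE_FLAG_BACKUP_SEMANTICS is needed to open directories, too. */
        HANDLE h = CreateFileW(link, 0,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 0;       /* target missing: keep the file symlink for now */

        BY_HANDLE_FILE_INFORMATION info;
        BOOL is_dir = GetFileInformationByHandle(h, &info) &&
                      (info.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY);
        CloseHandle(h);

        if (is_dir) {
            /* Wrong type: recreate the link with the directory flag set. */
            DeleteFileW(link);
            if (!CreateSymbolicLinkW(link, target, SYMBOLIC_LINK_FLAG_DIRECTORY))
                return -1;
        }
        return 0;
    }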
Implement symlink() that always creates file symlinks. Fails with ENOSYS
if symlinks are disabled or unsupported.
Note: CreateSymbolicLinkW() was introduced with symlink support in Windows
Vista. For compatibility with Windows XP, we need to load it dynamically
and fail gracefully if it isn't available.
Signed-off-by: Karsten Blees <blees@dcon.de>
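A sketch of that dynamic lookup (simplified error mapping; not the actual
mingw.c code):

    #include <windows.h>
    #include <errno.h>

    typedef BOOLEAN (WINAPI *create_symbolic_link_fn)(LPCWSTR, LPCWSTR, DWORD);

    /* Resolve CreateSymbolicLinkW at runtime; fail with ENOSYS where missing. */
    int symlink_sketch(const wchar_t *target, const wchar_t *link)
    {
        static create_symbolic_link_fn create_symbolic_link;

        if (!create_symbolic_link)
            create_symbolic_link = (create_symbolic_link_fn)
                GetProcAddress(GetModuleHandleW(L"kernel32.dll"),
                               "CreateSymbolicLinkW");
        if (!create_symbolic_link) {
            errno = ENOSYS;     /* e.g. Windows XP: no symlink support */
            return -1;
        }
        if (!create_symbolic_link(link, target, 0)) {
            errno = EACCES;     /* simplified error mapping */
            return -1;
        }
        return 0;
    }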
Implement readlink() by reading NTFS reparse points. Works for symlinks
and directory junctions. If symlinks are disabled, fail with ENOSYS.
Signed-off-by: Karsten Blees <blees@dcon.de>
If symlinks are enabled, resolve all symlinks when changing directories,
as required by POSIX.
Note: Git's real_path() function bases its link resolution algorithm on
this property of chdir(). Unfortunately, the current directory on Windows
is limited to only MAX_PATH (260) characters. Therefore using symlinks and
long paths in combination may be problematic.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
MSVCRT's _wrename() cannot rename symlinks over existing files: it returns
success without doing anything. Newer MSVCR*.dll versions probably do not
have this problem: according to CRT sources, they just call MoveFileEx()
with the MOVEFILE_COPY_ALLOWED flag.
Get rid of _wrename() and call MoveFileEx() with proper error handling.
Signed-off-by: Karsten Blees <blees@dcon.de>
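A sketch of the replacement call (simplified errno mapping; the real patch
maps more error codes):

    #include <windows.h>
    #include <errno.h>

    /* Rename via MoveFileEx so that symlinks replace existing files, too. */
    int rename_sketch(const wchar_t *from, const wchar_t *to)
    {
        if (MoveFileExW(from, to,
                        MOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED))
            return 0;

        switch (GetLastError()) {
        case ERROR_ACCESS_DENIED:
            errno = EACCES;
            break;
        case ERROR_FILE_NOT_FOUND:
        case ERROR_PATH_NOT_FOUND:
            errno = ENOENT;
            break;
        default:
            errno = EINVAL;     /* simplified: map everything else here */
        }
        return -1;
    }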
_wunlink() / DeleteFileW() refuses to delete symlinks to directories. If
_wunlink() fails with ERROR_ACCESS_DENIED, try _wrmdir() as well.
Signed-off-by: Karsten Blees <blees@dcon.de>
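In Win32 terms the fallback looks roughly like this (a sketch; the actual
patch works at the _wunlink()/_wrmdir() level):

    #include <windows.h>

    /* If deleting as a file is denied, the path may be a directory symlink. */
    int unlink_sketch(const wchar_t *path)
    {
        if (DeleteFileW(path))
            return 0;
        if (GetLastError() == ERROR_ACCESS_DENIED && RemoveDirectoryW(path))
            return 0;
        return -1;
    }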
Symlinks on Windows don't work the same way as on Unix systems. E.g. there
are different types of symlinks for directories and files, creating
symlinks requires administrative privileges etc.
By default, disable symlink support on Windows. I.e. users explicitly have
to enable it with 'git config [--system|--global] core.symlinks true'.
The test suite ignores system / global config files. Allow testing *with*
symlink support by checking if native symlinks are enabled in MSys2 (via
'MSYS=winsymlinks:nativestrict').
Reminder: This would need to be changed if / when we find a way to run the
test suite in a non-MSys-based shell (e.g. dash).
Signed-off-by: Karsten Blees <blees@dcon.de>