Merge branch 'master' of git://repo.or.cz/alt-git

Johannes Sixt
2009-06-15 08:33:22 +02:00
56 changed files with 8716 additions and 1129 deletions


@@ -31,7 +31,7 @@ OPTIONS
Use the diff tool specified by <tool>.
Valid merge tools are:
kdiff3, kompare, tkdiff, meld, xxdiff, emerge, vimdiff, gvimdiff,
ecmerge, diffuse and opendiff
ecmerge, diffuse, opendiff and araxis.
+
If a diff tool is not specified, 'git-difftool'
will use the configuration variable `diff.tool`. If the


@@ -27,7 +27,7 @@ OPTIONS
Use the merge resolution program specified by <tool>.
Valid merge tools are:
kdiff3, tkdiff, meld, xxdiff, emerge, vimdiff, gvimdiff, ecmerge,
diffuse, tortoisemerge and opendiff
diffuse, tortoisemerge, opendiff and araxis.
+
If a merge resolution program is not specified, 'git-mergetool'
will use the configuration variable `merge.tool`. If the


@@ -31,11 +31,14 @@ OPTIONS
Instead of incrementally packing the unpacked objects,
pack everything referenced into a single pack.
Especially useful when packing a repository that is used
for private development and there is no need to worry
about people fetching via dumb protocols from it. Use
for private development. Use
with '-d'. This will clean up the objects that `git prune`
leaves behind, but `git fsck --full` shows as
dangling.
+
Note that users fetching over dumb protocols will have to fetch the
whole new pack in order to get any contained object, no matter how many
other objects in that pack they already have locally.
-A::
Same as `-a`, unless '-d' is used. Then any unreachable


@@ -12,15 +12,15 @@ SYNOPSIS
DESCRIPTION
-----------
In a workflow that employs relatively long lived topic branches,
the developer sometimes needs to resolve the same conflict over
In a workflow employing relatively long lived topic branches,
the developer sometimes needs to resolve the same conflicts over
and over again until the topic branches are done (either merged
to the "release" branch, or sent out and accepted upstream).
This command helps this process by recording conflicted
automerge results and corresponding hand-resolve results on the
initial manual merge, and later by noticing the same automerge
results and applying the previously recorded hand resolution.
This command assists the developer in this process by recording
conflicted automerge results and corresponding hand resolve results
on the initial manual merge, and applying previously recorded
hand resolutions to their corresponding automerge results.
[NOTE]
You need to set the configuration variable rerere.enabled to
@@ -54,18 +54,18 @@ for resolutions.
'gc'::
This command is used to prune records of conflicted merge that
occurred long time ago. By default, conflicts older than 15
days that you have not recorded their resolution, and conflicts
older than 60 days, are pruned. These are controlled with
This prunes records of conflicted merges that
occurred a long time ago. By default, unresolved conflicts older
than 15 days and resolved conflicts older than 60
days are pruned. These defaults are controlled via the
`gc.rerereunresolved` and `gc.rerereresolved` configuration
variables.
variables respectively.
DISCUSSION
----------
When your topic branch modifies overlapping area that your
When your topic branch modifies an overlapping area that your
master branch (or upstream) touched since your topic branch
forked from it, you may want to test it with the latest master,
even before your topic branch is ready to be pushed upstream:
@@ -140,9 +140,9 @@ top of the tip before the test merge:
This would leave only one merge commit when your topic branch is
finally ready and merged into the master branch. This merge
would require you to resolve the conflict, introduced by the
commits marked with `*`. However, often this conflict is the
commits marked with `*`. However, this conflict is often the
same conflict you resolved when you created the test merge you
blew away. 'git-rerere' command helps you to resolve this final
blew away. 'git-rerere' helps you resolve this final
conflicted merge using the information from your earlier hand
resolve.
@@ -150,33 +150,32 @@ Running the 'git-rerere' command immediately after a conflicted
automerge records the conflicted working tree files, with the
usual conflict markers `<<<<<<<`, `=======`, and `>>>>>>>` in
them. Later, after you are done resolving the conflicts,
running 'git-rerere' again records the resolved state of these
running 'git-rerere' again will record the resolved state of these
files. Suppose you did this when you created the test merge of
master into the topic branch.
Next time, running 'git-rerere' after seeing a conflicted
automerge, if the conflict is the same as the earlier one
recorded, it is noticed and a three-way merge between the
Next time, after seeing the same conflicted automerge,
running 'git-rerere' will perform a three-way merge between the
earlier conflicted automerge, the earlier manual resolution, and
the current conflicted automerge is performed by the command.
the current conflicted automerge.
If this three-way merge resolves cleanly, the result is written
out to your working tree file, so you would not have to manually
out to your working tree file, so you do not have to manually
resolve it. Note that 'git-rerere' leaves the index file alone,
so you still need to do the final sanity checks with `git diff`
(or `git diff -c`) and 'git-add' when you are satisfied.
As a convenience measure, 'git-merge' automatically invokes
'git-rerere' when it exits with a failed automerge, which
records it if it is a new conflict, or reuses the earlier hand
'git-rerere' upon exiting with a failed automerge and 'git-rerere'
records the hand resolve when it is a new conflict, or reuses the earlier hand
resolve when it is not. 'git-commit' also invokes 'git-rerere'
when recording a merge result. What this means is that you do
not have to do anything special yourself (Note: you still have
to set the config variable rerere.enabled to enable this command).
when committing a merge result. What this means is that you do
not have to do anything special yourself (besides enabling
the rerere.enabled config variable).
In our example, when you did the test merge, the manual
In our example, when you do the test merge, the manual
resolution is recorded, and it will be reused when you do the
actual merge later with updated master and topic branch, as long
as the earlier resolution is still applicable.
actual merge later with the updated master and topic branch, as long
as the recorded resolution is still applicable.
The information 'git-rerere' records is also used when running
'git-rebase'. After blowing away the test merge and continuing
@@ -194,11 +193,11 @@ development on the topic branch:
o---o---o---*---o---o---o---o master
------------
you could run `git rebase master topic`, to keep yourself
up-to-date even before your topic is ready to be sent upstream.
This would result in falling back to three-way merge, and it
would conflict the same way the test merge you resolved earlier.
'git-rerere' is run by 'git-rebase' to help you resolve this
you could run `git rebase master topic`, to bring yourself
up-to-date before your topic is ready to be sent upstream.
This would result in falling back to a three-way merge, and it
would conflict the same way as the test merge you resolved earlier.
'git-rerere' will be run by 'git-rebase' to help you resolve this
conflict.
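As a concrete illustration of the workflow described above, here is a minimal
sketch (the branch name `topic` is only an example; nothing is assumed beyond
what the text says):

------------
git config rerere.enabled true     # enable recording of resolutions

git checkout topic
git merge master                   # conflicts; git-merge invokes git-rerere,
                                   # which records the conflicted automerge
# ... resolve the conflicts by hand ...
git add -u
git commit                         # git-commit invokes git-rerere again,
                                   # recording the hand resolution

git reset --hard HEAD~1            # blow the test merge away

# later, when the topic is finally ready:
git checkout master
git merge topic                    # the recorded resolution is replayed
------------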


@@ -14,6 +14,10 @@ SYNOPSIS
DESCRIPTION
-----------
Takes the patches given on the command line and emails them out.
Patches can be specified as files, directories (which will send all
files in the directory), or directly as a revision list. In the
last case, any format accepted by linkgit:git-format-patch[1] can
be passed to git send-email.
The header of the email is configurable by command line options. If not
specified on the command line, the user will be prompted with a ReadLine
@@ -161,7 +165,7 @@ Automating
Output of this command must be a single email address per line.
Default is the value of the 'sendemail.cccmd' configuration variable.
--[no-]chain-reply-to=<identifier>::
--[no-]chain-reply-to::
If this is set, each email will be sent as a reply to the previous
email sent. If disabled with "--no-chain-reply-to", all emails after
the first will be sent as replies to the first email sent. When using
@@ -210,7 +214,8 @@ specified, as well as 'body' if --no-signed-off-cc is specified.
--[no-]thread::
If this is set, the In-Reply-To header will be set on each email sent.
If disabled with "--no-thread", no emails will have the In-Reply-To
header set. Default is the value of the 'sendemail.thread' configuration
header set, unless specified with --in-reply-to.
Default is the value of the 'sendemail.thread' configuration
value; if that is unspecified, it defaults to --thread.
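To make the threading behaviour described above concrete, here is a small
hedged example (the address and patch paths are placeholders):

------------
# send a series where every mail is a reply to the first one only,
# with In-Reply-To set on each message
git send-email --thread --no-chain-reply-to --to=list@example.org outgoing/*.patch

# or set the corresponding defaults once
git config sendemail.thread true
git config sendemail.chainreplyto false
------------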


@@ -13,7 +13,7 @@ SYNOPSIS
[--reference <repository>] [--] <repository> <path>
'git submodule' [--quiet] status [--cached] [--] [<path>...]
'git submodule' [--quiet] init [--] [<path>...]
'git submodule' [--quiet] update [--init] [-N|--no-fetch]
'git submodule' [--quiet] update [--init] [-N|--no-fetch] [--rebase]
[--reference <repository>] [--] [<path>...]
'git submodule' [--quiet] summary [--summary-limit <n>] [commit] [--] [<path>...]
'git submodule' [--quiet] foreach <command>
@@ -115,7 +115,8 @@ init::
update::
Update the registered submodules, i.e. clone missing submodules and
checkout the commit specified in the index of the containing repository.
This will make the submodules HEAD be detached.
This will make the submodules HEAD be detached unless '--rebase' is
specified or the key `submodule.$name.update` is set to `rebase`.
+
If the submodule is not yet initialized, and you just want to use the
setting as stored in .gitmodules, you can automatically initialize the
@@ -179,6 +180,15 @@ OPTIONS
This option is only valid for the update command.
Don't fetch new objects from the remote site.
--rebase::
This option is only valid for the update command.
Rebase the current branch onto the commit recorded in the
superproject. If this option is given, the submodule's HEAD will not
be detached. If a merge failure prevents this process, you will have
to resolve these failures with linkgit:git-rebase[1].
If the key `submodule.$name.update` is set to `rebase`, this option is
implicit.
--reference <repository>::
This option is only valid for add and update commands. These
commands sometimes need to clone a remote repository. In this case,


@@ -30,6 +30,15 @@ submodule.<name>.path::
submodule.<name>.url::
Defines a URL from where the submodule repository can be cloned.
submodule.<name>.update::
Defines what to do when the submodule is updated by the superproject.
If 'checkout' (the default), the new commit specified in the
superproject will be checked out in the submodule on a detached HEAD.
If 'rebase', the current branch of the submodule will be rebased onto
the commit specified in the superproject.
This config option is overridden if 'git submodule update' is given
the '--rebase' option.
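A short hedged example of the new setting (`plugins/foo` is a hypothetical
submodule name):

------------
# always rebase this submodule's current branch onto the recorded commit
git config submodule.plugins/foo.update rebase
git submodule update

# or request it one-off, which overrides the configured/default behaviour
git submodule update --rebase plugins/foo
------------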
EXAMPLES
--------


@@ -23,7 +23,7 @@ merge.tool::
Controls which merge resolution program is used by
linkgit:git-mergetool[1]. Valid built-in values are: "kdiff3",
"tkdiff", "meld", "xxdiff", "emerge", "vimdiff", "gvimdiff",
"diffuse", "ecmerge", "tortoisemerge", and
"diffuse", "ecmerge", "tortoisemerge", "araxis", and
"opendiff". Any other value is treated is custom merge tool
and there must be a corresponding mergetool.<tool>.cmd option.
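For example (a minimal sketch; `mytool` and its command line are hypothetical):

------------
# pick the newly supported Araxis Merge as the merge tool
git config --global merge.tool araxis

# or declare a custom tool, which then needs a matching .cmd entry
git config --global merge.tool mytool
git config --global mergetool.mytool.cmd 'mytool "$LOCAL" "$REMOTE" -o "$MERGED"'
------------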


@@ -3,6 +3,11 @@ all::
# Define V=1 to have a more verbose compile.
#
# Define SHELL_PATH to a POSIX shell if your /bin/sh is broken.
#
# Define SANE_TOOL_PATH to a colon-separated list of paths to prepend
# to PATH if your tools in /usr/bin are broken.
#
# Define SNPRINTF_RETURNS_BOGUS if you are on a system where snprintf()
# or vsnprintf() return -1 instead of the number of characters that would
# have been written to the final string if enough space had been available.
@@ -95,6 +100,10 @@ all::
# Define NEEDS_SOCKET if linking with libc is not enough (SunOS,
# Patrick Mauritz).
#
# Define NEEDS_RESOLV if linking with -lnsl and/or -lsocket is not enough.
# Notably on Solaris hstrerror resides in libresolv and on Solaris 7
# inet_ntop and inet_pton additionally reside there.
#
# Define NO_MMAP if you want to avoid mmap.
#
# Define NO_PTHREADS if you do not have or do not want to use Pthreads.
@@ -182,6 +191,9 @@ all::
#
# Define NO_CROSS_DIRECTORY_HARDLINKS if you plan to distribute the installed
# programs as a tar, where bin/ and libexec/ might be on different file systems.
#
# Define USE_NED_ALLOCATOR if you want to replace the platform's default
# memory allocators with the nedmalloc allocator written by Niall Douglas.
GIT-VERSION-FILE: .FORCE-GIT-VERSION-FILE
@$(SHELL_PATH) ./GIT-VERSION-GEN
@@ -706,13 +718,20 @@ ifeq ($(uname_S),SunOS)
NEEDS_SOCKET = YesPlease
NEEDS_NSL = YesPlease
SHELL_PATH = /bin/bash
SANE_TOOL_PATH = /usr/xpg6/bin:/usr/xpg4/bin
NO_STRCASESTR = YesPlease
NO_MEMMEM = YesPlease
NO_HSTRERROR = YesPlease
NO_MKDTEMP = YesPlease
NO_MKSTEMPS = YesPlease
ifneq ($(uname_R),5.11)
OLD_ICONV = UnfortunatelyYes
ifeq ($(uname_R),5.7)
NEEDS_RESOLV = YesPlease
NO_IPV6 = YesPlease
NO_SOCKADDR_STORAGE = YesPlease
NO_UNSETENV = YesPlease
NO_SETENV = YesPlease
NO_STRLCPY = YesPlease
NO_C99_FORMAT = YesPlease
NO_STRTOUMAX = YesPlease
endif
ifeq ($(uname_R),5.8)
NO_UNSETENV = YesPlease
@@ -726,9 +745,12 @@ ifeq ($(uname_S),SunOS)
NO_C99_FORMAT = YesPlease
NO_STRTOUMAX = YesPlease
endif
INSTALL = ginstall
ifdef NO_IPV6
NEEDS_RESOLV = YesPlease
endif
INSTALL = /usr/ucb/install
TAR = gtar
BASIC_CFLAGS += -D__EXTENSIONS__
BASIC_CFLAGS += -D__EXTENSIONS__ -D__sun__
endif
ifeq ($(uname_O),Cygwin)
NO_D_TYPE_IN_DIRENT = YesPlease
@@ -837,7 +859,6 @@ ifneq (,$(findstring MINGW,$(uname_S)))
pathsep = ;
NO_PREAD = YesPlease
NO_OPENSSL = YesPlease
NO_CURL = YesPlease
NO_LIBGEN_H = YesPlease
NO_SYMLINK_HEAD = YesPlease
NO_IPV6 = YesPlease
@@ -846,7 +867,6 @@ ifneq (,$(findstring MINGW,$(uname_S)))
NO_STRCASESTR = YesPlease
NO_STRLCPY = YesPlease
NO_MEMMEM = YesPlease
NO_PTHREADS = YesPlease
NEEDS_LIBICONV = YesPlease
OLD_ICONV = YesPlease
NO_C99_FORMAT = YesPlease
@@ -861,14 +881,26 @@ ifneq (,$(findstring MINGW,$(uname_S)))
NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
NO_NSEC = YesPlease
USE_WIN32_MMAP = YesPlease
USE_NED_ALLOCATOR = YesPlease
UNRELIABLE_FSTAT = UnfortunatelyYes
OBJECT_CREATION_USES_RENAMES = UnfortunatelyNeedsTo
COMPAT_CFLAGS += -D__USE_MINGW_ACCESS -DNOGDI -Icompat -Icompat/regex -Icompat/fnmatch
COMPAT_CFLAGS += -DSNPRINTF_SIZE_CORR=1
COMPAT_CFLAGS += -DSTRIP_EXTENSION=\".exe\"
COMPAT_OBJS += compat/mingw.o compat/fnmatch/fnmatch.o compat/regex/regex.o compat/winansi.o
EXTLIBS += -lws2_32
X = .exe
ifneq (,$(wildcard ../THIS_IS_MSYSGIT))
htmldir=doc/git/html/
prefix =
INSTALL = /bin/install
EXTLIBS += /mingw/lib/libz.a
NO_R_TO_GCC_LINKER = YesPlease
INTERNAL_QSORT = YesPlease
THREADED_DELTA_SEARCH = YesPlease
else
NO_CURL = YesPlease
NO_PTHREADS = YesPlease
endif
endif
ifneq (,$(findstring arm,$(uname_M)))
ARM_SHA1 = YesPlease
@@ -878,6 +910,14 @@ endif
-include config.mak.autogen
-include config.mak
ifdef SANE_TOOL_PATH
SANE_TOOL_PATH_SQ = $(subst ','\'',$(SANE_TOOL_PATH))
BROKEN_PATH_FIX = 's|^\# @@BROKEN_PATH_FIX@@$$|git_broken_path_fix $(SANE_TOOL_PATH_SQ)|'
PATH := $(SANE_TOOL_PATH):${PATH}
else
BROKEN_PATH_FIX = '/^\# @@BROKEN_PATH_FIX@@$$/d'
endif
ifeq ($(uname_S),Darwin)
ifndef NO_FINK
ifeq ($(shell test -d /sw/lib && echo y),y)
@@ -981,6 +1021,9 @@ endif
ifdef NEEDS_NSL
EXTLIBS += -lnsl
endif
ifdef NEEDS_RESOLV
EXTLIBS += -lresolv
endif
ifdef NO_D_TYPE_IN_DIRENT
BASIC_CFLAGS += -DNO_D_TYPE_IN_DIRENT
endif
@@ -1158,6 +1201,11 @@ ifdef UNRELIABLE_FSTAT
BASIC_CFLAGS += -DUNRELIABLE_FSTAT
endif
ifdef USE_NED_ALLOCATOR
COMPAT_CFLAGS += -DUSE_NED_ALLOCATOR -DOVERRIDE_STRDUP -DNDEBUG -DREPLACE_SYSTEM_ALLOCATOR -Icompat/nedmalloc
COMPAT_OBJS += compat/nedmalloc/nedmalloc.o
endif
ifeq ($(TCLTK_PATH),)
NO_TCLTK=NoThanks
endif
@@ -1232,7 +1280,7 @@ SHELL = $(SHELL_PATH)
all:: shell_compatibility_test $(ALL_PROGRAMS) $(BUILT_INS) $(OTHER_PROGRAMS) GIT-BUILD-OPTIONS
ifneq (,$X)
$(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_PROGRAMS) $(BUILT_INS) git$X)), test '$p' -ef '$p$X' || $(RM) '$p';)
$(QUIET_BUILT_IN)$(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_PROGRAMS) $(BUILT_INS) git$X)), test '$p' -ef '$p$X' || $(RM) '$p';)
endif
all::
@@ -1285,6 +1333,7 @@ $(patsubst %.sh,%,$(SCRIPT_SH)) : % : %.sh
-e 's|@SHELL_PATH@|$(SHELL_PATH_SQ)|' \
-e 's/@@GIT_VERSION@@/$(GIT_VERSION)/g' \
-e 's/@@NO_CURL@@/$(NO_CURL)/g' \
-e $(BROKEN_PATH_FIX) \
$@.sh >$@+ && \
chmod +x $@+ && \
mv $@+ $@
@@ -1673,7 +1722,7 @@ distclean: clean
$(RM) configure
clean:
$(RM) *.o mozilla-sha1/*.o arm/*.o ppc/*.o compat/*.o xdiff/*.o \
$(RM) *.o mozilla-sha1/*.o arm/*.o ppc/*.o compat/*.o compat/*/*.o xdiff/*.o \
$(LIB_FILE) $(XDIFF_LIB)
$(RM) $(ALL_PROGRAMS) $(BUILT_INS) git$X
$(RM) $(TEST_PROGRAMS)
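The new knobs introduced above can also be passed straight on the make command
line; a hedged sketch (whether your platform actually needs them is for you to
decide):

------------
# e.g. an older Solaris with broken tools in /usr/bin and hstrerror in libresolv
make SANE_TOOL_PATH=/usr/xpg6/bin:/usr/xpg4/bin NEEDS_RESOLV=YesPlease

# opt into the nedmalloc allocator explicitly
make USE_NED_ALLOCATOR=YesPlease
------------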

bisect.c

@@ -7,6 +7,7 @@
#include "quote.h"
#include "sha1-lookup.h"
#include "run-command.h"
#include "log-tree.h"
#include "bisect.h"
struct sha1_array {
@@ -27,7 +28,6 @@ struct argv_array {
int argv_alloc;
};
static const char *argv_diff_tree[] = {"diff-tree", "--pretty", NULL, NULL};
static const char *argv_checkout[] = {"checkout", "-q", NULL, "--", NULL};
static const char *argv_show_branch[] = {"show-branch", NULL, NULL};
@@ -521,14 +521,34 @@ static char *join_sha1_array_hex(struct sha1_array *array, char delim)
return strbuf_detach(&joined_hexs, NULL);
}
/*
* In this function, passing a non-NULL skipped_first is very special.
* It means that we want to know if the first commit in the list is
* skipped, because we will want to test a commit away from it if it is
* indeed skipped.
* So if the first commit is skipped, we cannot take the shortcut of
* just "return list" when we find the first non-skipped commit; we
* have to return a fully filtered list.
*
* We use (*skipped_first == -1) to mean "it has been found that the
* first commit is not skipped". In this case *skipped_first is set back
* to 0 just before the function returns.
*/
struct commit_list *filter_skipped(struct commit_list *list,
struct commit_list **tried,
int show_all)
int show_all,
int *count,
int *skipped_first)
{
struct commit_list *filtered = NULL, **f = &filtered;
*tried = NULL;
if (skipped_first)
*skipped_first = 0;
if (count)
*count = 0;
if (!skipped_revs.sha1_nr)
return list;
@@ -537,22 +557,82 @@ struct commit_list *filter_skipped(struct commit_list *list,
list->next = NULL;
if (0 <= lookup_sha1_array(&skipped_revs,
list->item->object.sha1)) {
if (skipped_first && !*skipped_first)
*skipped_first = 1;
/* Move current to tried list */
*tried = list;
tried = &list->next;
} else {
if (!show_all)
return list;
if (!show_all) {
if (!skipped_first || !*skipped_first)
return list;
} else if (skipped_first && !*skipped_first) {
/* This means we know it's not skipped */
*skipped_first = -1;
}
/* Move current to filtered list */
*f = list;
f = &list->next;
if (count)
(*count)++;
}
list = next;
}
if (skipped_first && *skipped_first == -1)
*skipped_first = 0;
return filtered;
}
static struct commit_list *apply_skip_ratio(struct commit_list *list,
int count,
int skip_num, int skip_denom)
{
int index, i;
struct commit_list *cur, *previous;
cur = list;
previous = NULL;
index = count * skip_num / skip_denom;
for (i = 0; cur; cur = cur->next, i++) {
if (i == index) {
if (hashcmp(cur->item->object.sha1, current_bad_sha1))
return cur;
if (previous)
return previous;
return list;
}
previous = cur;
}
return list;
}
static struct commit_list *managed_skipped(struct commit_list *list,
struct commit_list **tried)
{
int count, skipped_first;
int skip_num, skip_denom;
*tried = NULL;
if (!skipped_revs.sha1_nr)
return list;
list = filter_skipped(list, tried, 0, &count, &skipped_first);
if (!skipped_first)
return list;
/* Alternate between 1/5, 2/5 and 3/5 as the skip ratio. */
skip_num = count % 3 + 1;
skip_denom = 5;
return apply_skip_ratio(list, count, skip_num, skip_denom);
}
static void bisect_rev_setup(struct rev_info *revs, const char *prefix,
const char *bad_format, const char *good_format,
int read_paths)
@@ -771,7 +851,7 @@ static int check_ancestors(const char *prefix)
/* Clean up objects used, as they will be reused. */
for (i = 0; i < pending_copy.nr; i++) {
struct object *o = pending_copy.objects[i].item;
unparse_commit((struct commit *)o);
clear_commit_marks((struct commit *)o, ALL_REV_FLAGS);
}
return res;
@@ -815,6 +895,31 @@ static void check_good_are_ancestors_of_bad(const char *prefix)
close(fd);
}
/*
* This does "git diff-tree --pretty COMMIT" without one fork+exec.
*/
static void show_diff_tree(const char *prefix, struct commit *commit)
{
struct rev_info opt;
/* diff-tree init */
init_revisions(&opt, prefix);
git_config(git_diff_basic_config, NULL); /* no "diff" UI options */
opt.abbrev = 0;
opt.diff = 1;
/* This is what "--pretty" does */
opt.verbose_header = 1;
opt.use_terminator = 0;
opt.commit_format = CMIT_FMT_DEFAULT;
/* diff-tree init */
if (!opt.diffopt.output_format)
opt.diffopt.output_format = DIFF_FORMAT_RAW;
log_tree_commit(&opt, commit);
}
/*
* We use the convention that exiting with an exit code 10 means that
* the bisection process finished successfully.
@@ -840,7 +945,7 @@ int bisect_next_all(const char *prefix)
revs.commits = find_bisection(revs.commits, &reaches, &all,
!!skipped_revs.sha1_nr);
revs.commits = filter_skipped(revs.commits, &tried, 0);
revs.commits = managed_skipped(revs.commits, &tried);
if (!revs.commits) {
/*
@@ -860,8 +965,7 @@ int bisect_next_all(const char *prefix)
if (!hashcmp(bisect_rev, current_bad_sha1)) {
exit_if_skipped_commits(tried, current_bad_sha1);
printf("%s is first bad commit\n", bisect_rev_hex);
argv_diff_tree[2] = bisect_rev_hex;
run_command_v_opt(argv_diff_tree, RUN_GIT_CMD);
show_diff_tree(prefix, revs.commits->item);
/* This means the bisection process succeeded. */
exit(10);
}
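The user-visible side of the skip handling above is an ordinary bisect session;
a minimal sketch (the revisions are placeholders):

------------
git bisect start
git bisect bad HEAD
git bisect good v1.6.0
# the suggested commit cannot be tested, e.g. it does not build
git bisect skip
# bisect now proposes a commit some distance away from the skipped one,
# alternating 1/5, 2/5 and 3/5 of the way into the remaining candidates,
# instead of the commit right next to it
------------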


@@ -7,7 +7,9 @@ extern struct commit_list *find_bisection(struct commit_list *list,
extern struct commit_list *filter_skipped(struct commit_list *list,
struct commit_list **tried,
int show_all);
int show_all,
int *count,
int *skipped_first);
extern void print_commit_list(struct commit_list *list,
const char *format_cur,


@@ -836,8 +836,11 @@ int cmd_merge(int argc, const char **argv, const char *prefix)
struct commit_list **remotes = &remoteheads;
setup_work_tree();
if (file_exists(git_path("MERGE_HEAD")))
die("You have not concluded your merge. (MERGE_HEAD exists)");
if (read_cache_unmerged())
die("You are in the middle of a conflicted merge.");
die("You are in the middle of a conflicted merge."
" (index unmerged)");
/*
* Check if we are _not_ on a detached HEAD, i.e. if there is a


@@ -295,17 +295,14 @@ static int get_push_ref_states(const struct ref *remote_refs,
struct ref_states *states)
{
struct remote *remote = states->remote;
struct ref *ref, *local_refs, *push_map, **push_tail;
struct ref *ref, *local_refs, *push_map;
if (remote->mirror)
return 0;
local_refs = get_local_heads();
push_map = copy_ref_list(remote_refs);
push_tail = &push_map;
while (*push_tail)
push_tail = &((*push_tail)->next);
match_refs(local_refs, push_map, &push_tail, remote->push_refspec_nr,
match_refs(local_refs, &push_map, remote->push_refspec_nr,
remote->push_refspec, MATCH_REFS_NONE);
states->push.strdup_strings = 1;


@@ -262,7 +262,9 @@ int show_bisect_vars(struct rev_list_info *info, int reaches, int all)
if (!revs->commits && !(flags & BISECT_SHOW_TRIED))
return 1;
revs->commits = filter_skipped(revs->commits, &tried, flags & BISECT_SHOW_ALL);
revs->commits = filter_skipped(revs->commits, &tried,
flags & BISECT_SHOW_ALL,
NULL, NULL);
/*
* revs->commits can reach "reaches" commits among


@@ -473,7 +473,7 @@ int cmd_send_pack(int argc, const char **argv, const char *prefix)
int fd[2];
struct child_process *conn;
struct extra_have_objects extra_have;
struct ref *remote_refs, **remote_tail, *local_refs;
struct ref *remote_refs, *local_refs;
int ret;
int send_all = 0;
const char *receivepack = "git-receive-pack";
@@ -567,13 +567,8 @@ int cmd_send_pack(int argc, const char **argv, const char *prefix)
flags |= MATCH_REFS_MIRROR;
/* match them up */
remote_tail = &remote_refs;
while (*remote_tail)
remote_tail = &((*remote_tail)->next);
if (match_refs(local_refs, remote_refs, &remote_tail,
nr_refspecs, refspecs, flags)) {
if (match_refs(local_refs, &remote_refs, nr_refspecs, refspecs, flags))
return -1;
}
ret = send_pack(&args, fd, conn, remote_refs, &extra_have);


@@ -316,26 +316,6 @@ int parse_commit(struct commit *item)
return ret;
}
static void unparse_commit_list(struct commit_list *list)
{
for (; list; list = list->next)
unparse_commit(list->item);
}
void unparse_commit(struct commit *item)
{
item->object.flags = 0;
item->object.used = 0;
if (item->object.parsed) {
item->object.parsed = 0;
if (item->parents) {
unparse_commit_list(item->parents);
free_commit_list(item->parents);
item->parents = NULL;
}
}
}
struct commit_list *commit_list_insert(struct commit *item, struct commit_list **list_p)
{
struct commit_list *new_list = xmalloc(sizeof(struct commit_list));


@@ -40,8 +40,6 @@ int parse_commit_buffer(struct commit *item, void *buffer, unsigned long size);
int parse_commit(struct commit *item);
void unparse_commit(struct commit *item);
struct commit_list * commit_list_insert(struct commit *item, struct commit_list **list_p);
unsigned commit_list_count(const struct commit_list *l);
struct commit_list * insert_by_date(struct commit *item, struct commit_list **list);


@@ -1,5 +1,6 @@
#include "../git-compat-util.h"
#include "win32.h"
#include <conio.h>
#include "../strbuf.h"
unsigned int _CRT_fmode = _O_BINARY;
@@ -1171,3 +1172,62 @@ char *getpass(const char *prompt)
fputs("\n", stderr);
return strbuf_detach(&buf, NULL);
}
#ifndef NO_MINGW_REPLACE_READDIR
/* MinGW readdir implementation to avoid extra lstats for Git */
struct mingw_DIR
{
struct _finddata_t dd_dta; /* disk transfer area for this dir */
struct mingw_dirent dd_dir; /* Our own implementation, including d_type */
long dd_handle; /* _findnext handle */
int dd_stat; /* 0 = next entry to read is first entry, -1 = off the end, positive = 0 based index of next entry */
char dd_name[1]; /* given path for dir with search pattern (struct is extended) */
};
struct dirent *mingw_readdir(DIR *dir)
{
WIN32_FIND_DATAA buf;
HANDLE handle;
struct mingw_DIR *mdir = (struct mingw_DIR*)dir;
if (!dir->dd_handle) {
errno = EBADF; /* No set_errno for mingw */
return NULL;
}
if (dir->dd_handle == (long)INVALID_HANDLE_VALUE && dir->dd_stat == 0)
{
handle = FindFirstFileA(dir->dd_name, &buf);
DWORD lasterr = GetLastError();
dir->dd_handle = (long)handle;
if (handle == INVALID_HANDLE_VALUE && (lasterr != ERROR_NO_MORE_FILES)) {
errno = err_win_to_posix(lasterr);
return NULL;
}
} else if (dir->dd_handle == (long)INVALID_HANDLE_VALUE) {
return NULL;
} else if (!FindNextFileA((HANDLE)dir->dd_handle, &buf)) {
DWORD lasterr = GetLastError();
FindClose((HANDLE)dir->dd_handle);
dir->dd_handle = (long)INVALID_HANDLE_VALUE;
/* POSIX says you shouldn't set errno when readdir can't
find any more files; so we only set it when a different error occurred. */
if (lasterr != ERROR_NO_MORE_FILES)
errno = err_win_to_posix(lasterr);
return NULL;
}
/* We get here if `buf' contains valid data. */
strcpy(dir->dd_dir.d_name, buf.cFileName);
++dir->dd_stat;
/* Set file type, based on WIN32_FIND_DATA */
mdir->dd_dir.d_type = 0;
if (buf.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
mdir->dd_dir.d_type |= DT_DIR;
else
mdir->dd_dir.d_type |= DT_REG;
return (struct dirent*)&dir->dd_dir;
}
#endif // !NO_MINGW_REPLACE_READDIR


@@ -235,3 +235,32 @@ int main(int argc, const char **argv) \
return mingw_main(argc, argv); \
} \
static int mingw_main(c,v)
#ifndef NO_MINGW_REPLACE_READDIR
/*
* A replacement of readdir, to ensure that it reads the file type at
* the same time. This avoids extra unneeded lstat calls in git on MinGW
*/
#undef DT_UNKNOWN
#undef DT_DIR
#undef DT_REG
#undef DT_LNK
#define DT_UNKNOWN 0
#define DT_DIR 1
#define DT_REG 2
#define DT_LNK 3
struct mingw_dirent
{
long d_ino; /* Always zero. */
union {
unsigned short d_reclen; /* Always zero. */
unsigned char d_type; /* Reimplementation adds this */
};
unsigned short d_namlen; /* Length of name in d_name. */
char d_name[FILENAME_MAX]; /* File name. */
};
#define dirent mingw_dirent
#define readdir(x) mingw_readdir(x)
struct dirent *mingw_readdir(DIR *dir);
#endif // !NO_MINGW_REPLACE_READDIR


@@ -0,0 +1,23 @@
Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:
The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer,
must be included in all copies of the Software, in whole or in part, and
all derivative works of the Software, unless such copies or derivative
works are solely in the form of machine-executable object code generated by
a source language processor.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

compat/nedmalloc/Readme.txt (new file)

@@ -0,0 +1,136 @@
nedalloc v1.05 15th June 2008:
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
by Niall Douglas (http://www.nedprod.com/programs/portable/nedmalloc/)
Enclosed is nedalloc, an alternative malloc implementation for multiple
threads without lock contention based on dlmalloc v2.8.4. It is more
or less a newer implementation of ptmalloc2, the standard allocator in
Linux (which is based on dlmalloc v2.7.0) but also contains a per-thread
cache for maximum CPU scalability.
It is licensed under the Boost Software License which basically means
you can do anything you like with it. This does not apply to the malloc.c.h
file which remains copyright to others.
It has been tested on win32 (x86), win64 (x64), Linux (x64), FreeBSD (x64)
and Apple MacOS X (x86). It works very well on all of these and is very
significantly faster than the system allocator on all of these platforms.
By literally dropping in this allocator as a replacement for your system
allocator, you can see real world improvements of up to three times in normal
code!
To use:
-=-=-=-
Drop in nedmalloc.h, nedmalloc.c and malloc.c.h into your project.
Configure using the instructions in nedmalloc.h. Run and enjoy.
To test, compile test.c. It will run a comparison between your system
allocator and nedalloc and tell you how much faster nedalloc is. It also
serves as an example of usage.
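A hedged sketch of such a test build on a POSIX system (the compiler
invocation is an assumption, not part of the upstream instructions):

------------
cc -O2 -pthread -o nedtest nedmalloc.c test.c
./nedtest     # prints a comparison between the system allocator and nedalloc
------------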
Notes:
-=-=-=
If you want the very latest version of this allocator, get it from the
TnFOX SVN repository at svn://svn.berlios.de/viewcvs/tnfox/trunk/src/nedmalloc
Because of how nedalloc allocates an mspace per thread, it can cause
severe bloating of memory usage under certain allocation patterns.
You can substantially reduce this wastage by setting MAXTHREADSINPOOL
or the threads parameter to nedcreatepool() to a fraction of the number of
threads which would normally be in a pool at once. This will reduce
bloating at the cost of an increase in lock contention. If allocated size
is less than THREADCACHEMAX, locking is avoided 90-99% of the time and
if most of your allocations are below this value, you can safely set
MAXTHREADSINPOOL to one.
You will suffer memory leakage unless you call neddisablethreadcache()
per pool for every thread which exits. This is because nedalloc cannot
portably know when a thread exits and thus when its thread cache can
be returned for use by other code. Don't forget pool zero, the system pool.
For C++ type allocation patterns (where the same sizes of memory are
regularly allocated and deallocated as objects are created and destroyed),
the threadcache always benefits performance. If however your allocation
patterns are different, searching the threadcache may significantly slow
down your code - as a rule of thumb, if cache utilisation is below 80%
(see the source for neddisablethreadcache() for how to enable debug
printing in release mode) then you should disable the thread cache for
that thread. You can compile out the threadcache code by setting
THREADCACHEMAX to zero.
Speed comparisons:
-=-=-=-=-=-=-=-=-=
See Benchmarks.xls for details.
The enclosed test.c can do two things: it can be a torture test or a speed
test. The speed test is designed to be a representative synthetic
memory allocator test. It works by randomly mixing allocations with frees
with half of the allocation sizes being a two power multiple less than
512 bytes (to mimic C++ stack instantiated objects) and the other half
being a simple random value less than 16Kb.
The real world code results are from Tn's TestIO benchmark. This is a
heavily multithreaded and memory intensive benchmark with a lot of branching
and other stuff modern processors don't like so much. As you'll note, the
test doesn't show the benefits of the threadcache mostly due to the saturation
of the memory bus being the limiting factor.
ChangeLog:
-=-=-=-=-=
v1.05 15th June 2008:
* { 1042 } Added error check for TLSSET() and TLSFREE() macros. Thanks to
Markus Elfring for reporting this.
* { 1043 } Fixed a segfault when freeing memory allocated using
nedindependent_comalloc(). Thanks to Pavel Vozenilek for reporting this.
v1.04 14th July 2007:
* Fixed a bug with the new optimised implementation that failed to lock
on a realloc under certain conditions.
* Fixed lack of thread synchronisation in InitPool() causing pool corruption
* Fixed a memory leak of thread cache contents on disabling. Thanks to Earl
Chew for reporting this.
* Added a sanity check for freed blocks being valid.
* Reworked test.c into being a torture test.
* Fixed GCC assembler optimisation misspecification
v1.04alpha_svn915 7th October 2006:
* Fixed failure to unlock thread cache list if allocating a new list failed.
Thanks to Dmitry Chichkov for reporting this. Further thanks to Aleksey Sanin.
* Fixed realloc(0, <size>) segfaulting. Thanks to Dmitry Chichkov for
reporting this.
* Made config defines #ifndef so they can be overridden by the build system.
Thanks to Aleksey Sanin for suggesting this.
* Fixed deadlock in nedprealloc() due to unnecessary locking of preferred
thread mspace when mspace_realloc() always uses the original block's mspace
anyway. Thanks to Aleksey Sanin for reporting this.
* Made some speed improvements by hacking mspace_malloc() to no longer lock
its mspace, thus allowing the recursive mutex implementation to be removed
with an associated speed increase. Thanks to Aleksey Sanin for suggesting this.
* Fixed a bug where allocating mspaces overran its max limit. Thanks to
Aleksey Sanin for reporting this.
v1.03 10th July 2006:
* Fixed memory corruption bug in threadcache code which only appeared with >4
threads and in heavy use of the threadcache.
v1.02 15th May 2006:
* Integrated dlmalloc v2.8.4, fixing the win32 memory release problem and
improving performance still further. Speed is now up to twice the speed of v1.01
(average is 67% faster).
* Fixed win32 critical section implementation. Thanks to Pavel Kuznetsov
for reporting this.
* Wasn't locking mspace if all mspaces were locked. Thanks to Pavel Kuznetsov
for reporting this.
* Added Apple Mac OS X support.
v1.01 24th February 2006:
* Fixed multiprocessor scaling problems by removing sources of cache sloshing
* Earl Chew <earl_chew <at> agilent <dot> com> sent patches for the following:
1. size2binidx() wasn't working for default code path (non x86)
2. Fixed failure to release mspace lock under certain circumstances which
caused a deadlock
v1.00 1st January 2006:
* First release

compat/nedmalloc/malloc.c.h (new file, 5752 lines added; diff suppressed because it is too large)


@@ -0,0 +1,966 @@
/* Alternative malloc implementation for multiple threads without
lock contention based on dlmalloc. (C) 2005-2006 Niall Douglas
Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:
The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer,
must be included in all copies of the Software, in whole or in part, and
all derivative works of the Software, unless such copies or derivative
works are solely in the form of machine-executable object code generated by
a source language processor.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
*/
#ifdef _MSC_VER
/* Enable full aliasing on MSVC */
/*#pragma optimize("a", on)*/
#endif
/*#define FULLSANITYCHECKS*/
#include "nedmalloc.h"
#if defined(WIN32)
#include <malloc.h>
#endif
#define MSPACES 1
#define ONLY_MSPACES 1
#ifndef USE_LOCKS
#define USE_LOCKS 1
#endif
#define FOOTERS 1 /* Need to enable footers so frees lock the right mspace */
#undef DEBUG /* dlmalloc wants DEBUG either 0 or 1 */
#ifdef _DEBUG
#define DEBUG 1
#else
#define DEBUG 0
#endif
#ifdef NDEBUG /* Disable assert checking on release builds */
#undef DEBUG
#endif
/* The default of 64Kb means we spend too much time kernel-side */
#ifndef DEFAULT_GRANULARITY
#define DEFAULT_GRANULARITY (1*1024*1024)
#endif
/*#define USE_SPIN_LOCKS 0*/
/*#define FORCEINLINE*/
#include "malloc.c.h"
#ifdef NDEBUG /* Disable assert checking on release builds */
#undef DEBUG
#endif
/* The maximum concurrent threads in a pool possible */
#ifndef MAXTHREADSINPOOL
#define MAXTHREADSINPOOL 16
#endif
/* The maximum number of threadcaches which can be allocated */
#ifndef THREADCACHEMAXCACHES
#define THREADCACHEMAXCACHES 256
#endif
/* The maximum size to be allocated from the thread cache */
#ifndef THREADCACHEMAX
#define THREADCACHEMAX 8192
#endif
#if 0
/* The number of cache entries for finer grained bins. This is (topbitpos(THREADCACHEMAX)-4)*2 */
#define THREADCACHEMAXBINS ((13-4)*2)
#else
/* The number of cache entries. This is (topbitpos(THREADCACHEMAX)-4) */
#define THREADCACHEMAXBINS (13-4)
#endif
/* Point at which the free space in a thread cache is garbage collected */
#ifndef THREADCACHEMAXFREESPACE
#define THREADCACHEMAXFREESPACE (512*1024)
#endif
#ifdef WIN32
#define TLSVAR DWORD
#define TLSALLOC(k) (*(k)=TlsAlloc(), TLS_OUT_OF_INDEXES==*(k))
#define TLSFREE(k) (!TlsFree(k))
#define TLSGET(k) TlsGetValue(k)
#define TLSSET(k, a) (!TlsSetValue(k, a))
#ifdef DEBUG
static LPVOID ChkedTlsGetValue(DWORD idx)
{
LPVOID ret=TlsGetValue(idx);
assert(S_OK==GetLastError());
return ret;
}
#undef TLSGET
#define TLSGET(k) ChkedTlsGetValue(k)
#endif
#else
#define TLSVAR pthread_key_t
#define TLSALLOC(k) pthread_key_create(k, 0)
#define TLSFREE(k) pthread_key_delete(k)
#define TLSGET(k) pthread_getspecific(k)
#define TLSSET(k, a) pthread_setspecific(k, a)
#endif
#if 0
/* Only enable if testing with valgrind. Causes misoperation */
#define mspace_malloc(p, s) malloc(s)
#define mspace_realloc(p, m, s) realloc(m, s)
#define mspace_calloc(p, n, s) calloc(n, s)
#define mspace_free(p, m) free(m)
#endif
#if defined(__cplusplus)
#if !defined(NO_NED_NAMESPACE)
namespace nedalloc {
#else
extern "C" {
#endif
#endif
size_t nedblksize(void *mem) THROWSPEC
{
#if 0
/* Only enable if testing with valgrind. Causes misoperation */
return THREADCACHEMAX;
#else
if(mem)
{
mchunkptr p=mem2chunk(mem);
assert(cinuse(p)); /* If this fails, someone tried to free a block twice */
if(cinuse(p))
return chunksize(p)-overhead_for(p);
}
return 0;
#endif
}
void nedsetvalue(void *v) THROWSPEC { nedpsetvalue(0, v); }
void * nedmalloc(size_t size) THROWSPEC { return nedpmalloc(0, size); }
void * nedcalloc(size_t no, size_t size) THROWSPEC { return nedpcalloc(0, no, size); }
void * nedrealloc(void *mem, size_t size) THROWSPEC { return nedprealloc(0, mem, size); }
void nedfree(void *mem) THROWSPEC { nedpfree(0, mem); }
void * nedmemalign(size_t alignment, size_t bytes) THROWSPEC { return nedpmemalign(0, alignment, bytes); }
#if !NO_MALLINFO
struct mallinfo nedmallinfo(void) THROWSPEC { return nedpmallinfo(0); }
#endif
int nedmallopt(int parno, int value) THROWSPEC { return nedpmallopt(0, parno, value); }
int nedmalloc_trim(size_t pad) THROWSPEC { return nedpmalloc_trim(0, pad); }
void nedmalloc_stats() THROWSPEC { nedpmalloc_stats(0); }
size_t nedmalloc_footprint() THROWSPEC { return nedpmalloc_footprint(0); }
void **nedindependent_calloc(size_t elemsno, size_t elemsize, void **chunks) THROWSPEC { return nedpindependent_calloc(0, elemsno, elemsize, chunks); }
void **nedindependent_comalloc(size_t elems, size_t *sizes, void **chunks) THROWSPEC { return nedpindependent_comalloc(0, elems, sizes, chunks); }
struct threadcacheblk_t;
typedef struct threadcacheblk_t threadcacheblk;
struct threadcacheblk_t
{ /* Keep less than 16 bytes on 32 bit systems and 32 bytes on 64 bit systems */
#ifdef FULLSANITYCHECKS
unsigned int magic;
#endif
unsigned int lastUsed, size;
threadcacheblk *next, *prev;
};
typedef struct threadcache_t
{
#ifdef FULLSANITYCHECKS
unsigned int magic1;
#endif
int mymspace; /* Last mspace entry this thread used */
long threadid;
unsigned int mallocs, frees, successes;
size_t freeInCache; /* How much free space is stored in this cache */
threadcacheblk *bins[(THREADCACHEMAXBINS+1)*2];
#ifdef FULLSANITYCHECKS
unsigned int magic2;
#endif
} threadcache;
struct nedpool_t
{
MLOCK_T mutex;
void *uservalue;
int threads; /* Max entries in m to use */
threadcache *caches[THREADCACHEMAXCACHES];
TLSVAR mycache; /* Thread cache for this thread. 0 for unset, negative for use mspace-1 directly, otherwise is cache-1 */
mstate m[MAXTHREADSINPOOL+1]; /* mspace entries for this pool */
};
static nedpool syspool;
static FORCEINLINE unsigned int size2binidx(size_t _size) THROWSPEC
{ /* 8=1000 16=10000 20=10100 24=11000 32=100000 48=110000 4096=1000000000000 */
unsigned int topbit, size=(unsigned int)(_size>>4);
/* 16=1 20=1 24=1 32=10 48=11 64=100 96=110 128=1000 4096=100000000 */
#if defined(__GNUC__)
topbit = sizeof(size)*__CHAR_BIT__ - 1 - __builtin_clz(size);
#elif defined(_MSC_VER) && _MSC_VER>=1300
{
unsigned long bsrTopBit;
_BitScanReverse(&bsrTopBit, size);
topbit = bsrTopBit;
}
#else
#if 0
union {
unsigned asInt[2];
double asDouble;
};
int n;
asDouble = (double)size + 0.5;
topbit = (asInt[!FOX_BIGENDIAN] >> 20) - 1023;
#else
{
unsigned int x=size;
x = x | (x >> 1);
x = x | (x >> 2);
x = x | (x >> 4);
x = x | (x >> 8);
x = x | (x >>16);
x = ~x;
x = x - ((x >> 1) & 0x55555555);
x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
x = (x + (x >> 4)) & 0x0F0F0F0F;
x = x + (x << 8);
x = x + (x << 16);
topbit=31 - (x >> 24);
}
#endif
#endif
return topbit;
}
#ifdef FULLSANITYCHECKS
static void tcsanitycheck(threadcacheblk **ptr) THROWSPEC
{
assert((ptr[0] && ptr[1]) || (!ptr[0] && !ptr[1]));
if(ptr[0] && ptr[1])
{
assert(nedblksize(ptr[0])>=sizeof(threadcacheblk));
assert(nedblksize(ptr[1])>=sizeof(threadcacheblk));
assert(*(unsigned int *) "NEDN"==ptr[0]->magic);
assert(*(unsigned int *) "NEDN"==ptr[1]->magic);
assert(!ptr[0]->prev);
assert(!ptr[1]->next);
if(ptr[0]==ptr[1])
{
assert(!ptr[0]->next);
assert(!ptr[1]->prev);
}
}
}
static void tcfullsanitycheck(threadcache *tc) THROWSPEC
{
threadcacheblk **tcbptr=tc->bins;
int n;
for(n=0; n<=THREADCACHEMAXBINS; n++, tcbptr+=2)
{
threadcacheblk *b, *ob=0;
tcsanitycheck(tcbptr);
for(b=tcbptr[0]; b; ob=b, b=b->next)
{
assert(*(unsigned int *) "NEDN"==b->magic);
assert(!ob || ob->next==b);
assert(!ob || b->prev==ob);
}
}
}
#endif
static NOINLINE void RemoveCacheEntries(nedpool *p, threadcache *tc, unsigned int age) THROWSPEC
{
#ifdef FULLSANITYCHECKS
tcfullsanitycheck(tc);
#endif
if(tc->freeInCache)
{
threadcacheblk **tcbptr=tc->bins;
int n;
for(n=0; n<=THREADCACHEMAXBINS; n++, tcbptr+=2)
{
threadcacheblk **tcb=tcbptr+1; /* come from oldest end of list */
/*tcsanitycheck(tcbptr);*/
for(; *tcb && tc->frees-(*tcb)->lastUsed>=age; )
{
threadcacheblk *f=*tcb;
size_t blksize=f->size; /*nedblksize(f);*/
assert(blksize<=nedblksize(f));
assert(blksize);
#ifdef FULLSANITYCHECKS
assert(*(unsigned int *) "NEDN"==(*tcb)->magic);
#endif
*tcb=(*tcb)->prev;
if(*tcb)
(*tcb)->next=0;
else
*tcbptr=0;
tc->freeInCache-=blksize;
assert((long) tc->freeInCache>=0);
mspace_free(0, f);
/*tcsanitycheck(tcbptr);*/
}
}
}
#ifdef FULLSANITYCHECKS
tcfullsanitycheck(tc);
#endif
}
static void DestroyCaches(nedpool *p) THROWSPEC
{
if(p->caches)
{
threadcache *tc;
int n;
for(n=0; n<THREADCACHEMAXCACHES; n++)
{
if((tc=p->caches[n]))
{
tc->frees++;
RemoveCacheEntries(p, tc, 0);
assert(!tc->freeInCache);
tc->mymspace=-1;
tc->threadid=0;
mspace_free(0, tc);
p->caches[n]=0;
}
}
}
}
static NOINLINE threadcache *AllocCache(nedpool *p) THROWSPEC
{
threadcache *tc=0;
int n, end;
ACQUIRE_LOCK(&p->mutex);
for(n=0; n<THREADCACHEMAXCACHES && p->caches[n]; n++);
if(THREADCACHEMAXCACHES==n)
{ /* List exhausted, so disable for this thread */
RELEASE_LOCK(&p->mutex);
return 0;
}
tc=p->caches[n]=(threadcache *) mspace_calloc(p->m[0], 1, sizeof(threadcache));
if(!tc)
{
RELEASE_LOCK(&p->mutex);
return 0;
}
#ifdef FULLSANITYCHECKS
tc->magic1=*(unsigned int *)"NEDMALC1";
tc->magic2=*(unsigned int *)"NEDMALC2";
#endif
tc->threadid=(long)(size_t)CURRENT_THREAD;
for(end=0; p->m[end]; end++);
tc->mymspace=tc->threadid % end;
RELEASE_LOCK(&p->mutex);
if(TLSSET(p->mycache, (void *)(size_t)(n+1))) abort();
return tc;
}
static void *threadcache_malloc(nedpool *p, threadcache *tc, size_t *size) THROWSPEC
{
void *ret=0;
unsigned int bestsize;
unsigned int idx=size2binidx(*size);
size_t blksize=0;
threadcacheblk *blk, **binsptr;
#ifdef FULLSANITYCHECKS
tcfullsanitycheck(tc);
#endif
/* Calculate best fit bin size */
bestsize=1<<(idx+4);
#if 0
/* Finer grained bin fit */
idx<<=1;
if(*size>bestsize)
{
idx++;
bestsize+=bestsize>>1;
}
if(*size>bestsize)
{
idx++;
bestsize=1<<(4+(idx>>1));
}
#else
if(*size>bestsize)
{
idx++;
bestsize<<=1;
}
#endif
assert(bestsize>=*size);
if(*size<bestsize) *size=bestsize;
assert(*size<=THREADCACHEMAX);
assert(idx<=THREADCACHEMAXBINS);
binsptr=&tc->bins[idx*2];
/* Try to match close, but move up a bin if necessary */
blk=*binsptr;
if(!blk || blk->size<*size)
{ /* Bump it up a bin */
if(idx<THREADCACHEMAXBINS)
{
idx++;
binsptr+=2;
blk=*binsptr;
}
}
if(blk)
{
blksize=blk->size; /*nedblksize(blk);*/
assert(nedblksize(blk)>=blksize);
assert(blksize>=*size);
if(blk->next)
blk->next->prev=0;
*binsptr=blk->next;
if(!*binsptr)
binsptr[1]=0;
#ifdef FULLSANITYCHECKS
blk->magic=0;
#endif
assert(binsptr[0]!=blk && binsptr[1]!=blk);
assert(nedblksize(blk)>=sizeof(threadcacheblk) && nedblksize(blk)<=THREADCACHEMAX+CHUNK_OVERHEAD);
/*printf("malloc: %p, %p, %p, %lu\n", p, tc, blk, (long) size);*/
ret=(void *) blk;
}
++tc->mallocs;
if(ret)
{
assert(blksize>=*size);
++tc->successes;
tc->freeInCache-=blksize;
assert((long) tc->freeInCache>=0);
}
#if defined(DEBUG) && 0
if(!(tc->mallocs & 0xfff))
{
printf("*** threadcache=%u, mallocs=%u (%f), free=%u (%f), freeInCache=%u\n", (unsigned int) tc->threadid, tc->mallocs,
(float) tc->successes/tc->mallocs, tc->frees, (float) tc->successes/tc->frees, (unsigned int) tc->freeInCache);
}
#endif
#ifdef FULLSANITYCHECKS
tcfullsanitycheck(tc);
#endif
return ret;
}
static NOINLINE void ReleaseFreeInCache(nedpool *p, threadcache *tc, int mymspace) THROWSPEC
{
unsigned int age=THREADCACHEMAXFREESPACE/8192;
/*ACQUIRE_LOCK(&p->m[mymspace]->mutex);*/
while(age && tc->freeInCache>=THREADCACHEMAXFREESPACE)
{
RemoveCacheEntries(p, tc, age);
/*printf("*** Removing cache entries older than %u (%u)\n", age, (unsigned int) tc->freeInCache);*/
age>>=1;
}
/*RELEASE_LOCK(&p->m[mymspace]->mutex);*/
}
static void threadcache_free(nedpool *p, threadcache *tc, int mymspace, void *mem, size_t size) THROWSPEC
{
unsigned int bestsize;
unsigned int idx=size2binidx(size);
threadcacheblk **binsptr, *tck=(threadcacheblk *) mem;
assert(size>=sizeof(threadcacheblk) && size<=THREADCACHEMAX+CHUNK_OVERHEAD);
#ifdef DEBUG
{ /* Make sure this is a valid memory block */
mchunkptr p = mem2chunk(mem);
mstate fm = get_mstate_for(p);
if (!ok_magic(fm)) {
USAGE_ERROR_ACTION(fm, p);
return;
}
}
#endif
#ifdef FULLSANITYCHECKS
tcfullsanitycheck(tc);
#endif
/* Calculate best fit bin size */
bestsize=1<<(idx+4);
#if 0
/* Finer grained bin fit */
idx<<=1;
if(size>bestsize)
{
unsigned int biggerbestsize=bestsize+bestsize<<1;
if(size>=biggerbestsize)
{
idx++;
bestsize=biggerbestsize;
}
}
#endif
if(bestsize!=size) /* dlmalloc can round up, so we round down to preserve indexing */
size=bestsize;
binsptr=&tc->bins[idx*2];
assert(idx<=THREADCACHEMAXBINS);
if(tck==*binsptr)
{
fprintf(stderr, "Attempt to free already freed memory block %p - aborting!\n", tck);
abort();
}
#ifdef FULLSANITYCHECKS
tck->magic=*(unsigned int *) "NEDN";
#endif
tck->lastUsed=++tc->frees;
tck->size=(unsigned int) size;
tck->next=*binsptr;
tck->prev=0;
if(tck->next)
tck->next->prev=tck;
else
binsptr[1]=tck;
assert(!*binsptr || (*binsptr)->size==tck->size);
*binsptr=tck;
assert(tck==tc->bins[idx*2]);
assert(tc->bins[idx*2+1]==tck || binsptr[0]->next->prev==tck);
/*printf("free: %p, %p, %p, %lu\n", p, tc, mem, (long) size);*/
tc->freeInCache+=size;
#ifdef FULLSANITYCHECKS
tcfullsanitycheck(tc);
#endif
#if 1
if(tc->freeInCache>=THREADCACHEMAXFREESPACE)
ReleaseFreeInCache(p, tc, mymspace);
#endif
}
static NOINLINE int InitPool(nedpool *p, size_t capacity, int threads) THROWSPEC
{ /* threads is -1 for system pool */
ensure_initialization();
ACQUIRE_MALLOC_GLOBAL_LOCK();
if(p->threads) goto done;
if(INITIAL_LOCK(&p->mutex)) goto err;
if(TLSALLOC(&p->mycache)) goto err;
if(!(p->m[0]=(mstate) create_mspace(capacity, 1))) goto err;
p->m[0]->extp=p;
p->threads=(threads<1 || threads>MAXTHREADSINPOOL) ? MAXTHREADSINPOOL : threads;
done:
RELEASE_MALLOC_GLOBAL_LOCK();
return 1;
err:
if(threads<0)
abort(); /* If you can't allocate for system pool, we're screwed */
DestroyCaches(p);
if(p->m[0])
{
destroy_mspace(p->m[0]);
p->m[0]=0;
}
if(p->mycache)
{
if(TLSFREE(p->mycache)) abort();
p->mycache=0;
}
RELEASE_MALLOC_GLOBAL_LOCK();
return 0;
}
static NOINLINE mstate FindMSpace(nedpool *p, threadcache *tc, int *lastUsed, size_t size) THROWSPEC
{ /* Gets called when thread's last used mspace is in use. The strategy
is to run through the list of all available mspaces looking for an
unlocked one and if we fail, we create a new one so long as we don't
exceed p->threads */
int n, end;
for(n=end=*lastUsed+1; p->m[n]; end=++n)
{
if(TRY_LOCK(&p->m[n]->mutex)) goto found;
}
for(n=0; n<*lastUsed && p->m[n]; n++)
{
if(TRY_LOCK(&p->m[n]->mutex)) goto found;
}
if(end<p->threads)
{
mstate temp;
if(!(temp=(mstate) create_mspace(size, 1)))
goto badexit;
/* Now we're ready to modify the lists, we lock */
ACQUIRE_LOCK(&p->mutex);
while(p->m[end] && end<p->threads)
end++;
if(end>=p->threads)
{ /* Drat, must destroy it now */
RELEASE_LOCK(&p->mutex);
destroy_mspace((mspace) temp);
goto badexit;
}
/* We really want to make sure this goes into memory now but we
have to be careful of breaking aliasing rules, so write it twice */
*((volatile struct malloc_state **) &p->m[end])=p->m[end]=temp;
ACQUIRE_LOCK(&p->m[end]->mutex);
/*printf("Created mspace idx %d\n", end);*/
RELEASE_LOCK(&p->mutex);
n=end;
goto found;
}
/* Let it lock on the last one it used */
badexit:
ACQUIRE_LOCK(&p->m[*lastUsed]->mutex);
return p->m[*lastUsed];
found:
*lastUsed=n;
if(tc)
tc->mymspace=n;
else
{
if(TLSSET(p->mycache, (void *)(size_t)(-(n+1)))) abort();
}
return p->m[n];
}
nedpool *nedcreatepool(size_t capacity, int threads) THROWSPEC
{
nedpool *ret;
if(!(ret=(nedpool *) nedpcalloc(0, 1, sizeof(nedpool)))) return 0;
if(!InitPool(ret, capacity, threads))
{
nedpfree(0, ret);
return 0;
}
return ret;
}
void neddestroypool(nedpool *p) THROWSPEC
{
int n;
ACQUIRE_LOCK(&p->mutex);
DestroyCaches(p);
for(n=0; p->m[n]; n++)
{
destroy_mspace(p->m[n]);
p->m[n]=0;
}
RELEASE_LOCK(&p->mutex);
if(TLSFREE(p->mycache)) abort();
nedpfree(0, p);
}
void nedpsetvalue(nedpool *p, void *v) THROWSPEC
{
if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }
p->uservalue=v;
}
void *nedgetvalue(nedpool **p, void *mem) THROWSPEC
{
nedpool *np=0;
mchunkptr mcp=mem2chunk(mem);
mstate fm;
if(!(is_aligned(chunk2mem(mcp))) && mcp->head != FENCEPOST_HEAD) return 0;
if(!cinuse(mcp)) return 0;
if(!next_pinuse(mcp)) return 0;
if(!is_mmapped(mcp) && !pinuse(mcp))
{
if(next_chunk(prev_chunk(mcp))!=mcp) return 0;
}
fm=get_mstate_for(mcp);
if(!ok_magic(fm)) return 0;
if(!ok_address(fm, mcp)) return 0;
if(!fm->extp) return 0;
np=(nedpool *) fm->extp;
if(p) *p=np;
return np->uservalue;
}
void neddisablethreadcache(nedpool *p) THROWSPEC
{
int mycache;
if(!p)
{
p=&syspool;
if(!syspool.threads) InitPool(&syspool, 0, -1);
}
mycache=(int)(size_t) TLSGET(p->mycache);
if(!mycache)
{ /* Set to mspace 0 */
if(TLSSET(p->mycache, (void *)-1)) abort();
}
else if(mycache>0)
{ /* Set to last used mspace */
threadcache *tc=p->caches[mycache-1];
#if defined(DEBUG)
printf("Threadcache utilisation: %lf%% in cache with %lf%% lost to other threads\n",
100.0*tc->successes/tc->mallocs, 100.0*((double) tc->mallocs-tc->frees)/tc->mallocs);
#endif
if(TLSSET(p->mycache, (void *)(size_t)(-tc->mymspace))) abort();
tc->frees++;
RemoveCacheEntries(p, tc, 0);
assert(!tc->freeInCache);
tc->mymspace=-1;
tc->threadid=0;
mspace_free(0, p->caches[mycache-1]);
p->caches[mycache-1]=0;
}
}
#define GETMSPACE(m,p,tc,ms,s,action) \
do \
{ \
mstate m = GetMSpace((p),(tc),(ms),(s)); \
action; \
RELEASE_LOCK(&m->mutex); \
} while (0)
static FORCEINLINE mstate GetMSpace(nedpool *p, threadcache *tc, int mymspace, size_t size) THROWSPEC
{ /* Returns a locked and ready for use mspace */
mstate m=p->m[mymspace];
assert(m);
if(!TRY_LOCK(&p->m[mymspace]->mutex)) m=FindMSpace(p, tc, &mymspace, size);\
/*assert(IS_LOCKED(&p->m[mymspace]->mutex));*/
return m;
}
static FORCEINLINE void GetThreadCache(nedpool **p, threadcache **tc, int *mymspace, size_t *size) THROWSPEC
{
int mycache;
if(size && *size<sizeof(threadcacheblk)) *size=sizeof(threadcacheblk);
if(!*p)
{
*p=&syspool;
if(!syspool.threads) InitPool(&syspool, 0, -1);
}
mycache=(int)(size_t) TLSGET((*p)->mycache);
if(mycache>0)
{
*tc=(*p)->caches[mycache-1];
*mymspace=(*tc)->mymspace;
}
else if(!mycache)
{
*tc=AllocCache(*p);
if(!*tc)
{ /* Disable */
if(TLSSET((*p)->mycache, (void *)-1)) abort();
*mymspace=0;
}
else
*mymspace=(*tc)->mymspace;
}
else
{
*tc=0;
*mymspace=-mycache-1;
}
assert(*mymspace>=0);
assert((long)(size_t)CURRENT_THREAD==(*tc)->threadid);
#ifdef FULLSANITYCHECKS
if(*tc)
{
if(*(unsigned int *)"NEDMALC1"!=(*tc)->magic1 || *(unsigned int *)"NEDMALC2"!=(*tc)->magic2)
{
abort();
}
}
#endif
}
void * nedpmalloc(nedpool *p, size_t size) THROWSPEC
{
void *ret=0;
threadcache *tc;
int mymspace;
GetThreadCache(&p, &tc, &mymspace, &size);
#if THREADCACHEMAX
if(tc && size<=THREADCACHEMAX)
{ /* Use the thread cache */
ret=threadcache_malloc(p, tc, &size);
}
#endif
if(!ret)
{ /* Use this thread's mspace */
GETMSPACE(m, p, tc, mymspace, size,
ret=mspace_malloc(m, size));
}
return ret;
}
void * nedpcalloc(nedpool *p, size_t no, size_t size) THROWSPEC
{
size_t rsize=size*no;
void *ret=0;
threadcache *tc;
int mymspace;
GetThreadCache(&p, &tc, &mymspace, &rsize);
#if THREADCACHEMAX
if(tc && rsize<=THREADCACHEMAX)
{ /* Use the thread cache */
if((ret=threadcache_malloc(p, tc, &rsize)))
memset(ret, 0, rsize);
}
#endif
if(!ret)
{ /* Use this thread's mspace */
GETMSPACE(m, p, tc, mymspace, rsize,
ret=mspace_calloc(m, 1, rsize));
}
return ret;
}
void * nedprealloc(nedpool *p, void *mem, size_t size) THROWSPEC
{
void *ret=0;
threadcache *tc;
int mymspace;
if(!mem) return nedpmalloc(p, size);
GetThreadCache(&p, &tc, &mymspace, &size);
#if THREADCACHEMAX
if(tc && size && size<=THREADCACHEMAX)
{ /* Use the thread cache */
size_t memsize=nedblksize(mem);
assert(memsize);
if((ret=threadcache_malloc(p, tc, &size)))
{
memcpy(ret, mem, memsize<size ? memsize : size);
if(memsize<=THREADCACHEMAX)
threadcache_free(p, tc, mymspace, mem, memsize);
else
mspace_free(0, mem);
}
}
#endif
if(!ret)
{ /* Reallocs always happen in the mspace the original allocation came from,
so skip locking the preferred mspace for this thread */
ret=mspace_realloc(0, mem, size);
}
return ret;
}
void nedpfree(nedpool *p, void *mem) THROWSPEC
{ /* Frees always happen in the mspace the block was allocated from, so skip
locking the preferred mspace for this thread */
threadcache *tc;
int mymspace;
size_t memsize;
assert(mem);
GetThreadCache(&p, &tc, &mymspace, 0);
#if THREADCACHEMAX
memsize=nedblksize(mem);
assert(memsize);
if(mem && tc && memsize<=(THREADCACHEMAX+CHUNK_OVERHEAD))
threadcache_free(p, tc, mymspace, mem, memsize);
else
#endif
mspace_free(0, mem);
}
void * nedpmemalign(nedpool *p, size_t alignment, size_t bytes) THROWSPEC
{
void *ret;
threadcache *tc;
int mymspace;
GetThreadCache(&p, &tc, &mymspace, &bytes);
{ /* Use this thread's mspace */
GETMSPACE(m, p, tc, mymspace, bytes,
ret=mspace_memalign(m, alignment, bytes));
}
return ret;
}
#if !NO_MALLINFO
struct mallinfo nedpmallinfo(nedpool *p) THROWSPEC
{
int n;
struct mallinfo ret={0};
if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }
for(n=0; p->m[n]; n++)
{
struct mallinfo t=mspace_mallinfo(p->m[n]);
ret.arena+=t.arena;
ret.ordblks+=t.ordblks;
ret.hblkhd+=t.hblkhd;
ret.usmblks+=t.usmblks;
ret.uordblks+=t.uordblks;
ret.fordblks+=t.fordblks;
ret.keepcost+=t.keepcost;
}
return ret;
}
#endif
int nedpmallopt(nedpool *p, int parno, int value) THROWSPEC
{
return mspace_mallopt(parno, value);
}
int nedpmalloc_trim(nedpool *p, size_t pad) THROWSPEC
{
int n, ret=0;
if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }
for(n=0; p->m[n]; n++)
{
ret+=mspace_trim(p->m[n], pad);
}
return ret;
}
void nedpmalloc_stats(nedpool *p) THROWSPEC
{
int n;
if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }
for(n=0; p->m[n]; n++)
{
mspace_malloc_stats(p->m[n]);
}
}
size_t nedpmalloc_footprint(nedpool *p) THROWSPEC
{
size_t ret=0;
int n;
if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }
for(n=0; p->m[n]; n++)
{
ret+=mspace_footprint(p->m[n]);
}
return ret;
}
void **nedpindependent_calloc(nedpool *p, size_t elemsno, size_t elemsize, void **chunks) THROWSPEC
{
void **ret;
threadcache *tc;
int mymspace;
GetThreadCache(&p, &tc, &mymspace, &elemsize);
GETMSPACE(m, p, tc, mymspace, elemsno*elemsize,
ret=mspace_independent_calloc(m, elemsno, elemsize, chunks));
return ret;
}
void **nedpindependent_comalloc(nedpool *p, size_t elems, size_t *sizes, void **chunks) THROWSPEC
{
void **ret;
threadcache *tc;
int mymspace;
size_t i, *adjustedsizes=(size_t *) alloca(elems*sizeof(size_t));
if(!adjustedsizes) return 0;
for(i=0; i<elems; i++)
adjustedsizes[i]=sizes[i]<sizeof(threadcacheblk) ? sizeof(threadcacheblk) : sizes[i];
GetThreadCache(&p, &tc, &mymspace, 0);
GETMSPACE(m, p, tc, mymspace, 0,
ret=mspace_independent_comalloc(m, elems, adjustedsizes, chunks));
return ret;
}
#ifdef OVERRIDE_STRDUP
/*
* This implementation is purely there to override the libc version, to
* avoid a crash due to allocation and free on different 'heaps'.
*/
char *strdup(const char *s1)
{
char *s2 = 0;
if (s1) {
s2 = malloc(strlen(s1) + 1);
if (s2)
	strcpy(s2, s1);
}
return s2;
}
#endif
#if defined(__cplusplus)
}
#endif

View File

@@ -0,0 +1,180 @@
/* nedalloc, an alternative malloc implementation for multiple threads without
lock contention based on dlmalloc v2.8.3. (C) 2005 Niall Douglas
Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:
The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer,
must be included in all copies of the Software, in whole or in part, and
all derivative works of the Software, unless such copies or derivative
works are solely in the form of machine-executable object code generated by
a source language processor.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
*/
#ifndef NEDMALLOC_H
#define NEDMALLOC_H
/* See malloc.c.h for what each function does.
REPLACE_SYSTEM_ALLOCATOR causes nedalloc's functions to be called malloc,
free etc. instead of nedmalloc, nedfree etc. You may or may not want this.
NO_NED_NAMESPACE prevents the functions from being defined in the nedalloc
namespace when in C++ (uses the global namespace instead).
EXTSPEC can be defined to be __declspec(dllexport) or
__attribute__ ((visibility("default"))) or whatever you like. It defaults
to extern.
USE_LOCKS can be 2 if you want to define your own MLOCK_T, INITIAL_LOCK,
ACQUIRE_LOCK, RELEASE_LOCK, TRY_LOCK, IS_LOCKED and NULL_LOCK_INITIALIZER.
*/
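/*
 * Illustrative example (editorial addition, not part of the original
 * header): with the default names an application might write
 *
 *     #include "nedmalloc.h"
 *     void *p = nedmalloc(128);
 *     ...
 *     nedfree(p);
 *
 * Building the same code with -DREPLACE_SYSTEM_ALLOCATOR instead makes the
 * #defines below rename the ned* entry points to the standard names, so
 * plain malloc()/free() calls resolve to nedalloc's implementations without
 * any source changes.
 */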
#include <stddef.h> /* for size_t */
#ifndef EXTSPEC
#define EXTSPEC extern
#endif
#if defined(_MSC_VER) && _MSC_VER>=1400
#define MALLOCATTR __declspec(restrict)
#endif
#ifdef __GNUC__
#define MALLOCATTR __attribute__ ((malloc))
#endif
#ifndef MALLOCATTR
#define MALLOCATTR
#endif
#ifdef REPLACE_SYSTEM_ALLOCATOR
#define nedmalloc malloc
#define nedcalloc calloc
#define nedrealloc realloc
#define nedfree free
#define nedmemalign memalign
#define nedmallinfo mallinfo
#define nedmallopt mallopt
#define nedmalloc_trim malloc_trim
#define nedmalloc_stats malloc_stats
#define nedmalloc_footprint malloc_footprint
#define nedindependent_calloc independent_calloc
#define nedindependent_comalloc independent_comalloc
#ifdef _MSC_VER
#define nedblksize _msize
#endif
#endif
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif
#if !NO_MALLINFO
struct mallinfo;
#endif
#if defined(__cplusplus)
#if !defined(NO_NED_NAMESPACE)
namespace nedalloc {
#else
extern "C" {
#endif
#define THROWSPEC throw()
#else
#define THROWSPEC
#endif
/* These are the global functions */
/* Gets the usable size of an allocated block. Note this will always be bigger than what was
asked for due to rounding etc.
*/
EXTSPEC size_t nedblksize(void *mem) THROWSPEC;
EXTSPEC void nedsetvalue(void *v) THROWSPEC;
EXTSPEC MALLOCATTR void * nedmalloc(size_t size) THROWSPEC;
EXTSPEC MALLOCATTR void * nedcalloc(size_t no, size_t size) THROWSPEC;
EXTSPEC MALLOCATTR void * nedrealloc(void *mem, size_t size) THROWSPEC;
EXTSPEC void nedfree(void *mem) THROWSPEC;
EXTSPEC MALLOCATTR void * nedmemalign(size_t alignment, size_t bytes) THROWSPEC;
#if !NO_MALLINFO
EXTSPEC struct mallinfo nedmallinfo(void) THROWSPEC;
#endif
EXTSPEC int nedmallopt(int parno, int value) THROWSPEC;
EXTSPEC int nedmalloc_trim(size_t pad) THROWSPEC;
EXTSPEC void nedmalloc_stats(void) THROWSPEC;
EXTSPEC size_t nedmalloc_footprint(void) THROWSPEC;
EXTSPEC MALLOCATTR void **nedindependent_calloc(size_t elemsno, size_t elemsize, void **chunks) THROWSPEC;
EXTSPEC MALLOCATTR void **nedindependent_comalloc(size_t elems, size_t *sizes, void **chunks) THROWSPEC;
/* These are the pool functions */
struct nedpool_t;
typedef struct nedpool_t nedpool;
/* Creates a memory pool for use with the nedp* functions below.
Capacity is how much to allocate immediately (if you know you'll be allocating a lot
of memory very soon) which you can leave at zero. Threads specifies how many threads
will *normally* be accessing the pool concurrently. Setting this to zero means it
extends on demand, but be careful of this as it can rapidly consume system resources
where bursts of concurrent threads use a pool at once.
*/
EXTSPEC MALLOCATTR nedpool *nedcreatepool(size_t capacity, int threads) THROWSPEC;
/* Destroys a memory pool previously created by nedcreatepool().
*/
EXTSPEC void neddestroypool(nedpool *p) THROWSPEC;
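/* Editorial sketch of the intended pool lifecycle, using only functions
   declared in this header (the capacity of 0 and thread count of 4 are
   arbitrary example values):

       nedpool *pool = nedcreatepool(0, 4);
       void *buf = nedpmalloc(pool, 4096);
       ...
       nedpfree(pool, buf);
       neddestroypool(pool);

   nedpmalloc() and nedpfree() are declared further down in this header.
*/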
/* Sets a value to be associated with a pool. You can retrieve this value by passing
any memory block allocated from that pool.
*/
EXTSPEC void nedpsetvalue(nedpool *p, void *v) THROWSPEC;
/* Gets a previously set value using nedpsetvalue() or zero if memory is unknown.
Optionally can also retrieve pool.
*/
EXTSPEC void *nedgetvalue(nedpool **p, void *mem) THROWSPEC;
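/* Editorial sketch: a pool can be tagged with an arbitrary pointer and the
   tag recovered later from any block allocated out of that pool, e.g.

       nedpsetvalue(pool, my_context);
       ...
       nedpool *owner;
       void *ctx = nedgetvalue(&owner, block);   (ctx == my_context,
                                                  owner == pool)

   `pool', `block' and `my_context' here are assumed to come from the caller.
*/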
/* Disables the thread cache for the calling thread, returning any existing cache
data to the central pool.
*/
EXTSPEC void neddisablethreadcache(nedpool *p) THROWSPEC;
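/* Editorial note: a thread that is about to exit, or that will not allocate
   from the pool again, would typically call neddisablethreadcache() so its
   per-thread cache is handed back to the central pool rather than lingering.
*/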
EXTSPEC MALLOCATTR void * nedpmalloc(nedpool *p, size_t size) THROWSPEC;
EXTSPEC MALLOCATTR void * nedpcalloc(nedpool *p, size_t no, size_t size) THROWSPEC;
EXTSPEC MALLOCATTR void * nedprealloc(nedpool *p, void *mem, size_t size) THROWSPEC;
EXTSPEC void nedpfree(nedpool *p, void *mem) THROWSPEC;
EXTSPEC MALLOCATTR void * nedpmemalign(nedpool *p, size_t alignment, size_t bytes) THROWSPEC;
#if !NO_MALLINFO
EXTSPEC struct mallinfo nedpmallinfo(nedpool *p) THROWSPEC;
#endif
EXTSPEC int nedpmallopt(nedpool *p, int parno, int value) THROWSPEC;
EXTSPEC int nedpmalloc_trim(nedpool *p, size_t pad) THROWSPEC;
EXTSPEC void nedpmalloc_stats(nedpool *p) THROWSPEC;
EXTSPEC size_t nedpmalloc_footprint(nedpool *p) THROWSPEC;
EXTSPEC MALLOCATTR void **nedpindependent_calloc(nedpool *p, size_t elemsno, size_t elemsize, void **chunks) THROWSPEC;
EXTSPEC MALLOCATTR void **nedpindependent_comalloc(nedpool *p, size_t elems, size_t *sizes, void **chunks) THROWSPEC;
#if defined(__cplusplus)
}
#endif
#undef MALLOCATTR
#undef EXTSPEC
#endif

View File

@@ -6,8 +6,12 @@
* number of characters to write without the trailing NUL.
*/
#ifndef SNPRINTF_SIZE_CORR
#if defined(__MINGW32__) && defined(__GNUC__) && __GNUC__ < 4
#define SNPRINTF_SIZE_CORR 1
#else
#define SNPRINTF_SIZE_CORR 0
#endif
#endif
#undef vsnprintf
int git_vsnprintf(char *str, size_t maxsize, const char *format, va_list ap)

View File

@@ -80,6 +80,7 @@ static void set_console_attr(void)
static void erase_in_line(void)
{
CONSOLE_SCREEN_BUFFER_INFO sbi;
DWORD dummy; /* Needed for Windows 7 (or Vista) regression */
if (!console)
return;
@@ -87,7 +88,7 @@ static void erase_in_line(void)
GetConsoleScreenBufferInfo(console, &sbi);
FillConsoleOutputCharacterA(console, ' ',
sbi.dwSize.X - sbi.dwCursorPosition.X, sbi.dwCursorPosition,
NULL);
&dummy);
}

View File

@@ -33,6 +33,7 @@ NO_EXPAT=@NO_EXPAT@
NO_LIBGEN_H=@NO_LIBGEN_H@
NEEDS_LIBICONV=@NEEDS_LIBICONV@
NEEDS_SOCKET=@NEEDS_SOCKET@
NEEDS_RESOLV=@NEEDS_RESOLV@
NO_SYS_SELECT_H=@NO_SYS_SELECT_H@
NO_D_INO_IN_DIRENT=@NO_D_INO_IN_DIRENT@
NO_D_TYPE_IN_DIRENT=@NO_D_TYPE_IN_DIRENT@

View File

@@ -467,6 +467,15 @@ AC_CHECK_LIB([c], [socket],
AC_SUBST(NEEDS_SOCKET)
test -n "$NEEDS_SOCKET" && LIBS="$LIBS -lsocket"
#
# Define NEEDS_RESOLV if linking with -lnsl and/or -lsocket is not enough.
# Notably on Solaris hstrerror resides in libresolv and on Solaris 7
# inet_ntop and inet_pton additionally reside there.
AC_CHECK_LIB([resolv], [hstrerror],
[NEEDS_RESOLV=],
[NEEDS_RESOLV=YesPlease])
AC_SUBST(NEEDS_RESOLV)
test -n "$NEEDS_RESOLV" && LIBS="$LIBS -lresolv"
## Checks for header files.
AC_MSG_NOTICE([CHECKS for header files])

View File

@@ -605,14 +605,18 @@ struct child_process *git_connect(int fd[2], const char *url_orig,
die("command line too long");
conn->in = conn->out = -1;
conn->argv = arg = xcalloc(6, sizeof(*arg));
conn->argv = arg = xcalloc(7, sizeof(*arg));
if (protocol == PROTO_SSH) {
const char *ssh = getenv("GIT_SSH");
int putty = ssh && strcasestr(ssh, "plink");
if (!ssh) ssh = "ssh";
*arg++ = ssh;
if (putty && !strcasestr(ssh, "tortoiseplink"))
*arg++ = "-batch";
if (port) {
*arg++ = "-p";
/* P is for PuTTY, p is for OpenSSH */
*arg++ = putty ? "-P" : "-p";
*arg++ = port;
}
*arg++ = host;

View File

@@ -927,7 +927,7 @@ _git_diff ()
}
__git_mergetools_common="diffuse ecmerge emerge kdiff3 meld opendiff
tkdiff vimdiff gvimdiff xxdiff
tkdiff vimdiff gvimdiff xxdiff araxis
"
_git_difftool ()

View File

@@ -7,7 +7,7 @@
/*
* See if our compiler is known to support flexible array members.
*/
#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)
#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) && (!defined(__SUNPRO_C) || (__SUNPRO_C > 0x580))
# define FLEX_ARRAY /* empty */
#elif defined(__GNUC__)
# if (__GNUC__ >= 3)
@@ -39,7 +39,20 @@
/* Approximation of the length of the decimal representation of this type. */
#define decimal_length(x) ((int)(sizeof(x) * 2.56 + 0.5) + 1)
#if !defined(__APPLE__) && !defined(__FreeBSD__) && !defined(__USLC__) && !defined(_M_UNIX)
#if defined(__sun__)
/*
* On Solaris, when _XOPEN_EXTENDED is set, its header file
* forces the programs to be XPG4v2, defeating any _XOPEN_SOURCE
* setting to say we are XPG5 or XPG6. Also on Solaris,
* XPG6 programs must be compiled with a c99 compiler, while
* non XPG6 programs must be compiled with a pre-c99 compiler.
*/
# if __STDC_VERSION__ - 0 >= 199901L
# define _XOPEN_SOURCE 600
# else
# define _XOPEN_SOURCE 500
# endif
#elif !defined(__APPLE__) && !defined(__FreeBSD__) && !defined(__USLC__) && !defined(_M_UNIX)
#define _XOPEN_SOURCE 600 /* glibc2 and AIX 5.3L need 500, OpenBSD needs 600 for S_ISLNK() */
#ifndef __sun__
#define _XOPEN_SOURCE_EXTENDED 1 /* AIX 5.3L needs this */

View File

@@ -225,7 +225,14 @@ if (@canstatusfiles) {
foreach my $name (keys %todo) {
my $basename = basename($name);
$basename = "no file " . $basename if (exists($added{$basename}));
# CVS reports files that don't exist in the current revision as
# "no file $basename" in its "status" output, so we should
# anticipate that. Totally unknown files will have a status
# "Unknown". However, if they exist in the Attic, their status
# will be "Up-to-date" (this means they were added once but have
# been removed).
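# For illustration, a removed file shows up in "cvs status" output as
#   File: no file foo.c            Status: Up-to-date
# while a genuinely new file shows up as
#   File: bar.c                    Status: Unknown
# (foo.c and bar.c are placeholder names).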
$basename = "no file $basename" if $added{$basename};
$basename =~ s/^\s+//;
$basename =~ s/\s+$//;
@@ -233,31 +240,45 @@ if (@canstatusfiles) {
$fullname{$basename} = $name;
push (@canstatusfiles2, $name);
delete($todo{$name});
}
}
}
my @cvsoutput;
@cvsoutput = xargs_safe_pipe_capture([@cvs, 'status'], @canstatusfiles2);
foreach my $l (@cvsoutput) {
chomp $l;
if ($l =~ /^File:\s+(.*\S)\s+Status: (.*)$/) {
if (!exists($fullname{$1})) {
print STDERR "Huh? Status reported for unexpected file '$1'\n";
} else {
$cvsstat{$fullname{$1}} = $2;
}
}
chomp $l;
next unless
my ($file, $status) = $l =~ /^File:\s+(.*\S)\s+Status: (.*)$/;
my $fullname = $fullname{$file};
print STDERR "Huh? Status '$status' reported for unexpected file '$file'\n"
unless defined $fullname;
# This response means the file does not exist except in
# CVS's attic, so set the status accordingly
$status = "In-attic"
if $file =~ /^no file /
&& $status eq 'Up-to-date';
$cvsstat{$fullname{$file}} = $status;
}
}
}
# ... validate new files,
# ... Validate that new files have the correct status
foreach my $f (@afiles) {
if (defined ($cvsstat{$f}) and $cvsstat{$f} ne "Unknown") {
$dirty = 1;
next unless defined(my $stat = $cvsstat{$f});
# This means the file has never been seen before
next if $stat eq 'Unknown';
# This means the file has been seen before but was removed
next if $stat eq 'In-attic';
$dirty = 1;
warn "File $f is already known in your CVS checkout -- perhaps it has been added by another user. Or this may indicate that it exists on a different branch. If this is the case, use -f to force the merge.\n";
warn "Status was: $cvsstat{$f}\n";
}
}
# ... validate known files.
foreach my $f (@files) {
next if grep { $_ eq $f } @afiles;

View File

@@ -18,6 +18,9 @@ translate_merge_tool_path () {
emerge)
echo emacs
;;
araxis)
echo compare
;;
*)
echo "$1"
;;
@@ -43,7 +46,7 @@ check_unchanged () {
valid_tool () {
case "$1" in
kdiff3 | tkdiff | xxdiff | meld | opendiff | \
emerge | vimdiff | gvimdiff | ecmerge | diffuse)
emerge | vimdiff | gvimdiff | ecmerge | diffuse | araxis)
;; # happy
tortoisemerge)
if ! merge_mode; then
@@ -263,6 +266,24 @@ run_merge_tool () {
status=1
fi
;;
araxis)
if merge_mode; then
touch "$BACKUP"
if $base_present; then
"$merge_tool_path" -wait -merge -3 -a1 \
"$BASE" "$LOCAL" "$REMOTE" "$MERGED" \
>/dev/null 2>&1
else
"$merge_tool_path" -wait -2 \
"$LOCAL" "$REMOTE" "$MERGED" \
>/dev/null 2>&1
fi
check_unchanged
else
"$merge_tool_path" -wait -2 "$LOCAL" "$REMOTE" \
>/dev/null 2>&1
fi
;;
*)
merge_tool_cmd="$(get_merge_tool_cmd "$1")"
if test -z "$merge_tool_cmd"; then
@@ -302,7 +323,7 @@ guess_merge_tool () {
else
tools="opendiff kdiff3 tkdiff xxdiff meld $tools"
fi
tools="$tools gvimdiff diffuse ecmerge"
tools="$tools gvimdiff diffuse ecmerge araxis"
fi
if echo "${VISUAL:-$EDITOR}" | grep emacs > /dev/null 2>&1; then
# $EDITOR is emacs so add emerge as a candidate

View File

@@ -420,7 +420,7 @@ do_next () {
NEWHEAD=$(git rev-parse HEAD) &&
case $HEADNAME in
refs/*)
message="$GIT_REFLOG_ACTION: $HEADNAME onto $SHORTONTO)" &&
message="$GIT_REFLOG_ACTION: $HEADNAME onto $SHORTONTO" &&
git update-ref -m "$message" $HEADNAME $NEWHEAD $OLDHEAD &&
git symbolic-ref HEAD $HEADNAME
;;

View File

@@ -812,7 +812,7 @@ sub sanitize_address
}
# Returns 1 if the message was sent, and 0 otherwise.
# In actuality, the whole program dies when a there
# In actuality, the whole program dies when there
# is an error sending a message.
sub send_message
@@ -1150,7 +1150,8 @@ foreach my $t (@files) {
my $message_was_sent = send_message();
# set up for the next message
if ($message_was_sent and $chain_reply_to || not defined $reply_to || length($reply_to) == 0) {
if ($thread && $message_was_sent &&
($chain_reply_to || !defined $reply_to || length($reply_to) == 0)) {
$reply_to = $message_id;
if (length $references > 0) {
$references .= "\n $message_id";

View File

@@ -11,6 +11,34 @@
# exporting it.
unset CDPATH
git_broken_path_fix () {
case ":$PATH:" in
*:$1:*) : ok ;;
*)
PATH=$(
SANE_TOOL_PATH="$1"
IFS=: path= sep=
set x $PATH
shift
for elem
do
case "$SANE_TOOL_PATH:$elem" in
(?*:/bin | ?*:/usr/bin)
path="$path$sep$SANE_TOOL_PATH"
sep=:
SANE_TOOL_PATH=
esac
path="$path$sep$elem"
sep=:
done
echo "$path"
)
;;
esac
}
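# Editorial example: assuming SANE_TOOL_PATH is /usr/xpg6/bin, calling
#   git_broken_path_fix /usr/xpg6/bin
# with PATH=/opt/local/bin:/usr/bin:/bin rewrites it to
#   /opt/local/bin:/usr/xpg6/bin:/usr/bin:/bin
# i.e. the sane tool directory is spliced in just before the first /bin or
# /usr/bin entry; if it is already somewhere on PATH nothing changes.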
# @@BROKEN_PATH_FIX@@
die() {
echo >&2 "$@"
exit 1

View File

@@ -18,6 +18,7 @@ quiet=
reference=
cached=
nofetch=
update=
#
# print stuff on stdout unless -q was specified
@@ -310,6 +311,11 @@ cmd_init()
git config submodule."$name".url "$url" ||
die "Failed to register url for submodule path '$path'"
upd="$(git config -f .gitmodules submodule."$name".update)"
test -z "$upd" ||
git config submodule."$name".update "$upd" ||
die "Failed to register update mode for submodule path '$path'"
say "Submodule '$name' ($url) registered for path '$path'"
done
}
@@ -337,6 +343,10 @@ cmd_update()
shift
nofetch=1
;;
-r|--rebase)
shift
update="rebase"
;;
--reference)
case "$2" in '') usage ;; esac
reference="--reference=$2"
@@ -369,6 +379,7 @@ cmd_update()
do
name=$(module_name "$path") || exit
url=$(git config submodule."$name".url)
update_module=$(git config submodule."$name".update)
if test -z "$url"
then
# Only mention uninitialized submodules when its
@@ -389,6 +400,11 @@ cmd_update()
die "Unable to find current revision in submodule path '$path'"
fi
if ! test -z "$update"
then
update_module=$update
fi
if test "$subsha1" != "$sha1"
then
force=
@@ -404,11 +420,22 @@ cmd_update()
die "Unable to fetch in submodule path '$path'"
fi
(unset GIT_DIR; cd "$path" &&
git-checkout $force -q "$sha1") ||
die "Unable to checkout '$sha1' in submodule path '$path'"
case "$update_module" in
rebase)
command="git rebase"
action="rebase"
msg="rebased onto"
;;
*)
command="git checkout $force -q"
action="checkout"
msg="checked out"
;;
esac
say "Submodule path '$path': checked out '$sha1'"
(unset GIT_DIR; cd "$path" && $command "$sha1") ||
die "Unable to $action '$sha1' in submodule path '$path'"
say "Submodule path '$path': $msg '$sha1'"
fi
done
}

View File

@@ -161,9 +161,12 @@ case "$browser" in
;;
esac
;;
w3m|links|lynx|open|start)
w3m|links|lynx|open)
eval "$browser_path" "$@"
;;
start)
exec "$browser_path" '"web-browse"' "$@"
;;
dillo)
"$browser_path" "$@" &
;;

View File

@@ -1,6 +1,5 @@
#include "cache.h"
#include "commit.h"
#include "pack.h"
#include "tag.h"
#include "blob.h"
#include "http.h"
@@ -27,7 +26,6 @@ enum XML_Status {
#endif
#define PREV_BUF_SIZE 4096
#define RANGE_HEADER_SIZE 30
/* DAV methods */
#define DAV_LOCK "LOCK"
@@ -76,8 +74,6 @@ static int pushing;
static int aborted;
static signed char remote_dir_exists[256];
static struct curl_slist *no_pragma_header;
static int push_verbosely;
static int push_all = MATCH_REFS_NONE;
static int force_all;
@@ -119,19 +115,10 @@ struct transfer_request
struct remote_lock *lock;
struct curl_slist *headers;
struct buffer buffer;
char filename[PATH_MAX];
char tmpfile[PATH_MAX];
int local_fileno;
FILE *local_stream;
enum transfer_state state;
CURLcode curl_result;
char errorstr[CURL_ERROR_SIZE];
long http_code;
unsigned char real_sha1[20];
git_SHA_CTX c;
z_stream stream;
int zret;
int rename;
void *userData;
struct active_request_slot *slot;
struct transfer_request *next;
@@ -237,15 +224,6 @@ static struct curl_slist *get_dav_token_headers(struct remote_lock *lock, enum d
return dav_headers;
}
static void append_remote_object_url(struct strbuf *buf, const char *url,
const char *hex,
int only_two_digit_prefix)
{
strbuf_addf(buf, "%sobjects/%.*s/", url, 2, hex);
if (!only_two_digit_prefix)
strbuf_addf(buf, "%s", hex+2);
}
static void finish_request(struct transfer_request *request);
static void release_request(struct transfer_request *request);
@@ -259,163 +237,29 @@ static void process_response(void *callback_data)
#ifdef USE_CURL_MULTI
static char *get_remote_object_url(const char *url, const char *hex,
int only_two_digit_prefix)
{
struct strbuf buf = STRBUF_INIT;
append_remote_object_url(&buf, url, hex, only_two_digit_prefix);
return strbuf_detach(&buf, NULL);
}
static size_t fwrite_sha1_file(void *ptr, size_t eltsize, size_t nmemb,
void *data)
{
unsigned char expn[4096];
size_t size = eltsize * nmemb;
int posn = 0;
struct transfer_request *request = (struct transfer_request *)data;
do {
ssize_t retval = xwrite(request->local_fileno,
(char *) ptr + posn, size - posn);
if (retval < 0)
return posn;
posn += retval;
} while (posn < size);
request->stream.avail_in = size;
request->stream.next_in = ptr;
do {
request->stream.next_out = expn;
request->stream.avail_out = sizeof(expn);
request->zret = git_inflate(&request->stream, Z_SYNC_FLUSH);
git_SHA1_Update(&request->c, expn,
sizeof(expn) - request->stream.avail_out);
} while (request->stream.avail_in && request->zret == Z_OK);
data_received++;
return size;
}
static void start_fetch_loose(struct transfer_request *request)
{
char *hex = sha1_to_hex(request->obj->sha1);
char *filename;
char prevfile[PATH_MAX];
char *url;
int prevlocal;
unsigned char prev_buf[PREV_BUF_SIZE];
ssize_t prev_read = 0;
long prev_posn = 0;
char range[RANGE_HEADER_SIZE];
struct curl_slist *range_header = NULL;
struct active_request_slot *slot;
struct http_object_request *obj_req;
filename = sha1_file_name(request->obj->sha1);
snprintf(request->filename, sizeof(request->filename), "%s", filename);
snprintf(request->tmpfile, sizeof(request->tmpfile),
"%s.temp", filename);
snprintf(prevfile, sizeof(prevfile), "%s.prev", request->filename);
unlink_or_warn(prevfile);
rename(request->tmpfile, prevfile);
unlink_or_warn(request->tmpfile);
if (request->local_fileno != -1)
error("fd leakage in start: %d", request->local_fileno);
request->local_fileno = open(request->tmpfile,
O_WRONLY | O_CREAT | O_EXCL, 0666);
/* This could have failed due to the "lazy directory creation";
* try to mkdir the last path component.
*/
if (request->local_fileno < 0 && errno == ENOENT) {
char *dir = strrchr(request->tmpfile, '/');
if (dir) {
*dir = 0;
mkdir(request->tmpfile, 0777);
*dir = '/';
}
request->local_fileno = open(request->tmpfile,
O_WRONLY | O_CREAT | O_EXCL, 0666);
}
if (request->local_fileno < 0) {
obj_req = new_http_object_request(repo->url, request->obj->sha1);
if (obj_req == NULL) {
request->state = ABORTED;
error("Couldn't create temporary file %s for %s: %s",
request->tmpfile, request->filename, strerror(errno));
return;
}
memset(&request->stream, 0, sizeof(request->stream));
git_inflate_init(&request->stream);
git_SHA1_Init(&request->c);
url = get_remote_object_url(repo->url, hex, 0);
request->url = xstrdup(url);
/* If a previous temp file is present, process what was already
fetched. */
prevlocal = open(prevfile, O_RDONLY);
if (prevlocal != -1) {
do {
prev_read = xread(prevlocal, prev_buf, PREV_BUF_SIZE);
if (prev_read>0) {
if (fwrite_sha1_file(prev_buf,
1,
prev_read,
request) == prev_read) {
prev_posn += prev_read;
} else {
prev_read = -1;
}
}
} while (prev_read > 0);
close(prevlocal);
}
unlink_or_warn(prevfile);
/* Reset inflate/SHA1 if there was an error reading the previous temp
file; also rewind to the beginning of the local file. */
if (prev_read == -1) {
memset(&request->stream, 0, sizeof(request->stream));
git_inflate_init(&request->stream);
git_SHA1_Init(&request->c);
if (prev_posn>0) {
prev_posn = 0;
lseek(request->local_fileno, 0, SEEK_SET);
ftruncate(request->local_fileno, 0);
}
}
slot = get_active_slot();
slot = obj_req->slot;
slot->callback_func = process_response;
slot->callback_data = request;
request->slot = slot;
curl_easy_setopt(slot->curl, CURLOPT_FILE, request);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
curl_easy_setopt(slot->curl, CURLOPT_ERRORBUFFER, request->errorstr);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
/* If we have successfully processed data from a previous fetch
attempt, only fetch the data we don't already have. */
if (prev_posn>0) {
if (push_verbosely)
fprintf(stderr,
"Resuming fetch of object %s at byte %ld\n",
hex, prev_posn);
sprintf(range, "Range: bytes=%ld-", prev_posn);
range_header = curl_slist_append(range_header, range);
curl_easy_setopt(slot->curl,
CURLOPT_HTTPHEADER, range_header);
}
request->userData = obj_req;
/* Try to get the request started, abort the request on error */
request->state = RUN_FETCH_LOOSE;
if (!start_active_slot(slot)) {
fprintf(stderr, "Unable to start GET request\n");
repo->can_update_info_refs = 0;
release_http_object_request(obj_req);
release_request(request);
}
}
@@ -449,16 +293,10 @@ static void start_mkcol(struct transfer_request *request)
static void start_fetch_packed(struct transfer_request *request)
{
char *url;
struct packed_git *target;
FILE *packfile;
char *filename;
long prev_posn = 0;
char range[RANGE_HEADER_SIZE];
struct curl_slist *range_header = NULL;
struct transfer_request *check_request = request_queue_head;
struct active_request_slot *slot;
struct http_pack_request *preq;
target = find_sha1_pack(request->obj->sha1, repo->packs);
if (!target) {
@@ -471,66 +309,35 @@ static void start_fetch_packed(struct transfer_request *request)
fprintf(stderr, "Fetching pack %s\n", sha1_to_hex(target->sha1));
fprintf(stderr, " which contains %s\n", sha1_to_hex(request->obj->sha1));
filename = sha1_pack_name(target->sha1);
snprintf(request->filename, sizeof(request->filename), "%s", filename);
snprintf(request->tmpfile, sizeof(request->tmpfile),
"%s.temp", filename);
url = xmalloc(strlen(repo->url) + 64);
sprintf(url, "%sobjects/pack/pack-%s.pack",
repo->url, sha1_to_hex(target->sha1));
preq = new_http_pack_request(target, repo->url);
if (preq == NULL) {
release_http_pack_request(preq);
repo->can_update_info_refs = 0;
return;
}
preq->lst = &repo->packs;
/* Make sure there isn't another open request for this pack */
while (check_request) {
if (check_request->state == RUN_FETCH_PACKED &&
!strcmp(check_request->url, url)) {
free(url);
!strcmp(check_request->url, preq->url)) {
release_http_pack_request(preq);
release_request(request);
return;
}
check_request = check_request->next;
}
packfile = fopen(request->tmpfile, "a");
if (!packfile) {
fprintf(stderr, "Unable to open local file %s for pack",
request->tmpfile);
repo->can_update_info_refs = 0;
free(url);
return;
}
slot = get_active_slot();
slot->callback_func = process_response;
slot->callback_data = request;
request->slot = slot;
request->local_stream = packfile;
request->userData = target;
request->url = url;
curl_easy_setopt(slot->curl, CURLOPT_FILE, packfile);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
slot->local = packfile;
/* If there is data present from a previous transfer attempt,
resume where it left off */
prev_posn = ftell(packfile);
if (prev_posn>0) {
if (push_verbosely)
fprintf(stderr,
"Resuming fetch of pack %s at byte %ld\n",
sha1_to_hex(target->sha1), prev_posn);
sprintf(range, "Range: bytes=%ld-", prev_posn);
range_header = curl_slist_append(range_header, range);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
}
preq->slot->callback_func = process_response;
preq->slot->callback_data = request;
request->slot = preq->slot;
request->userData = preq;
/* Try to get the request started, abort the request on error */
request->state = RUN_FETCH_PACKED;
if (!start_active_slot(slot)) {
if (!start_active_slot(preq->slot)) {
fprintf(stderr, "Unable to start GET request\n");
release_http_pack_request(preq);
repo->can_update_info_refs = 0;
release_request(request);
}
@@ -711,19 +518,14 @@ static void release_request(struct transfer_request *request)
entry->next = entry->next->next;
}
if (request->local_fileno != -1)
close(request->local_fileno);
if (request->local_stream)
fclose(request->local_stream);
free(request->url);
free(request);
}
static void finish_request(struct transfer_request *request)
{
struct stat st;
struct packed_git *target;
struct packed_git **lst;
struct http_pack_request *preq;
struct http_object_request *obj_req;
request->curl_result = request->slot->curl_result;
request->http_code = request->slot->http_code;
@@ -778,76 +580,46 @@ static void finish_request(struct transfer_request *request)
aborted = 1;
}
} else if (request->state == RUN_FETCH_LOOSE) {
close(request->local_fileno); request->local_fileno = -1;
obj_req = (struct http_object_request *)request->userData;
if (request->curl_result != CURLE_OK &&
request->http_code != 416) {
if (stat(request->tmpfile, &st) == 0) {
if (st.st_size == 0)
unlink_or_warn(request->tmpfile);
}
} else {
if (request->http_code == 416)
warning("requested range invalid; we may already have all the data.");
git_inflate_end(&request->stream);
git_SHA1_Final(request->real_sha1, &request->c);
if (request->zret != Z_STREAM_END) {
unlink_or_warn(request->tmpfile);
} else if (hashcmp(request->obj->sha1, request->real_sha1)) {
unlink_or_warn(request->tmpfile);
} else {
request->rename =
move_temp_to_file(
request->tmpfile,
request->filename);
if (request->rename == 0) {
request->obj->flags |= (LOCAL | REMOTE);
}
}
}
if (finish_http_object_request(obj_req) == 0)
if (obj_req->rename == 0)
request->obj->flags |= (LOCAL | REMOTE);
/* Try fetching packed if necessary */
if (request->obj->flags & LOCAL)
if (request->obj->flags & LOCAL) {
release_http_object_request(obj_req);
release_request(request);
else
} else
start_fetch_packed(request);
} else if (request->state == RUN_FETCH_PACKED) {
int fail = 1;
if (request->curl_result != CURLE_OK) {
fprintf(stderr, "Unable to get pack file %s\n%s",
request->url, curl_errorstr);
repo->can_update_info_refs = 0;
} else {
off_t pack_size = ftell(request->local_stream);
preq = (struct http_pack_request *)request->userData;
fclose(request->local_stream);
request->local_stream = NULL;
if (!move_temp_to_file(request->tmpfile,
request->filename)) {
target = (struct packed_git *)request->userData;
target->pack_size = pack_size;
lst = &repo->packs;
while (*lst != target)
lst = &((*lst)->next);
*lst = (*lst)->next;
if (!verify_pack(target))
install_packed_git(target);
else
repo->can_update_info_refs = 0;
if (preq) {
if (finish_http_pack_request(preq) > 0)
fail = 0;
release_http_pack_request(preq);
}
}
if (fail)
repo->can_update_info_refs = 0;
release_request(request);
}
}
#ifdef USE_CURL_MULTI
static int is_running_queue;
static int fill_active_slot(void *unused)
{
struct transfer_request *request;
if (aborted)
if (aborted || !is_running_queue)
return 0;
for (request = request_queue_head; request; request = request->next) {
@@ -890,8 +662,6 @@ static void add_fetch_request(struct object *obj)
request->url = NULL;
request->lock = NULL;
request->headers = NULL;
request->local_fileno = -1;
request->local_stream = NULL;
request->state = NEED_FETCH;
request->next = request_queue_head;
request_queue_head = request;
@@ -930,8 +700,6 @@ static int add_send_request(struct object *obj, struct remote_lock *lock)
request->url = NULL;
request->lock = lock;
request->headers = NULL;
request->local_fileno = -1;
request->local_stream = NULL;
request->state = NEED_PUSH;
request->next = request_queue_head;
request_queue_head = request;
@@ -944,176 +712,23 @@ static int add_send_request(struct object *obj, struct remote_lock *lock)
return 1;
}
static int fetch_index(unsigned char *sha1)
{
char *hex = sha1_to_hex(sha1);
char *filename;
char *url;
char tmpfile[PATH_MAX];
long prev_posn = 0;
char range[RANGE_HEADER_SIZE];
struct curl_slist *range_header = NULL;
FILE *indexfile;
struct active_request_slot *slot;
struct slot_results results;
/* Don't use the index if the pack isn't there */
url = xmalloc(strlen(repo->url) + 64);
sprintf(url, "%sobjects/pack/pack-%s.pack", repo->url, hex);
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 1);
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.curl_result != CURLE_OK) {
free(url);
return error("Unable to verify pack %s is available",
hex);
}
} else {
free(url);
return error("Unable to start request");
}
if (has_pack_index(sha1)) {
free(url);
return 0;
}
if (push_verbosely)
fprintf(stderr, "Getting index for pack %s\n", hex);
sprintf(url, "%sobjects/pack/pack-%s.idx", repo->url, hex);
filename = sha1_pack_index_name(sha1);
snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
indexfile = fopen(tmpfile, "a");
if (!indexfile) {
free(url);
return error("Unable to open local file %s for pack index",
tmpfile);
}
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 0);
curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
curl_easy_setopt(slot->curl, CURLOPT_FILE, indexfile);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
slot->local = indexfile;
/* If there is data present from a previous transfer attempt,
resume where it left off */
prev_posn = ftell(indexfile);
if (prev_posn>0) {
if (push_verbosely)
fprintf(stderr,
"Resuming fetch of index for pack %s at byte %ld\n",
hex, prev_posn);
sprintf(range, "Range: bytes=%ld-", prev_posn);
range_header = curl_slist_append(range_header, range);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
}
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.curl_result != CURLE_OK) {
free(url);
fclose(indexfile);
return error("Unable to get pack index %s\n%s", url,
curl_errorstr);
}
} else {
free(url);
fclose(indexfile);
return error("Unable to start request");
}
free(url);
fclose(indexfile);
return move_temp_to_file(tmpfile, filename);
}
static int setup_index(unsigned char *sha1)
{
struct packed_git *new_pack;
if (fetch_index(sha1))
return -1;
new_pack = parse_pack_index(sha1);
new_pack->next = repo->packs;
repo->packs = new_pack;
return 0;
}
static int fetch_indices(void)
{
unsigned char sha1[20];
char *url;
struct strbuf buffer = STRBUF_INIT;
char *data;
int i = 0;
struct active_request_slot *slot;
struct slot_results results;
int ret;
if (push_verbosely)
fprintf(stderr, "Getting pack list\n");
url = xmalloc(strlen(repo->url) + 20);
sprintf(url, "%sobjects/info/packs", repo->url);
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.curl_result != CURLE_OK) {
strbuf_release(&buffer);
free(url);
if (results.http_code == 404)
return 0;
else
return error("%s", curl_errorstr);
}
} else {
strbuf_release(&buffer);
free(url);
return error("Unable to start request");
}
free(url);
data = buffer.buf;
while (i < buffer.len) {
switch (data[i]) {
case 'P':
i++;
if (i + 52 < buffer.len &&
!prefixcmp(data + i, " pack-") &&
!prefixcmp(data + i + 46, ".pack\n")) {
get_sha1_hex(data + i + 6, sha1);
setup_index(sha1);
i += 51;
break;
}
default:
while (data[i] != '\n')
i++;
}
i++;
switch (http_get_info_packs(repo->url, &repo->packs)) {
case HTTP_OK:
case HTTP_MISSING_TARGET:
ret = 0;
break;
default:
ret = -1;
}
strbuf_release(&buffer);
return 0;
return ret;
}
static void one_remote_object(const char *hex)
@@ -1844,7 +1459,7 @@ static int update_remote(unsigned char *sha1, struct remote_lock *lock)
return 1;
}
static struct ref *remote_refs, **remote_tail;
static struct ref *remote_refs;
static void one_remote_ref(char *refname)
{
@@ -1874,13 +1489,12 @@ static void one_remote_ref(char *refname)
}
}
*remote_tail = ref;
remote_tail = &ref->next;
ref->next = remote_refs;
remote_refs = ref;
}
static void get_dav_remote_heads(void)
{
remote_tail = &remote_refs;
remote_ls("refs/", (PROCESS_FILES | PROCESS_DIRS | RECURSIVE), process_ls_ref, NULL);
}
@@ -1977,29 +1591,22 @@ static void update_remote_info_refs(struct remote_lock *lock)
static int remote_exists(const char *path)
{
char *url = xmalloc(strlen(repo->url) + strlen(path) + 1);
struct active_request_slot *slot;
struct slot_results results;
int ret = -1;
int ret;
sprintf(url, "%s%s", repo->url, path);
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 1);
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.http_code == 404)
ret = 0;
else if (results.curl_result == CURLE_OK)
ret = 1;
else
fprintf(stderr, "HEAD HTTP error %ld\n", results.http_code);
} else {
fprintf(stderr, "Unable to start HEAD request\n");
switch (http_get_strbuf(url, NULL, 0)) {
case HTTP_OK:
ret = 1;
break;
case HTTP_MISSING_TARGET:
ret = 0;
break;
case HTTP_ERROR:
http_error(url, HTTP_ERROR);
default:
ret = -1;
}
free(url);
return ret;
}
@@ -2008,27 +1615,13 @@ static void fetch_symref(const char *path, char **symref, unsigned char *sha1)
{
char *url;
struct strbuf buffer = STRBUF_INIT;
struct active_request_slot *slot;
struct slot_results results;
url = xmalloc(strlen(repo->url) + strlen(path) + 1);
sprintf(url, "%s%s", repo->url, path);
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.curl_result != CURLE_OK) {
die("Couldn't get %s for remote symref\n%s",
url, curl_errorstr);
}
} else {
die("Unable to start remote symref request");
}
if (http_get_strbuf(url, &buffer, 0) != HTTP_OK)
die("Couldn't get %s for remote symref\n%s", url,
curl_errorstr);
free(url);
free(*symref);
@@ -2157,6 +1750,25 @@ static int delete_remote_branch(char *pattern, int force)
return 0;
}
void run_request_queue(void)
{
#ifdef USE_CURL_MULTI
is_running_queue = 1;
fill_active_slots();
add_fill_function(NULL, fill_active_slot);
#endif
do {
finish_all_active_slots();
#ifdef USE_CURL_MULTI
fill_active_slots();
#endif
} while (request_queue_head && !aborted);
#ifdef USE_CURL_MULTI
is_running_queue = 0;
#endif
}
int main(int argc, char **argv)
{
struct transfer_request *request;
@@ -2201,6 +1813,7 @@ int main(int argc, char **argv)
}
if (!strcmp(arg, "--verbose")) {
push_verbosely = 1;
http_is_verbose = 1;
continue;
}
if (!strcmp(arg, "-d")) {
@@ -2250,8 +1863,6 @@ int main(int argc, char **argv)
remote->url[remote->url_nr++] = repo->url;
http_init(remote);
no_pragma_header = curl_slist_append(no_pragma_header, "Pragma:");
if (repo->url && repo->url[strlen(repo->url)-1] != '/') {
rewritten_url = xmalloc(strlen(repo->url)+2);
strcpy(rewritten_url, repo->url);
@@ -2261,6 +1872,10 @@ int main(int argc, char **argv)
repo->url = rewritten_url;
}
#ifdef USE_CURL_MULTI
is_running_queue = 0;
#endif
/* Verify DAV compliance/lock support */
if (!locking_available()) {
rc = 1;
@@ -2290,6 +1905,7 @@ int main(int argc, char **argv)
local_refs = get_local_heads();
fprintf(stderr, "Fetching remote heads...\n");
get_dav_remote_heads();
run_request_queue();
/* Remove a remote branch if -d or -D was specified */
if (delete_branch) {
@@ -2300,9 +1916,7 @@ int main(int argc, char **argv)
}
/* match them up */
if (!remote_tail)
remote_tail = &remote_refs;
if (match_refs(local_refs, remote_refs, &remote_tail,
if (match_refs(local_refs, &remote_refs,
nr_refspec, (const char **) refspec, push_all)) {
rc = -1;
goto cleanup;
@@ -2420,16 +2034,8 @@ int main(int argc, char **argv)
if (objects_to_send)
fprintf(stderr, " sending %d objects\n",
objects_to_send);
#ifdef USE_CURL_MULTI
fill_active_slots();
add_fill_function(NULL, fill_active_slot);
#endif
do {
finish_all_active_slots();
#ifdef USE_CURL_MULTI
fill_active_slots();
#endif
} while (request_queue_head && !aborted);
run_request_queue();
/* Update the remote branch if all went well */
if (aborted || !update_remote(ref->new_sha1, ref_lock))
@@ -2458,8 +2064,6 @@ int main(int argc, char **argv)
unlock_remote(info_ref_lock);
free(repo);
curl_slist_free_all(no_pragma_header);
http_cleanup();
request = request_queue_head;

View File

@@ -1,12 +1,8 @@
#include "cache.h"
#include "commit.h"
#include "pack.h"
#include "walker.h"
#include "http.h"
#define PREV_BUF_SIZE 4096
#define RANGE_HEADER_SIZE 30
struct alt_base
{
char *base;
@@ -27,20 +23,8 @@ struct object_request
struct walker *walker;
unsigned char sha1[20];
struct alt_base *repo;
char *url;
char filename[PATH_MAX];
char tmpfile[PATH_MAX];
int local;
enum object_request_state state;
CURLcode curl_result;
char errorstr[CURL_ERROR_SIZE];
long http_code;
unsigned char real_sha1[20];
git_SHA_CTX c;
z_stream stream;
int zret;
int rename;
struct active_request_slot *slot;
struct http_object_request *req;
struct object_request *next;
};
@@ -57,39 +41,10 @@ struct walker_data {
const char *url;
int got_alternates;
struct alt_base *alt;
struct curl_slist *no_pragma_header;
};
static struct object_request *object_queue_head;
static size_t fwrite_sha1_file(void *ptr, size_t eltsize, size_t nmemb,
void *data)
{
unsigned char expn[4096];
size_t size = eltsize * nmemb;
int posn = 0;
struct object_request *obj_req = (struct object_request *)data;
do {
ssize_t retval = xwrite(obj_req->local,
(char *) ptr + posn, size - posn);
if (retval < 0)
return posn;
posn += retval;
} while (posn < size);
obj_req->stream.avail_in = size;
obj_req->stream.next_in = ptr;
do {
obj_req->stream.next_out = expn;
obj_req->stream.avail_out = sizeof(expn);
obj_req->zret = git_inflate(&obj_req->stream, Z_SYNC_FLUSH);
git_SHA1_Update(&obj_req->c, expn,
sizeof(expn) - obj_req->stream.avail_out);
} while (obj_req->stream.avail_in && obj_req->zret == Z_OK);
data_received++;
return size;
}
static void fetch_alternates(struct walker *walker, const char *base);
static void process_object_response(void *callback_data);
@@ -97,165 +52,35 @@ static void process_object_response(void *callback_data);
static void start_object_request(struct walker *walker,
struct object_request *obj_req)
{
char *hex = sha1_to_hex(obj_req->sha1);
char prevfile[PATH_MAX];
char *url;
char *posn;
int prevlocal;
unsigned char prev_buf[PREV_BUF_SIZE];
ssize_t prev_read = 0;
long prev_posn = 0;
char range[RANGE_HEADER_SIZE];
struct curl_slist *range_header = NULL;
struct active_request_slot *slot;
struct walker_data *data = walker->data;
struct http_object_request *req;
snprintf(prevfile, sizeof(prevfile), "%s.prev", obj_req->filename);
unlink_or_warn(prevfile);
rename(obj_req->tmpfile, prevfile);
unlink_or_warn(obj_req->tmpfile);
if (obj_req->local != -1)
error("fd leakage in start: %d", obj_req->local);
obj_req->local = open(obj_req->tmpfile,
O_WRONLY | O_CREAT | O_EXCL, 0666);
/* This could have failed due to the "lazy directory creation";
* try to mkdir the last path component.
*/
if (obj_req->local < 0 && errno == ENOENT) {
char *dir = strrchr(obj_req->tmpfile, '/');
if (dir) {
*dir = 0;
mkdir(obj_req->tmpfile, 0777);
*dir = '/';
}
obj_req->local = open(obj_req->tmpfile,
O_WRONLY | O_CREAT | O_EXCL, 0666);
}
if (obj_req->local < 0) {
req = new_http_object_request(obj_req->repo->base, obj_req->sha1);
if (req == NULL) {
obj_req->state = ABORTED;
error("Couldn't create temporary file %s for %s: %s",
obj_req->tmpfile, obj_req->filename, strerror(errno));
return;
}
obj_req->req = req;
memset(&obj_req->stream, 0, sizeof(obj_req->stream));
git_inflate_init(&obj_req->stream);
git_SHA1_Init(&obj_req->c);
url = xmalloc(strlen(obj_req->repo->base) + 51);
obj_req->url = xmalloc(strlen(obj_req->repo->base) + 51);
strcpy(url, obj_req->repo->base);
posn = url + strlen(obj_req->repo->base);
strcpy(posn, "/objects/");
posn += 9;
memcpy(posn, hex, 2);
posn += 2;
*(posn++) = '/';
strcpy(posn, hex + 2);
strcpy(obj_req->url, url);
/* If a previous temp file is present, process what was already
fetched. */
prevlocal = open(prevfile, O_RDONLY);
if (prevlocal != -1) {
do {
prev_read = xread(prevlocal, prev_buf, PREV_BUF_SIZE);
if (prev_read>0) {
if (fwrite_sha1_file(prev_buf,
1,
prev_read,
obj_req) == prev_read) {
prev_posn += prev_read;
} else {
prev_read = -1;
}
}
} while (prev_read > 0);
close(prevlocal);
}
unlink_or_warn(prevfile);
/* Reset inflate/SHA1 if there was an error reading the previous temp
file; also rewind to the beginning of the local file. */
if (prev_read == -1) {
memset(&obj_req->stream, 0, sizeof(obj_req->stream));
git_inflate_init(&obj_req->stream);
git_SHA1_Init(&obj_req->c);
if (prev_posn>0) {
prev_posn = 0;
lseek(obj_req->local, 0, SEEK_SET);
ftruncate(obj_req->local, 0);
}
}
slot = get_active_slot();
slot = req->slot;
slot->callback_func = process_object_response;
slot->callback_data = obj_req;
obj_req->slot = slot;
curl_easy_setopt(slot->curl, CURLOPT_FILE, obj_req);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
curl_easy_setopt(slot->curl, CURLOPT_ERRORBUFFER, obj_req->errorstr);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, data->no_pragma_header);
/* If we have successfully processed data from a previous fetch
attempt, only fetch the data we don't already have. */
if (prev_posn>0) {
if (walker->get_verbosely)
fprintf(stderr,
"Resuming fetch of object %s at byte %ld\n",
hex, prev_posn);
sprintf(range, "Range: bytes=%ld-", prev_posn);
range_header = curl_slist_append(range_header, range);
curl_easy_setopt(slot->curl,
CURLOPT_HTTPHEADER, range_header);
}
/* Try to get the request started, abort the request on error */
obj_req->state = ACTIVE;
if (!start_active_slot(slot)) {
obj_req->state = ABORTED;
obj_req->slot = NULL;
close(obj_req->local); obj_req->local = -1;
free(obj_req->url);
release_http_object_request(req);
return;
}
}
static void finish_object_request(struct object_request *obj_req)
{
struct stat st;
close(obj_req->local); obj_req->local = -1;
if (obj_req->http_code == 416) {
fprintf(stderr, "Warning: requested range invalid; we may already have all the data.\n");
} else if (obj_req->curl_result != CURLE_OK) {
if (stat(obj_req->tmpfile, &st) == 0)
if (st.st_size == 0)
unlink_or_warn(obj_req->tmpfile);
if (finish_http_object_request(obj_req->req))
return;
}
git_inflate_end(&obj_req->stream);
git_SHA1_Final(obj_req->real_sha1, &obj_req->c);
if (obj_req->zret != Z_STREAM_END) {
unlink_or_warn(obj_req->tmpfile);
return;
}
if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
unlink_or_warn(obj_req->tmpfile);
return;
}
obj_req->rename =
move_temp_to_file(obj_req->tmpfile, obj_req->filename);
if (obj_req->rename == 0)
if (obj_req->req->rename == 0)
walker_say(obj_req->walker, "got %s\n", sha1_to_hex(obj_req->sha1));
}
@@ -267,19 +92,16 @@ static void process_object_response(void *callback_data)
struct walker_data *data = walker->data;
struct alt_base *alt = data->alt;
obj_req->curl_result = obj_req->slot->curl_result;
obj_req->http_code = obj_req->slot->http_code;
obj_req->slot = NULL;
process_http_object_request(obj_req->req);
obj_req->state = COMPLETE;
/* Use alternates if necessary */
if (missing_target(obj_req)) {
if (missing_target(obj_req->req)) {
fetch_alternates(walker, alt->base);
if (obj_req->repo->next != NULL) {
obj_req->repo =
obj_req->repo->next;
close(obj_req->local);
obj_req->local = -1;
release_http_object_request(obj_req->req);
start_object_request(walker, obj_req);
return;
}
@@ -292,8 +114,8 @@ static void release_object_request(struct object_request *obj_req)
{
struct object_request *entry = object_queue_head;
if (obj_req->local != -1)
error("fd leakage in release: %d", obj_req->local);
if (obj_req->req !=NULL && obj_req->req->localfile != -1)
error("fd leakage in release: %d", obj_req->req->localfile);
if (obj_req == object_queue_head) {
object_queue_head = obj_req->next;
} else {
@@ -303,7 +125,6 @@ static void release_object_request(struct object_request *obj_req)
entry->next = entry->next->next;
}
free(obj_req->url);
free(obj_req);
}
@@ -331,28 +152,23 @@ static void prefetch(struct walker *walker, unsigned char *sha1)
struct object_request *newreq;
struct object_request *tail;
struct walker_data *data = walker->data;
char *filename = sha1_file_name(sha1);
newreq = xmalloc(sizeof(*newreq));
newreq->walker = walker;
hashcpy(newreq->sha1, sha1);
newreq->repo = data->alt;
newreq->url = NULL;
newreq->local = -1;
newreq->state = WAITING;
snprintf(newreq->filename, sizeof(newreq->filename), "%s", filename);
snprintf(newreq->tmpfile, sizeof(newreq->tmpfile),
"%s.temp", filename);
newreq->slot = NULL;
newreq->req = NULL;
newreq->next = NULL;
http_is_verbose = walker->get_verbosely;
if (object_queue_head == NULL) {
object_queue_head = newreq;
} else {
tail = object_queue_head;
while (tail->next != NULL) {
while (tail->next != NULL)
tail = tail->next;
}
tail->next = newreq;
}
@@ -362,92 +178,6 @@ static void prefetch(struct walker *walker, unsigned char *sha1)
#endif
}
static int fetch_index(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
{
char *hex = sha1_to_hex(sha1);
char *filename;
char *url;
char tmpfile[PATH_MAX];
long prev_posn = 0;
char range[RANGE_HEADER_SIZE];
struct curl_slist *range_header = NULL;
struct walker_data *data = walker->data;
FILE *indexfile;
struct active_request_slot *slot;
struct slot_results results;
if (has_pack_index(sha1))
return 0;
if (walker->get_verbosely)
fprintf(stderr, "Getting index for pack %s\n", hex);
url = xmalloc(strlen(repo->base) + 64);
sprintf(url, "%s/objects/pack/pack-%s.idx", repo->base, hex);
filename = sha1_pack_index_name(sha1);
snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
indexfile = fopen(tmpfile, "a");
if (!indexfile)
return error("Unable to open local file %s for pack index",
tmpfile);
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_FILE, indexfile);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, data->no_pragma_header);
slot->local = indexfile;
/* If there is data present from a previous transfer attempt,
resume where it left off */
prev_posn = ftell(indexfile);
if (prev_posn>0) {
if (walker->get_verbosely)
fprintf(stderr,
"Resuming fetch of index for pack %s at byte %ld\n",
hex, prev_posn);
sprintf(range, "Range: bytes=%ld-", prev_posn);
range_header = curl_slist_append(range_header, range);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
}
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.curl_result != CURLE_OK) {
fclose(indexfile);
return error("Unable to get pack index %s\n%s", url,
curl_errorstr);
}
} else {
fclose(indexfile);
return error("Unable to start request");
}
fclose(indexfile);
return move_temp_to_file(tmpfile, filename);
}
static int setup_index(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
{
struct packed_git *new_pack;
if (has_pack_file(sha1))
return 0; /* don't list this as something we can get */
if (fetch_index(walker, repo, sha1))
return -1;
new_pack = parse_pack_index(sha1);
if (!new_pack)
return -1; /* parse_pack_index() already issued error message */
new_pack->next = repo->packs;
repo->packs = new_pack;
return 0;
}
static void process_alternates_response(void *callback_data)
{
struct alternates_request *alt_req =
@@ -504,7 +234,8 @@ static void process_alternates_response(void *callback_data)
struct alt_base *newalt;
char *target = NULL;
if (data[i] == '/') {
/* This counts
/*
* This counts
* http://git.host/pub/scm/linux.git/
* -----------here^
* so memcpy(dst, base, serverlen) will
@@ -517,7 +248,8 @@ static void process_alternates_response(void *callback_data)
okay = 1;
}
} else if (!memcmp(data + i, "../", 3)) {
/* Relative URL; chop the corresponding
/*
* Relative URL; chop the corresponding
* number of subpath from base (and ../
* from data), and concatenate the result.
*
@@ -546,7 +278,7 @@ static void process_alternates_response(void *callback_data)
}
/* If the server got removed, give up. */
okay = strchr(base, ':') - base + 3 <
serverlen;
serverlen;
} else if (alt_req->http_specific) {
char *colon = strchr(data + i, ':');
char *slash = strchr(data + i, '/');
@@ -590,9 +322,11 @@ static void fetch_alternates(struct walker *walker, const char *base)
struct alternates_request alt_req;
struct walker_data *cdata = walker->data;
/* If another request has already started fetching alternates,
wait for them to arrive and return to processing this request's
curl message */
/*
* If another request has already started fetching alternates,
* wait for them to arrive and return to processing this request's
* curl message
*/
#ifdef USE_CURL_MULTI
while (cdata->got_alternates == 0) {
step_active_slots();
@@ -612,8 +346,10 @@ static void fetch_alternates(struct walker *walker, const char *base)
url = xmalloc(strlen(base) + 31);
sprintf(url, "%s/objects/info/http-alternates", base);
/* Use a callback to process the result, since another request
may fail and need to have alternates loaded before continuing */
/*
* Use a callback to process the result, since another request
* may fail and need to have alternates loaded before continuing
*/
slot = get_active_slot();
slot->callback_func = process_alternates_response;
alt_req.walker = walker;
@@ -640,15 +376,7 @@ static void fetch_alternates(struct walker *walker, const char *base)
static int fetch_indices(struct walker *walker, struct alt_base *repo)
{
unsigned char sha1[20];
char *url;
struct strbuf buffer = STRBUF_INIT;
char *data;
int i = 0;
int ret = 0;
struct active_request_slot *slot;
struct slot_results results;
int ret;
if (repo->got_indices)
return 0;
@@ -656,76 +384,26 @@ static int fetch_indices(struct walker *walker, struct alt_base *repo)
if (walker->get_verbosely)
fprintf(stderr, "Getting pack list for %s\n", repo->base);
url = xmalloc(strlen(repo->base) + 21);
sprintf(url, "%s/objects/info/packs", repo->base);
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.curl_result != CURLE_OK) {
if (missing_target(&results)) {
repo->got_indices = 1;
goto cleanup;
} else {
repo->got_indices = 0;
ret = error("%s", curl_errorstr);
goto cleanup;
}
}
} else {
switch (http_get_info_packs(repo->base, &repo->packs)) {
case HTTP_OK:
case HTTP_MISSING_TARGET:
repo->got_indices = 1;
ret = 0;
break;
default:
repo->got_indices = 0;
ret = error("Unable to start request");
goto cleanup;
ret = -1;
}
data = buffer.buf;
while (i < buffer.len) {
switch (data[i]) {
case 'P':
i++;
if (i + 52 <= buffer.len &&
!prefixcmp(data + i, " pack-") &&
!prefixcmp(data + i + 46, ".pack\n")) {
get_sha1_hex(data + i + 6, sha1);
setup_index(walker, repo, sha1);
i += 51;
break;
}
default:
while (i < buffer.len && data[i] != '\n')
i++;
}
i++;
}
repo->got_indices = 1;
cleanup:
strbuf_release(&buffer);
free(url);
return ret;
}
static int fetch_pack(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
{
char *url;
struct packed_git *target;
struct packed_git **lst;
FILE *packfile;
char *filename;
char tmpfile[PATH_MAX];
int ret;
long prev_posn = 0;
char range[RANGE_HEADER_SIZE];
struct curl_slist *range_header = NULL;
struct walker_data *data = walker->data;
struct active_request_slot *slot;
struct slot_results results;
struct http_pack_request *preq;
if (fetch_indices(walker, repo))
return -1;
@@ -740,80 +418,37 @@ static int fetch_pack(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
sha1_to_hex(sha1));
}
url = xmalloc(strlen(repo->base) + 65);
sprintf(url, "%s/objects/pack/pack-%s.pack",
repo->base, sha1_to_hex(target->sha1));
preq = new_http_pack_request(target, repo->base);
if (preq == NULL)
goto abort;
preq->lst = &repo->packs;
preq->slot->results = &results;
filename = sha1_pack_name(target->sha1);
snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
packfile = fopen(tmpfile, "a");
if (!packfile)
return error("Unable to open local file %s for pack",
tmpfile);
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_FILE, packfile);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, data->no_pragma_header);
slot->local = packfile;
/* If there is data present from a previous transfer attempt,
resume where it left off */
prev_posn = ftell(packfile);
if (prev_posn>0) {
if (walker->get_verbosely)
fprintf(stderr,
"Resuming fetch of pack %s at byte %ld\n",
sha1_to_hex(target->sha1), prev_posn);
sprintf(range, "Range: bytes=%ld-", prev_posn);
range_header = curl_slist_append(range_header, range);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
}
if (start_active_slot(slot)) {
run_active_slot(slot);
if (start_active_slot(preq->slot)) {
run_active_slot(preq->slot);
if (results.curl_result != CURLE_OK) {
fclose(packfile);
return error("Unable to get pack file %s\n%s", url,
curl_errorstr);
error("Unable to get pack file %s\n%s", preq->url,
curl_errorstr);
goto abort;
}
} else {
fclose(packfile);
return error("Unable to start request");
error("Unable to start request");
goto abort;
}
target->pack_size = ftell(packfile);
fclose(packfile);
ret = move_temp_to_file(tmpfile, filename);
ret = finish_http_pack_request(preq);
release_http_pack_request(preq);
if (ret)
return ret;
lst = &repo->packs;
while (*lst != target)
lst = &((*lst)->next);
*lst = (*lst)->next;
if (verify_pack(target))
return -1;
install_packed_git(target);
return 0;
abort:
return -1;
}
static void abort_object_request(struct object_request *obj_req)
{
if (obj_req->local >= 0) {
close(obj_req->local);
obj_req->local = -1;
}
unlink_or_warn(obj_req->tmpfile);
if (obj_req->slot) {
release_active_slot(obj_req->slot);
obj_req->slot = NULL;
}
release_object_request(obj_req);
}
@@ -822,6 +457,7 @@ static int fetch_object(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
char *hex = sha1_to_hex(sha1);
int ret = 0;
struct object_request *obj_req = object_queue_head;
struct http_object_request *req;
while (obj_req != NULL && hashcmp(obj_req->sha1, sha1))
obj_req = obj_req->next;
@@ -829,45 +465,55 @@ static int fetch_object(struct walker *walker, struct alt_base *repo, unsigned char *sha1)
return error("Couldn't find request for %s in the queue", hex);
if (has_sha1_file(obj_req->sha1)) {
if (obj_req->req != NULL)
abort_http_object_request(obj_req->req);
abort_object_request(obj_req);
return 0;
}
#ifdef USE_CURL_MULTI
while (obj_req->state == WAITING) {
while (obj_req->state == WAITING)
step_active_slots();
}
#else
start_object_request(walker, obj_req);
#endif
while (obj_req->state == ACTIVE) {
run_active_slot(obj_req->slot);
}
if (obj_req->local != -1) {
close(obj_req->local); obj_req->local = -1;
/*
* obj_req->req might change when fetching alternates in the callback
* process_object_response; therefore, the "shortcut" variable, req,
* is used only after we're done with slots.
*/
while (obj_req->state == ACTIVE)
run_active_slot(obj_req->req->slot);
req = obj_req->req;
if (req->localfile != -1) {
close(req->localfile);
req->localfile = -1;
}
if (obj_req->state == ABORTED) {
ret = error("Request for %s aborted", hex);
} else if (obj_req->curl_result != CURLE_OK &&
obj_req->http_code != 416) {
if (missing_target(obj_req))
} else if (req->curl_result != CURLE_OK &&
req->http_code != 416) {
if (missing_target(req))
ret = -1; /* Be silent, it is probably in a pack. */
else
ret = error("%s (curl_result = %d, http_code = %ld, sha1 = %s)",
obj_req->errorstr, obj_req->curl_result,
obj_req->http_code, hex);
} else if (obj_req->zret != Z_STREAM_END) {
req->errorstr, req->curl_result,
req->http_code, hex);
} else if (req->zret != Z_STREAM_END) {
walker->corrupt_object_found++;
ret = error("File %s (%s) corrupt", hex, obj_req->url);
} else if (hashcmp(obj_req->sha1, obj_req->real_sha1)) {
ret = error("File %s (%s) corrupt", hex, req->url);
} else if (hashcmp(obj_req->sha1, req->real_sha1)) {
ret = error("File %s has bad hash", hex);
} else if (obj_req->rename < 0) {
} else if (req->rename < 0) {
ret = error("unable to write sha1 filename %s",
obj_req->filename);
req->filename);
}
release_http_object_request(req);
release_object_request(obj_req);
return ret;
}
@@ -897,10 +543,7 @@ static int fetch_ref(struct walker *walker, struct ref *ref)
static void cleanup(struct walker *walker)
{
struct walker_data *data = walker->data;
http_cleanup();
curl_slist_free_all(data->no_pragma_header);
}
struct walker *get_http_walker(const char *url, struct remote *remote)
@@ -911,8 +554,6 @@ struct walker *get_http_walker(const char *url, struct remote *remote)
http_init(remote);
data->no_pragma_header = curl_slist_append(NULL, "Pragma:");
data->alt = xmalloc(sizeof(*data->alt));
data->alt->base = xmalloc(strlen(url) + 1);
strcpy(data->alt->base, url);

http.c

@@ -1,7 +1,9 @@
#include "http.h"
#include "pack.h"
int data_received;
int active_requests;
int http_is_verbose;
#ifdef USE_CURL_MULTI
static int max_requests = -1;
@@ -10,6 +12,10 @@ static CURLM *curlm;
#ifndef NO_CURL_EASY_DUPHANDLE
static CURL *curl_default;
#endif
#define PREV_BUF_SIZE 4096
#define RANGE_HEADER_SIZE 30
char curl_errorstr[CURL_ERROR_SIZE];
static int curl_ssl_verify = -1;
@@ -28,6 +34,7 @@ static const char *curl_http_proxy;
static char *user_name, *user_pass;
static struct curl_slist *pragma_header;
static struct curl_slist *no_pragma_header;
static struct active_request_slot *active_queue_head;
@@ -276,6 +283,8 @@ void http_init(struct remote *remote)
char *low_speed_limit;
char *low_speed_time;
http_is_verbose = 0;
git_config(http_options, NULL);
curl_global_init(CURL_GLOBAL_ALL);
@@ -284,6 +293,7 @@ void http_init(struct remote *remote)
curl_http_proxy = xstrdup(remote->http_proxy);
pragma_header = curl_slist_append(pragma_header, "Pragma: no-cache");
no_pragma_header = curl_slist_append(no_pragma_header, "Pragma:");
#ifdef USE_CURL_MULTI
{
@@ -366,6 +376,9 @@ void http_cleanup(void)
curl_slist_free_all(pragma_header);
pragma_header = NULL;
curl_slist_free_all(no_pragma_header);
no_pragma_header = NULL;
if (curl_http_proxy) {
free((void *)curl_http_proxy);
curl_http_proxy = NULL;
@@ -611,6 +624,7 @@ void finish_all_active_slots(void)
}
}
/* Helpers for modifying and creating URLs */
static inline int needs_quote(int ch)
{
if (((ch >= 'A') && (ch <= 'Z'))
@@ -631,15 +645,20 @@ static inline int hex(int v)
return 'A' + v - 10;
}
static void end_url_with_slash(struct strbuf *buf, const char *url)
{
strbuf_addstr(buf, url);
if (buf->len && buf->buf[buf->len - 1] != '/')
strbuf_addstr(buf, "/");
}
static char *quote_ref_url(const char *base, const char *ref)
{
struct strbuf buf = STRBUF_INIT;
const char *cp;
int ch;
strbuf_addstr(&buf, base);
if (buf.len && buf.buf[buf.len - 1] != '/' && *ref != '/')
strbuf_addstr(&buf, "/");
end_url_with_slash(&buf, base);
for (cp = ref; (ch = *cp) != 0; cp++)
if (needs_quote(ch))
@@ -650,41 +669,575 @@ static char *quote_ref_url(const char *base, const char *ref)
return strbuf_detach(&buf, NULL);
}
void append_remote_object_url(struct strbuf *buf, const char *url,
const char *hex,
int only_two_digit_prefix)
{
strbuf_addf(buf, "%s/objects/%.*s/", url, 2, hex);
if (!only_two_digit_prefix)
strbuf_addf(buf, "%s", hex+2);
}
char *get_remote_object_url(const char *url, const char *hex,
int only_two_digit_prefix)
{
struct strbuf buf = STRBUF_INIT;
append_remote_object_url(&buf, url, hex, only_two_digit_prefix);
return strbuf_detach(&buf, NULL);
}
/* http_request() targets */
#define HTTP_REQUEST_STRBUF 0
#define HTTP_REQUEST_FILE 1
static int http_request(const char *url, void *result, int target, int options)
{
struct active_request_slot *slot;
struct slot_results results;
struct curl_slist *headers = NULL;
struct strbuf buf = STRBUF_INIT;
int ret;
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
if (result == NULL) {
curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 1);
} else {
curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 0);
curl_easy_setopt(slot->curl, CURLOPT_FILE, result);
if (target == HTTP_REQUEST_FILE) {
long posn = ftell(result);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION,
fwrite);
if (posn > 0) {
strbuf_addf(&buf, "Range: bytes=%ld-", posn);
headers = curl_slist_append(headers, buf.buf);
strbuf_reset(&buf);
}
slot->local = result;
} else
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION,
fwrite_buffer);
}
strbuf_addstr(&buf, "Pragma:");
if (options & HTTP_NO_CACHE)
strbuf_addstr(&buf, " no-cache");
headers = curl_slist_append(headers, buf.buf);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, headers);
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.curl_result == CURLE_OK)
ret = HTTP_OK;
else if (missing_target(&results))
ret = HTTP_MISSING_TARGET;
else
ret = HTTP_ERROR;
} else {
error("Unable to start HTTP request for %s", url);
ret = HTTP_START_FAILED;
}
slot->local = NULL;
curl_slist_free_all(headers);
strbuf_release(&buf);
return ret;
}
int http_get_strbuf(const char *url, struct strbuf *result, int options)
{
return http_request(url, result, HTTP_REQUEST_STRBUF, options);
}
int http_get_file(const char *url, const char *filename, int options)
{
int ret;
struct strbuf tmpfile = STRBUF_INIT;
FILE *result;
strbuf_addf(&tmpfile, "%s.temp", filename);
result = fopen(tmpfile.buf, "a");
if (! result) {
error("Unable to open local file %s", tmpfile.buf);
ret = HTTP_ERROR;
goto cleanup;
}
ret = http_request(url, result, HTTP_REQUEST_FILE, options);
fclose(result);
if ((ret == HTTP_OK) && move_temp_to_file(tmpfile.buf, filename))
ret = HTTP_ERROR;
cleanup:
strbuf_release(&tmpfile);
return ret;
}
int http_error(const char *url, int ret)
{
/* http_request has already handled HTTP_START_FAILED. */
if (ret != HTTP_START_FAILED)
error("%s while accessing %s\n", curl_errorstr, url);
return ret;
}
int http_fetch_ref(const char *base, struct ref *ref)
{
char *url;
struct strbuf buffer = STRBUF_INIT;
struct active_request_slot *slot;
struct slot_results results;
int ret;
int ret = -1;
url = quote_ref_url(base, ref->name);
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.curl_result == CURLE_OK) {
strbuf_rtrim(&buffer);
if (buffer.len == 40)
ret = get_sha1_hex(buffer.buf, ref->old_sha1);
else if (!prefixcmp(buffer.buf, "ref: ")) {
ref->symref = xstrdup(buffer.buf + 5);
ret = 0;
} else
ret = 1;
} else {
ret = error("Couldn't get %s for %s\n%s",
url, ref->name, curl_errorstr);
if (http_get_strbuf(url, &buffer, HTTP_NO_CACHE) == HTTP_OK) {
strbuf_rtrim(&buffer);
if (buffer.len == 40)
ret = get_sha1_hex(buffer.buf, ref->old_sha1);
else if (!prefixcmp(buffer.buf, "ref: ")) {
ref->symref = xstrdup(buffer.buf + 5);
ret = 0;
}
} else {
ret = error("Unable to start request");
}
strbuf_release(&buffer);
free(url);
return ret;
}
/* Helpers for fetching packs */
static int fetch_pack_index(unsigned char *sha1, const char *base_url)
{
int ret = 0;
char *hex = xstrdup(sha1_to_hex(sha1));
char *filename;
char *url;
struct strbuf buf = STRBUF_INIT;
/* Don't use the index if the pack isn't there */
end_url_with_slash(&buf, base_url);
strbuf_addf(&buf, "objects/pack/pack-%s.pack", hex);
url = strbuf_detach(&buf, 0);
if (http_get_strbuf(url, NULL, 0)) {
ret = error("Unable to verify pack %s is available",
hex);
goto cleanup;
}
if (has_pack_index(sha1)) {
ret = 0;
goto cleanup;
}
if (http_is_verbose)
fprintf(stderr, "Getting index for pack %s\n", hex);
end_url_with_slash(&buf, base_url);
strbuf_addf(&buf, "objects/pack/pack-%s.idx", hex);
url = strbuf_detach(&buf, NULL);
filename = sha1_pack_index_name(sha1);
if (http_get_file(url, filename, 0) != HTTP_OK)
ret = error("Unable to get pack index %s\n", url);
cleanup:
free(hex);
free(url);
return ret;
}
static int fetch_and_setup_pack_index(struct packed_git **packs_head,
unsigned char *sha1, const char *base_url)
{
struct packed_git *new_pack;
if (fetch_pack_index(sha1, base_url))
return -1;
new_pack = parse_pack_index(sha1);
if (!new_pack)
return -1; /* parse_pack_index() already issued error message */
new_pack->next = *packs_head;
*packs_head = new_pack;
return 0;
}
int http_get_info_packs(const char *base_url, struct packed_git **packs_head)
{
int ret = 0, i = 0;
char *url, *data;
struct strbuf buf = STRBUF_INIT;
unsigned char sha1[20];
end_url_with_slash(&buf, base_url);
strbuf_addstr(&buf, "objects/info/packs");
url = strbuf_detach(&buf, NULL);
ret = http_get_strbuf(url, &buf, HTTP_NO_CACHE);
if (ret != HTTP_OK)
goto cleanup;
data = buf.buf;
while (i < buf.len) {
switch (data[i]) {
case 'P':
i++;
if (i + 52 <= buf.len &&
!prefixcmp(data + i, " pack-") &&
!prefixcmp(data + i + 46, ".pack\n")) {
get_sha1_hex(data + i + 6, sha1);
fetch_and_setup_pack_index(packs_head, sha1,
base_url);
i += 51;
break;
}
default:
while (i < buf.len && data[i] != '\n')
i++;
}
i++;
}
cleanup:
free(url);
return ret;
}
void release_http_pack_request(struct http_pack_request *preq)
{
if (preq->packfile != NULL) {
fclose(preq->packfile);
preq->packfile = NULL;
preq->slot->local = NULL;
}
if (preq->range_header != NULL) {
curl_slist_free_all(preq->range_header);
preq->range_header = NULL;
}
preq->slot = NULL;
free(preq->url);
}
int finish_http_pack_request(struct http_pack_request *preq)
{
int ret;
struct packed_git **lst;
preq->target->pack_size = ftell(preq->packfile);
if (preq->packfile != NULL) {
fclose(preq->packfile);
preq->packfile = NULL;
preq->slot->local = NULL;
}
ret = move_temp_to_file(preq->tmpfile, preq->filename);
if (ret)
return ret;
lst = preq->lst;
while (*lst != preq->target)
lst = &((*lst)->next);
*lst = (*lst)->next;
if (verify_pack(preq->target))
return -1;
install_packed_git(preq->target);
return 0;
}
struct http_pack_request *new_http_pack_request(
struct packed_git *target, const char *base_url)
{
char *url;
char *filename;
long prev_posn = 0;
char range[RANGE_HEADER_SIZE];
struct strbuf buf = STRBUF_INIT;
struct http_pack_request *preq;
preq = xmalloc(sizeof(*preq));
preq->target = target;
preq->range_header = NULL;
end_url_with_slash(&buf, base_url);
strbuf_addf(&buf, "objects/pack/pack-%s.pack",
sha1_to_hex(target->sha1));
url = strbuf_detach(&buf, NULL);
preq->url = xstrdup(url);
filename = sha1_pack_name(target->sha1);
snprintf(preq->filename, sizeof(preq->filename), "%s", filename);
snprintf(preq->tmpfile, sizeof(preq->tmpfile), "%s.temp", filename);
preq->packfile = fopen(preq->tmpfile, "a");
if (!preq->packfile) {
error("Unable to open local file %s for pack",
preq->tmpfile);
goto abort;
}
preq->slot = get_active_slot();
preq->slot->local = preq->packfile;
curl_easy_setopt(preq->slot->curl, CURLOPT_FILE, preq->packfile);
curl_easy_setopt(preq->slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
curl_easy_setopt(preq->slot->curl, CURLOPT_URL, url);
curl_easy_setopt(preq->slot->curl, CURLOPT_HTTPHEADER,
no_pragma_header);
/*
* If there is data present from a previous transfer attempt,
* resume where it left off
*/
prev_posn = ftell(preq->packfile);
if (prev_posn>0) {
if (http_is_verbose)
fprintf(stderr,
"Resuming fetch of pack %s at byte %ld\n",
sha1_to_hex(target->sha1), prev_posn);
sprintf(range, "Range: bytes=%ld-", prev_posn);
preq->range_header = curl_slist_append(NULL, range);
curl_easy_setopt(preq->slot->curl, CURLOPT_HTTPHEADER,
preq->range_header);
}
return preq;
abort:
free(filename);
return NULL;
}
/* Helpers for fetching objects (loose) */
static size_t fwrite_sha1_file(void *ptr, size_t eltsize, size_t nmemb,
void *data)
{
unsigned char expn[4096];
size_t size = eltsize * nmemb;
int posn = 0;
struct http_object_request *freq =
(struct http_object_request *)data;
do {
ssize_t retval = xwrite(freq->localfile,
(char *) ptr + posn, size - posn);
if (retval < 0)
return posn;
posn += retval;
} while (posn < size);
freq->stream.avail_in = size;
freq->stream.next_in = ptr;
do {
freq->stream.next_out = expn;
freq->stream.avail_out = sizeof(expn);
freq->zret = git_inflate(&freq->stream, Z_SYNC_FLUSH);
git_SHA1_Update(&freq->c, expn,
sizeof(expn) - freq->stream.avail_out);
} while (freq->stream.avail_in && freq->zret == Z_OK);
data_received++;
return size;
}
struct http_object_request *new_http_object_request(const char *base_url,
unsigned char *sha1)
{
char *hex = sha1_to_hex(sha1);
char *filename;
char prevfile[PATH_MAX];
char *url;
int prevlocal;
unsigned char prev_buf[PREV_BUF_SIZE];
ssize_t prev_read = 0;
long prev_posn = 0;
char range[RANGE_HEADER_SIZE];
struct curl_slist *range_header = NULL;
struct http_object_request *freq;
freq = xmalloc(sizeof(*freq));
hashcpy(freq->sha1, sha1);
freq->localfile = -1;
filename = sha1_file_name(sha1);
snprintf(freq->filename, sizeof(freq->filename), "%s", filename);
snprintf(freq->tmpfile, sizeof(freq->tmpfile),
"%s.temp", filename);
snprintf(prevfile, sizeof(prevfile), "%s.prev", filename);
unlink_or_warn(prevfile);
rename(freq->tmpfile, prevfile);
unlink_or_warn(freq->tmpfile);
if (freq->localfile != -1)
error("fd leakage in start: %d", freq->localfile);
freq->localfile = open(freq->tmpfile,
O_WRONLY | O_CREAT | O_EXCL, 0666);
/*
* This could have failed due to the "lazy directory creation";
* try to mkdir the last path component.
*/
if (freq->localfile < 0 && errno == ENOENT) {
char *dir = strrchr(freq->tmpfile, '/');
if (dir) {
*dir = 0;
mkdir(freq->tmpfile, 0777);
*dir = '/';
}
freq->localfile = open(freq->tmpfile,
O_WRONLY | O_CREAT | O_EXCL, 0666);
}
if (freq->localfile < 0) {
error("Couldn't create temporary file %s for %s: %s",
freq->tmpfile, freq->filename, strerror(errno));
goto abort;
}
memset(&freq->stream, 0, sizeof(freq->stream));
git_inflate_init(&freq->stream);
git_SHA1_Init(&freq->c);
url = get_remote_object_url(base_url, hex, 0);
freq->url = xstrdup(url);
/*
* If a previous temp file is present, process what was already
* fetched.
*/
prevlocal = open(prevfile, O_RDONLY);
if (prevlocal != -1) {
do {
prev_read = xread(prevlocal, prev_buf, PREV_BUF_SIZE);
if (prev_read>0) {
if (fwrite_sha1_file(prev_buf,
1,
prev_read,
freq) == prev_read) {
prev_posn += prev_read;
} else {
prev_read = -1;
}
}
} while (prev_read > 0);
close(prevlocal);
}
unlink_or_warn(prevfile);
/*
* Reset inflate/SHA1 if there was an error reading the previous temp
* file; also rewind to the beginning of the local file.
*/
if (prev_read == -1) {
memset(&freq->stream, 0, sizeof(freq->stream));
git_inflate_init(&freq->stream);
git_SHA1_Init(&freq->c);
if (prev_posn>0) {
prev_posn = 0;
lseek(freq->localfile, 0, SEEK_SET);
ftruncate(freq->localfile, 0);
}
}
freq->slot = get_active_slot();
curl_easy_setopt(freq->slot->curl, CURLOPT_FILE, freq);
curl_easy_setopt(freq->slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
curl_easy_setopt(freq->slot->curl, CURLOPT_ERRORBUFFER, freq->errorstr);
curl_easy_setopt(freq->slot->curl, CURLOPT_URL, url);
curl_easy_setopt(freq->slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
/*
* If we have successfully processed data from a previous fetch
* attempt, only fetch the data we don't already have.
*/
if (prev_posn>0) {
if (http_is_verbose)
fprintf(stderr,
"Resuming fetch of object %s at byte %ld\n",
hex, prev_posn);
sprintf(range, "Range: bytes=%ld-", prev_posn);
range_header = curl_slist_append(range_header, range);
curl_easy_setopt(freq->slot->curl,
CURLOPT_HTTPHEADER, range_header);
}
return freq;
free(url);
abort:
free(filename);
free(freq);
return NULL;
}
void process_http_object_request(struct http_object_request *freq)
{
if (freq->slot == NULL)
return;
freq->curl_result = freq->slot->curl_result;
freq->http_code = freq->slot->http_code;
freq->slot = NULL;
}
int finish_http_object_request(struct http_object_request *freq)
{
struct stat st;
close(freq->localfile);
freq->localfile = -1;
process_http_object_request(freq);
if (freq->http_code == 416) {
fprintf(stderr, "Warning: requested range invalid; we may already have all the data.\n");
} else if (freq->curl_result != CURLE_OK) {
if (stat(freq->tmpfile, &st) == 0)
if (st.st_size == 0)
unlink_or_warn(freq->tmpfile);
return -1;
}
git_inflate_end(&freq->stream);
git_SHA1_Final(freq->real_sha1, &freq->c);
if (freq->zret != Z_STREAM_END) {
unlink_or_warn(freq->tmpfile);
return -1;
}
if (hashcmp(freq->sha1, freq->real_sha1)) {
unlink_or_warn(freq->tmpfile);
return -1;
}
freq->rename =
move_temp_to_file(freq->tmpfile, freq->filename);
return freq->rename;
}
void abort_http_object_request(struct http_object_request *freq)
{
unlink_or_warn(freq->tmpfile);
release_http_object_request(freq);
}
void release_http_object_request(struct http_object_request *freq)
{
if (freq->localfile != -1) {
close(freq->localfile);
freq->localfile = -1;
}
if (freq->url != NULL) {
free(freq->url);
freq->url = NULL;
}
freq->slot = NULL;
}
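The loose-object helpers above are meant to be driven as a create / run / finish / release cycle, with abort_http_object_request() for the failure path. A minimal sketch of a synchronous caller follows; the wrapper name fetch_one_loose_object and its reduced error handling are illustrative only and not part of this change.

#include "cache.h"
#include "http.h"

/* Hypothetical helper: fetch a single loose object synchronously. */
static int fetch_one_loose_object(const char *base_url, unsigned char *sha1)
{
	struct http_object_request *freq;
	int ret = -1;

	freq = new_http_object_request(base_url, sha1);
	if (freq == NULL)
		return -1;	/* temp file or request setup failed */

	if (start_active_slot(freq->slot)) {
		run_active_slot(freq->slot);
		/*
		 * finish_http_object_request() verifies the inflated data
		 * against sha1 and moves the temp file into place.
		 */
		ret = finish_http_object_request(freq);
	} else {
		error("Unable to start request");
	}

	release_http_object_request(freq);
	return ret;
}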

http.h

@@ -93,6 +93,7 @@ extern void http_cleanup(void);
extern int data_received;
extern int active_requests;
extern int http_is_verbose;
extern char curl_errorstr[CURL_ERROR_SIZE];
@@ -109,6 +110,90 @@ static inline int missing__target(int code, int result)
#define missing_target(a) missing__target((a)->http_code, (a)->curl_result)
/* Helpers for modifying and creating URLs */
extern void append_remote_object_url(struct strbuf *buf, const char *url,
const char *hex,
int only_two_digit_prefix);
extern char *get_remote_object_url(const char *url, const char *hex,
int only_two_digit_prefix);
/* Options for http_request_*() */
#define HTTP_NO_CACHE 1
/* Return values for http_request_*() */
#define HTTP_OK 0
#define HTTP_MISSING_TARGET 1
#define HTTP_ERROR 2
#define HTTP_START_FAILED 3
/*
* Requests a URL and stores the result in a strbuf.
*
* If the result pointer is NULL, an HTTP HEAD request is made instead of GET.
*/
int http_get_strbuf(const char *url, struct strbuf *result, int options);
/*
* Downloads a URL and stores the result in the given file.
*
* If a previous interrupted download is detected (i.e. a previous temporary
* file is still around) the download is resumed.
*/
int http_get_file(const char *url, const char *filename, int options);
/*
* Prints an error message using error() containing url and curl_errorstr,
* and returns ret.
*/
int http_error(const char *url, int ret);
extern int http_fetch_ref(const char *base, struct ref *ref);
/* Helpers for fetching packs */
extern int http_get_info_packs(const char *base_url,
struct packed_git **packs_head);
struct http_pack_request
{
char *url;
struct packed_git *target;
struct packed_git **lst;
FILE *packfile;
char filename[PATH_MAX];
char tmpfile[PATH_MAX];
struct curl_slist *range_header;
struct active_request_slot *slot;
};
extern struct http_pack_request *new_http_pack_request(
struct packed_git *target, const char *base_url);
extern int finish_http_pack_request(struct http_pack_request *preq);
extern void release_http_pack_request(struct http_pack_request *preq);
/* Helpers for fetching object */
struct http_object_request
{
char *url;
char filename[PATH_MAX];
char tmpfile[PATH_MAX];
int localfile;
CURLcode curl_result;
char errorstr[CURL_ERROR_SIZE];
long http_code;
unsigned char sha1[20];
unsigned char real_sha1[20];
git_SHA_CTX c;
z_stream stream;
int zret;
int rename;
struct active_request_slot *slot;
};
extern struct http_object_request *new_http_object_request(
const char *base_url, unsigned char *sha1);
extern void process_http_object_request(struct http_object_request *freq);
extern int finish_http_object_request(struct http_object_request *freq);
extern void abort_http_object_request(struct http_object_request *freq);
extern void release_http_object_request(struct http_object_request *freq);
#endif /* HTTP_H */
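As a rough illustration of how callers are expected to use the new http_get_* entry points and their return codes (this mirrors the get_refs_via_curl() conversion further down; the function read_info_refs and its URL layout are made up for the example):

#include "cache.h"
#include "http.h"

/* Illustrative caller: fetch <base>/info/refs into a strbuf. */
static int read_info_refs(const char *base_url, struct strbuf *out)
{
	struct strbuf url = STRBUF_INIT;
	int ret;

	strbuf_addf(&url, "%s/info/refs", base_url);
	ret = http_get_strbuf(url.buf, out, HTTP_NO_CACHE);
	switch (ret) {
	case HTTP_OK:
		break;			/* response body is now in *out */
	case HTTP_MISSING_TARGET:
		error("%s not found", url.buf);
		break;
	default:
		/* prints curl_errorstr unless the request never started */
		http_error(url.buf, ret);
		break;
	}
	strbuf_release(&url);
	return (ret == HTTP_OK) ? 0 : -1;
}

http_get_file() follows the same return-code convention but writes to a local file and resumes a previously interrupted download instead of buffering in memory.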

View File

@@ -49,7 +49,7 @@ static int verify_packfile(struct packed_git *p,
const unsigned char *index_base = p->index_data;
git_SHA_CTX ctx;
unsigned char sha1[20], *pack_sig;
off_t offset = 0, pack_sig_ofs = p->pack_size - 20;
off_t offset = 0, pack_sig_ofs = 0;
uint32_t nr_objects, i;
int err = 0;
struct idx_entry *entries;
@@ -61,14 +61,16 @@ static int verify_packfile(struct packed_git *p,
*/
git_SHA1_Init(&ctx);
while (offset < pack_sig_ofs) {
do {
unsigned int remaining;
unsigned char *in = use_pack(p, w_curs, offset, &remaining);
offset += remaining;
if (!pack_sig_ofs)
pack_sig_ofs = p->pack_size - 20;
if (offset > pack_sig_ofs)
remaining -= (unsigned int)(offset - pack_sig_ofs);
git_SHA1_Update(&ctx, in, remaining);
}
} while (offset < pack_sig_ofs);
git_SHA1_Final(sha1, &ctx);
pack_sig = use_pack(p, w_curs, pack_sig_ofs, NULL);
if (hashcmp(sha1, pack_sig))

View File

@@ -306,6 +306,28 @@ static void check_typos(const char *arg, const struct option *options)
}
}
static void parse_options_check(const struct option *opts)
{
int err = 0;
for (; opts->type != OPTION_END; opts++) {
if ((opts->flags & PARSE_OPT_LASTARG_DEFAULT) &&
(opts->flags & PARSE_OPT_OPTARG)) {
if (opts->long_name) {
error("`--%s` uses incompatible flags "
"LASTARG_DEFAULT and OPTARG", opts->long_name);
} else {
error("`-%c` uses incompatible flags "
"LASTARG_DEFAULT and OPTARG", opts->short_name);
}
err |= 1;
}
}
if (err)
exit(129);
}
void parse_options_start(struct parse_opt_ctx_t *ctx,
int argc, const char **argv, const char *prefix,
int flags)
@@ -331,6 +353,8 @@ int parse_options_step(struct parse_opt_ctx_t *ctx,
{
int internal_help = !(ctx->flags & PARSE_OPT_NO_INTERNAL_HELP);
parse_options_check(options);
/* we must reset ->opt, unknown short option leave it dangling */
ctx->opt = NULL;

View File

@@ -1085,12 +1085,20 @@ static const struct refspec *check_pattern_match(const struct refspec *rs,
return NULL;
}
static struct ref **tail_ref(struct ref **head)
{
struct ref **tail = head;
while (*tail)
tail = &((*tail)->next);
return tail;
}
/*
* Note. This is used only by "push"; refspec matching rules for
* push and fetch are subtly different, so do not try to reuse it
* without thinking.
*/
int match_refs(struct ref *src, struct ref *dst, struct ref ***dst_tail,
int match_refs(struct ref *src, struct ref **dst,
int nr_refspec, const char **refspec, int flags)
{
struct refspec *rs;
@@ -1098,13 +1106,14 @@ int match_refs(struct ref *src, struct ref *dst, struct ref ***dst_tail,
int send_mirror = flags & MATCH_REFS_MIRROR;
int errs;
static const char *default_refspec[] = { ":", 0 };
struct ref **dst_tail = tail_ref(dst);
if (!nr_refspec) {
nr_refspec = 1;
refspec = default_refspec;
}
rs = parse_push_refspec(nr_refspec, (const char **) refspec);
errs = match_explicit_refs(src, dst, dst_tail, rs, nr_refspec);
errs = match_explicit_refs(src, *dst, &dst_tail, rs, nr_refspec);
/* pick the remainder */
for ( ; src; src = src->next) {
@@ -1134,7 +1143,7 @@ int match_refs(struct ref *src, struct ref *dst, struct ref ***dst_tail,
dst_side, &dst_name))
die("Didn't think it matches any more");
}
dst_peer = find_ref_by_name(dst, dst_name);
dst_peer = find_ref_by_name(*dst, dst_name);
if (dst_peer) {
if (dst_peer->peer_ref)
/* We're already sending something to this ref. */
@@ -1150,7 +1159,7 @@ int match_refs(struct ref *src, struct ref *dst, struct ref ***dst_tail,
goto free_name;
/* Create a new one and link it */
dst_peer = make_linked_ref(dst_name, dst_tail);
dst_peer = make_linked_ref(dst_name, &dst_tail);
hashcpy(dst_peer->new_sha1, src->new_sha1);
}
dst_peer->peer_ref = copy_ref(src);

View File

@@ -85,7 +85,7 @@ void ref_remove_duplicates(struct ref *ref_map);
int valid_fetch_refspec(const char *refspec);
struct refspec *parse_fetch_refspec(int nr_refspec, const char **refspec);
int match_refs(struct ref *src, struct ref *dst, struct ref ***dst_tail,
int match_refs(struct ref *src, struct ref **dst,
int nr_refspec, const char **refspec, int all);
/*

View File

@@ -276,6 +276,9 @@ test_expect_success 'fail if the index has unresolved entries' '
test_must_fail git merge "$c5" &&
test_must_fail git merge "$c5" 2> out &&
grep "You have not concluded your merge" out &&
rm -f .git/MERGE_HEAD &&
test_must_fail git merge "$c5" 2> out &&
grep "You are in the middle of a conflicted merge" out
'

View File

@@ -67,6 +67,42 @@ test_expect_success ' push to remote repository with unpacked refs' '
test $HEAD = $(git rev-parse --verify HEAD))
'
test_expect_success 'http-push fetches unpacked objects' '
cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
"$HTTPD_DOCUMENT_ROOT_PATH"/test_repo_unpacked.git &&
git clone $HTTPD_URL/test_repo_unpacked.git \
"$ROOT_PATH"/fetch_unpacked &&
# By reset, we force git to retrieve the object
(cd "$ROOT_PATH"/fetch_unpacked &&
git reset --hard HEAD^ &&
git remote rm origin &&
git reflog expire --expire=0 --all &&
git prune &&
git push -f -v $HTTPD_URL/test_repo_unpacked.git master)
'
test_expect_success 'http-push fetches packed objects' '
cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo.git \
"$HTTPD_DOCUMENT_ROOT_PATH"/test_repo_packed.git &&
git clone $HTTPD_URL/test_repo_packed.git \
"$ROOT_PATH"/test_repo_clone_packed &&
(cd "$HTTPD_DOCUMENT_ROOT_PATH"/test_repo_packed.git &&
git --bare repack &&
git --bare prune-packed) &&
# By reset, we force git to retrieve the packed object
(cd "$ROOT_PATH"/test_repo_clone_packed &&
git reset --hard HEAD^ &&
git remote rm origin &&
git reflog expire --expire=0 --all &&
git prune &&
git push -f -v $HTTPD_URL/test_repo_packed.git master)
'
test_expect_success 'create and delete remote branch' '
cd "$ROOT_PATH"/test_repo_clone &&
git checkout -b dev &&

View File

@@ -53,5 +53,13 @@ test_expect_success 'http remote detects correct HEAD' '
)
'
test_expect_success 'fetch packed objects' '
cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git &&
cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git &&
git --bare repack &&
git --bare prune-packed &&
git clone $HTTPD_URL/repo_pack.git
'
stop_httpd
test_done

View File

@@ -555,6 +555,18 @@ test_expect_success 'restricting bisection on one dir and a file' '
grep "$PARA_HASH4 is first bad commit" my_bisect_log.txt
'
test_expect_success 'skipping away from skipped commit' '
git bisect start $PARA_HASH7 $HASH1 &&
para4=$(git rev-parse --verify HEAD) &&
test "$para4" = "$PARA_HASH4" &&
git bisect skip &&
hash7=$(git rev-parse --verify HEAD) &&
test "$hash7" = "$HASH7" &&
git bisect skip &&
hash3=$(git rev-parse --verify HEAD) &&
test "$hash3" = "$HASH3"
'
#
#
test_done

t/t7406-submodule-update.sh Executable file

@@ -0,0 +1,140 @@
#!/bin/sh
#
# Copyright (c) 2009 Red Hat, Inc.
#
test_description='Test updating submodules
This test verifies that "git submodule update" detaches the HEAD of the
submodule and "git submodule update --rebase" does not detach the HEAD.
'
. ./test-lib.sh
compare_head()
{
sha_master=`git-rev-list --max-count=1 master`
sha_head=`git-rev-list --max-count=1 HEAD`
test "$sha_master" = "$sha_head"
}
test_expect_success 'setup a submodule tree' '
echo file > file &&
git add file &&
test_tick &&
git commit -m upstream &&
git clone . super &&
git clone super submodule &&
(cd super &&
git submodule add ../submodule submodule &&
test_tick &&
git commit -m "submodule" &&
git submodule init submodule
) &&
(cd submodule &&
echo "line2" > file &&
git add file &&
git commit -m "Commit 2"
) &&
(cd super &&
(cd submodule &&
git pull --rebase origin
) &&
git add submodule &&
git commit -m "submodule update"
)
'
test_expect_success 'submodule update detaching the HEAD ' '
(cd super/submodule &&
git reset --hard HEAD~1
) &&
(cd super &&
(cd submodule &&
compare_head
) &&
git submodule update submodule &&
cd submodule &&
! compare_head
)
'
test_expect_success 'submodule update --rebase staying on master' '
(cd super/submodule &&
git checkout master
) &&
(cd super &&
(cd submodule &&
compare_head
) &&
git submodule update --rebase submodule &&
cd submodule &&
compare_head
)
'
test_expect_success 'submodule update - rebase in .git/config' '
(cd super &&
git config submodule.submodule.update rebase
) &&
(cd super/submodule &&
git reset --hard HEAD~1
) &&
(cd super &&
(cd submodule &&
compare_head
) &&
git submodule update submodule &&
cd submodule &&
compare_head
)
'
test_expect_success 'submodule update - checkout in .git/config but --rebase given' '
(cd super &&
git config submodule.submodule.update checkout
) &&
(cd super/submodule &&
git reset --hard HEAD~1
) &&
(cd super &&
(cd submodule &&
compare_head
) &&
git submodule update --rebase submodule &&
cd submodule &&
compare_head
)
'
test_expect_success 'submodule update - checkout in .git/config' '
(cd super &&
git config submodule.submodule.update checkout
) &&
(cd super/submodule &&
git reset --hard HEAD^
) &&
(cd super &&
(cd submodule &&
compare_head
) &&
git submodule update submodule &&
cd submodule &&
! compare_head
)
'
test_expect_success 'submodule init picks up rebase' '
(cd super &&
git config submodule.rebasing.url git://non-existing/git &&
git config submodule.rebasing.path does-not-matter &&
git config submodule.rebasing.update rebase &&
git submodule init rebasing &&
test "rebase" = $(git config submodule.rebasing.update)
)
'
test_done

View File

@@ -621,4 +621,25 @@ test_expect_success 'in-reply-to but no threading' '
grep "In-Reply-To: <in-reply-id@example.com>"
'
test_expect_success 'no in-reply-to and no threading' '
git send-email \
--dry-run \
--from="Example <nobody@example.com>" \
--to=nobody@example.com \
--nothread \
$patches $patches >stdout &&
! grep "In-Reply-To: " stdout
'
test_expect_success 'threading but no chain-reply-to' '
git send-email \
--dry-run \
--from="Example <nobody@example.com>" \
--to=nobody@example.com \
--thread \
--nochain-reply-to \
$patches $patches >stdout &&
grep "In-Reply-To: " stdout
'
test_done

View File

@@ -317,4 +317,22 @@ test_expect_success 'use the same checkout for Git and CVS' '
'
test_expect_success 're-commit a removed filename which remains in CVS attic' '
(cd "$CVSWORK" &&
echo >attic_gremlin &&
cvs -Q add attic_gremlin &&
cvs -Q ci -m "added attic_gremlin" &&
rm attic_gremlin &&
cvs -Q rm attic_gremlin &&
cvs -Q ci -m "removed attic_gremlin") &&
echo > attic_gremlin &&
git add attic_gremlin &&
git commit -m "Added attic_gremlin" &&
git cvsexportcommit -w "$CVSWORK" -c HEAD &&
(cd "$CVSWORK"; cvs -Q update -d) &&
test -f "$CVSWORK/attic_gremlin"
'
test_done

View File

@@ -87,6 +87,15 @@ int main(int argc, const char *argv[])
return -1;
}
#ifdef WIN32
if (!(sb.st_mode & S_IWUSR) &&
chmod(argv[i], sb.st_mode | S_IWUSR)) {
fprintf(stderr, "Could not make user-writable %s: %s",
argv[i], strerror(errno));
return -1;
}
#endif
utb.actime = sb.st_atime;
utb.modtime = set_eq ? set_time : sb.st_mtime + set_time;

View File

@@ -439,9 +439,7 @@ static struct ref *get_refs_via_curl(struct transport *transport, int for_push)
char *ref_name;
char *refs_url;
int i = 0;
struct active_request_slot *slot;
struct slot_results results;
int http_ret;
struct ref *refs = NULL;
struct ref *ref = NULL;
@@ -461,25 +459,16 @@ static struct ref *get_refs_via_curl(struct transport *transport, int for_push)
refs_url = xmalloc(strlen(transport->url) + 11);
sprintf(refs_url, "%s/info/refs", transport->url);
slot = get_active_slot();
slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
curl_easy_setopt(slot->curl, CURLOPT_URL, refs_url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
if (start_active_slot(slot)) {
run_active_slot(slot);
if (results.curl_result != CURLE_OK) {
strbuf_release(&buffer);
if (missing_target(&results))
die("%s not found: did you run git update-server-info on the server?", refs_url);
else
die("%s download error - %s", refs_url, curl_errorstr);
}
} else {
strbuf_release(&buffer);
die("Unable to start HTTP request");
http_ret = http_get_strbuf(refs_url, &buffer, HTTP_NO_CACHE);
switch (http_ret) {
case HTTP_OK:
break;
case HTTP_MISSING_TARGET:
die("%s not found: did you run git update-server-info on the"
" server?", refs_url);
default:
http_error(refs_url, http_ret);
die("HTTP request failed");
}
data = buffer.buf;
@@ -519,6 +508,8 @@ static struct ref *get_refs_via_curl(struct transport *transport, int for_push)
free(ref);
}
strbuf_release(&buffer);
free(refs_url);
return refs;
}
@@ -1003,7 +994,6 @@ int transport_push(struct transport *transport,
if (transport->push_refs) {
struct ref *remote_refs =
transport->get_refs_list(transport, 1);
struct ref **remote_tail;
struct ref *local_refs = get_local_heads();
int match_flags = MATCH_REFS_NONE;
int verbose = flags & TRANSPORT_PUSH_VERBOSE;
@@ -1014,10 +1004,7 @@ int transport_push(struct transport *transport,
if (flags & TRANSPORT_PUSH_MIRROR)
match_flags |= MATCH_REFS_MIRROR;
remote_tail = &remote_refs;
while (*remote_tail)
remote_tail = &((*remote_tail)->next);
if (match_refs(local_refs, remote_refs, &remote_tail,
if (match_refs(local_refs, &remote_refs,
refspec_nr, refspec, match_flags)) {
return -1;
}

utf8.c

@@ -354,7 +354,7 @@ int is_encoding_utf8(const char *name)
* with iconv. If the conversion fails, returns NULL.
*/
#ifndef NO_ICONV
#ifdef OLD_ICONV
#if defined(OLD_ICONV) || (defined(__sun__) && !defined(_XPG6))
typedef const char * iconv_ibp;
#else
typedef char * iconv_ibp;