Branch: master
Commits on Apr 8, 2016
  1. localedata: iw_IL: delete old/deprecated locale [BZ #16137]

    Mike Frysinger
    Mike Frysinger committed Feb 19, 2016
    From the bug:
    Obsolete locale.  The ISO-639 code for Hebrew was changed from 'iw'
    to 'he' in 1989, according to Bruno Haible on libc-alpha 2003-09-01.
    
    Reported-by: Chris Leonard <cjlhomeaddress@gmail.com>
  2. Fix limits.h NL_NMAX namespace (bug 19929).

    Joseph Myers
    Joseph Myers committed Apr 8, 2016
    bits/xopen_lim.h (included by limits.h if __USE_XOPEN) defines
    NL_NMAX, but this constant was removed in the 2008 edition of POSIX so
    should not be defined in that case.  This patch duly disables that
    define for __USE_XOPEN2K8.  It remains enabled for __USE_GNU to avoid
    affecting sysconf (_SC_NL_NMAX), the implementation of which uses
    "#ifdef NL_NMAX".
    
    Tested for x86_64 and x86 (testsuite, and that installed stripped
    shared libraries are unchanged by the patch).
    
    	[BZ #19929]
    	* include/bits/xopen_lim.h (NL_NMAX): Do not define if
    	[__USE_XOPEN2K8 && !__USE_GNU].
    	* conform/Makefile (test-xfail-XOPEN2K8/limits.h/conform): Remove
    	variable.
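
    A minimal C sketch of the guard change described in this entry
    (illustrative, not the verbatim bits/xopen_lim.h text; the value given
    to NL_NMAX below is only a placeholder):

        /* Keep NL_NMAX out of the POSIX 2008 namespace while still exposing
           it for _GNU_SOURCE, so that sysconf (_SC_NL_NMAX), whose
           implementation uses "#ifdef NL_NMAX", keeps working.  */
        #if !defined __USE_XOPEN2K8 || defined __USE_GNU
        # define NL_NMAX  INT_MAX   /* placeholder value */
        #endif
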
  3. localedata: i18n: fix typos in tel_int_fmt

    Mike FABIAN authored and Mike Frysinger committed Nov 1, 2015
    Adding the %t avoids a double space if the area code %a happens
    to be empty.  There are countries without area codes.
  4. Fix termios.h XCASE namespace (bug 19925).

    Joseph Myers
    Joseph Myers committed Apr 8, 2016
    bits/termios.h (various versions under sysdeps/unix/sysv/linux)
    defines XCASE if defined __USE_MISC || defined __USE_XOPEN.  This
    macro was removed in the 2001 edition of POSIX, and is not otherwise
    reserved, so should not be defined for 2001 and later versions of
    POSIX.  This patch fixes the conditions accordingly (leaving the macro
    defined for __USE_MISC, so still in the default namespace).
    
    Tested for x86_64 and x86 (testsuite, and that installed shared
    libraries are unchanged by the patch).
    
    	[BZ #19925]
    	* sysdeps/unix/sysv/linux/alpha/bits/termios.h (XCASE): Do not
    	define if [!__USE_MISC && __USE_XOPEN2K].
    	* sysdeps/unix/sysv/linux/bits/termios.h (XCASE): Likewise.
    	* sysdeps/unix/sysv/linux/mips/bits/termios.h (XCASE): Likewise.
    	* sysdeps/unix/sysv/linux/powerpc/bits/termios.h (XCASE):
    	Likewise.
    	* sysdeps/unix/sysv/linux/sparc/bits/termios.h (XCASE): Likewise.
    	* conform/Makefile (test-xfail-XOPEN2K/termios.h/conform): Remove
    	variable.
    	(test-xfail-XOPEN2K8/termios.h/conform): Likewise.
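
    A corresponding sketch of the intended termios.h guard (illustrative,
    not the verbatim header; the octal value is the traditional Linux one
    and is shown only for completeness):

        /* XCASE stays visible for __USE_MISC and for pre-2001 XPG, but is
           hidden once __USE_XOPEN2K (POSIX 2001) or later is selected.  */
        #if defined __USE_MISC || (defined __USE_XOPEN && !defined __USE_XOPEN2K)
        # define XCASE   0000004
        #endif
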
Commits on Apr 7, 2016
  1. powerpc: Add optimized P8 strspn

    Paul E. Murphy
    Paul E. Murphy committed Mar 14, 2016
    This utilizes vectors and bitmasks.  For a small needle and a large
    haystack, the performance improvement is up to 8x.  For short
    strings (0-4B), the cost of computing the bitmask dominates,
    so it is a tad slower.
  2. hsearch_r: Include <limits.h>

    Florian Weimer
    Florian Weimer committed Apr 7, 2016
    It is needed for UINT_MAX.
  3. scratch_buffer_set_array_size: Include <limits.h>

    Florian Weimer
    Florian Weimer committed Apr 7, 2016
    It is needed for CHAR_BIT.
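
    CHAR_BIT is used there in an overflow pre-check.  A stand-alone sketch
    of that style of check (the function name is made up; this is not the
    glibc-internal code):

        #include <limits.h>   /* CHAR_BIT -- the include this commit adds */
        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        /* If both factors fit in the low half of size_t, their product
           cannot overflow and the slower division check can be skipped.  */
        static bool
        alloc_size_overflows (size_t nelem, size_t size)
        {
          if (((nelem | size) >> (sizeof (size_t) * CHAR_BIT / 2)) == 0)
            return false;
          return nelem != 0 && size > SIZE_MAX / nelem;
        }
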
Commits on Apr 6, 2016
  1. X86-64: Prepare memmove-vec-unaligned-erms.S

    H.J. Lu
    H.J. Lu committed Apr 6, 2016
    Prepare memmove-vec-unaligned-erms.S to make the SSE2 version as the
    default memcpy, mempcpy and memmove.
    
    	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
    	(MEMCPY_SYMBOL): New.
    	(MEMPCPY_SYMBOL): Likewise.
    	(MEMMOVE_CHK_SYMBOL): Likewise.
    	Replace MEMMOVE_SYMBOL with MEMMOVE_CHK_SYMBOL on __mempcpy_chk
    	symbols.  Replace MEMMOVE_SYMBOL with MEMPCPY_SYMBOL on
    	__mempcpy symbols.  Provide alias for __memcpy_chk in libc.a.
    	Provide alias for memcpy in libc.a and ld.so.
  2. X86-64: Prepare memset-vec-unaligned-erms.S

    H.J. Lu
    H.J. Lu committed Apr 6, 2016
    Prepare memset-vec-unaligned-erms.S to make the SSE2 version as the
    default memset.
    
    	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
    	(MEMSET_CHK_SYMBOL): New.  Define if not defined.
    	(__bzero): Check VEC_SIZE == 16 instead of USE_MULTIARCH.
    	Disabled for now.
    	Replace MEMSET_SYMBOL with MEMSET_CHK_SYMBOL on __memset_chk
    	symbols.  Properly check USE_MULTIARCH on __memset symbols.
  3. Add memcpy/memmove/memset benchmarks with large data

    H.J. Lu
    H.J. Lu committed Apr 6, 2016
    Add memcpy, memmove and memset benchmarks with large data sizes.
    
    	* benchtests/Makefile (string-benchset): Add memcpy-large,
    	memmove-large and memset-large.
    	* benchtests/bench-memcpy-large.c: New file.
    	* benchtests/bench-memmove-large.c: Likewise.
    	* benchtests/bench-memset-large.c: Likewise.
    	* benchtests/bench-string.h (TIMEOUT): Don't redefine.
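
    For readers without the glibc benchtests harness, a minimal stand-alone
    sketch of a large-buffer memcpy timing loop (buffer size and iteration
    count are arbitrary; the real benchmarks use bench-string.h and its own
    timing machinery):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        int
        main (void)
        {
          size_t len = 32 * 1024 * 1024;        /* "large data" region */
          char *src = malloc (len), *dst = malloc (len);
          if (src == NULL || dst == NULL)
            return 1;
          memset (src, 'x', len);

          struct timespec t0, t1;
          clock_gettime (CLOCK_MONOTONIC, &t0);
          for (int i = 0; i < 16; i++)
            memcpy (dst, src, len);
          clock_gettime (CLOCK_MONOTONIC, &t1);

          double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
          printf ("memcpy: %.2f GB/s\n", 16.0 * len / secs / 1e9);
          free (src);
          free (dst);
          return 0;
        }
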
  4. Mention Bug in ChangeLog for S390: Save and restore fprs/vrs while resolving symbols.

    Stefan Liebler
    Stefan Liebler committed Apr 6, 2016
    Bugzilla 19916 is added to the ChangeLog for
    commit 4603c51.
Commits on Apr 5, 2016
  1. Force 32-bit displacement in memset-vec-unaligned-erms.S

    H.J. Lu
    H.J. Lu committed Apr 5, 2016
    	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: Force
    	32-bit displacement to avoid long nop between instructions.
  2. Add a comment in memset-sse2-unaligned-erms.S

    H.J. Lu
    H.J. Lu committed Apr 5, 2016
    	* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S: Add
    	a comment on VMOVU and VMOVA.
Commits on Apr 3, 2016
  1. Don't put SSE2/AVX/AVX512 memmove/memset in ld.so

    H.J. Lu
    H.J. Lu committed Apr 3, 2016
    Since memmove and memset in ld.so don't use IFUNC, don't put SSE2, AVX
    and AVX512 memmove and memset in ld.so.
    
    	* sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S: Skip
    	if not in libc.
    	* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S:
    	Likewise.
    	* sysdeps/x86_64/multiarch/memset-avx2-unaligned-erms.S:
    	Likewise.
    	* sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S:
    	Likewise.
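
    The "skip if not in libc" change amounts to wrapping each of those
    sources in the build-context test glibc already provides; a sketch of
    the placement (not the verbatim files):

        #if IS_IN (libc)
        /* The AVX/AVX512 memmove or memset implementation is assembled only
           here.  For ld.so, which does not use IFUNC for these routines,
           the file then contributes nothing.  */
        #endif
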
  2. Fix memmove-vec-unaligned-erms.S

    H.J. Lu
    H.J. Lu committed Apr 3, 2016
    __mempcpy_erms and __memmove_erms can't be placed between __memmove_chk
    and __memmove; doing so breaks __memmove_chk.
    
    Don't check source == destination first since it is less common.
    
    	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
    	(__mempcpy_erms, __memmove_erms): Moved before __mempcpy_chk
    	with unaligned_erms.
    	(__memmove_erms): Skip if source == destination.
    	(__memmove_unaligned_erms): Don't check source == destination
    	first.
Commits on Apr 1, 2016
  1. Remove Fast_Copy_Backward from Intel Core processors

    H.J. Lu
    H.J. Lu committed Apr 1, 2016
    Intel Core i3, i5 and i7 processors have fast unaligned copy and
    copy backward is ignored.  Remove Fast_Copy_Backward from Intel Core
    processors to avoid confusion.
    
    	* sysdeps/x86/cpu-features.c (init_cpu_features): Don't set
    	bit_arch_Fast_Copy_Backward for Intel Core processors.
  2. Use PTR_ALIGN_DOWN on strcspn and strspn

    Adhemerval Zanella
    Adhemerval Zanella committed Apr 1, 2016
    Tested on aarch64.
    
    	* string/strcspn.c (strcspn): Use PTR_ALIGN_DOWN.
    	* string/strspn.c (strspn): Likewise.
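
    PTR_ALIGN_DOWN is a glibc-internal helper for rounding a pointer down
    to an alignment boundary.  A self-contained sketch of the idiom it
    expresses (the definitions below are illustrative, not quoted from the
    glibc headers):

        #include <stdint.h>

        /* Round BASE down to a multiple of SIZE (SIZE must be a power of two).  */
        #define ALIGN_DOWN(base, size) ((base) & -((__typeof__ (base)) (size)))
        #define PTR_ALIGN_DOWN(base, size) \
          ((__typeof__ (base)) ALIGN_DOWN ((uintptr_t) (base), (size)))

        /* Example: start a word-wise scan of S at an aligned address.
           const unsigned long *w =
             (const unsigned long *) PTR_ALIGN_DOWN (s, sizeof (unsigned long));  */
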
  3. Test 64-byte alignment in memset benchtest

    H.J. Lu
    H.J. Lu committed Apr 1, 2016
    Add 64-byte alignment tests in memset benchtest for 64-byte vector
    registers.
    
    	* benchtests/bench-memset.c (do_test): Support 64-byte
    	alignment.
    	(test_main): Test 64-byte alignment.
  4. Test 64-byte alignment in memmove benchtest

    H.J. Lu
    H.J. Lu committed Apr 1, 2016
    Add 64-byte alignment tests in memmove benchtest for 64-byte vector
    registers.
    
    	* benchtests/bench-memmove.c (test_main): Test 64-byte
    	alignment.
  5. Test 64-byte alignment in memcpy benchtest

    H.J. Lu
    H.J. Lu committed Apr 1, 2016
    Add 64-byte alignment tests in memcpy benchtest for 64-byte vector
    registers.
    
    	* benchtests/bench-memcpy.c (test_main): Test 64-byte alignment.
  6. Remove powerpc64 strspn, strcspn, and strpbrk implementation

    Adhemerval Zanella
    Adhemerval Zanella committed Mar 28, 2016
    This patch removes the powerpc64 optimized strspn, strcspn, and
    strpbrk assembly implementations now that the default C ones
    implement the same strategy.  On the internal glibc benchtests the
    current implementations show similar performance with -O2.
    
    Tested on powerpc64le (POWER8).
    
    	* sysdeps/powerpc/powerpc64/strcspn.S: Remove file.
    	* sysdeps/powerpc/powerpc64/strpbrk.S: Remove file.
    	* sysdeps/powerpc/powerpc64/strspn.S: Remove file.
  7. Improve generic strpbrk performance

    Adhemerval Zanella
    Adhemerval Zanella committed Mar 27, 2016
    Now that there is a faster strcspn implementation, it is faster to just
    call it and test the return value than to reimplement strpbrk itself.
    As with the strcspn optimization, it is generally at least 10 times
    faster than the existing implementation on bench-strpbrk on a few
    AArch64 implementations.
    
    Also, the string/bits/string2.h inlines no longer make sense, as the
    current implementation already covers most of the optimizations.
    
    Tested on x86_64, i386, and aarch64.
    
    	* string/strpbrk.c (strpbrk): Rewrite function.
    	* string/bits/string2.h (strpbrk): Use __builtin_strpbrk.
    	(__strpbrk_c2): Likewise.
    	(__strpbrk_c3): Likewise.
    	* string/string-inlines.c
    	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strpbrk_c2):
    	Likewise.
    	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strpbrk_c3):
    	Likewise.
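
    A minimal sketch of the strcspn-plus-return-test shape described above
    (the function name here is changed; the actual string/strpbrk.c differs
    in libc-internal details):

        #include <string.h>

        /* Skip the leading characters not in ACCEPT, then report either the
           first match or NULL at the end of the string.  */
        char *
        my_strpbrk (const char *s, const char *accept)
        {
          s += strcspn (s, accept);
          return *s ? (char *) s : NULL;
        }
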
  8. Improve generic strspn performance

    Adhemerval Zanella
    Adhemerval Zanella committed Mar 26, 2016
    As for strcspn, this patch improves strspn performance using a much
    faster algorithm.  It first constructs a 256-entry table based on
    the accept string and then uses it as a lookup table for the
    input string.  As for strcspn optimization, it is generally at least
    10 times faster than the existing implementation on bench-strspn
    on a few AArch64 implementations.
    
    Also, the string/bits/string2.h inlines no longer make sense, as the
    current implementation already covers most of the optimizations.
    
    Tested on x86_64, i686, and aarch64.
    
    	* string/strspn.c (strspn): Rewrite function.
    	* string/bits/string2.h (strspn): Use __builtin_strspn.
    	(__strspn_c1): Remove inline function.
    	(__strspn_c2): Likewise.
    	(__strspn_c3): Likewise.
    	* string/string-inlines.c
    	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strspn_c1): Add
    	compatibility symbol.
    	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strspn_c2):
    	Likewise.
    	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strspn_c3):
    	Likewise.
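
    A stand-alone sketch of the 256-entry table scheme described above
    (illustrative; the glibc version is further tuned with loop unrolling
    and special cases for very short accept strings):

        #include <stddef.h>

        size_t
        my_strspn (const char *s, const char *accept)
        {
          unsigned char table[256] = { 0 };

          /* Mark every byte that appears in ACCEPT; '\0' stays unmarked, so
             the scan below also stops at the terminator.  */
          for (const unsigned char *a = (const unsigned char *) accept; *a; ++a)
            table[*a] = 1;

          const unsigned char *p = (const unsigned char *) s;
          while (table[*p])
            ++p;
          return p - (const unsigned char *) s;
        }
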
  9. Improve generic strcspn performance

    Wilco Dijkstra authored and Adhemerval Zanella committed Mar 25, 2016
    Improve strcspn performance using a much faster algorithm.  It is kept simple
    so it works well on most targets.  It is generally at least 10 times faster
    than the existing implementation on bench-strcspn on a few AArch64
    implementations, and for some tests 100 times as fast (repeatedly calling
    strchr on a small string is extremely slow...).
    
    In fact the string/bits/string2.h inlines no longer make sense, as GCC
    already uses strlen if reject is an empty string, strchrnul is 5 times as
    fast as __strcspn_c1, while __strcspn_c2 and __strcspn_c3 are slower than
    the strcspn main loop for large strings (though reject lengths of 2-4 could
    be special-cased in the future to gain even more performance).
    
    Tested on x86_64, i686, and aarch64.
    
    	* string/Versions (libc): Add GLIBC_2.24.
    	* string/strcspn.c (strcspn): Rewrite function.
    	* string/bits/string2.h (strcspn): Use __builtin_strcspn.
    	(__strcspn_c1): Remove inline function.
    	(__strcspn_c2): Likewise.
    	(__strcspn_c3): Likewise.
    	* string/string-inlines.c
    	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strcspn_c1): Add
    	compatibility symbol.
    	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strcspn_c2):
    	Likewise.
    	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strcspn_c3):
    	Likewise.
    	* sysdeps/i386/string-inlines.c: Include generic string-inlines.c.
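
    The companion sketch for the table-driven strcspn (again illustrative,
    not the tuned glibc code): mark the REJECT bytes plus the terminating
    '\0', then count the leading bytes that are not marked.

        #include <stddef.h>

        size_t
        my_strcspn (const char *s, const char *reject)
        {
          unsigned char table[256] = { 0 };
          table[0] = 1;                  /* stop at the end of the string too */

          for (const unsigned char *r = (const unsigned char *) reject; *r; ++r)
            table[*r] = 1;

          const unsigned char *p = (const unsigned char *) s;
          while (!table[*p])
            ++p;
          return p - (const unsigned char *) s;
        }
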
  10. S390: Use ahi instead of aghi in 32bit _dl_runtime_resolve.

    Stefan Liebler
    Stefan Liebler committed Apr 1, 2016
    This patch uses ahi instead of aghi in 32bit _dl_runtime_resolve
    to adjust the stack pointer. This is no functional change,
    but a cosmetic one.
    
    ChangeLog:
    
    	* sysdeps/s390/s390-32/dl-trampoline.h (_dl_runtime_resolve):
    	Use ahi instead of aghi to adjust stack pointer.
Commits on Mar 31, 2016
  1. Increase internal precision of ldbl-128ibm decimal printf [BZ #19853]

    Paul E. Murphy
    Paul E. Murphy committed Feb 29, 2016
    When the signs differ, the precision of the conversion sometimes
    drops below 106 bits.  This strategy is identical to the
    hexadecimal variant.
    
    I've refactored tst-sprintf3 to enable testing a value with more
    than 30 significant digits in order to demonstrate this failure
    and its solution.
    
    Additionally, this implicitly fixes a typo in the shift
    quantities when subtracting from the high mantissa to compute
    the difference.
  2. Add x86-64 memset with unaligned store and rep stosb

    H.J. Lu
    H.J. Lu committed Mar 31, 2016
    Implement x86-64 memset with unaligned store and rep stosb.  Support
    16-byte, 32-byte and 64-byte vector register sizes.  A single file
    provides 2 implementations of memset, one with rep stosb and the other
    without rep stosb.  They share the same code when size is between 2
    times of vector register size and REP_STOSB_THRESHOLD, which defaults
    to 2KB.
    
    Key features:
    
    1. Use overlapping store to avoid branch.
    2. For size <= 4 times of vector register size, fully unroll the loop.
    3. For size > 4 times of vector register size, store 4 times of vector
    register size at a time.
    
    	[BZ #19881]
    	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
    	memset-sse2-unaligned-erms, memset-avx2-unaligned-erms and
    	memset-avx512-unaligned-erms.
    	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
    	(__libc_ifunc_impl_list): Test __memset_chk_sse2_unaligned,
    	__memset_chk_sse2_unaligned_erms, __memset_chk_avx2_unaligned,
    	__memset_chk_avx2_unaligned_erms, __memset_chk_avx512_unaligned,
    	__memset_chk_avx512_unaligned_erms, __memset_sse2_unaligned,
    	__memset_sse2_unaligned_erms, __memset_erms,
    	__memset_avx2_unaligned, __memset_avx2_unaligned_erms,
    	__memset_avx512_unaligned_erms and __memset_avx512_unaligned.
    	* sysdeps/x86_64/multiarch/memset-avx2-unaligned-erms.S: New
    	file.
    	* sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S:
    	Likewise.
    	* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S:
    	Likewise.
    	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:
    	Likewise.
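
    Key feature 1 ("use overlapping store to avoid branch") can be
    illustrated in plain C for one small size class; this is only an
    illustration, the actual code is x86-64 assembly using 16/32/64-byte
    vector stores:

        #include <stdint.h>
        #include <string.h>

        /* For 8 <= n <= 16, one store at the start and one at the end cover
           the whole range without a per-size branch ladder; the two stores
           may overlap.  */
        static void
        memset_8_to_16 (void *dst, int c, size_t n)
        {
          uint64_t v = 0x0101010101010101ULL * (unsigned char) c;
          memcpy (dst, &v, 8);                      /* head */
          memcpy ((char *) dst + n - 8, &v, 8);     /* tail */
        }
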
  3. Add x86-64 memmove with unaligned load/store and rep movsb

    H.J. Lu
    H.J. Lu committed Mar 31, 2016
    Implement x86-64 memmove with unaligned load/store and rep movsb.
    Support 16-byte, 32-byte and 64-byte vector register sizes.  When
    size <= 8 times of vector register size, there is no check for
    address overlap between source and destination.  Since overhead for
    overlap check is small when size > 8 times of vector register size,
    memcpy is an alias of memmove.
    
    A single file provides 2 implementations of memmove, one with rep movsb
    and the other without rep movsb.  They share the same code when size is
    between 2 times of vector register size and REP_MOVSB_THRESHOLD, which
    is 2KB for 16-byte vector register size and scaled up by larger vector
    register sizes.
    
    Key features:
    
    1. Use overlapping load and store to avoid branch.
    2. For size <= 8 times of vector register size, load  all sources into
    registers and store them together.
    3. If there is no address overlap between source and destination, copy
    from both ends with 4 times of vector register size at a time.
    4. If address of destination > address of source, backward copy 8 times
    of vector register size at a time.
    5. Otherwise, forward copy 8 times of vector register size at a time.
    6. Use rep movsb only for forward copy.  Avoid slow backward rep movsb
    by falling back to backward copy of 8 times of vector register size at a
    time.
    7. Skip when address of destination == address of source.
    
    	[BZ #19776]
    	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
    	memmove-sse2-unaligned-erms, memmove-avx-unaligned-erms and
    	memmove-avx512-unaligned-erms.
    	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
    	(__libc_ifunc_impl_list): Test
    	__memmove_chk_avx512_unaligned_2,
    	__memmove_chk_avx512_unaligned_erms,
    	__memmove_chk_avx_unaligned_2, __memmove_chk_avx_unaligned_erms,
    	__memmove_chk_sse2_unaligned_2,
    	__memmove_chk_sse2_unaligned_erms, __memmove_avx_unaligned_2,
    	__memmove_avx_unaligned_erms, __memmove_avx512_unaligned_2,
    	__memmove_avx512_unaligned_erms, __memmove_erms,
    	__memmove_sse2_unaligned_2, __memmove_sse2_unaligned_erms,
    	__memcpy_chk_avx512_unaligned_2,
    	__memcpy_chk_avx512_unaligned_erms,
    	__memcpy_chk_avx_unaligned_2, __memcpy_chk_avx_unaligned_erms,
    	__memcpy_chk_sse2_unaligned_2, __memcpy_chk_sse2_unaligned_erms,
    	__memcpy_avx_unaligned_2, __memcpy_avx_unaligned_erms,
    	__memcpy_avx512_unaligned_2, __memcpy_avx512_unaligned_erms,
    	__memcpy_sse2_unaligned_2, __memcpy_sse2_unaligned_erms,
    	__memcpy_erms, __mempcpy_chk_avx512_unaligned_2,
    	__mempcpy_chk_avx512_unaligned_erms,
    	__mempcpy_chk_avx_unaligned_2, __mempcpy_chk_avx_unaligned_erms,
    	__mempcpy_chk_sse2_unaligned_2, __mempcpy_chk_sse2_unaligned_erms,
    	__mempcpy_avx512_unaligned_2, __mempcpy_avx512_unaligned_erms,
    	__mempcpy_avx_unaligned_2, __mempcpy_avx_unaligned_erms,
    	__mempcpy_sse2_unaligned_2, __mempcpy_sse2_unaligned_erms and
    	__mempcpy_erms.
    	* sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S: New
    	file.
    	* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S:
    	Likewise.
    	* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S:
    	Likewise.
    	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
    	Likewise.
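
    Key feature 2 ("load all sources into registers and store them
    together") can likewise be illustrated in C for the smallest size
    class; again only an illustration, the actual code is x86-64 assembly
    with vector registers:

        #include <stdint.h>
        #include <string.h>

        /* For 8 <= n <= 16, load both (possibly overlapping) halves first,
           then store both, so the copy is correct even when source and
           destination overlap.  */
        static void
        memmove_8_to_16 (void *dst, const void *src, size_t n)
        {
          uint64_t head, tail;
          memcpy (&head, src, 8);
          memcpy (&tail, (const char *) src + n - 8, 8);
          memcpy (dst, &head, 8);
          memcpy ((char *) dst + n - 8, &tail, 8);
        }
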
  4. S390: Extend structs La_s390_regs / La_s390_retval with vector-registers.

    Stefan Liebler
    Stefan Liebler committed Mar 31, 2016
    Starting with z13, vector registers can also occur as argument registers.
    Thus the passed input/output register structs for
    la_s390_[32|64]_gnu_plt[enter|exit] functions should reflect those new
    registers. This patch extends these structs La_s390_regs and La_s390_retval
    and adjusts _dl_runtime_profile() to handle those fields in case of
    running on a z13 machine.
    
    ChangeLog:
    
    	* sysdeps/s390/bits/link.h: (La_s390_vr) New typedef.
    	(La_s390_32_regs): Append vector register lr_v24-lr_v31.
    	(La_s390_64_regs): Likewise.
    	(La_s390_32_retval): Append vector register lrv_v24.
    	(La_s390_64_retval): Likewise.
    	* sysdeps/s390/s390-32/dl-trampoline.h (_dl_runtime_profile):
    	Handle extended structs La_s390_32_regs and La_s390_32_retval.
    	* sysdeps/s390/s390-64/dl-trampoline.h (_dl_runtime_profile):
    	Handle extended structs La_s390_64_regs and La_s390_64_retval.
  5. S390: Save and restore fprs/vrs while resolving symbols.

    Stefan Liebler
    Stefan Liebler committed Mar 31, 2016
    On s390, no fpr/vrs were saved while resolving a symbol
    via _dl_runtime_resolve/_dl_runtime_profile.
    
    According to the ABI, the fpr arguments are defined as call-clobbered.
    In leaf functions, gcc 4.9 and newer can use fprs for saving/restoring gprs
    instead of saving them to the stack.
    If gcc does this in one of the resolver functions, then the floating-point
    arguments of a library function are invalid for the first library-function call.
    Thus, this patch saves/restores the fprs around the resolving code.
    
    The same could occur for vector registers.  Furthermore, an ifunc resolver
    could also clobber the vector/floating-point argument registers.
    Thus this patch provides the further variants _dl_runtime_resolve_vx/
    _dl_runtime_profile_vx, which are used if the kernel reports that we
    are running on a machine with vector registers.
    
    Furthermore, if _dl_runtime_profile calls _dl_call_pltexit,
    the pointers to the inregs/outregs structs were set up incorrectly.
    Now they point to the correct location in the stack frame.
    Before branching back to the caller, the return values are now
    restored instead of containing the return values of the
    _dl_call_pltexit() call.
    On s390-32, an endless loop occurred if _dl_call_pltexit() was to be called.
    Now this code path branches to that function instead of to just after the
    preceding basr instruction.
    
    ChangeLog:
    
    	* sysdeps/s390/s390-32/dl-trampoline.S: Include dl-trampoline.h twice
    	to create a non-vector/vector version for _dl_runtime_resolve and
    	_dl_runtime_profile. Move implementation to ...
    	* sysdeps/s390/s390-32/dl-trampoline.h: ... here.
    	(_dl_runtime_resolve) Save and restore fpr/vrs.
    	(_dl_runtime_profile) Save and restore vrs and fix some issues
    	if _dl_call_pltexit is called.
    	* sysdeps/s390/s390-32/dl-machine.h (elf_machine_runtime_setup):
    	Choose the correct resolver function if running on a machine with vx.
    	* sysdeps/s390/s390-64/dl-trampoline.S: Include dl-trampoline.h twice
    	to create a non-vector/vector version for _dl_runtime_resolve and
    	_dl_runtime_profile. Move implementation to ...
    	* sysdeps/s390/s390-64/dl-trampoline.h: ... here.
    	(_dl_runtime_resolve) Save and restore fpr/vrs.
    	(_dl_runtime_profile) Save and restore vrs and fix some issues
    	* sysdeps/s390/s390-64/dl-machine.h: (elf_machine_runtime_setup):
    	Choose the correct resolver function if running on a machine with vx.
  6. Fix tst-dlsym-error build

    Adhemerval Zanella
    Adhemerval Zanella committed Mar 31, 2016
    This patch fixes the build of the new test tst-dlsym-error on aarch64
    (and possibly other architectures as well), which failed due to a
    missing strchrnul definition.
    
    	* elf/tst-dlsym-error.c: Include <string.h> for strchrnul.
  7. Report dlsym, dlvsym lookup errors using dlerror [BZ #19509]

    Florian Weimer
    Florian Weimer committed Mar 31, 2016
    	* elf/dl-lookup.c (_dl_lookup_symbol_x): Report error even if
    	skip_map != NULL.
    	* elf/tst-dlsym-error.c: New file.
    	* elf/Makefile (tests): Add tst-dlsym-error.
    	(tst-dlsym-error): Link against libdl.
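
    A small usage example of the behavior being fixed and tested here: a
    failed dlsym lookup must leave a message retrievable via dlerror.  The
    symbol name below is deliberately bogus; link with -ldl:

        #include <dlfcn.h>
        #include <stdio.h>

        int
        main (void)
        {
          void *handle = dlopen (NULL, RTLD_NOW);   /* the main program */
          void *sym = dlsym (handle, "no_such_symbol_anywhere");
          if (sym == NULL)
            {
              const char *err = dlerror ();
              printf ("dlsym failed: %s\n", err ? err : "(no error reported)");
            }
          return 0;
        }
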
Commits on Mar 29, 2016
  1. [microblaze] Remove __ASSUME_FUTIMESAT.

    Joseph Myers
    Joseph Myers committed Mar 29, 2016
    MicroBlaze has a special version of futimesat.c because it gained the
    futimesat syscall later than other non-asm-generic architectures.  Now
    the minimum kernel is recent enough that this syscall can always be
    assumed to be present for MicroBlaze, so this patch removes the
    special version and the __ASSUME_FUTIMESAT macro, resulting in the
    sysdeps/unix/sysv/linux/futimesat.c version being used.
    
    Untested.
    
    	* sysdeps/unix/sysv/linux/microblaze/kernel-features.h
    	(__ASSUME_FUTIMESAT): Remove macro.
    	* sysdeps/unix/sysv/linux/microblaze/futimesat.c: Remove file.
  2. CVE-2016-3075: Stack overflow in _nss_dns_getnetbyname_r [BZ #19879]

    Florian Weimer
    Florian Weimer committed Mar 29, 2016
    The defensive copy is not needed because the name may not alias the
    output buffer.