Nobody Expects the Spanish Inquisition, or INT_MIN to be Divided by -1


INT_MIN % -1 and INT_MIN / -1 in C/C++ are little gifts that keep on giving. Recently, Xi Wang has been using this construct to knock over languages implemented in C/C++. Then today Tavis Ormandy posted an excellent local DoS for a Windows 8 machine.

But the fun doesn’t stop there. For one thing, as I wrote a while ago, it’s not even clear that INT_MIN % -1 is undefined behavior in C99, C++98, or any earlier version. It is explicitly undefined in C11 and C++11. (Mathematically, INT_MIN % -1 is just 0, which is perfectly representable; the practical problem is that on x86 the remainder comes out of the same idiv instruction whose quotient overflows.)

Since undefined behavior is undefined, we get fun results from programs like this:

#include <limits.h>

int foo (int a, int b) {
  return a / b;
}

int main (void) {
  return foo (INT_MIN, -1);
}

Now we’ll compile at a few optimization levels using gcc 4.7.2 on either x86 or x86-64 and see what happens:

$ gcc -O0 intmin.c -o intmin
$ ./intmin 
Floating point exception (core dumped)
$ gcc -O1 intmin.c -o intmin
$ ./intmin 
Floating point exception (core dumped)
$ gcc -O2 intmin.c -o intmin
$ ./intmin 
$

Why does it crash at low optimization levels and not at high ones? It happens that -O2 is the first optimization level where the call to foo() is inlined, exposing its guts to constant propagation and folding, with the result that no idiv needs to be executed, and thus no exception gets thrown. A recent build of Clang has exactly the same behavior. If we replace / with %, the behavior is unchanged (under both compilers).
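
One way to convince yourself that the trap is still lurking at -O2 is to hide the operands from the optimizer, for example behind volatile, so that an actual divide instruction has to execute. This is just a sketch, a small variation on the program above:

#include <limits.h>

int main (void) {
  /* volatile forces a and b to be read at run time, so the division
     cannot be constant-folded away */
  volatile int a = INT_MIN;
  volatile int b = -1;
  return a / b;
}

On x86 and x86-64 this should die with the same floating point exception at -O0, -O1, and -O2.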

What should we take away from this? A few things:

  • If you can’t prove that the division and modulus operations in your C/C++ code will never see these operand combinations, explicit checks need to be added (a sketch follows the sanitizer example below).
  • If you are testing a C/C++ program, try to get it to perform these operations, but keep in mind that all such testing can be unreliable due to the effect shown above.
  • Use Clang’s new -fsanitize=integer mode. It’s not in 3.2 but it’s in trunk and should be in 3.3 when it comes out in a few months. The integer sanitizer is based on the excellent work that Will Dietz has done in getting IOC into LLVM’s trunk.

Here’s what happens:

$ clang -fsanitize=integer intmin.c -o intmin
$ ./intmin 
intmin.c:5:12: runtime error: division of -2147483648 by -1 cannot be represented in type 'int'
Floating point exception (core dumped)

How is this different from the errors above? First, we get the nice diagnostic. Second, we’re guaranteed to get the error regardless of optimization level.
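
Also, here is roughly the kind of explicit check I had in mind in the first bullet above. This is just a sketch; the helper names are mine, not from any particular codebase:

#include <limits.h>

/* Returns 1 and stores the quotient in *q if a / b is safe to evaluate;
   returns 0 if the division would be by zero or would overflow. */
int checked_div (int a, int b, int *q) {
  if (b == 0 || (a == INT_MIN && b == -1))
    return 0;
  *q = a / b;
  return 1;
}

/* The precondition for % is the same, since INT_MIN % -1 is just as
   much of a trap as INT_MIN / -1. */
int checked_mod (int a, int b, int *r) {
  if (b == 0 || (a == INT_MIN && b == -1))
    return 0;
  *r = a % b;
  return 1;
}

What a caller does when the check fails (saturate, return an error, abort) is a separate decision that depends on the code.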

UPDATE: A comment on Xi’s post shows how to crash a Bash shell. For example, on my Ubuntu 12.10 machine:

$ ($((-2**63/-1)))
Floating point exception (core dumped)
$ bash -version
GNU bash, version 4.2.37(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ 

The behavior on Ubuntu 12.04 is the same. I would love to see more examples like this. If you know of one, please leave a comment.


17 responses to “Nobody Expects the Spanish Inquisition, or INT_MIN to be Divided by -1”

  1. Seems this may be x86 specific. Testing on PowerPC gets 255 (at -O0), 255 (at -O1) and 0 (at -O2) with no crashes or exceptions.

    True, it is still an issue of undefined behavior, but it doesn’t appear to be a crash vector.

  2. Hi Ben, good point. I’ll try this on Linux + ARM11 later on today, and would like to see data points from iOS and from Linux on newer ARM chips.

  3. This class of bugs has an eerie sort of immortality; the GCC C-Torture file 20010404-1.c crashed an early version of the PalmSource ARM compiler (a completely different codebase from GCC: EDG front-end and Apogee optimizer, plus plenty of our own code) at exactly the same optimization levels. The best general introduction I found to the phenomenon was p. 228 of Harbison & Steele’s _C: A Reference Manual_.

  4. I can exhibit the issue on x86_64 only (Debian Sid). On 32-bit x86, also Debian Sid, it does not show up; both systems are current as of January 29th, 2012. Here is the 32-bit scrape.
    -----32-bit-----
    greg:~/intmin [0] $ cat intmin.c
    #include <limits.h>

    int foo (int a, int b) {
      return a / b;
    }

    int main (void) {
      return foo (INT_MIN, -1);
    }
    greg:~/intmin [0] $ gcc -O0 intmin.c -o intmin
    greg:~/intmin [0] $ ./intmin
    greg:~/intmin [0] $ gcc -O1 intmin.c -o intmin
    greg:~/intmin [0] $ ./intmin
    greg:~/intmin [0] $ gcc -O2 intmin.c -o intmin
    greg:~/intmin [0] $ ./intmin
    greg:~/intmin [0] $ gcc -O3 intmin.c -o intmin
    greg:~/intmin [0] $ ./intmin
    greg:~/intmin [0] $ ($((-2**63/-1)))
    bash: -9223372036854775808: command not found
    greg:~/intmin [127] $ bash -version
    GNU bash, version 4.2.37(1)-release (i486-pc-linux-gnu)
    Copyright (C) 2011 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later

    This is free software; you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    greg:~/intmin [0] $

  5. Hi Greg, interesting, I also cannot reproduce the bash crash on 32-bit Ubuntu. We’d have to look into the code to figure out what’s really going on.

    The C program results are the same for me on Ubuntu 12.04 for x86 as they were on the 64-bit OS:

    regehr@regehr-VirtualBox:~$ gcc -O0 intmin.c ; ./a.out
    Floating point exception (core dumped)
    regehr@regehr-VirtualBox:~$ gcc -O1 intmin.c ; ./a.out
    Floating point exception (core dumped)
    regehr@regehr-VirtualBox:~$ gcc -O2 intmin.c ; ./a.out
    regehr@regehr-VirtualBox:~$ uname -a
    Linux regehr-VirtualBox 3.2.0-36-generic-pae #57-Ubuntu SMP Tue Jan 8 22:01:06 UTC 2013 i686 i686 i386 GNU/Linux
    regehr@regehr-VirtualBox:~$

  6. Hi Greg, I took a quick look at bash’s math code. It does all arithmetic on type intmax_t. This is 8 bytes on a 32-bit platform, meaning that integer divide happens in a library function instead of an idiv instruction. Presumably this library function has been written so that it does not throw an exception on overflow.

    On 64-bit we get this:

    [regehr@gamow ~]$ cat intmax.c
    #include <stdint.h>

    intmax_t divide (intmax_t a, intmax_t b)
    {
      return a / b;
    }
    [regehr@gamow ~]$ gcc -O -S -o - intmax.c
        .file "intmax.c"
        .text
        .globl divide
        .type divide, @function
    divide:
    .LFB4:
        .cfi_startproc
        movq %rdi, %rax
        movq %rdi, %rdx
        sarq $63, %rdx
        idivq %rsi
        ret
        .cfi_endproc
    .LFE4:
        .size divide, .-divide
        .ident "GCC: (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3"
        .section .note.GNU-stack,"",@progbits
    [regehr@gamow ~]$

    But on 32-bit we get this:

    divide:
    .LFB4:
        .cfi_startproc
        subl $28, %esp
        .cfi_def_cfa_offset 32
        movl 40(%esp), %eax
        movl 44(%esp), %edx
        movl %eax, 8(%esp)
        movl 32(%esp), %eax
        movl %edx, 12(%esp)
        movl 36(%esp), %edx
        movl %eax, (%esp)
        movl %edx, 4(%esp)
        call __divdi3
        addl $28, %esp
        .cfi_def_cfa_offset 4
        ret

  7. ZSH has the same trouble as bash:
    traviscj@traviscj> ($((-2**63/-1)))
    zsh: floating point exception ( $((-2**63/-1)); )
    [136] traviscj@traviscj> zsh --version
    zsh 4.3.11 (i386-apple-darwin12.0)

  8. ksh (ksh93u+ 2012-08-1) on x86_64 (OS X 10.6.8) hangs until killed on $((2**63/-1)). The 32-bit binary works correctly.

    Then it gets strange, as ksh’s promotion to floating point kicks in… and keeps kicking.

    $ echo $((2 ** 63))
    9.22337203685477581e+18
    $ echo $((2 ** 63 * 1))
    9.22337203685477581e+18
    $ echo $((2 ** 63 / 1))
    -9223372036854775808

    And stranger.

    $ n=$((2 ** 63))
    $ echo $n
    9.22337203685477581e+18
    $ echo $(((0 + n) / -1))
    0
    $ echo $(((n + 0) / -1))
    -9.22337203685477581e+18

    And this is fun:

    $ echo $((n / -1))
    0
    $ echo $(($n / -1))
    -9.22337203685477581e+18

  9. At all optimization levels my Raspberry Pi gives:

    INT_MIN / -1 = INT_MIN
    INT_MIN % -1 = 0

    The ARM11 chip in the RPi lacks a hardware divide unit, so these operations are done by runtime library calls, or folded by the compiler at higher optimization levels.

  10. For ARM cores with hardware divide (to a first approximation, Cortex-A15 and later), the architecture defines the semantics of INT_MIN / -1: the result is 0x80000000. (This actually falls out of the pseudocode specification of division as “infinite precision division, round towards zero, take bottom 32 bits”; there’s also a note about it in the text.) It will never throw an exception, so you won’t get the crash that x86 architecture behaviour provokes.

    I think “generate some semi-arbitrary result” is a common approach for handling these corner cases (including div-by-zero) on RISC-heritage architectures.

    [for completeness: on ARM R profile cores division-by-zero can be configured to cause an exception, but on A profile cores it never will.]