#include <stdio.h>

int main(void)
{
    int i = 0;
    i = i++ + ++i;
    printf("%d\n", i); // 3

    i = 1;
    i = (i++);
    printf("%d\n", i); // 2 Should be 1, no ?

    volatile int u = 0;
    u = u++ + ++u;
    printf("%d\n", u); // 1

    u = 1;
    u = (u++);
    printf("%d\n", u); // 2 Should also be one, no ?

    register int v = 0;
    v = v++ + ++v;
    printf("%d\n", v); // 3 (Should be the same as u ?)

    int w = 0;
    printf("%d %d\n", ++w, w); // shouldn't this print 1 1

    int x[2] = { 5, 8 }, y = 0;
    x[y] = y++;
    printf("%d %d\n", x[0], x[1]); // shouldn't this print 0 8? or 5 0?
}
C has the concept of undefined behavior: some language constructs are syntactically valid, but you cannot predict what happens when the code is run.
As far as I know, the standard doesn’t explicitly say why the concept of undefined behavior exists. In my mind, it is simply because the language designers wanted some leeway in the semantics. Instead of, for example, requiring that all implementations handle integer overflow in exactly the same way, which would very likely impose serious performance costs, they left the behavior undefined: if you write code that causes integer overflow, anything can happen.
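As an aside (this example is mine, not part of the question), signed integer overflow is exactly such a case; a minimal sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int n = INT_MAX;
    // Undefined behavior: signed integer overflow. The implementation may
    // wrap, trap, or optimize on the assumption that this never happens.
    printf("%d\n", n + 1);
    return 0;
}

Because the behavior is undefined, a compiler is allowed to assume the overflow never occurs, so the program may print a wrapped value, trap, or be transformed in surprising ways depending on the implementation and optimization level.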
So, with that in mind, why are these “issues”? The language clearly says that certain things lead to undefined behavior. There is no problem, and there is no “should” involved. If the undefined behavior changes when one of the involved variables is declared volatile, that doesn’t prove or change anything: the behavior is undefined, and you cannot reason about it.
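As a side note (my sketch, not part of the original question): if the intent behind printf("%d %d\n", ++w, w) was to print 1 1, as the comment suggests, the well-defined way to get that is to give the increment its own statement, so every later read of w is sequenced after the modification:

#include <stdio.h>

int main(void)
{
    int w = 0;
    int incremented = ++w; // sequence point at the end of this statement
    printf("%d %d\n", incremented, w); // well defined: prints "1 1"
    return 0;
}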
Your most interesting-looking example, the one with
u = (u++);
is a textbook example of undefined behavior (see Wikipedia’s entry on sequence points).
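For completeness, here is a sketch (mine, not the standard’s) of two unambiguous ways to spell whatever u = (u++); might have been intended to mean; both avoid modifying u twice without an intervening sequence point:

#include <stdio.h>

int main(void)
{
    int u = 1;

    // If the intent was simply "increment u", write exactly that:
    u++;
    printf("%d\n", u); // well defined: prints 2

    // If the intent was "assign u its own old value", make the copy explicit:
    u = 1;
    int old = u; // sequence point at the end of this declaration
    u = old;
    printf("%d\n", u); // well defined: prints 1

    return 0;
}

In both versions every modification of u is separated from the next read by a sequence point, so the printed values are guaranteed.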