Wednesday, May 23, 2012

Try-catch speeding up my code?


I wrote some code for testing the impact of try-catch, but I'm seeing some surprising results.




// Requires: using System; using System.Diagnostics; using System.Threading;

static void Main(string[] args)
{
    // Raise thread and process priority to reduce scheduling noise in the measurements.
    Thread.CurrentThread.Priority = ThreadPriority.Highest;
    Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime;

    long start = 0, stop = 0, elapsed = 0;
    double avg = 0.0;

    long temp = Fibo(1);  // warm-up call so Fibo is JIT-compiled before timing

    for (int i = 1; i < 100000000; i++)
    {
        start = Stopwatch.GetTimestamp();
        temp = Fibo(100);
        stop = Stopwatch.GetTimestamp();

        elapsed = stop - start;
        avg = avg + ((double)elapsed - avg) / i;  // incremental running average, in Stopwatch ticks
    }

    Console.WriteLine("Elapsed: " + avg);
    Console.ReadKey();
}

static long Fibo(int n)
{
    long n1 = 0, n2 = 1, fibo = 0;
    n++;

    for (int i = 1; i < n; i++)
    {
        n1 = n2;
        n2 = fibo;
        fibo = n1 + n2;
    }

    return fibo;
}



On my computer, this consistently prints out a value around 0.96...
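For reference, that value is an average in raw Stopwatch ticks, not milliseconds. A minimal sketch of converting it to microseconds, reusing the avg variable from the loop above:

    // Sketch: convert the running average from Stopwatch ticks to microseconds.
    // Stopwatch.Frequency is the number of timer ticks per second.
    double avgMicroseconds = avg * 1000000.0 / Stopwatch.Frequency;
    Console.WriteLine("Elapsed: " + avgMicroseconds + " microseconds");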



When I wrap the for loop inside Fibo() with a try-catch block like this:




static long Fibo(int n)
{
    long n1 = 0, n2 = 1, fibo = 0;
    n++;

    try
    {
        for (int i = 1; i < n; i++)
        {
            n1 = n2;
            n2 = fibo;
            fibo = n1 + n2;
        }
    }
    catch {}

    return fibo;
}



Now it consistently prints out 0.69... -- it actually runs faster! But why?



Note: I compiled this using the Release configuration and directly ran the EXE file (outside Visual Studio).
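A related sanity check (a sketch, not something from the original test) is to confirm that the JIT optimizer really is enabled and that no debugger is attached, which rules out an accidental Debug build skewing the numbers:

    // Sketch of a sanity check: verify that the JIT optimizer is enabled and
    // no debugger is attached before trusting the timings.
    // Requires: using System; using System.Diagnostics; using System.Reflection;
    static void ReportJitState()
    {
        var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));

        // In an optimized Release build the attribute is either absent or
        // has IsJITOptimizerDisabled set to false.
        bool optimized = attr == null || !attr.IsJITOptimizerDisabled;
        Console.WriteLine("JIT optimizer enabled: " + optimized);
        Console.WriteLine("Debugger attached: " + Debugger.IsAttached);
    }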



EDIT: Jon Skeet's excellent analysis shows that try-catch somehow causes the x86 CLR to use the CPU registers in a more favorable way in this specific case (and I think we have yet to understand why). I confirmed Jon's finding that the x64 CLR doesn't have this difference, and that it is faster than the x86 CLR. I also tested using int types inside the Fibo method instead of long types, and then the x86 CLR was just as fast as the x64 CLR.
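For reference, the int-based variant was essentially the following (a sketch of the change rather than the exact code; FiboInt is just an illustrative name), together with a quick way to check which CLR bitness the process is running under:

    // Sketch of the int-based variant: same algorithm, but with 32-bit int
    // locals instead of 64-bit longs.
    static int FiboInt(int n)
    {
        int n1 = 0, n2 = 1, fibo = 0;
        n++;

        for (int i = 1; i < n; i++)
        {
            n1 = n2;
            n2 = fibo;
            fibo = n1 + n2;
        }

        return fibo;
    }

    // Quick check of which CLR the process is running on (.NET 4.0 and later).
    static void ReportClrBitness()
    {
        Console.WriteLine(Environment.Is64BitProcess ? "x64 CLR" : "x86 CLR");
    }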


Source: Tips4all

1 comment:

  1. One of the Roslyn engineers who specializes in understanding optimization of stack usage took a look at this and reports to me that there seems to be a problem in the interaction between the way the C# compiler generates local variable stores and the way the JIT compiler does register scheduling in the corresponding x86 code. The result is suboptimal code generation on the loads and stores of the locals.

    For some reason unclear to all of us, the problematic code generation path is avoided when the JITter knows that the block is in a try-protected region.

    This is pretty weird. We'll follow up with the JITter team and see if we can get a bug entered so that they can fix this up.

    Also, we are working on improvements for Roslyn to the C# and VB compilers' algorithms for determining when locals can be made "ephemeral" -- that is, just pushed and popped on the stack, rather than allocated a specific location on the stack for the duration of the activation. We believe that the JITter will be able to do a better job of register allocation and whatnot if we give it better hints about when locals can be made "dead" earlier.

    Thanks for bringing this to our attention, and apologies for the odd behaviour.
