Hi,

Consider this code:

***
typedef struct _largestruct {
  char a[1019];
} t_largestruct;

extern t_largestruct funclarge(void);

int test(void) {
  char c;
  c = funclarge().a[0];
  c += funclarge().a[1];
  return c;
}
***

When compiled for Darwin/x86_64 with either Clang or GCC and LLVM 2.7, the frontends (via alloca) and subsequently LLVM (on the cpu stack) allocate two separate temps for the results of the calls to funclarge(), even though the temp used for the first call could safely be reused for the second one.

I'm working on a frontend for a language where implicit stack temps are much more common than in C, and I was wondering whether it's the responsibility of the frontend to merge temps that can be merged, or whether LLVM should be doing that (not just for function results, but in general).

If it's the frontend's responsibility (at least for the foreseeable future), are there certain rules that should be observed so as not to step on any optimization's toes (in particular mem2reg's)? E.g., never use a temp location first for an integer and later for a floating point value, only reuse temp locations for temps of exactly the same size, ...

Note that we already have native code generators with full temp management available (and it works quite well), but I guess doing one big alloca() on entry and managing it the way we normally manage temps on a cpu stack will not play nice at all with mem2reg and other optimizations (e.g., we can concatenate multiple temps into a single block used for a larger temp, and reuse temps for multiple different data types).

Thanks,

Jonas
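P.S.: To make the intended reuse concrete, here is a rough C-level sketch of what merging the two temps would amount to: a single slot holds the first call's result, is consumed, and is then overwritten by the second call. The stub body of funclarge() is only there to make the sketch self-contained; it is not part of the real code.

```c
#include <string.h>

typedef struct _largestruct { char a[1019]; } t_largestruct;

/* Stand-in for the real external function, just so the sketch compiles
   and runs on its own. */
static t_largestruct funclarge(void) {
    t_largestruct s;
    memset(s.a, 0, sizeof s.a);
    s.a[0] = 1;
    s.a[1] = 2;
    return s;
}

int test(void) {
    char c;
    t_largestruct tmp;   /* one temp slot instead of two */
    tmp = funclarge();   /* first result lands in tmp */
    c = tmp.a[0];
    tmp = funclarge();   /* tmp is dead here, so it is reused */
    c += tmp.a[1];
    return c;
}
```

With the stub values above, test() returns 3; the point is only that a single 1019-byte slot suffices where the compiled code currently reserves two.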