Hi,

I have noticed that run times for MCMCglmm models (mainly written in C/C++) have suddenly increased on Linux machines (Ubuntu/Linaro 4.6.4 and Scientific Linux 6.6), yet they have remained stable on Windows, where they run much faster than on Linux. I wondered whether something had changed with the default optimization flags for gcc? I am using R 3.2.3.

Cheers,
Jarrod

--
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
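(A diagnostic not in the original post, offered as a suggestion: you can see which flags R actually uses when compiling package C/C++ code with `R CMD config CC`, `R CMD config CFLAGS` and `R CMD config CXXFLAGS`, and compare them across the Linux and Windows machines. If the Linux builds lost optimization, a user-level `~/.R/Makevars` can restore it; the flag values below are illustrative, not prescribed.)

```make
# ~/.R/Makevars -- user-level override of the flags R uses when
# building package C/C++ code (values here are only an example)
CFLAGS = -O2 -g
CXXFLAGS = -O2 -g
```

After changing `~/.R/Makevars`, the package has to be reinstalled (e.g. `install.packages("MCMCglmm")`) so its compiled code is rebuilt with the new flags.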