I got my first personal computer, a Commodore Plus/4, at the age of 13. I wondered how so many arcade-game maps could be encoded in less than 64 kilobytes of memory, and I actually reinvented pseudo-random numbers on my own. My (pseudo-)random maze generator was not as brilliant as it could have been, but I thoroughly enjoyed the beauty of combinatorics, stochastics, and computer science.
I got an MSc as a mathematics teacher and another MSc as a biology–chemistry teacher at the Eötvös Loránd University, and I started a PhD in Theoretical Biology and Ecology at the same place. (I meant to become a bioinformatician, but there was no bioinformatics PhD program anywhere in Hungary at that time…) I worked on dynamic programming algorithms related to what is now called statistical alignment. I saw that there was a limit to what could be computed in polynomial time, so I looked for other approaches, and when I got my first postdoctoral position in Oxford at the Department of Statistics, I was very eager to learn Monte Carlo methods, especially Markov chain Monte Carlo. I spent a few years doing engineering work on MCMC, namely, “design a Markov chain on which the Metropolis–Hastings algorithm is applicable, and enjoy it”…
The enjoyment washed away when I realized how badly my Markov chain mixed as the size of the input data grew… Instead of my own software, we used BADGER, and even then tens of millions of MCMC steps were necessary to get a reasonable effective sample size… I felt I had to understand more than the engineering of MCMC, so I jumped into that topic, and my very first publication on the convergence of Markov chains showed that my Markov chain is indeed torpidly mixing.
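For readers unfamiliar with the algorithm named above, here is a minimal sketch of Metropolis–Hastings with a symmetric random-walk proposal. The target density and proposal below are toy stand-ins of my choosing (a standard normal), not the statistical-alignment chain discussed in the text:

```python
import math
import random

def metropolis_hastings(log_target, proposal, x0, n_steps, seed=0):
    """Metropolis-Hastings with a symmetric proposal.

    log_target: log of the (possibly unnormalized) target density
    proposal:   draws a candidate state from the current state
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = proposal(x, rng)
        # Symmetric proposal: accept with probability min(1, pi(y)/pi(x)),
        # computed in log space for numerical stability.
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        samples.append(x)
    return samples

# Toy example: sample from a standard normal via a random walk.
samples = metropolis_hastings(
    log_target=lambda x: -0.5 * x * x,
    proposal=lambda x, rng: x + rng.uniform(-1.0, 1.0),
    x0=0.0,
    n_steps=20000,
)
mean = sum(samples) / len(samples)
```

Even on this easy one-dimensional target, the autocorrelation of the chain means the effective sample size is well below 20,000; on high-dimensional combinatorial state spaces the gap can be dramatic, which is exactly the mixing problem described above.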
Since then, I have been devoted to research on the mixing of Markov chains, as well as on the computational complexity of counting and sampling.