Abstract
The study of fractals has a long history in mathematics and signal analysis, providing formal tools to describe self-similar structures and scale-invariant phenomena. In recent years, cognitive science has developed a set of powerful theoretical and experimental tools capable of probing the representations that enable humans to extend hierarchical structures beyond the given input and to generate fractal-like patterns across multiple domains, including language, music, vision, and action. These paradigms target recursive hierarchical embedding (RHE), a generative capacity that supports the production and recognition of self-similar structures at multiple scales. This article reviews the theoretical framework of RHE, surveys empirical methods for measuring it across behavioral and neural domains, and highlights their potential for cross-domain comparisons and developmental research. It also examines applications in linguistic, musical, visual, and motor domains, summarizing key findings and their theoretical implications. Despite these advances, the computational and biological mechanisms underlying RHE remain poorly understood. Addressing this gap will require linking cognitive models with algorithmic architectures and leveraging the large-scale behavioral and neuroimaging datasets generated by these paradigms for fractal analyses. Integrating theory, empirical tools, and computational modeling offers a roadmap for uncovering the mechanisms that give rise to recursive generativity in the human mind.