Present-day artificial intelligence (AI) systems, virtual assistants, and devices connected to the Internet of Things (IoT) play an increasingly important role in the decision-making processes of individuals and the daily operations of organizations. In this respect, users’ trust is a key factor determining their acceptance and effective use. In contemporary digital ecosystems, this trust increasingly becomes a component of sustainable digital marketing, in which transparent data practices and responsible communication shape long-term consumer–technology relationships. This paper analyzes the halo effect as a psychological mechanism affecting the perception of competence, reliability, and ethics in AI-based technologies. Drawing on the behavioral economics literature, it shows how positive associations with an interface, a brand, or a user’s previous experience may lead to excessive trust in technology. Such mechanisms also play a significant role in shaping sustainable consumption patterns, as users, guided by cognitive shortcuts, can adopt technologies in ways that either strengthen or weaken responsible digital behaviors. The potential risks associated with this phenomenon are also indicated. The aim of this paper is to show how the halo effect influences the generation of trust in smart systems and to formulate implications for management practice and technology design. These implications are increasingly important in the context of sustainable digital marketing policy, where organizations must align persuasive communication with ethical standards and with rising expectations regarding sustainable digital transformation. Relationships between variables were analyzed using structural equation modeling (SEM), which made it possible to verify complex dependencies between the perceived image of a technology, the halo effect, and users’ trust.
This study tested three core hypotheses concerning the role of the halo effect, the foundational importance of security, and the mediating function of trust in technology adoption. The results indicate that the halo effect significantly affects the level of trust in each of the investigated areas, with the strongest effect observed for virtual assistants, where the perception of human-like characteristics of the interface considerably strengthened trust in the competence and reliability of the system. This finding is particularly relevant for AI-driven personalization mechanisms, which increasingly guide consumer decision-making and shape long-term behavioral patterns in online environments, with direct implications for sustainable consumption. The paper contributes to innovation management and technical marketing by stressing the importance of cognitive and emotional factors in the acceptance of new technologies. At the same time, it highlights the theoretical need to integrate responsible AI design with sustainable digital marketing strategies. The findings suggest that trust, once established, has the potential to support not only technological innovation but also broader societal goals related to responsible consumption, environmental stewardship, and long-term digital well-being aligned with sustainable development principles. However, this study stops short of empirically measuring sustainable consumption behaviors, offering instead a conceptual link that requires further empirical validation.