Review

Haircutting Robots: From Theory to Practice

Shuai Li 1,2
1 Faculty of Information Technology and Electrical Engineering, University of Oulu, 90570 Oulu, Finland
2 VTT, Technical Research Centre of Finland, 90590 Oulu, Finland
Automation 2025, 6(3), 47; https://doi.org/10.3390/automation6030047
Submission received: 28 August 2025 / Revised: 7 September 2025 / Accepted: 16 September 2025 / Published: 18 September 2025
(This article belongs to the Section Robotics and Autonomous Systems)

Abstract

The field of haircutting robots is poised for a significant transformation, driven by advancements in artificial intelligence, mechatronics, and humanoid robotics. This perspective paper examines the emerging market for haircutting robots, propelled by decreasing hardware costs and a growing demand for automated grooming services. We review foundational technologies, including advanced hair modeling, real-time motion planning, and haptic feedback, and analyze their application in both teleoperated and fully autonomous systems. Key technical requirements and challenges in safety certification are discussed in detail. Furthermore, we explore how cutting-edge technologies like direct-drive systems, large language models, virtual reality, and big data collection can empower these robots to offer a human-like, personalized, and efficient experience. We propose a business model centered on supervised autonomy, which enables early commercialization and sets a path toward future scalability. This perspective paper provides a theoretical and technical framework for the future deployment and commercialization of haircutting robots, highlighting their potential to create a new sector in the automation industry.

1. Introduction

In recent years, the field of robotics has experienced rapid advancement, driven by improvements in automation, supply chains, and key enabling technologies such as artificial intelligence, mechatronics, and direct-drive motors. Humanoid robots, long regarded as the pinnacle of robotic development due to their complex control requirements and historically high costs, are increasingly becoming commercially viable. In 2025, Unitree’s R1 humanoid robot is available at an estimated retail price of approximately USD 5900, a fraction of the cost of Honda’s ASIMO robot two decades ago, which was priced more than 200 times higher, at around USD 1.3 million. Beyond AI and control technologies, advancements in mechatronic systems and global supply chain optimization have contributed significantly to this cost reduction. This reduction in cost, combined with improvements in reliability and performance, is enabling humanoid robots to enter application domains that were previously inaccessible. Haircutting robots, once considered prohibitively expensive and technically infeasible, are now becoming viable and in demand due to the rising cost of skilled labor. This field is set to establish a new sector within automation, with substantial market potential. This perspective paper aims to discuss the market landscape and the enabling technologies driving this emerging area.
The development of intelligent haircutting robots [1,2] is an interdisciplinary challenge that integrates technologies from computer graphics, robotics, and haptics. A significant body of prior research has addressed individual components of this complex task, laying a critical foundation for an integrated robotic system. Hair simulation, modeling, and editing have been active areas of research in computer graphics. A systematic search across IEEE Xplore, ACM Digital Library, Scopus, and Google Scholar using keywords such as hair simulation, hair modeling, hair editing, hair manipulation, hair animation, hair styling, and hair rendering returned 820 papers. However, to the best of our knowledge, haircutting robots remain largely unexplored from a robotics perspective, apart from our previous work [1,2], a few patents, and several online demonstration videos. Related research has instead focused on adjacent areas such as hair brushing, combing, washing, and scalp massage. Using keywords such as haircutting robot, robotic haircutting, automated haircutting, and robot barber, and after removing duplicates and screening abstracts and full texts for relevance to robotic haircutting or closely related technologies, 30 papers were identified on the topics of hair brushing, combing, washing, and scalp massage.
The number of publications, research efforts, and commercialization initiatives related to haircutting robots can be expected to grow at an explosive rate in the coming years, driven by the significantly reduced cost of robots and intensive research on robotics and supporting technologies. As this field is still in its infancy and expanding rapidly, reviewing past and current works, together with outlining future research directions, is crucial for ensuring sustainable development. In this perspective paper, we aim to provide a survey of existing studies and related supporting technologies, offering researchers from diverse backgrounds a comprehensive examination of the potential of this interdisciplinary area. Furthermore, to stimulate future research, we highlight the major opportunities and challenges that lie ahead.
This paper is organized as follows. Section 2 reviews prior work related to haircutting robots. Section 3 provides an estimation of the market size for haircutting robots. Section 4 examines key technical requirements for robotic haircuts, with Section 5 focusing in particular on safety certification. Section 6 discusses human intervention and supervised autonomy, emphasizing practical considerations. Section 7 reviews empowering technologies applicable to haircutting robots. Section 8 explores possible business models for integrating haircutting robots into daily life. Finally, Section 9 concludes the paper.

2. Prior Work

In this section, we review key developments relevant to autonomous haircutting: hair modeling and simulation in computer graphics, and robotic manipulation in robotics.

2.1. Computer Graphics

Accurate digital modeling of hair is a prerequisite for any robotic system that aims to cut or style it. Computer graphics research, such as the approach presented in Ref. [3], has long utilized physically based models to characterize the elastic properties of individual hair strands. By treating hair as a series of interconnected particles, these models allow for dynamic simulations that accurately represent hair’s complex behavior. However, a major challenge is that hair cannot simply be treated as a rigid 3D object. Its intricate, strand-based structure necessitates specialized reconstruction techniques. This has led to methods such as that presented in Ref. [4], which leverage learned priors and a strand-based representation to achieve more accurate and detailed hair models from a single camera view. This type of technology is essential for a robot to perceive the current state of a client’s hair. In Ref. [5], a method called AutoHair is proposed, which takes a single portrait image as input and analyzes it to generate two key outputs: a hair segmentation map and a hair growth direction map. These outputs are then used to find a matching hair shape from a large library of 3D hair models. Unlike standard image-based methods [6,7] that only reconstruct the visible surface, computed tomography scans create a 3D density volume of the hair, giving a much richer and more accurate representation of its full shape and volume [8]. In contrast to the above works on static hair modeling, Ref. [9] presents a model for use in animations to simulate the dynamic behavior of hair. In Ref. [10], a comprehensive survey reviews progress on hair modeling, including hairstyling, hair simulation, and hair rendering, from a computer graphics perspective. A summary of existing works on hair modeling and hair editing is provided in Table 1.
Hair manipulation involves isolating hair from its background in an image and then editing its style or color. In Ref. [11], a hair manipulation method called HairManip is proposed to separate hair information from a source image by training two networks, one for hairstyle editing and one for hair color editing. A joint hair modeling and manipulation method is presented in Ref. [12], achieving a high-resolution strand-based 3D hair model and successful editing of hairstyles and colors. Furthermore, dynamic hair manipulation in images and videos is explored in Ref. [13], which allows hair to be separated and altered in real time. This capability is of great practical value, as it makes it possible to give customers a virtual preview of a potential hairstyle on their own head before the physical haircut is performed, and the result could be combined with augmented reality for a live preview.

2.2. Robotics

The earliest documented automatic haircutting system, patented in 1966 [14], was a mechanical and analog circuit-based machine designed to cut hair according to a predetermined program, illustrating the enduring dream of automatic haircuts. The physical act of manipulating hair is another major hurdle for robotics due to its highly deformable and compliant nature. Unlike handling rigid objects, interacting with hair requires delicate and precise force control. In Ref. [15], experimental studies were conducted on non-impaired subjects to capture various static and dynamic force levels when making contact with different robotic tools for three tasks, namely, hair brushing, face wiping, and shaving. In Ref. [16], it was demonstrated that a robot can effectively interact with hair by employing careful force and torque control, showing that a robot can manage the complex, contact-rich interactions required for basic hair grooming tasks. Even without careful force or torque control, gentle interaction between a robot and a human can be achieved by using soft materials as the interface. In Refs. [17,18], general methods were explored for a robot to handle delicate materials, including robotic hair manipulation without causing damage or discomfort, with two apparent advantages: safety through mechanical compliance and sensing through observed deformation. Instead of relying on force information, the front hair styling robotic system presented in Ref. [19] uses closed-loop feedback to style hair: it repeatedly compares the current orientation map of the hair to a desired target map and, by focusing on differences in strand orientation, makes precise adjustments at the root of each hair to achieve the desired style. In Ref. [20], a three-stage robotic hair combing method is presented. It starts with a segmentation module to locate the hair, followed by a path planning module that generates combing paths; finally, a trajectory module creates the motion of the robot. The entire process relies solely on vision-based feedback, without any force or torque sensors. This lack of force feedback has been a significant issue, with users reporting that the robot seemed to use excessive force when combing wigs. In Ref. [21], an off-the-shelf head-care robot made by Panasonic Ltd., Tokyo, Japan, with functionality for shampooing and scalp massaging, was tested for relaxation evaluation, with satisfactory performance.
Figure 1. Sixty years of endeavor in human history towards haircutting robots [14,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]. (a) A patent for an automated haircutting system filed in 1966, representing the earliest haircutting robot we found. (b) A patented haircut machine featuring specialized structural mechanisms for haircutting. (c–g) Published scientific research on hair brushing, hair combing, and front hair styling using various techniques, including vision and force/torque feedback. (h–m) Videos published on YouTube, demonstrating either teleoperated or automated haircutting. (n) A commercial product by Panasonic Ltd. for hair washing and massaging. (o) An imaginary helmet concept for haircutting. (p) A real system developed for drone-based haircutting, still at a very early conceptual stage. (Descriptions summarized in Table 2).
The development of robotic haircutting systems has gained significant traction in recent years, particularly during the COVID-19 pandemic, when mandatory quarantines and the resulting inability to visit barbers left many people with unkempt, long hair and spurred interest in robotic haircutting solutions. In Ref. [22], a patent on a haircutting robot is disclosed, which allows the user to select a haircut through a mobile application, after which a cutting path is precisely generated. The robotic arm of the system moves a cutter along this path, with force sensors on the cutter adjusting pressure on the user’s head while a vacuum tool simultaneously collects the cut hair. In addition to officially published papers and patents, various attempts have been posted online in video format, including automated haircutting, such as the two versions of automatic haircutting robots by Wighton [23,24], iPhone 3D scanning-based robotic haircutting [25], and collaborative robot-based haircutting [26], as well as teleoperated haircutting [27,28]. More futuristic concepts include the Shaving Helmet [29] (ultimately revealed to be a hoax, although the underlying concept could potentially be realized in the future) and drone-based haircutting [30], which might become practical as drone maneuverability improves greatly.

3. Market Size

In Shanghai, China, the average cost of a man’s haircut is around RMB 30. With a population of 25 million and an approximate 1:1 male-to-female ratio, the male population is about 12.5 million. Assuming each man gets a haircut once a month, the annual spending on men’s haircuts can be estimated as: 12.5 million (men) × 12 (haircuts per year) × RMB 30 (per haircut) = RMB 4.5 billion. Although women typically get haircuts less frequently, the cost per visit is significantly higher. According to 2017 statistics [31], the average annual spending on hair care in the USA was USD 154.44 for men and USD 257.42 for women. Since then, annual spending for both men and women has increased significantly, but the ratio of women’s to men’s spending has remained relatively stable. For simplicity, we assume women’s annual spending equals that of men. Therefore, the total annual expenditure on haircuts in Shanghai is about RMB 9 billion. Using the same approach, we can estimate for other cities: 4.85 million men × 12 × GBP 30 × 2 = GBP 3.49 billion for London annually, and 4.125 million men × 12 × USD 40 × 2 = USD 3.96 billion for New York City annually.
The global annual market size for haircutting can be estimated by segmenting the world’s population of roughly 8.1 billion into three tiers. First, high-income countries, with a population of approximately 1.2 billion and a monthly per capita spending of USD 40, are estimated to contribute USD 576 billion annually. Next, middle-income countries, with a population of 4 billion and a monthly per capita spending of USD 4.50, add about USD 216 billion. Finally, low-income countries, with a population of 2.9 billion and a monthly per capita spending of USD 1, account for an estimated USD 34.8 billion. Summing these contributions gives a global total of approximately USD 826.8 billion, which is roughly consistent with the estimate by Grand View Research for the global beauty and personal care market size [32].
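As a sanity check, the tiered arithmetic above can be reproduced in a few lines. The population and spending figures below are simply those quoted in this section, so the script is an illustrative bookkeeping aid rather than a market model:

```python
# Tiered global market estimate; population (billions) and monthly per
# capita spending (USD) are the figures quoted in the text above.
tiers = {
    "high-income":   (1.2, 40.00),
    "middle-income": (4.0, 4.50),
    "low-income":    (2.9, 1.00),
}

total_bn = 0.0
for name, (pop_bn, usd_per_month) in tiers.items():
    annual_bn = pop_bn * usd_per_month * 12  # USD billions per year
    total_bn += annual_bn
    print(f"{name}: USD {annual_bn:.1f} billion/year")

print(f"global total: USD {total_bn:.1f} billion/year")  # ~826.8
```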

4. Technical Requirements

We categorize haircutting robots into two types, teleoperated haircutting and automated haircutting. The former is more technically feasible with today’s technology, while the latter may be implemented in the longer term. In this section, we discuss the technical requirements for both types.

4.1. Teleoperated Haircutting

Long-distance communication inevitably introduces time delays. Consider a barber in Liaocheng, China, cutting hair for a customer in New York City, USA. The shortest distance between these locations is approximately 11,395 km, corresponding to a one-way propagation delay of about 0.038 s. In a closed-loop system with sensors and actuators, the minimum round-trip delay would therefore be roughly 0.076 s. In practice, internet-based remote haircutting (see Figure 2) incurs longer delays due to network infrastructure and the non-straight-line route. Additionally, processing, queuing, and transmission delays [33] contribute substantially to overall latency, often exceeding the minimum propagation delay. While advances in communication technology have reduced jitter and lowered these delays, the speed-of-light propagation delay establishes a fundamental lower bound.
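The lower bound quoted above is simply the great-circle distance divided by the speed of light; the sketch below makes the arithmetic explicit. Real networks add routing, processing, and queuing delays on top, and light in optical fiber travels at roughly two-thirds of c, so actual latency is higher:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458   # speed of light in vacuum, km/s

distance_km = 11_395                # Liaocheng -> New York City, great-circle
one_way_s = distance_km / SPEED_OF_LIGHT_KM_S
print(f"one-way: {one_way_s:.3f} s, round-trip: {2 * one_way_s:.3f} s")
# one-way: 0.038 s, round-trip: 0.076 s
```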
Modern communication platforms such as Zoom, Skype, WeChat, and WhatsApp demonstrate that high-quality, low-latency video interaction is achievable under favorable network conditions, giving an intuitive sense of typical time delays and their impact on quality of service. Remote haircutting involves transmitting control data, such as clipper orientation and position, alongside sensing data like camera images. Video compression and encoding techniques can reduce bandwidth requirements, as consecutive frames often contain redundant information.
Nevertheless, delays in the closed-loop system can destabilize operations. Ensuring stable and responsive remote haircutting requires carefully designed control laws that account for both inherent and network-induced latencies. Passivity-based control [34,35,36,37,38,39] enables the design of stable systems even in the presence of constant time delays, while modified versions of passivity-based control can handle time-varying delays. Smith predictive control [40,41,42,43] offers high tolerance to time delays by compensating for known or estimated latency in the control loop. These methods have been successfully applied in teleoperation and networked control systems to maintain stability and performance under communication constraints.
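To illustrate the idea behind Smith predictive control, the sketch below implements one textbook structure: a minimal discrete-time Smith predictor for an assumed first-order plant with a known transport delay. The plant parameters, controller gain, and delay length are placeholders; a real teleoperation controller would be considerably more involved:

```python
from collections import deque

class SmithPredictor:
    """Minimal discrete-time Smith predictor sketch (illustrative only).

    Assumes a first-order plant model y[k+1] = a*y[k] + b*u[k], a known
    round-trip delay of `delay` samples, and a simple proportional
    controller. All parameter values here are placeholders.
    """

    def __init__(self, a=0.9, b=0.1, kp=2.0, delay=10):
        self.a, self.b, self.kp = a, b, kp
        self.y_model = 0.0                                 # delay-free model output
        self.history = deque([0.0] * delay, maxlen=delay)  # delayed model outputs

    def step(self, reference, y_measured):
        # Mismatch between the real (delayed) measurement and the model
        # output from `delay` samples ago.
        correction = y_measured - self.history[0]
        # Feed back the delay-free model output plus the mismatch term,
        # so the controller does not "fight" the known transport delay.
        error = reference - (self.y_model + correction)
        u = self.kp * error
        # Advance the internal delay-free model and the delay line.
        self.y_model = self.a * self.y_model + self.b * u
        self.history.append(self.y_model)
        return u
```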

4.2. Automated Haircutting

In the context of robotic haircutting, the human head can be conceptualized as a workpiece, and the desired hairstyle represents the target shape to be achieved. The primary objective of the system is to manipulate the cutting tool in such a way that the hair is removed or shaped to match this target form with high accuracy (see Figure 3). This abstraction allows us to leverage concepts from traditional manufacturing processes, where precise material removal is required to achieve a predefined geometry. By framing the problem in this manner, robotic haircutting can be analyzed using methodologies analogous to those employed in computer-aided manufacturing (CAM) [44], particularly in the domain of precision machining.
In conventional manufacturing, five-axis computer numerical control (CNC) [45] systems utilize toolpath planning to generate precise motions of cutting tools over complex surfaces. Toolpath planning [46] is a mature and well-established technology in CAM, enabling automated machining of intricate parts with high dimensional accuracy. Drawing a parallel to the haircutting scenario, the clipper can be regarded as the cutting tool, while the robotic platform holding the clipper provides multiple degrees of freedom, typically five, comprising three translational and two rotational axes, allowing the tool to access all regions of the workpiece with the desired orientation. This multi-degree-of-freedom motion planning is essential to ensure that the cutter follows the contours of the head, replicating the desired style accurately while avoiding collisions or excessive force application.
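As a toy illustration of CAM-style toolpath planning applied to a head, the sketch below generates five-degree-of-freedom clipper waypoints (position plus tool direction) over a hemispherical scalp approximation. The radius and pass counts are arbitrary assumptions, and a real system would plan over a reconstructed hair and head model instead:

```python
import numpy as np

def scalp_toolpath(radius=0.09, n_passes=8, pts_per_pass=24):
    """Generate illustrative clipper waypoints over the upper half of a
    sphere standing in for the scalp. Each waypoint is (position, tool
    direction); the tool axis points inward along the surface normal,
    giving 3 translational + 2 rotational degrees of freedom."""
    waypoints = []
    for i in range(n_passes):
        phi = 0.5 * np.pi * (i + 1) / (n_passes + 1)   # latitude band
        for j in range(pts_per_pass):
            theta = 2.0 * np.pi * j / pts_per_pass     # sweep around the head
            normal = np.array([np.sin(phi) * np.cos(theta),
                               np.sin(phi) * np.sin(theta),
                               np.cos(phi)])
            waypoints.append((radius * normal, -normal))
    return waypoints
```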
However, a key distinction between robotic manufacturing and robotic haircutting lies in the dynamic nature of the workpiece. Unlike a fixed metal or plastic part in a CNC machine, the human head is rarely stationary, and small involuntary movements can occur throughout the procedure. Therefore, a critical challenge in automated haircutting is the real-time compensation for head motion [47,48] based on online estimation of head pose [49,50,51,52,53]. This requires integrating precise sensing and tracking systems with the motion planning algorithms to dynamically adjust the toolpath. Techniques such as visual servoing, inertial measurement, and predictive modeling can be employed to continuously update the trajectory of the clipper, ensuring that the cutting operation remains aligned with the intended hairstyle. Effectively handling this dynamic workpiece introduces additional complexity compared to conventional CAM applications, and necessitates robust control and perception frameworks capable of maintaining high precision in an inherently uncertain and non-rigid environment.
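A minimal sketch of the compensation step, assuming an upstream head-pose tracker supplies a rotation matrix R and translation t (as NumPy arrays) each control cycle, is given below; filtering and prediction of the pose estimate are omitted for brevity:

```python
import numpy as np

def compensate_waypoint(p_head, n_head, R, t):
    """Re-express a planned waypoint (position p_head, tool direction
    n_head), defined in the head frame, in the robot base frame using
    the latest head pose estimate (rotation R, translation t). Calling
    this every cycle keeps the toolpath locked to the moving head."""
    p_base = R @ p_head + t
    n_base = R @ n_head
    return p_base, n_base
```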

5. Safety Certification of Haircutting Robots

Safety is a fundamental requirement for haircutting robots due to their direct physical interaction with humans. Risks arise from multiple sources, including sharp or heated tools, excessive forces exerted by the robot body, sensor or control failures, and cybersecurity vulnerabilities. In teleoperated systems, network latency and congestion can introduce significant hazards. For example, if scissors or clippers approach too close to the scalp, the remote operator may only perceive the event after a delay, potentially resulting in injury. To mitigate this, network conditions must be evaluated prior to task authorization, and real-time monitoring should enforce thresholds, triggering alerts or local intervention when delays exceed acceptable limits. Another critical issue is sensor misjudgment or failure, as evidenced by incidents in autonomous driving. Robustness can be improved through startup self-diagnostics, sensor redundancy (homogeneous or heterogeneous), and predictive maintenance strategies [54]. Furthermore, the robot body and attached tools must be regarded as potential obstacles, since client motion during a haircut can lead to collisions. This risk can be reduced through obstacle-avoidance algorithms [55,56,57,58] and soft material coverings [59,60] to provide compliance. With respect to tools, safety can be enhanced by selecting inherently safer alternatives; clippers, for example, provide functionality comparable to scissors while significantly reducing the risk of cuts. Additional devices such as hairdryers, curlers, and sprayers should similarly employ safety-optimized designs. Cybersecurity represents another important dimension [61]. In teleoperated mode, hackers could hijack control, while in autonomous mode, compromised planning may produce hazardous actions. Countermeasures include local monitoring of tool proximity and automatic shutdown in unsafe scenarios, as well as path planning with client approval, where deviations beyond tolerance result in interruption.
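As a concrete example of the latency monitoring described above, the following watchdog sketch halts the tool locally when heartbeats from the remote side stop arriving or when measured round-trip latency exceeds a threshold. The threshold values here are placeholders and would have to come from an actual risk analysis:

```python
import time

class LinkWatchdog:
    """Illustrative teleoperation safety watchdog: the remote side sends
    periodic heartbeats carrying a measured round-trip time; the local
    side stops the cutter if latency is too high or heartbeats stop."""

    def __init__(self, estop, rtt_limit_s=0.15, timeout_s=0.5):
        self.estop = estop                  # callable that halts the tool
        self.rtt_limit_s = rtt_limit_s      # placeholder latency limit
        self.timeout_s = timeout_s          # placeholder heartbeat timeout
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self, rtt_s):
        self.last_heartbeat = time.monotonic()
        if rtt_s > self.rtt_limit_s:
            self.estop()                    # latency too high: stop in place

    def poll(self):
        # Call from the local control loop; detects a dead link.
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            self.estop()
```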

6. Human Intervention and Supervised Autonomy

Ideally, a robotic haircutting system would be capable of autonomously completing the entire haircutting process without any human intervention. However, achieving full autonomy in robotic haircutting remains a challenging task and is currently not feasible. Nevertheless, the lack of full autonomy does not preclude the industrial and commercial application of such systems. A useful analogy can be drawn from autonomous driving: although no vehicle today is capable of fully autonomous operation under all road conditions, partial autonomy has already been widely implemented and has generated significant commercial value.
Similarly, enhancing the autonomy of haircutting robots, even without reaching full independence, can substantially reduce human labor requirements and gradually diminish the reliance on professional barbers. For example, while a traditional salon setup requires one barber to serve a single customer, current robotic systems could allow one barber to supervise multiple robots simultaneously, for instance, monitoring five robotic units. In this model, the barber intervenes only when unexpected events, malfunctions, or operational delays occur, ensuring a high-quality customer experience while still achieving operational efficiency and commercial viability.
The following scenarios illustrate situations in which barber intervention may be required.
Trajectory Deviations during Teleoperation: If the pre-agreed hairstyle trajectory deviates from the current hair state, e.g., when a barber under teleoperation moves away from the intended trajectory, the system automatically halts and transfers control to the local human barber (see Figure 4 for the workflow).
Safety Monitoring via Sensors: Safety-critical sensors continuously monitor the operational environment. Upon detecting potential safety risks, the robot immediately stops its motion and hands control over to the local human barber.
Obstacle Avoidance: Proximity sensors detect obstacles in real time. If a pre-planned trajectory is blocked, the system either transfers control to the local human barber or invokes an obstacle-avoidance or trajectory re-planning routine to maintain safe operation.
Explicit Human Requests: When the customer or the teleoperating hairdresser requests human intervention, the robot immediately transfers control to the local human barber to ensure precise or corrective action.
The supervised autonomy framework allows robotic haircutting systems to operate safely and efficiently while incrementally increasing autonomy. By combining automated operations with human oversight, these systems can provide high-quality service in real-world conditions, facilitating early commercialization and gradual adoption in the haircare industry with greatly reduced labor. While Figure 4 illustrates a conceptual workflow for supervised autonomy, similar shared-control architectures have been implemented in other robotic domains, where robots learn from human corrections or teleoperation. For example, Ref. [62] presents a teleoperation framework that enables scalable robot data collection while improving autonomy through human guidance, Ref. [63] demonstrates incremental robot learning from human-in-the-loop corrections during deployment, and Ref. [64] presents a five-dimensional feedback model to modify the output to better reflect common dimensions of variation in human feedback.
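The handover logic behind these scenarios can be summarized as a small supervisory decision rule. The sketch below is one possible reading of the workflow in Figure 4; all inputs are assumed to come from upstream perception and user-interface modules, and the deviation tolerance is an arbitrary illustrative value:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    HUMAN_CONTROL = auto()

def supervise(mode, deviation_mm, safety_risk, obstacle_blocking,
              human_requested, can_replan, tolerance_mm=5.0):
    """One supervisory cycle, loosely following Figure 4. All inputs are
    assumed to be provided by upstream modules; tolerance_mm is an
    arbitrary illustrative value."""
    if safety_risk or human_requested:
        return Mode.HUMAN_CONTROL           # immediate handover
    if deviation_mm > tolerance_mm:
        return Mode.HUMAN_CONTROL           # halt, then hand over
    if obstacle_blocking:
        # Either re-plan around the obstacle or hand over control.
        return mode if can_replan else Mode.HUMAN_CONTROL
    return mode                             # continue in the current mode
```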

7. Empowering Haircutting Robots with the Latest Technology

7.1. Direct Drive Technology

Conventional electric motors typically generate high rotational speed but low torque, requiring large reduction ratios to amplify torque while reducing speed. Direct drive technology [65], in contrast, allows the motor to achieve high torque output with minimal reduction, enabling rapid and precise movements with high control bandwidth. In the context of robotic haircutting, this translates to higher responsiveness and smoother control of the cutting tool, which is crucial when following the complex contours of a human head. By reducing mechanical backlash and increasing system stiffness, direct drive systems also improve the accuracy of both translational and rotational movements, allowing the robot to execute intricate haircuts with minimal error. Furthermore, higher responsiveness permits real-time adjustment to small head movements, ensuring a safe and comfortable experience for the client.

7.2. Large Language Models (LLMs) and Artificial Intelligence (AI)

LLM: In autonomous haircutting scenarios, LLMs [66] can serve as interactive companions, providing emotional value and facilitating communication with the client. Haircutting is not merely a mechanical task; it is a social interaction in which barbers typically engage in conversation to enhance the customer experience. Systems powered by LLMs can replicate this social aspect, responding to queries, engaging in small talk, and maintaining a friendly atmosphere.
AI: Beyond social interaction, LLMs and AI [67] can provide practical guidance, including hairstyle recommendations, design suggestions based on facial features, and personalized haircare tips. By integrating them into the haircutting workflow, robotic systems can offer advisory services that complement the physical cutting process, effectively extending the capabilities of traditional barbers and personalizing the client experience. This represents a significant step in bringing AI and LLMs into everyday service robotics applications.
End-to-end (E2E) training: E2E training involves training a single model to map inputs (such as visual or multimodal data) directly to outputs (robot actions) without the need for separate modules. This approach simplifies the system architecture and enhances the model’s ability to generalize across different tasks. For example, end-to-end driving models have been trained to map raw camera and other sensor data directly to driving commands [68], showcasing the potential of E2E training to perform complex perception-to-action tasks without decomposing them into separate modules. In automated haircutting, E2E-trained models could similarly integrate visual perception and motor control, providing a seamless service to clients. Such integration would allow the robot to assess hair type, hair texture, and cutting status to execute precise haircutting actions, all within a unified system that learns strategies directly from examples.
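For illustration only, a minimal end-to-end policy might look like the PyTorch sketch below, mapping a camera image directly to a clipper velocity command. The architecture, action space, and dimensions are arbitrary choices for this sketch and do not correspond to any system cited above:

```python
import torch
import torch.nn as nn

class E2EHaircutPolicy(nn.Module):
    """Minimal sketch of an end-to-end policy: a camera image in, a
    6-DOF clipper velocity command out. Purely illustrative; the cited
    driving work used a different architecture and task."""

    def __init__(self, action_dim=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, action_dim)

    def forward(self, image):                # image: (B, 3, H, W)
        return self.head(self.backbone(image))
```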
Vision-Language-Action (VLA): VLA models unify perception, language understanding, and learned control to overcome longstanding challenges in robotics. They enable robots to perceive, reason, and act over complex tasks, performing precise dexterous movements. For instance, OpenVLA [69] employs a unified architecture that integrates vision, language, and action components, enabling robots to interpret inputs and generate appropriate actions. It builds on a Llama 2 language model combined with a visual encoder that fuses pretrained features from DINOv2 and SigLIP, enabling the robot to perform complex tasks with generalization based on visual and linguistic cues. In the context of automated haircutting, VLA models can interpret visual inputs of the client’s hair, understand the client’s verbal instructions or preferences, and execute precise cutting actions. This integration ensures that the robot not only understands the task, but also performs it with the necessary dexterity and adaptability.

7.3. Virtual Reality (VR) Technology

VR provides remote barbers with an immersive, high-fidelity visual experience of the client’s hair and head geometry. Using VR headsets [70], remote barbers can perceive the client’s head as if they were physically present, enabling precise assessment of hair length, volume, and style. This high degree of immersion improves the effectiveness of teleoperated haircutting, as operators can better judge angles, depth, and tool positioning. Additionally, VR can facilitate training and simulation, allowing both human barbers and AI algorithms to practice complex styles in a virtual environment before applying them to real clients.

7.4. Haptic Feedback Technology

Haptic feedback [71] enables remote barbers to perceive forces applied by the cutting tools in real time. By sensing the resistance and pressure exerted on the client’s head, operators can adjust their force precisely, ensuring a safe and comfortable haircut. While humans primarily rely on visual information, tactile feedback is a critical component in tasks requiring fine force control. In robotic haircutting, integrating haptic sensors allows the system to replicate the nuanced touch of a professional barber, improving precision when trimming close to the scalp and reducing the risk of accidental injury. Haptic feedback also provides the foundation for force-aware autonomous algorithms, allowing the robot to adaptively modulate cutting pressure during operation.
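One common way to realize such force-aware behavior is an admittance-style law that modulates the feed velocity based on measured contact force. The update below is a deliberately simplified scalar sketch; the gain and target force are placeholders, not values from any cited system:

```python
def admittance_update(f_measured_n, f_target_n, v_nominal_m_s, gain=0.002):
    """Scalar admittance sketch: advance at the nominal feed rate, but
    back off in proportion to how much the measured contact force (N)
    exceeds the target; a negative result means retreating from the scalp."""
    return v_nominal_m_s - gain * (f_measured_n - f_target_n)
```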

7.5. Big Data Collection

During the haircutting process, data can be systematically collected to improve the performance of robotic systems. Customer feedback, ratings, and real-time motion data from successful haircuts can be recorded and analyzed to train machine learning models. This big data approach enables continuous refinement of cutting algorithms, tool trajectories, and stylistic decision-making. By leveraging these data-driven insights, the system can progressively enhance its accuracy, efficiency, and overall client satisfaction. Over time, this creates a virtuous cycle in which robotic systems become increasingly capable, eventually approaching or surpassing human-level performance in both technical execution and stylistic judgment.

8. Business Model

Haircutting robots, as a product, have the potential to fundamentally transform the haircare industry by delivering precise, efficient, and high-quality haircutting services to customers. Beyond their immediate function as automated hairstyling tools, these robots open up multiple avenues for business development and commercialization. The emergence of this technology creates opportunities not only for direct service provision, but also for supporting industries, platform-based models, and value-added services.
The most straightforward and immediate business model revolves around offering haircutting services directly to consumers. Robotic salons can operate autonomously or under minimal supervision from barbers, providing consistent and high-quality haircuts while reducing labor costs. By deploying a network of such robots in urban centers, shopping malls, airports, or corporate facilities, companies can offer convenient and scalable grooming services. In addition to service delivery, the manufacturing, distribution, and maintenance of haircutting robots represent significant revenue streams. Companies can specialize in producing high-performance robots equipped with advanced sensors, actuators, and AI systems, while service providers can focus on repair, maintenance, and software updates to ensure optimal performance.
Another promising business model is the teleoperated haircutting mode, in which licensed barbers can offer their services remotely. In this scenario, remote barbers, like Uber or Didi taxi drivers, can connect with customers through a digital platform, allowing them to perform haircutting tasks via robotic systems from virtually anywhere. This approach not only expands the reach of skilled barbers, but also enables flexible work arrangements, allowing barbers to work across geographic regions without physical relocation.
Furthermore, the platform connecting barbers and customers through haircutting robots has additional commercial potential beyond service provision. Such a platform can function as an information hub, aggregating customer reviews, hairstyling trends, and product recommendations. It can also serve as a marketing and advertisement channel for haircare products, salons, and training courses. In addition, the platform can foster social interactions among users by sharing styling tips, tutorial videos, and AI-generated hairstyle suggestions. By integrating these digital services with robotic operations, the business model evolves from a purely functional service into a multi-layered ecosystem, encompassing commerce, communication, and community engagement.
Ultimately, the commercialization of haircutting robots can generate diverse revenue streams that span direct services, hardware sales, teleoperated services, maintenance, and digital platform operations. By leveraging the capabilities of robotic systems and combining them with platform-based business strategies, the haircare industry can achieve higher efficiency, wider reach, and enhanced customer experiences while opening opportunities for new forms of value creation in both physical and digital domains.

9. Conclusions

With the maturation of technology, the reduction of costs, and the shortage of skilled barbers, haircutting robots are expected to become an important avenue for human–robot interaction in daily life. Although, as of 2025, no commercial haircutting robots have been deployed, and academic research on this topic remains limited, it is highly likely that commercial applications will emerge within the next five years. This paper reviews the relevant technologies underlying robotic haircutting, thereby providing a theoretical and technical foundation for the future deployment and commercialization of these systems.

Funding

This review received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, S. Haircutting Robots; Springer Nature: Berlin/Heidelberg, Germany, 2025.
2. Khan, A.T.; Li, S. Robotic Haircutting Systems: A Survey of Methods, Challenges, and Hair Modeling Insights. IEEE J. Sel. Areas Sens. 2025.
3. Selle, A.; Lentine, M.; Fedkiw, R. A mass spring model for hair simulation. In ACM SIGGRAPH 2008 Papers; Association for Computing Machinery: New York, NY, USA, 2008; pp. 1–11.
4. Sklyarova, V.; Chelishev, J.; Dogaru, A.; Medvedev, I.; Lempitsky, V.; Zakharov, E. Neural haircut: Prior-guided strand-based hair reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 19762–19773.
5. Chai, M.; Shao, T.; Wu, H.; Weng, Y.; Zhou, K. AutoHair: Fully automatic hair modeling from a single image. ACM Trans. Graph. 2016, 35.
6. Zhou, Y.; Hu, L.; Xing, J.; Chen, W.; Kung, H.-W.; Tong, X.; Li, H. HairNet: Single-view hair reconstruction using convolutional neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 235–251.
7. Hu, L.; Ma, C.; Luo, L.; Li, H. Single-view hair modeling using a hairstyle database. ACM Trans. Graph. (TOG) 2015, 34, 1–9.
8. Shen, Y.; Saito, S.; Wang, Z.; Maury, O.; Wu, C.; Hodgins, J.; Zheng, Y.; Nam, G. CT2Hair: High-fidelity 3D hair modeling using computed tomography. ACM Trans. Graph. (TOG) 2023, 42, 1–13.
9. Kmoch, P.; Bonanni, U.; Magnenat-Thalmann, N. Hair simulation model for real-time environments. In Proceedings of the 2009 Computer Graphics International Conference, Victoria, BC, Canada, 26–29 May 2009; pp. 5–12.
10. Ward, K.; Bertails, F.; Kim, T.Y.; Marschner, S.R.; Cani, M.P.; Lin, M.C. A survey on hair modeling: Styling, simulation, and rendering. IEEE Trans. Vis. Comput. Graph. 2007, 13, 213–234.
11. Zhao, H.; Zhang, L.; Rosin, P.L.; Lai, Y.-K.; Wang, Y. HairManip: High quality hair manipulation via hair element disentangling. Pattern Recognit. 2024, 147, 110132.
12. Chai, M.; Wang, L.; Weng, Y.; Yu, Y.; Guo, B.; Zhou, K. Single-view hair modeling for portrait manipulation. ACM Trans. Graph. (TOG) 2012, 31, 1–8.
13. Chai, M.; Wang, L.; Weng, Y.; Jin, X.; Zhou, K. Dynamic hair manipulation in images and videos. ACM Trans. Graph. (TOG) 2013, 32, 1–8.
14. Gronier, J. Hair-Cutting Machine. U.S. Patent 3,241,562, 22 March 1966.
15. Bilyea, A.J.N.; French, S.H.; Abdullah, H.A. Modeling contact forces during human-robot interactions for performing activities of daily living. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2023, 237, 829–840.
16. Hughes, J.; Plumb-Reyes, T.; Charles, N.; Mahadevan, L.; Rus, D. Detangling hair using feedback-driven robotic brushing. In Proceedings of the IEEE 4th International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 12–16 April 2021; pp. 487–494.
17. Yoo, U.; Dennler, N.; Xing, E.; Matarić, M.; Nikolaidis, S.; Ichnowski, J.; Oh, J. Soft and compliant contact-rich hair manipulation and care. In Proceedings of the 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Melbourne, Australia, 4–6 March 2025; pp. 610–619.
18. Yoo, U.; Dennler, N.; Mataric, M.; Nikolaidis, S.; Oh, J.; Ichnowski, J. MOE-Hair: Toward soft and compliant contact-rich hair manipulation and care. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 11–15 March 2024; pp. 1163–1167.
19. Kim, S.; Kanazawa, N.; Hasegawa, S.; Kawaharazuka, K.; Okada, K. Front hair styling robot system using path planning for root-centric strand adjustment. In Proceedings of the IEEE/SICE International Symposium on System Integration (SII), Munich, Germany, 21–24 January 2025; pp. 544–549.
20. Dennler, N.; Shin, E.; Mataric, M.; Nikolaidis, S. Design and evaluation of a hair combing system using a general-purpose robotic arm. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 3739–3746.
21. Ando, T.; Takeda, M.; Maruyama, T.; Susuki, Y.; Hirose, T.; Fujioka, S.; Mizuno, O.; Yamada, K.; Ohno, Y.; Yukio, H. Biosignal-based relaxation evaluation of head-care robot. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 6732–6735.
22. Aladabbah, M. Automatic Hair Cutter Robot. WO 2023/080812 A1, 11 May 2023.
23. Wighton, S. I Made A Hair Cutting Machine. Available online: https://www.youtube.com/watch?v=7zBrbdU_y0s&t=578s (accessed on 10 September 2025).
24. Wighton, S. Sometimes You Have to Test Robots on Yourself. Available online: https://www.youtube.com/watch?v=WQ8Xgp8ALFo&t=583s (accessed on 10 September 2025).
25. Buzz, R. Available online: https://www.youtube.com/watch?v=jY_gpi_gsRI (accessed on 10 September 2025).
26. This Robot Will Cut Your Hair in the Future. Available online: https://www.youtube.com/watch?v=-cgU1RgjWXw (accessed on 10 September 2025).
27. Robot Barber Shaves Human Head for Charity. Available online: https://www.nbcnews.com/tech/tech-news/robot-barber-shaves-human-head-charity-flna510080 (accessed on 10 September 2025).
28. Touching Myself with A Robot. Available online: https://www.youtube.com/watch?v=RXhBjsDLKDM (accessed on 10 September 2025).
29. The Shaving Helmet. Available online: https://www.youtube.com/watch?v=5bgRszdUdhQ (accessed on 10 September 2025).
30. A Barber Uses A Drone to Give a Haircut. Available online: https://www.youtube.com/shorts/9RHroJPhlKw (accessed on 10 September 2025).
31. How Much Do People Spend On Haircuts? Available online: https://medium.com/data-science/analyzing-who-spends-more-on-haircuts-men-or-women-a90003e98312 (accessed on 10 September 2025).
32. Global Beauty and Personal Care Products Market Size & Outlook. Available online: https://www.grandviewresearch.com/horizon/outlook/beauty-and-personal-care-products-market-size/global (accessed on 10 September 2025).
33. Monge, P.R.; Contractor, N.S. Theories of Communication Networks; Oxford University Press: Oxford, UK, 2003.
34. Anderson, R.J.; Spong, M.W. Bilateral control of teleoperators with time delay. In Proceedings of the 1988 IEEE International Conference on Systems, Man, and Cybernetics, Beijing, China, 8–12 August 1988; Volume 1, pp. 131–138.
35. Niemeyer, G.; Slotine, J.J.E. Stable adaptive teleoperation. IEEE J. Ocean. Eng. 1991, 16, 152–162.
36. Lee, D.; Huang, K. Passive position feedback over packet-switching communication network with varying-delay and packet-loss. In Proceedings of the 2008 Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Delhi, India, 13–14 March 2008; pp. 335–342.
37. Lee, D.; Spong, M.W. Passive bilateral teleoperation with constant time delay. IEEE Trans. Robot. 2006, 22, 269–281.
38. Niemeyer, G.; Slotine, J.J.E. Telemanipulation with time delays. Int. J. Robot. Res. 2004, 23, 873–890.
39. Nuño, E.; Basañez, L.; Ortega, R. Passivity-based control for bilateral teleoperation: A tutorial. Automatica 2011, 47, 485–495.
40. Furukawa, T.; Shimemura, E. Predictive control for systems with time delay. Int. J. Control. 1983, 37, 399–412.
41. Deng, Y.; Léchappé, V.; Moulay, E.; Chen, Z.; Liang, B.; Plestan, F.; Han, Q.-L. Predictor-based control of time-delay systems: A survey. Int. J. Syst. Sci. 2022, 53, 2496–2534.
42. Chen, H.; Liu, Z. Time-delay prediction-based Smith predictive control for space teleoperation. J. Guid. Control. Dyn. 2021, 44, 872–879.
43. Cao, L.; Pan, Y.; Liang, H.; Huang, T. Observer-based dynamic event-triggered control for multiagent systems with time-varying delay. IEEE Trans. Cybern. 2022, 53, 3376–3387.
44. Bi, Z.; Wang, X. Computer Aided Design and Manufacturing; John Wiley & Sons: Hoboken, NJ, USA, 2020.
45. Lasemi, A.; Xue, D.; Gu, P. Recent development in CNC machining of freeform surfaces: A state-of-the-art review. Comput.-Aided Des. 2010, 42, 641–654.
46. Makhanov, S.S. Vector fields for five-axis machining. A survey. Int. J. Adv. Manuf. Technol. 2022, 122, 533–575.
47. Du, C.; Li, H.; Thum, C.; Lewis, F.; Wang, Y. Simple disturbance observer for disturbance compensation. IET Control. Theory Appl. 2010, 4, 1748–1755.
48. Chen, W.-H.; Yang, J.; Guo, L.; Li, S. Disturbance-observer-based control and related methods—An overview. IEEE Trans. Ind. Electron. 2015, 63, 1083–1095.
49. Murphy-Chutorian, E.; Trivedi, M.M. Head pose estimation in computer vision: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 607–626.
50. Chun, S.; Chang, J.Y. 6DoF head pose estimation through explicit bidirectional interaction with face geometry. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer Nature: Cham, Switzerland; pp. 146–163.
51. Asperti, A.; Filippini, D. Deep learning for head pose estimation: A survey. SN Comput. Sci. 2023, 4, 349.
52. Felsheim, R.C.; Brendel, A.; Naylor, P.A.; Kellermann, W. Head orientation estimation from multiple microphone arrays. In Proceedings of the 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, 18–22 January 2021; pp. 491–495.
53. Meyer, G.P.; Gupta, S.; Frosio, I.; Reddy, D.; Kautz, J. Robust model-based 3D head pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3649–3657.
54. Hashemian, H.M. State-of-the-art predictive maintenance techniques. IEEE Trans. Instrum. Meas. 2010, 60, 226–236.
55. Khan, A.H.; Li, S.; Luo, X. Obstacle avoidance and tracking control of redundant robotic manipulator: An RNN-based metaheuristic approach. IEEE Trans. Ind. Inform. 2019, 16, 4670–4680.
56. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. Int. J. Robot. Res. 1986, 5, 90–98.
57. Xu, Z.; Zhou, X.; Wu, H.; Li, X.; Li, S. Motion planning of manipulators for simultaneous obstacle avoidance and target tracking: An RNN approach with guaranteed performance. IEEE Trans. Ind. Electron. 2021, 69, 3887–3897.
58. Li, X.; Xu, Z.; Li, S.; Su, Z.; Zhou, X. Simultaneous obstacle avoidance and target tracking of multiple wheeled mobile robots with certified safety. IEEE Trans. Cybern. 2021, 52, 11859–11873.
59. Khan, A.H.; Li, S. Discrete-time impedance control for dynamic response regulation of parallel soft robots. Biomimetics 2024, 9, 323.
60. Khan, A.H.; Shao, Z.; Li, S.; Wang, Q.; Guan, N. Which is the best PID variant for pneumatic soft robots? An experimental study. IEEE/CAA J. Autom. Sin. 2020, 7, 451.
61. Ten, C.W.; Manimaran, G.; Liu, C.C. Cybersecurity for critical infrastructures: Attack and defense modeling. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2010, 40, 853–865.
62. Dass, S.; Pertsch, K.; Zhang, H.; Lee, Y.; Lim, J.; Nikolaidis, S. PATO: Policy assisted teleoperation for scalable robot data collection. arXiv 2022, arXiv:2212.04708.
63. Liu, H.; Nasiriany, S.; Zhang, L.; Bao, Z.; Zhu, Y. Robot learning on the job: Human-in-the-loop autonomy and learning during deployment. Int. J. Robot. Res. 2024, 5, 83–96.
64. Huang, J.; Aronson, R.M.; Short, E.S. Modeling variation in human feedback with user inputs: An exploratory methodology. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 11–15 March 2024; pp. 303–312.
65. Hernández-Guzmán, V.M.; Orrante-Sakanassi, J. PID control of robot manipulators actuated by BLDC motors. Int. J. Control. 2021, 94, 267–276.
66. Haque, M.A.; Li, S. Exploring ChatGPT and its impact on society. AI Ethics 2025, 5, 791–803.
67. Soori, M.; Arezoo, B.; Dastres, R. Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cogn. Robot. 2023, 3, 54–70.
68. Xiao, Y.; Codevilla, F.; Gurram, A.; Urfalioglu, O.; López, A.M. Multimodal end-to-end autonomous driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 537–547.
69. Kim, M.J.; Pertsch, K.; Karamcheti, S.; Xiao, T.; Balakrishna, A.; Nair, S.; Rafailov, R.; Foster, E.; Lam, G.; Sanketi, P.; et al. OpenVLA: An open-source vision-language-action model. arXiv 2024, arXiv:2406.09246.
70. Anthes, C.; Garcia-Hernandez, R.J.; Wiedemann, M.; Kranzlmuller, D. State of the art of virtual reality technology. In Proceedings of the 2016 IEEE Aerospace Conference, Big Sky, MT, USA, 5–12 March 2016; pp. 1–19.
71. Stone, R.J. Haptic feedback: A brief history from telepresence to virtual reality. In International Workshop on Haptic Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1–16.
Figure 2. Teleoperated haircutting, which allows remote haircuts for customers in metropolitan areas by barbers living in rural areas. (This figure is partially edited with Gemini 2.5 Flash).
Figure 3. Many robotic systems can be viewed as the combination of a mobile platform and sensors/actuators. In this perspective, a haircutting robot can be viewed as a clipper (the actuator) mounted on a robotic arm (the mobile platform).
Figure 4. Flowchart of robotic haircutting process with possible human intervention.
Table 1. Existing works on hair modeling and hair editing.

References | Tasks | Application to Haircutting Robots | Code Availability
[3] | Hair simulation | Hair preview | No
[4] | Hair 3D reconstruction | 2D image to hair 3D model | https://github.com/hoseok-tong/NeuralHaircutTHS (accessed on 10 September 2025)
[5] | Hair segmentation | Hair preview and recommendation | No
[6] | Hair 3D reconstruction from images | 2D image to hair 3D model | https://github.com/papagina/HairNet_DataSetGeneration (accessed on 10 September 2025)
[7] | Hair 3D reconstruction from images | 2D image to hair 3D model | No
[8] | CT-based hair 3D reconstruction | CT-based hair 3D model | https://github.com/facebookresearch/CT2Hair (accessed on 10 September 2025)
[9] | Hair dynamic simulation | Hair preview | No
[11] | Image-based hairstyle and color editing | Hair preview and recommendation | https://github.com/Zlin0530/HairManip (accessed on 10 September 2025)
[12] | Strand-based 3D hair modeling and hairstyle/color editing | Hair preview and recommendation | No
[13] | Video-based dynamic hair manipulation and editing | Hair preview and recommendation | No
Table 2. Existing attempts towards haircutting robots.

References | No. in Figure 1 | Publication Format | Type | Targeted Area | Description
[14] | a | Patent | System development | Product | Mechanical and circuit design
[16] | f | Paper | Programming | Research | Coding on commercial robots with force feedback
[17] | e | Paper | Programming | Research | Coding on commercial robots with force feedback
[18] | d | Paper | System development | Research | Coding on self-developed soft gripper with commercial robots
[19] | c | Paper | Programming | Research | Coding on commercial robots for front hair styling
[20] | g | Paper | Programming | Research | Coding on commercial robots for vision-based hair combing
[21] | n | Paper | Research | Product | A commercial product by Panasonic Ltd. for shampooing and scalp massaging
[22] | b | Patent | System development | Product | A commercial product by StudioRed Ltd., Palo Alto, CA, USA for MyBarberRobot
[23] | l | Video | System development | DIY | A vision-based DIY system for automated haircutting
[24] | m | Video | System development | DIY | A vision-based DIY system for automated haircutting
[25] | j | Video | Programming | DIY | iPhone scanning-based automated haircutting using a commercial robot
[26] | k | Video | Programming | DIY | Automated haircutting using a collaborative robot
[27] | h | Video | Programming | DIY | Teleoperated haircutting
[28] | i | Video | Programming | DIY | Teleoperated haircutting
[29] | o | Video | Future | DIY | Imaginary haircutting helmet
[30] | p | Video | Future | DIY | Conceptual design with drones for haircutting