The virtual robotic system developed in this study was structured around four core tasks aligned with the learning progression and stages of cognitive development: (1) Spatial Coordinates, (2) Vector Concepts, (3) Rotation Concepts, and (4) Spatial Vectors. Each task incorporated interactive activities and instructional content designed to progressively guide students from fundamental spatial positioning to the contextual application of spatial vector concepts.
3.5.2. Learning Materials
To support self-directed learning, the system provides embedded materials aligned with the core mathematical concepts of each task, adapted from high school textbooks and tailored to the system’s objectives. Accompanying printed worksheets enable learners to preview key concepts and reinforce understanding before and after interacting with the virtual robotic system through guided reading and practice exercises. Each knowledge section is structured according to the learning objectives of the corresponding task and is subdivided into thematic subtopics.
The layout of the learning materials is designed for clarity and logical organization, enabling easy navigation and supporting learners in tracking their progress. Each subsection engages learners with worksheet-based questions designed to reinforce comprehension and foster active participation. Additionally, the system features a “lightbulb” function that learners can click to access hints or supplementary explanations during problem-solving. This feature supports content comprehension and encourages the development of strategic thinking skills. The integration of textual guidance, interactive engagement, and scaffolded support offers a balanced learning experience that fosters both learner autonomy and conceptual clarity. Through structured interaction with the virtual robotic system and guided learning activities, students are better equipped to internalize spatial vector concepts and enhance their overall learning outcomes.
3.5.3. Learning Tasks
After learners engage with the learning materials to understand fundamental concepts of spatial vectors, the system guides them into corresponding interactive environments to complete the learning tasks and reinforce their mathematical understanding. Each interactive space aligns with its task, providing realistic scenarios and clear instructions that guide students in exploring and working interactively within a 3D virtual environment. Each user interface features functional buttons in the upper-right corner, offering quick access to support tools such as vector visualization, coordinate information, and operational hints. If learners encounter difficulties during the activity, they can click the “question mark” icon to access real-time guidance and explanatory prompts, ensuring a smoother learning experience and fostering self-directed participation. The interactive modules emphasize the integration of conceptual understanding with procedural practice, allowing abstract concepts such as spatial coordinates and vector transformations to be visualized and explored in a concrete, dynamic format. To further support learners, the system provides interface illustrations and operational guides that explain the purpose, structure, and functions of each interactive task space.
This task is designed to help learners develop a fundamental understanding of the three-dimensional coordinate system, emphasizing the geometric meaning and interrelationships among the X-, Y-, and Z-axes. To visualize this abstract concept, the system presents a 3D virtual environment featuring clearly defined axes and grid lines, allowing learners to perceive spatial structure and point positioning intuitively. Learners interact with a cube object by manipulating its vertices in space to observe corresponding changes in coordinates. This hands-on experience strengthens their understanding of the relationship between point locations and numerical values in 3D space.
In addition, the system includes a built-in assessment activity in which learners are instructed to move a specific vertex of the cube to a given coordinate. This target-positioning exercise enhances learners’ spatial accuracy and supports the development of basic skills in coordinate transformation and spatial estimation. Learners can use the arrows next to X, Y, and Z to adjust the cube’s position on the screen, and can click the (X,Y) button to display the coordinates of the current points A and B. After pressing the confirmation button, a feedback message immediately appears to inform learners whether point A has been moved to the correct position (Figure 7).
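The target-positioning exercise described above can be sketched in a few lines of code. This is a hypothetical illustration of the check the system performs, not its actual implementation; the names `Vertex`, `move`, and `at_target` are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    x: int
    y: int
    z: int

    def move(self, dx=0, dy=0, dz=0):
        # Each arrow click shifts the vertex one grid unit along an axis.
        self.x += dx
        self.y += dy
        self.z += dz

    def at_target(self, target):
        # Feedback shown after the confirmation button is pressed.
        return (self.x, self.y, self.z) == target

a = Vertex(0, 0, 0)
a.move(dx=2)
a.move(dz=1)
print(a.at_target((2, 0, 1)))  # True
```

The comparison against the goal coordinate is what drives the immediate correct/incorrect feedback message.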
This task is designed to help learners develop the basic concepts of plane vectors and understand the correspondence between vectors and coordinate points. By visualizing the movement of a virtual robot in a 2D plane, the interactive operation allows learners to directly observe the vectors generated as the robotic arm moves between coordinate points. Through these operations, learners gradually come to understand the direction and magnitude of vectors, along with the geometric meaning of coordinate differences and slopes in various problem-solving contexts, thereby enhancing their overall mathematical reasoning skills.
In the “Verification” phase, learners engage in vector addition through interactive reasoning and manipulation. The system presents a sequence of points (A → B → C → D → E) and guides learners to compute the cumulative sum of vectors $\overrightarrow{AB} + \overrightarrow{BC} + \overrightarrow{CD} + \overrightarrow{DE}$ and verify whether it equals the direct vector $\overrightarrow{AE}$. This activity not only reinforces the computational rules and geometric interpretation of vector addition but also enhances learners’ conceptual understanding of a vector as a cumulative displacement from the starting point to the endpoint. Through visual representation and hands-on interaction, learners deepen their understanding of vector chaining and total displacement in three-dimensional space. When learners drag the scroll bars to adjust the joint angles with the mouse, the corresponding robot arm axis on the screen rotates accordingly, as shown in the upper part of Figure 8. Pressing the Vectors button displays all vectors and their slopes. Learners can click any line segment to highlight the corresponding vector (in red) along with its coordinate values, as shown in the lower part of Figure 8.
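The verification the learners carry out can be expressed computationally: the chained sum $\overrightarrow{AB} + \overrightarrow{BC} + \overrightarrow{CD} + \overrightarrow{DE}$ should coincide with $\overrightarrow{AE}$. The point coordinates below are made up for illustration; any choice of points yields the same result, since intermediate terms cancel telescopically.

```python
def vec(p, q):
    """Vector from point p to point q, componentwise."""
    return tuple(qi - pi for pi, qi in zip(p, q))

def vec_add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

# Illustrative coordinates for the point sequence A -> B -> C -> D -> E.
points = {"A": (0, 0), "B": (2, 1), "C": (3, 4), "D": (5, 4), "E": (6, 7)}
path = ["A", "B", "C", "D", "E"]

# Accumulate AB + BC + CD + DE.
total = (0, 0)
for p, q in zip(path, path[1:]):
    total = vec_add(total, vec(points[p], points[q]))

print(total == vec(points["A"], points["E"]))  # True
```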
This task is designed to help learners understand how rotating an object in three-dimensional space affects its coordinate values and spatial orientation. Unlike previous tasks, the individual axis angles of the virtual robot are fixed and cannot be directly adjusted. This constraint encourages learners to concentrate on observing and reasoning about the robot’s overall rotational movement within the 3D space. The virtual robot is initially positioned at the origin of the coordinate system. Learners can rotate the entire robot model to observe changes in its direction and coordinate position. This rotation simulates real-world spatial transformations, allowing learners to understand how changes in object orientation correspond to adjustments in three-dimensional coordinates.
In addition to observing rotation, learners can also adjust the virtual robot’s position to explore object displacement within the 3D coordinate system. This task emphasizes the development of spatial awareness and relative positioning, guiding learners to construct an understanding of direction and location in 3D space. Through interactive manipulation, learners strengthen spatial visualization and reasoning skills—serving as a foundational experience for understanding the application of spatial vectors (Figure 9).
This task represents the final stage of the learning activity, with the primary goal of integrating concepts from previous tasks—spatial coordinates, vector representation, and rotation—to enhance learners’ understanding and application of spatial vectors in three-dimensional space. Compared to Task 2, which focuses on planar vectors, this task introduces more advanced features. Learners can manipulate both the rotational angles of individual robot axes and the overall orientation of the virtual robot, allowing them to fully observe and control changes in spatial coordinates and directionality.
During the interactive process, learners observe how the virtual robot’s rotation and displacement in 3D space change its joint coordinates. This helps them understand the relationship between the start and end points of a vector and construct a conceptual framework for spatial vector representation. In the “Assessment” phase, a goal-oriented task is presented: learners must manipulate the virtual robot to change the direction and position of point E (the end effector) so that it contacts a designated target block. Learners can adjust the angles of the A, B, C, and D axes to control the movement of the virtual arm, guiding its end to touch the block. Upon successful contact, the system displays the target’s coordinates, providing immediate feedback on the learner’s spatial estimation and vector reasoning skills. This task consolidates previously learned concepts while introducing greater complexity, effectively enhancing learners’ understanding and operational proficiency with spatial vectors in a three-dimensional environment (Figure 10).
3.5.4. Technical Implementation
The virtual robotic system was implemented with Unity 2019, using the link and joint parameters of the WLKATA Mirobot 6-axis robotic arm to construct a 1:1 scale virtual model [23]. The system employed the Denavit–Hartenberg (DH) convention to represent the kinematic chain, defining each joint’s coordinate system and corresponding transformation matrices. Using these DH parameters, the transformation matrix for each joint was calculated, enabling precise simulation of the arm’s motion in three-dimensional space. This approach allowed users to manipulate the virtual arm interactively, visualizing spatial vector relationships while maintaining fidelity to the real-world robotic specifications, supporting both learning and experimentation.
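The per-joint transformation the system computes can be sketched with the standard DH convention: each joint contributes a 4×4 homogeneous matrix built from its parameters (θ, d, a, α), and chaining the matrices maps end-effector coordinates into the base frame. The sample parameter values below are illustrative only, not the Mirobot’s actual DH table.

```python
import math

def dh_matrix(theta, d, a, alpha):
    """4x4 homogeneous transform for one joint (standard DH convention)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Chain two joints: the composed matrix expresses frame 2 in the base frame.
t01 = dh_matrix(math.radians(30), 0.127, 0.0, math.pi / 2)
t12 = dh_matrix(math.radians(45), 0.0, 0.108, 0.0)
t02 = mat_mul(t01, t12)
```

Multiplying one such matrix per joint, in order, is what lets the virtual arm’s pose be recomputed immediately whenever a joint angle changes.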
The virtual robotic arm was modeled using Blender, adhering to the specifications of the WLKATA Mirobot. This model was then imported into Unity, where it was animated to simulate realistic movements. The arm’s motion was controlled through inverse kinematics (IK), allowing users to manipulate joint parameters to achieve desired positions. This approach enabled precise control of the robotic arm’s end-effector, facilitating interactive learning experiences.
The system employed inverse kinematics to calculate the necessary joint parameters for the robotic arm to reach a specified target position. This method simplified the complex mathematical calculations traditionally required in robotics education. By adjusting joint angles, users could observe the resulting changes in the arm’s position and orientation, enhancing their understanding of spatial vector concepts.
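As a hedged illustration of the idea behind inverse kinematics, the closed-form solution for a two-link planar arm is shown below. This is far simpler than the solver a 6-axis Mirobot model requires, but it shows the computation the system hides from learners: given a target position, recover the joint angles that reach it.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles in radians for a 2-link planar arm."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics: joint angles back to end-effector position."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

s, e = two_link_ik(1.2, 0.5, 1.0, 1.0)
print(forward(s, e, 1.0, 1.0))  # ≈ (1.2, 0.5)
```

Running the recovered angles back through the forward kinematics reproduces the target, which mirrors how learners verify a pose by observing where the virtual arm actually ends up.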
Real-time feedback was integrated into the system to support learners in constructing accurate spatial understandings. The virtual environment provided immediate visual and contextual prompts, allowing students to observe the effects of their actions on the robotic arm’s movement. This interactive feedback mechanism aimed to reduce cognitive load and enhance conceptual comprehension, fostering greater learning motivation.