Article

Addressing Examination Timetabling Problem Using a Partial Exams Approach in Constructive and Improvement

1 Graduate School of Software & Information Science, Iwate Prefectural University, Iwate 020-0693, Japan
2 Faculty of Computer Systems & Software Engineering, Universiti Malaysia Pahang, Kuantan 25000, Pahang, Malaysia
3 University of Nottingham Malaysia Campus, Semenyih, Selangor 43500, Malaysia
4 Automated Scheduling, Optimisation and Planning (ASAP) Group, School of Computer Science, University of Nottingham, Jubilee Campus, Nottingham NG8 1BB, UK
* Author to whom correspondence should be addressed.
Computation 2020, 8(2), 46; https://doi.org/10.3390/computation8020046
Submission received: 10 April 2020 / Revised: 27 April 2020 / Accepted: 8 May 2020 / Published: 17 May 2020
(This article belongs to the Section Computational Engineering)

Abstract

The paper investigates a partial exam assignment approach for solving the examination timetabling problem. Current approaches involve scheduling all of the exams into time slots and rooms (i.e., producing an initial solution) and then continuing by improving the initial solution over a predetermined number of iterations. We propose a modification of this process that schedules partially selected exams into time slots and rooms, followed by improving the solution vector of partial exams. The process then continues with the next batch of exams until all exams are scheduled. The partial exam assignment approach utilises partial graph heuristic orderings with a modified great deluge algorithm (PGH-mGD). The PGH-mGD approach is tested on two benchmark datasets, a capacitated examination dataset from the 2nd international timetable competition (ITC2007) and an un-capacitated Toronto examination dataset. Experimental results show that PGH-mGD is able to produce quality solutions that are competitive with those of the previous approaches reported in the scientific literature.

1. Introduction

The examination timetabling problem has been intensively studied because it is complex and of practical significance in educational institutions. The problem involves a set of exams that have to be placed into a number of time slots and rooms, subject to numerous hard and soft constraints [1]. Hard constraints must be fully satisfied for the timetable to be feasible. Soft constraints should be satisfied as much as possible, with the minimisation of soft constraint violations providing a quality timetable. The examination timetabling problem is a real-world combinatorial optimisation problem that is difficult to solve because a large number of exams must be allocated under numerous constraints and limited resources (i.e., time slots and rooms). There are two types of examination timetabling problem: capacitated and un-capacitated. In the un-capacitated version, the capacity of the rooms is not considered, whereas the capacitated variant treats room capacity as a hard constraint.
Examination timetabling is an NP-hard scheduling problem, and thus many researchers have investigated stochastic methods such as heuristic and meta-heuristic approaches to find optimal or near-optimal solutions. To date, numerous approaches have been proposed in the scientific literature to solve the examination timetabling problem.
In earlier research, graph heuristic orderings were widely used for addressing timetabling problems [2,3,4]. Carter et al. [5] developed software for the examination timetabling problem in which several graph colouring heuristic orderings were successfully employed. Kahar and Kendall [6] used four graph heuristic orderings for the Universiti Malaysia Pahang examination timetabling problem (a real-world university examination timetabling problem), producing feasible solutions for all datasets that were of better quality than the university's own solutions. Asmuni et al. [7] proposed a fuzzy concept with graph heuristic orderings to construct feasible solutions for the timetabling problem. Pillay and Banzhaf [8] used a hierarchical combination of graph heuristic orderings in their proposed hyper-heuristic framework for the examination timetabling problem. Four heuristic combination lists were generated using low-level heuristics including Saturation degree (SD), Largest degree (LD), Largest weighted degree (LWD) and Largest enrolment (LE). A similar graph colouring hyper-heuristic approach for constructing exam timetables was also proposed by Sabar et al. [9]. Graph heuristic orderings have received considerable attention because they are straightforward and effective in constructing initial feasible solutions. However, they are not well suited to improving solutions.
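To make these orderings concrete, the following is a minimal sketch (not the authors' implementation; the `enrolments` data layout is an assumption) of the static LD and LE orderings computed from exam enrolment data:

```python
from itertools import combinations

# enrolments maps each exam to its set of students (assumed layout).
def conflict_graph(enrolments):
    """Weighted edges between exams sharing at least one student."""
    weights = {}
    for e1, e2 in combinations(enrolments, 2):
        shared = len(enrolments[e1] & enrolments[e2])
        if shared:
            weights[(e1, e2)] = weights[(e2, e1)] = shared
    return weights

def largest_degree(enrolments, weights):
    """LD: order exams by number of conflicting exams, descending."""
    deg = {e: sum(1 for (a, _) in weights if a == e) for e in enrolments}
    return sorted(enrolments, key=lambda e: -deg[e])

def largest_enrolment(enrolments):
    """LE: order exams by number of enrolled students, descending."""
    return sorted(enrolments, key=lambda e: -len(enrolments[e]))

enrolments = {"e1": {1, 2}, "e2": {2, 3}, "e3": {4}}
w = conflict_graph(enrolments)
print(largest_degree(enrolments, w))   # e1 and e2 conflict; e3 is isolated
print(largest_enrolment(enrolments))
```

SD, by contrast, is dynamic: it reorders the remaining exams at every step by the number of time slots still feasible for each exam, so it cannot be precomputed in this way.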
In the scheduling literature, the great deluge (GD) algorithm, a local search meta-heuristic, has been proposed to improve initial solution quality. Dueck [10] applied GD to the travelling salesman problem and observed that GD was able to perform better than simulated annealing (SA) and hill climbing (HC). Burke and Newall [11] utilised GD and other local search methods individually on the Toronto exam dataset and found that GD was superior in terms of exploration. Burke and Bykov [12] designed flex-GD for the examination timetabling problem. They increased the flexibility of GD by adding features such as an adaptive acceptance condition for each particular move and the repair of infeasible moves with Kempe chains. This approach was tested on the Toronto dataset and outperformed other reported approaches. Landa-Silva and Obit [13] proposed a variant of GD in which a non-linear decay rate and an increasing boundary level were introduced. The boundary level was decreased non-linearly to avoid the algorithm being too greedy, whereas convergence to a local optimum was avoided by increasing the boundary level. The approach was able to produce good quality solutions. Müller [14] used a GD algorithm combined with Iterative Forward Search (IFS) and HC for the examination timetabling problem. IFS produced the initial solution, which was then improved by HC until reaching a local optimum. GD was used to improve the solution further. This multi-phase approach was shown to be effective in the ITC2007 timetabling competition. McCollum et al. [15] implemented an extended GD to tackle the ITC2007 exam dataset. The proposed approach used a reheating mechanism in the GD algorithm when no improvement was found after a number of iterations. The methodology produced good quality solutions. Kahar and Kendall [16] highlighted the application of the GD algorithm for solving a real-world examination timetabling problem faced by Universiti Malaysia Pahang.
They employed a modified GD algorithm and experimented with different initial solutions and neighbourhood structures. The approach produced better results than the authors' previous approach of graph heuristic orderings [6]. Hybridising GD with other approaches has also been effective for university course timetabling problems. Abdullah et al. [17] combined GD with tabu search. The approach generated the best solution from four neighbourhood moves and increased the boundary level randomly when no improvement was found for a specific period. Experimental results were competitive compared to other reported works. Turabieh and Abdullah [18] hybridised GD with the Electromagnetic Mechanism (EM). This approach produced good results when tested on the Toronto and ITC2007 exam datasets. Fong et al. [19] hybridised a modified GD (Imperialist Nelder-Mead great deluge) with an artificial bee colony algorithm (ABC) to improve the global exploration capability of the basic ABC. This approach was shown to be effective on the course timetabling problem. Abuhamdah [20] used a modified GD for clustering problems in the medical domain. A list was created that stored previous boundary levels whenever a better solution was obtained; when no improvement occurred for a certain number of iterations, the boundary level was updated with a value selected randomly from this list. The approach performed better than basic GD on clustering problems. In addition, GD has been used successfully in other optimisation problems such as course timetabling [13], patient admission [21] and rough set attribute reduction [22].
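For reference, the basic GD acceptance rule that these variants build on can be sketched as follows (a toy illustration, not any of the cited implementations; the linear decay and the parameter values are assumptions):

```python
import random

def great_deluge(initial, cost, neighbour, iterations=2000):
    """Basic GD: accept a candidate that improves on the current solution,
    or whose cost is below a boundary level that decays linearly."""
    current = best = initial
    level = cost(initial)
    decay = level / iterations  # linear decay towards zero
    for _ in range(iterations):
        cand = neighbour(current)
        if cost(cand) <= cost(current) or cost(cand) <= level:
            current = cand
            if cost(current) < cost(best):
                best = current
        level -= decay  # lower the "water level"
    return best

# Toy usage: minimise x^2 over the integers with +/-1 moves.
random.seed(0)
sol = great_deluge(50, lambda x: x * x,
                   lambda x: x + random.choice([-1, 1]))
print(sol)
```

The modifications surveyed above (reheating, non-linear decay, boundary-level lists) all adjust how `level` evolves rather than the acceptance rule itself.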
Several other local search meta-heuristic approaches have been reported to address time scheduling problems. Burke and Bykov [23] employed late-acceptance hill climbing, a variant of hill climbing, for the examination timetabling problem. Here, a list of fixed length was used to delay the comparison of candidate solutions with the current best solution. Bykov and Petrovic [24] proposed the step counting hill climbing algorithm. The idea was based on employing the current cost as an acceptance boundary for the next several steps. This algorithm produced better solutions than hill climbing search methods. Battistutta et al. [25] employed SA with feature-based tuning for the timetabling problem. The tuning process selected the most important parameters and fixed the values of the others. This tuning mechanism was able to produce competitive results. SA for examination timetabling has also been reported in the literature [26]. The notable features of local search meta-heuristic approaches are that they are simple, can avoid convergence to local optima and require few parameter settings (one or two). However, the effectiveness of this strategy depends strongly on the neighbourhood structure and the quality of the initial solution.
Another category of solution approaches includes population-based meta-heuristic and hyper-heuristic approaches. Alinia Ahandani et al. [27] investigated a discrete particle swarm optimisation for the examination timetabling problem. The particle's position was updated using mutation and crossover, and improved using three types of local search. The approach provided satisfactory results when tested on the Toronto dataset. A similar approach to the timetabling problem combines the particle swarm optimisation algorithm with a local search approach [28]. Bolaji et al. [29] proposed a hybrid ABC for solving un-capacitated examination timetabling problems. The hybridisation of an ABC with a harmony search algorithm and local search was performed in two phases so that exploration and exploitation could be balanced. Another population-based approach, the prey-predator algorithm, has recently been used to solve some instances of the Toronto dataset [30]. Memetic algorithms have also been popular for the examination timetabling problem. An adaptive co-evolutionary memetic algorithm [31], a cellular memetic algorithm [32] and a memetic algorithm based on the Multi-objective Evolutionary Algorithm with Decomposition (MOEA/D) [32] have been reported to be effective in producing quality exam timetables. Demeester et al. [33] presented a hyper-heuristic framework based on tournament selection in a genetic algorithm. At the lower level of the hyper-heuristic, the number of move operations for each problem type was selected. Higher-level heuristics encompassed the selection processes of these low-level heuristics and four move acceptance criteria. In the study, random selection performed better than intelligent selection. The approach was implemented on both examination and course timetabling problems, and it tended to produce quality solutions. Anwar et al. [34] investigated a harmony search hyper-heuristic approach for the ITC2007 exam dataset.
A basic harmony search algorithm was employed as a high-level heuristic that controlled the low-level heuristics. Two neighbourhood structures, move and swap, were employed. The approach was able to produce some better results than state-of-the-art approaches. Recently, Pillay and Özcan [35] have examined two hyper-heuristic strategies for examination timetabling: arithmetic heuristics (AHH) and hierarchical heuristics (HHH). In general, the above hyper-heuristic and population-based approaches tend to be effective in producing quality timetable solutions, but they are complex and often require more parameter settings.
It is observed from the literature that examination timetabling is often addressed in two stages. In the first stage, one or more complete initial feasible solutions are constructed. In the second stage, the quality of the feasible solution(s) is improved [7,16,36]. Initial feasible solutions are usually produced using graph heuristic orderings, and the solutions are improved using meta-heuristics such as hill climbing, simulated annealing, tabu search, the GD algorithm and genetic algorithms. Frequently, in this sequential approach, the results produced by the improvement step are biased by the initial solutions, with good initial solutions tending to yield good improved solutions [16,37]. Moreover, improvement techniques are occasionally unable to generate quality solutions when the initial solution is poor (i.e., stuck in a local optimum) [38,39]. Hence, improvement algorithms must possess a good ability to explore the search space (breadth) as well as the capability to focus on promising search areas (depth). This makes the whole sequential search methodology dependent on the improvement strategy, which sometimes fails to work effectively. Therefore, another approach is required that encourages the search to produce quality solutions.
This paper proposes a new approach that combines partial graph heuristic orderings with a modified great deluge algorithm (PGH-mGD) for solving the examination timetabling problem. The strategy partially schedules a pool of exams, which are selected based on graph heuristic orderings and exam assignment value. These partially scheduled exams are improved using a modified GD. The procedure then schedules the next pool of selected examinations. This process continues until all examinations have been scheduled. The proposed approach is tested on two well-known examination timetabling problems, namely the Toronto and ITC2007 exam datasets. To evaluate its performance, PGH-mGD is compared with traditional graph heuristic orderings combined with a modified great deluge algorithm (TGH-mGD) and with related state-of-the-art methods.
In this paper, we use different graph heuristic orderings for solution construction because they are straightforward and effective in constructing feasible solutions. In earlier research, several static graph colouring approaches such as Largest degree (LD), Largest weighted degree (LWD) and Largest enrolment (LE) were reported for constructing timetables [5,6,40]. Another ordering, Saturation degree (SD), showed better performance than the static orderings due to its dynamic property [9]. Subsequently, SD was hybridised with static orderings and exhibited better results than SD alone in terms of solution construction [41,42]. We have used both hybridised and static approaches for constructing the timetable. The motivation for using GD in the improvement phase is its consistently good performance on different scheduling problems, including job shop scheduling [43], university timetabling problems [13,16,24,44], the vehicle routing problem [45], feature selection problems [46,47], the dynamic facility layout problem [48] and the patient admission problem [21]. In addition to its ability to avoid local optima, GD has only one parameter to be tuned. In the proposed approach, we use a modified GD algorithm with a reheating mechanism to better avoid local optima. This type of strategy has shown effectiveness in several scheduling problems [15,16,20,21].
The rest of the paper is structured as follows. Section 2 provides a brief introduction to the examination timetabling problem, followed by a description and mathematical formulations of the examination benchmark datasets (i.e., the Toronto and ITC2007 exam datasets). Section 3 gives a general overview of graph heuristic orderings and GD. Section 4 presents our proposed PGH-mGD and Section 5 describes TGH-mGD. The experimental setup for the examination timetabling problem is shown in Section 6. The experimental results are presented in Section 7, and the results are discussed and analysed in Section 8. Concluding remarks and suggestions for future research are provided in Section 9.

2. Examination Timetabling Problem

In this section, we give a general definition of the examination timetabling problem and then describe two examination timetabling benchmark datasets, namely the Toronto and ITC2007 datasets.

2.1. Definition

The examination timetabling problem can be defined as the process of allocating a set of examinations $E = \{e_1, e_2, \ldots, e_m\}$ into a limited number of time slots $T = \{t_1, t_2, \ldots, t_n\}$ and (optionally) rooms $R = \{r_1, r_2, \ldots, r_p\}$, subject to a set of constraints $C = \{c_1, c_2, \ldots, c_q\}$, where $m, n, p, q \in \mathbb{Z}^+$. There are two types of constraints: hard constraints and soft constraints. An examination timetable is feasible when all hard constraints are satisfied. A common hard constraint is that two examinations taken by the same student must be assigned to different time slots. Soft constraint satisfaction is associated with solution quality: the more the number of soft constraint violations is minimised, the better the quality of the resulting timetable. A simple example of a soft constraint is to spread the examinations as evenly as possible over the time periods.
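As a small illustration, the common clashing hard constraint can be checked as follows (a sketch under assumed data structures, not a formulation from any specific dataset):

```python
# timetable maps each exam to its time slot; enrolments maps each exam
# to its set of students. The timetable is infeasible if two exams that
# share a student occupy the same slot.
def is_feasible(timetable, enrolments):
    exams = sorted(enrolments)
    for a in range(len(exams)):
        for b in range(a + 1, len(exams)):
            e1, e2 = exams[a], exams[b]
            if timetable[e1] == timetable[e2] and enrolments[e1] & enrolments[e2]:
                return False
    return True

enrolments = {"maths": {"ann", "bob"}, "physics": {"bob"}, "art": {"cat"}}
print(is_feasible({"maths": 1, "physics": 2, "art": 1}, enrolments))  # True
print(is_feasible({"maths": 1, "physics": 1, "art": 2}, enrolments))  # False
```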
The examination timetabling problem is categorised into capacitated and un-capacitated variants. In capacitated examination timetabling, room capacity is considered a hard constraint, whereas the un-capacitated variant does not consider room capacity. Two datasets frequently used for the examination timetabling problem are the Toronto and ITC2007 exam datasets. The Toronto dataset is un-capacitated and more popular in the research community, whereas the ITC2007 exam dataset is capacitated, more realistic and more difficult to solve. These two datasets are described in the following subsections.

2.2. Toronto Dataset

2.2.1. Problem Description

The most widely used examination benchmark dataset was introduced by Carter et al. [5]. This dataset is also known as the Toronto dataset and is available in Reference [49]. It is an un-capacitated examination timetabling benchmark dataset that assumes an unlimited number of seats. The Toronto dataset contains 13 instances, shown in Table 1. In this table, conflict density indicates the difficulty of an instance with respect to examinations in conflict; it is measured as the number of conflicting examination pairs divided by the total number of examination pairs.
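Under this definition, conflict density can be computed directly from the conflict matrix; the sketch below assumes the matrix is given as nested lists and counts ordered pairs:

```python
# c[i][j] = number of students taking both exam i and exam j (c[i][i] = 0).
def conflict_density(c):
    n = len(c)
    conflicting = sum(1 for i in range(n) for j in range(n)
                      if i != j and c[i][j] > 0)
    return conflicting / (n * (n - 1))

c = [[0, 2, 0],
     [2, 0, 1],
     [0, 1, 0]]
print(conflict_density(c))  # 4 of the 6 ordered pairs conflict
```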
The Toronto dataset has one hard and one soft constraint, which are defined as follows:
  • Hard constraint: One student can attend only one exam at a time (it is known as a clashing constraint).
  • Soft constraint: Conflicting examinations are spread as far as possible. This constraint measures the quality of the timetable.

2.2.2. Problem Formulation

The objective of the Toronto dataset problem is to satisfy the hard constraint and minimise the penalty value of the soft constraint violations. The formulations of the Toronto examination timetabling problem as well as the objective function, which are adopted from Carter et al. [5], are defined as follows.
  • $N$ is the number of exams.
  • $M$ is the total number of students.
  • $E_i$ is an examination, $i \in \{1, \ldots, N\}$.
  • $T$ is the number of available time slots.
  • $(c_{ij})_{N \times N}$ is the conflict matrix, where each element $c_{ij}$ is the number of students taking both examinations $i$ and $j$, with $i, j \in \{1, \ldots, N\}$; $c_{ij} = n$ if examinations $i$ and $j$ are taken by $n$ students $(n > 0)$, otherwise $c_{ij} = 0$.
  • $t_k$ $(1 \le t_k \le T)$ is the time slot assigned to examination $k$, $k \in \{1, \ldots, N\}$.
The objective function of the problem is expressed in Equation (1). Its aim is to minimise the proximity cost, that is, reducing the penalty value of the soft constraint violation as much as possible.
$$\text{minimise} \quad \frac{1}{M} \sum_{i=1}^{N-1} F(i), \qquad (1)$$
where
$$F(i) = \sum_{j=i+1}^{N} c_{ij} \times proximity(t_i, t_j) \qquad (2)$$
and
$$proximity(t_i, t_j) = \begin{cases} 2^5 / 2^{|t_i - t_j|}, & \text{if } 1 \le |t_i - t_j| \le 5 \\ 0, & \text{otherwise,} \end{cases} \qquad (3)$$
subject to
$$\sum_{i=1}^{N-1} \sum_{j=i+1}^{N} c_{ij} \times \lambda(t_i, t_j) = 0, \qquad (4)$$
where
$$\lambda(t_i, t_j) = \begin{cases} 1, & \text{if } t_i = t_j \\ 0, & \text{otherwise.} \end{cases}$$
Equation (2) presents the penalty cost of examination $i$; this cost is calculated by multiplying the proximity value by the corresponding conflict matrix entry. Equation (3) gives the proximity value between two examinations: a penalty of 16 is applied when a student's two examinations are assigned consecutively, and a penalty of 8 when there is a one-slot gap between them. Similarly, penalties of 4, 2 and 1 are applied for gaps of 2, 3 and 4 time slots, respectively, and a penalty of zero for gaps of more than 4 time slots. Equation (4) expresses the hard constraint of the problem.
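The proximity cost of Equations (1)-(3) can be evaluated directly; the sketch below uses 0-indexed Python lists for the conflict matrix and the slot assignment (data layout assumed):

```python
# c[i][j] = number of students taking both exams i and j; t[i] = slot of exam i.
def proximity_cost(c, t, num_students):
    n = len(c)
    total = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            gap = abs(t[i] - t[j])
            if c[i][j] > 0 and 1 <= gap <= 5:
                # weight 2^5 / 2^gap: 16, 8, 4, 2, 1 for gaps 1..5
                total += c[i][j] * (2 ** (5 - gap))
    return total / num_students

c = [[0, 3], [3, 0]]  # 3 students take both exams
print(proximity_cost(c, [0, 1], num_students=10))  # consecutive slots: weight 16
```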

2.3. ITC2007 Exam Dataset

2.3.1. Problem Description

The 2nd international timetable competition (ITC2007) examination dataset was established to enable researchers to explore real-world examination timetabling problems and to reduce the gap between theory and practice. It is a capacitated problem in which a set of examinations is allocated to limited time slots and rooms subject to numerous real-world hard and soft constraints. The dataset requires each examination to be allocated to at most one room and one time slot. Although more than one examination can be allocated to a single room, splitting an individual examination among several rooms is strictly prohibited.
There are eight publicly available instances in the ITC2007 exam dataset, and each instance has several hard and soft constraints. They are shown in Table 2 and are available in Reference [50]. This dataset contains many additional elements compared to previous benchmark datasets, thus increasing the depth and difficulty of the problem. Some important features, including the number of time slots, the number of rooms, room capacity, time slot length, period hard constraints, room hard constraints, the front load expression and the various hard and soft constraints, are highlighted as follows:
  • Number of examinations: Every instance has a specific number of examinations. Each individual examination has a definite duration with a set of students who enrolled for the examination.
  • Number of time slots: Every instance has a specific number of time slots or periods. Some time slots carry penalties. In addition, penalties are associated with two examinations in a row, two examinations in a day, examination spreading and late periods.
  • Number of rooms: Every instance has a set of predefined rooms into which examinations have to be placed. This collection of rooms is available in every time slot of an instance. Some rooms are also associated with penalties.
  • Room capacity: Different rooms have different capacities. The ITC2007 exam dataset does not allow splitting an examination across several rooms, although multiple examinations can be assigned to a single room.
  • Time slot length: The time slots of an instance may have the same or different durations. A penalty is incurred when examinations of different durations share the same room and period.
  • Period hard constraint-AFTER: For any pair of examinations (e1, e2), this constraint means e1 must be assigned after e2. Some instances have more complex constraints where the same examinations appear in more than one AFTER constraint, for instance, e1 AFTER e2; e2 AFTER e3. This increases the complexity of the dataset (see Exam_1 and Exam_8).
  • Period hard constraint-EXCLUSION: For any pair of examinations (e1, e2), this constraint means e1 must not be assigned to the same time slot as e2.
  • Period hard constraint-EXAM-COINCIDENCE: For any pair of examinations (e1, e2), this constraint means e1 and e2 must be placed in the same time slot.
  • Room hard constraint-EXCLUSIVE: This constraint indicates that an examination must be assigned to a dedicated room, and no other examination may be assigned to that room. Only three instances (Exam_2, Exam_3 and Exam_8) have this constraint.
  • Front load expression: Examinations with a large number of students are expected to be assigned at the beginning of the examination schedule. Every instance defines the number of such larger examinations, together with the number of final periods to be considered for penalty. If any larger examination falls within those periods, an associated penalty is incurred.
A feasible examination timetable is obtained when each examination of an instance is assigned to a single room and time slot without violating any hard constraints. The hard constraints of the ITC2007 exam dataset are defined as follows:
  • H1. One student can sit only one exam at a time.
  • H2. The size of an exam must not exceed the capacity of the room.
  • H3. The duration of an examination must not exceed the duration of the period.
  • H4. There are three types of examination ordering that must be respected:
    - Precedences: exam i must be scheduled before exam j.
    - Exclusions: exam i and exam j must not be placed in the same period.
    - Coincidences: exam i and exam j must be scheduled in the same period.
  • H5. Room exclusiveness must be kept. For instance, an exam i must take place only in room number 206.
The quality of an examination timetable is related to satisfying the soft constraints, which may, however, be violated if required. The soft constraints of the ITC2007 exam dataset are summarised as follows:
  • S1. Two Exams in a Row: A penalty is incurred if a student sits two exams in consecutive periods on the same day.
  • S2. Two Exams in a Day: A penalty is incurred if a student sits two exams in non-consecutive periods on the same day.
  • S3. Spreading of Exams: The exams should be spread as evenly as possible over the periods so that student preferences, such as avoiding closely placed exams, are preserved.
  • S4. Mixed Duration: Scheduling exams of different durations in the same room and period should be avoided.
  • S5. Scheduling of Larger Exams: A penalty is incurred if larger exams are assigned toward the end of the timetable.
  • S6. Room Penalty: Some rooms are restricted, and an associated penalty is imposed if they are used.
  • S7. Period Penalty: Some periods are restricted, and an associated penalty is imposed if they are used.
The objective of solving the ITC2007 exam dataset is to satisfy all five hard constraints and minimise the violations of seven soft constraints (penalty) so that a quality examination timetable can be produced. The mathematical formulation of the ITC2007 exam dataset is shown below.

2.3.2. Problem Formulation

A formal model for the ITC2007 exam dataset presented here has been adopted from the model proposed by McCollum et al. [51]. The formal model, i.e., the mathematical formulation, provides a rigorous definition of the problem. To develop the mathematical model of the ITC2007 exam dataset, the following notation from Reference [51] has been used.
  • $E$: set of examinations.
  • $s_i^E$: size (number of enrolled students) of examination $i \in E$.
  • $d_i^E$: duration of examination $i \in E$.
  • $f_i^E$: a Boolean value that is 1 iff examination $i$ is subject to front load penalties, 0 otherwise.
  • $D$: set of durations used, i.e., $\{d_i^E : i \in E\}$.
  • $u_{id}^D$: a Boolean value that is 1 iff examination $i$ has duration type $d \in D$.
  • $U_{dpr}^D$: a Boolean value that is 1 iff duration type $d$ is used in period $p$ and room $r$, 0 otherwise.
  • $S$: set of students.
  • $t_{is}$: a Boolean value that is 1 iff student $s$ takes examination $i$, 0 otherwise.
  • $R$: set of rooms.
  • $s_r^R$: size (capacity) of room $r \in R$.
  • $w_r^R$: a weight indicating a specific penalty for using room $r$.
  • $P$: set of periods.
  • $d_p^P$: duration of period $p \in P$.
  • $f_p^P$: a Boolean value that is 1 iff period $p$ is subject to FRONTLOAD penalties, 0 otherwise.
  • $w_p^P$: a weight indicating a specific penalty for using period $p$.
  • $y_{pq}$: a Boolean value that is 1 iff periods $p$ and $q$ are on the same day, 0 otherwise.
  • $X_{ip}^P = 1$ when examination $i$ is in period $p$, 0 otherwise.
  • $X_{ir}^R = 1$ when examination $i$ is in room $r$, 0 otherwise.
  • $X_{ipr}^{PR} = 1$ when examination $i$ is in room $r$ and period $p$, 0 otherwise.
  • $w^{2R}$: weight for two in a row.
  • $w^{2D}$: weight for two in a day.
  • $w^{PS}$: weight for period spread.
  • $w^{NMD}$: weight for no mixed duration.
  • $w^{FL}$: weight for the front load penalty.
  • $C_s^{2R}$: two in a row penalty for student $s$.
  • $C^{2R}$: two in a row penalty for the entire set of students.
  • $C_s^{2D}$: two in a day penalty for student $s$.
  • $C^{2D}$: two in a day penalty for the entire set of students.
  • $C_s^{PS}$: period spread penalty for student $s$.
  • $C^{PS}$: period spread penalty for the entire set of students.
  • $C^{NMD}$: no mixed duration penalty.
  • $C^{FL}$: front load penalty.
  • $C^P$: soft period penalty.
  • $C^R$: soft room penalty.
  • $H_{aft}$: set of examination pairs where, for each pair $(e_1, e_2) \in H_{aft}$, examination $e_1$ must be assigned after examination $e_2$.
  • $H_{coin}$: set of examination pairs where, for each pair $(e_1, e_2) \in H_{coin}$, examinations $e_1$ and $e_2$ must be assigned to the same period.
  • $H_{excl}$: set of examination pairs where, for each pair $(e_1, e_2) \in H_{excl}$, examinations $e_1$ and $e_2$ must not be assigned to the same period.
  • $H_{sole}$: set of examinations where each examination $e \in H_{sole}$ is allocated to a room $r$ alone, with no other examination allocated to that room.
If a student s takes two examinations i and j, where j is placed in the period immediately after the period used for examination i and both examinations are held on the same day, then $C_s^{2R}$ is incremented by 1. Therefore, the mathematical model of soft constraint S1 is as follows:
$$C_s^{2R} = \sum_{\substack{i,j \in E \\ j \neq i}} \; \sum_{\substack{p,q \in P \\ q = p+1,\; y_{pq} = 1}} t_{is}\, t_{js}\, X_{ip}^P\, X_{jq}^P.$$
Considering all students,
$$C^{2R} = \sum_{s \in S} C_s^{2R}.$$
If a student s takes two examinations i and j, where they are placed in non-consecutive periods but held on the same day, then $C_s^{2D}$ is incremented by 1. Therefore, the mathematical model of soft constraint S2 is as follows:
$$C_s^{2D} = \sum_{\substack{i,j \in E \\ j \neq i}} \; \sum_{\substack{p,q \in P \\ q > p+1,\; y_{pq} = 1}} t_{is}\, t_{js}\, X_{ip}^P\, X_{jq}^P.$$
Considering all students,
$$C^{2D} = \sum_{s \in S} C_s^{2D}.$$
If a student s takes two examinations i and j that are placed in distinct periods, with j after i and at most g periods between them, then $C_s^{PS}$ is incremented by 1. Therefore, the mathematical model of soft constraint S3 is as follows:
$$C_s^{PS} = \sum_{\substack{i,j \in E \\ j \neq i}} \; \sum_{\substack{p,q \in P \\ p < q \le p+g}} t_{is}\, t_{js}\, X_{ip}^P\, X_{jq}^P.$$
Considering all students,
$$C^{PS} = \sum_{s \in S} C_s^{PS}.$$
$C_{pr}^{NMD}$ is a non-negative penalty equal to the number of different durations in a period-room pair minus one. Therefore, the mathematical model of soft constraint S4 is as follows:
$$\forall p \in P,\; r \in R: \quad 1 + C_{pr}^{NMD} \ge \sum_{d \in D} U_{dpr}^D \quad \text{and} \quad C_{pr}^{NMD} \ge 0.$$
The overall penalty,
$$C^{NMD} = \sum_{p \in P} \sum_{r \in R} C_{pr}^{NMD}.$$
Front load penalties are applied when $f_i^E = 1$ and $f_p^P = 1$. Therefore, the mathematical model of soft constraint S5 is as follows:
C F L = i E p P f i E f p P X i p P
The mathematical model of room penalties of soft constraint S6 is as follows:
$$C^{R} = \sum_{r \in R} \sum_{i \in E} W_r^{R} X_{ir}^{R}.$$
The mathematical model of period penalties of soft constraint S7 is as follows:
$$C^{P} = \sum_{p \in P} \sum_{i \in E} W_p^{P} X_{ip}^{P}.$$
To formulate the objective function, each soft-constraint violation count is multiplied by its weight factor, and the results are summed. The objective function is presented below; its overall goal is to minimise the violation of soft constraints. Table 3 presents the weights associated with the soft constraints for the ITC2007 exam dataset. No weights appear with $C^{P}$ and $C^{R}$ in the objective function because they are already included in their definitions. Interested readers can find a detailed description of this examination track, its constraints and its objective function in References [51,52].
$$\text{minimise} \quad W^{2R} C^{2R} + W^{2D} C^{2D} + W^{PS} C^{PS} + W^{NMD} C^{NMD} + W^{FL} C^{FL} + C^{R} + C^{P}$$
subject to the following hard constraints (H1 to H5):
$$\forall p \in P,\; \forall s \in S: \quad \sum_{i \in E} t_{is} X_{ip}^{P} \le 1$$
$$\forall p \in P,\; \forall r \in R: \quad \sum_{i \in E} s_i^{E} X_{ipr}^{PR} \le s_r^{R}$$
$$\forall p \in P,\; \forall i \in E: \quad d_i^{E} X_{ip}^{P} \le d_p^{P}$$
$$\forall (i,j) \in H_{aft},\; \forall p, q \in P \text{ with } p \le q: \quad X_{ip}^{P} + X_{jq}^{P} \le 1$$
$$\forall (i,j) \in H_{coin},\; \forall p \in P: \quad X_{ip}^{P} = X_{jp}^{P}$$
$$\forall (i,j) \in H_{excl},\; \forall p \in P: \quad X_{ip}^{P} + X_{jp}^{P} \le 1$$
$$\forall i \in H_{sole},\; \forall j \in E,\; \forall p \in P,\; \forall r \in R,\; j \neq i: \quad X_{ip}^{P} + X_{ir}^{R} + X_{jp}^{P} + X_{jr}^{R} \le 3.$$
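As a small illustration of how the weighted objective above is evaluated, the sketch below sums hypothetical violation counts. The function name, count values and weight values are illustrative only; they are not the official Table 3 weights.

```python
# Sketch of the ITC2007 objective evaluation, assuming the soft-constraint
# violation counts have already been computed from a timetable.
def objective(counts, weights):
    """Weighted sum of soft-constraint violations. No weights are applied
    to C_R and C_P because they are already folded into their definitions."""
    return (weights["2R"] * counts["2R"]
            + weights["2D"] * counts["2D"]
            + weights["PS"] * counts["PS"]
            + weights["NMD"] * counts["NMD"]
            + weights["FL"] * counts["FL"]
            + counts["R"] + counts["P"])

# Made-up counts and weights for demonstration.
counts = {"2R": 3, "2D": 1, "PS": 7, "NMD": 2, "FL": 4, "R": 10, "P": 5}
weights = {"2R": 7, "2D": 5, "PS": 10, "NMD": 1, "FL": 5}
print(objective(counts, weights))  # 21 + 5 + 70 + 2 + 20 + 10 + 5 = 133
```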

3. Background

3.1. Graph Heuristic Orderings

The examination timetabling problem can be modelled as a graph colouring problem in which vertices represent examinations and an edge between a pair of vertices represents conflicting examinations, that is, examinations that have at least one student in common and therefore cannot take place in the same time slot. As graph colouring is an NP-hard problem, graph heuristic orderings are used to tackle the problem in polynomial time [5]. In the context of examination timetabling, a graph heuristic ordering measures the difficulty index of the exams and orders them so that the exam that is most difficult to schedule is assigned first to a suitable time slot. Graph heuristic orderings are popular approaches for constructing an initial timetable [5,53]. The most commonly used orderings in the literature are described as follows:
  • Largest degree (LD): In this ordering, the number of conflicts is counted for each examination by checking its conflict with all other examinations. Examinations are then arranged in decreasing order such that exams with the largest number of conflicts are scheduled first.
  • Largest weighted degree (LWD): This ordering is similar to LD. The difference is that in the ordering process the number of students associated with the conflict is considered.
  • Largest enrolment (LE): The examinations are ordered based on the number of registered students for each exam.
  • Saturation degree (SD): Examination ordering is based on the availability of the remaining time slots where unscheduled examinations with the lowest number of available time slots are given priority to be scheduled first. The ordering is dynamic as it is updated after scheduling each exam.
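The four orderings above can be sketched on a toy instance as follows; the `enrol` data (exam to set of student ids) is hypothetical and not taken from any benchmark dataset.

```python
# Toy illustration of the LD, LWD, LE and SD orderings.
enrol = {"e1": {1, 2}, "e2": {2, 3, 4}, "e3": {4}, "e4": {1, 3, 5}}
exams = list(enrol)

def conflicts(i, j):
    """Number of students shared by exams i and j."""
    return len(enrol[i] & enrol[j])

# LD: largest number of conflicting exams first.
ld = sorted(exams, key=lambda i: -sum(1 for j in exams if j != i and conflicts(i, j)))
# LWD: largest total number of students involved in conflicts first.
lwd = sorted(exams, key=lambda i: -sum(conflicts(i, j) for j in exams if j != i))
# LE: largest enrolment first.
le = sorted(exams, key=lambda i: -len(enrol[i]))

# SD: fewest remaining feasible time slots first. SD is dynamic and is
# recomputed after each assignment; shown here for one partial schedule.
slots = {"t1", "t2", "t3"}
assigned = {"e1": "t1"}
def available(i):
    return len(slots - {assigned[j] for j in assigned if conflicts(i, j)})
sd = sorted((i for i in exams if i not in assigned), key=available)
```

Note that Python's sort is stable, so exams with equal difficulty keep their relative order, which is one simple tie-breaking policy.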

3.2. Great Deluge Algorithm

The great deluge (GD) algorithm is a local search meta-heuristic proposed by Dueck [10]. It is inspired by the behaviour of a hill climber seeking higher ground to avoid rising water levels during a deluge. Like simulated annealing (SA), the algorithm avoids local optima by accepting worse solutions. However, SA uses a probabilistic function for accepting a worse solution, whereas GD uses a more deterministic approach. GD is also less dependent on parameter tuning than SA; its only parameter is a decay rate, which controls the boundary or acceptance level. In a minimisation problem, the initial boundary level (water level) usually starts at the quality of the initial solution. During the search, a new candidate solution is accepted if it is better than or equal to the current solution. A solution worse than the current one is accepted only when its quality is less than or equal to the predefined boundary level B. The boundary level is then lowered according to the decay rate ($\Delta B$). This parameter is important, as the speed of the search depends on it. The procedure of the GD algorithm is shown in Algorithm 1.
Algorithm 1. GD algorithm.
Computation 08 00046 i001
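A minimal sketch of Algorithm 1 in Python is shown below: the boundary B starts at the initial cost and is lowered by a fixed decay rate each iteration. The function names, the toy objective and the neighbour move are illustrative placeholders.

```python
import random

# Minimal great deluge sketch for a minimisation problem.
def great_deluge(s0, f, neighbour, iterations, decay):
    s, best = s0, s0
    B = f(s0)                          # water level starts at initial cost
    for _ in range(iterations):
        cand = neighbour(s)
        # accept improving moves, or worse moves still under the water level
        if f(cand) <= f(s) or f(cand) <= B:
            s = cand
        B -= decay                     # lower the water level
        if f(s) < f(best):
            best = s
    return best

# Toy usage: minimise x^2 over the integers via +/-1 moves.
random.seed(1)
sol = great_deluge(10, lambda x: x * x,
                   lambda x: x + random.choice([-1, 1]), 500, 0.5)
```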

4. Proposed Method: PGH-mGD

4.1. Definition and an Illustrative Example

In this section, we describe the proposed PGH-mGD approach for the examination timetabling problem. The approach partially schedules a group of exams that are selected based on a graph heuristic ordering and an exam assignment value v. These partially scheduled exams are then improved using a modified GD algorithm. The procedure continues with the next group of exams selected based on v and the graph heuristic ordering, and repeats until all exams have been scheduled. The exam assignment value v plays an important role in the partial scheduling of the exams and is defined as follows: let N be the number of exams; then v is an integer with $0 < v \le N$ giving the number of exams selected for scheduling from the unscheduled list. Here we express v as a percentage of the total number of exams N. For instance, if N equals 165, a v value of 25% corresponds to 41 exams (41.25 rounded) being chosen for scheduling.
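The batch selection implied by v can be sketched as follows, rounding to the nearest integer as in the paper's example (25% of 165 giving 41); the function names are illustrative.

```python
# Number of exams selected per batch for a given v (percentage of N),
# using round-half-up so that 41.25 -> 41 and 4.5 -> 5.
def batch_size(n_exams, v_percent):
    return int(n_exams * v_percent / 100 + 0.5)

# Successive batches taken from an already-ordered exam list.
def batches(ordered_exams, v_percent):
    k = batch_size(len(ordered_exams), v_percent)
    for start in range(0, len(ordered_exams), k):
        yield ordered_exams[start:start + k]

print(batch_size(165, 25))                            # 41
print([len(b) for b in batches(list(range(9)), 50)])  # [5, 4]
```

With nine exams and v = 50% this yields a batch of five followed by a batch of four, matching the Figure 1 example below.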
A simple example is presented in Figure 1 to illustrate the overall PGH-mGD approach. Assume that we have to schedule nine examinations $e_1, e_2, \dots, e_9$ into four time slots $t_1, t_2, t_3, t_4$ and three rooms $r_1, r_2, r_3$. Initially, the timetable is empty, and all nine examinations are ordered according to a graph heuristic ordering. Let v equal 50%, so five exams (i.e., $e_7$, $e_3$, $e_5$, $e_9$ and $e_1$) are selected for scheduling. These five exams are assigned to random time slots and rooms while maintaining feasibility, resulting in a partially constructed timetable. This timetable is then improved using a modified GD algorithm, aiming to reduce the penalty cost. After the improvement, PGH-mGD selects the next pool of exams (i.e., $e_8$, $e_4$, $e_2$ and $e_6$) from the ordered list, which has been rearranged using the graph heuristic ordering. Once these four examinations are assigned, the timetable contains all nine examinations. The timetable is improved once more, and finally a complete timetable is returned.

4.2. PGH-mGD Algorithm

PGH-mGD deals with partial solutions followed by improvement until all exams have been scheduled. The PGH-mGD algorithm is shown in Algorithm 2. In addition to the LD, LWD and LE orderings, we use three hybrid graph heuristic orderings, namely SD-LD, SD-LWD and SD-LE, to measure exam difficulty. In SD-LD, for example, exams are ordered according to SD followed by LD, and the exam at the top of the list is scheduled first. Initially, a graph heuristic ordering is selected from the heuristic list (line 1). The exam assignment value v is set, indicating how many exams will be scheduled at a time (line 2). An iterative process starts by ordering the unscheduled exams using the selected heuristic ordering (line 4). Next, a subset of exams, called the current partial exams set, is selected in sequence from the ordered exam set based on v (line 5). Subject to the required hard constraints, these partial exams are then scheduled into predefined time slots and rooms (for the ITC2007 exam dataset). During scheduling, if an exam fails to be assigned to a time slot after several tries, an exam resolution manager attempts to handle it (line 6). Partial exam scheduling and the exam resolution manager are described in Algorithms 3 and 4, respectively. When exams have been successfully scheduled, they are removed from the unscheduled exam set and inserted into the scheduled exam set (line 8). Next, the penalty cost is calculated based on the scheduled exams (line 9). The modified GD algorithm (see Algorithm 5) is then used to improve the quality of the partially scheduled exams with the aim of satisfying the soft constraints (line 10). This process repeats for the next batch of partial exams until all exams have been scheduled. Finally, the procedure checks that the solution vector satisfies all hard constraints and that no examinations remain in the blacklist (line 11). The penalty value of the solution vector is then stored as the final solution (line 12).
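The overall loop of Algorithm 2 can be sketched as follows; the ordering, scheduling and improvement steps are passed in as placeholder callables, and all names are illustrative rather than the authors' code.

```python
# High-level sketch of the PGH-mGD loop (order / schedule / improve are
# stand-ins for the graph heuristic ordering, Algorithm 3 and Algorithm 5).
def pgh_mgd(exams, v_percent, order, schedule_partial, improve):
    unscheduled = list(exams)
    solution = {}                                   # exam -> (time slot, room)
    while unscheduled:
        unscheduled = order(unscheduled)            # graph heuristic ordering
        k = max(1, int(len(exams) * v_percent / 100 + 0.5))
        batch = unscheduled[:k]                     # current partial exams set
        solution.update(schedule_partial(batch, solution))  # feasible assignment
        unscheduled = unscheduled[k:]
        solution = improve(solution)                # modified great deluge
    return solution
```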
Algorithm 2. PGH-mGD approach.
Computation 08 00046 i002
Algorithm 3. Scheduling partial exams.
Computation 08 00046 i003

4.3. Scheduling Procedure of Partial Exams

Algorithm 3 presents the partial exam scheduling procedure. The selected exams are kept in a partial ordering list (line 1). An iterative procedure starts by selecting the first exam e in the list (line 3). A pseudo-random number is generated for a time slot t and a room r (for the ITC2007 exam dataset) to schedule this exam (line 5). If the hard constraints are satisfied, the exam is removed from the partial ordering list and added to the partial scheduling list (line 8). The next exam is then selected, and so on, until all exams have been successfully scheduled. When it is not possible to schedule an exam after a few iterations (line 10), a mechanism called ExamResolutionManager (see Algorithm 4) is employed to assist in scheduling the exam (line 11). If this mechanism successfully schedules exam e, the exam is removed from the partial ordering list and added to the partial scheduling list (lines 12–13). Otherwise, exam e is moved to the blacklist (lines 14–15). The above procedure repeats until all exams have been processed. The partial scheduling list is set as the current partially scheduled examinations (line 16). Finally, if the blacklist contains unscheduled examinations, they are given high priority in the next attempt (line 17); that is, in the next partial ordering selection, blacklisted examinations are ordered (and scheduled) first, followed by the other examinations in the unscheduled list.
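The steps above can be sketched as follows; `feasible` and `resolver` are placeholder callables (the latter standing in for Algorithm 4), and the names are illustrative.

```python
import random

# Sketch of Algorithm 3: try random (slot, room) pairs for each exam; after
# max_tries failures, fall back to a resolver; exams the resolver cannot
# place go to the blacklist and are retried first in the next batch.
def schedule_partial(batch, slots, rooms, feasible, resolver, max_tries=100):
    scheduled, blacklist = {}, []
    for exam in batch:
        placed = False
        for _ in range(max_tries):
            p, r = random.choice(slots), random.choice(rooms)
            if feasible(exam, p, r, scheduled):
                scheduled[exam] = (p, r)
                placed = True
                break
        if not placed:
            assignment = resolver(exam, scheduled)   # ExamResolutionManager
            if assignment is not None:
                scheduled[exam] = assignment
            else:
                blacklist.append(exam)
    return scheduled, blacklist
```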

4.4. Exam Resolution Manager Procedure

Algorithm 4 illustrates the ExamResolutionManager procedure. As mentioned previously, when a particular exam cannot be scheduled after several tries, it is considered a complexExam (lines 1–2). From the current partial scheduling vector, the number of exams conflicting with the complexExam in each time slot is calculated (lines 3–5). The time slots are then sorted in ascending order of the number of conflicting exams (line 6). Next, the time slot with the fewest exams conflicting with the complexExam is chosen (line 8). The conflicting exams are moved to different time slots while maintaining feasibility (line 9). If they are moved successfully, the complexExam is assigned to the selected time slot and room, the partial scheduled vector is updated, and the procedure terminates, indicating that the complexExam has been scheduled (lines 10–14). Otherwise, the next least-conflicted time slot is taken, and the same process is executed until all available time slots have been checked.
Algorithm 4. ExamResolutionManager procedure.
Computation 08 00046 i004
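A compact sketch of this conflict-resolution logic is shown below; `conflict` and `relocate` are placeholder callables, and the function name is illustrative.

```python
# Sketch of Algorithm 4: count the complexExam's conflicts per time slot,
# try the least-conflicted slot first, and attempt to relocate the clashing
# exams; return the freed slot on success, or None (blacklist) on failure.
def exam_resolution_manager(complex_exam, schedule, slots, conflict, relocate):
    by_slot = {p: [e for e, (q, _) in schedule.items()
                   if q == p and conflict(complex_exam, e)]
               for p in slots}
    for p in sorted(slots, key=lambda q: len(by_slot[q])):  # fewest conflicts first
        if relocate(by_slot[p], schedule):   # move clashing exams elsewhere
            return p                         # slot now free for complex_exam
    return None
```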

4.5. Improvement Using Modified GD

Algorithm 5 shows the proposed modified GD algorithm used to improve the partially constructed solution. At the start, a partial solution vector s, generated from a graph heuristic ordering (Algorithm 3), is taken as the initial solution. Its penalty value $f(s)$ is calculated and also set as the initial boundary level B (lines 1–3). Then the number of iterations I, the neighbourhood structure n and the desired value D are defined (lines 4–6). The decay rate $\beta$ is initialised dynamically as $(B - D)/I$, and s is set as the best solution $s_{best}$ (lines 7–8). Until the stopping condition is met, the following steps are executed. Neighbourhood structures are applied to the current solution s to create a set of new solutions; among these, one is chosen as the candidate solution $s^*$ and its penalty cost $f(s^*)$ is calculated (line 10). When this penalty cost is less than or equal to the current cost, or less than or equal to the boundary level B, the candidate solution $s^*$ becomes the current solution (lines 11–12). Next, the boundary level is updated as $B \leftarrow B - \beta \cdot random(1, 5)$ (line 13). The candidate solution $s^*$ is set as the best solution only if $f(s^*)$ is less than or equal to $f(s_{best})$ (lines 14–15). The algorithm encourages exploration by increasing the boundary level when the solution has not improved in M iterations (i.e., a reheating mechanism [16,54]). A smaller value of M encourages exploration, whereas a larger value discourages it; we therefore chose M to balance exploration and exploitation, setting M to 50 based on our preliminary experiments. When reheating occurs, the boundary level B is increased by a random number between 1 and 5 (line 17).
Note that, instead of the standard constant decrement, the boundary level is lowered by a random multiple (between 1 and 5 in this experiment) of the decay rate in order to allow some flexibility in accepting worse solutions. Abdullah et al. [17] also employed randomness in updating the boundary level and found it effective for solving the timetabling problem. Finally, the best solution $s_{best}$ is returned as the partial best solution after the required number of iterations (line 18).
Algorithm 5. Modified GD for improvement of partial scheduled exams.
Computation 08 00046 i005
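The three modifications, the dynamic decay rate $(B - D)/I$, the randomised decay multiple and the reheating step, can be sketched as follows on a toy objective; the names and the toy usage are illustrative, not the authors' implementation.

```python
import random

# Sketch of the modified GD (Algorithm 5).
def modified_gd(s0, f, neighbour, iterations, desired, M=50):
    s = best = s0
    B = f(s0)
    beta = (B - desired) / iterations        # dynamic decay rate
    stall = 0                                # iterations without improvement
    for _ in range(iterations):
        cand = neighbour(s)
        if f(cand) <= f(s) or f(cand) <= B:  # improving or under water level
            s = cand
        B -= beta * random.randint(1, 5)     # flexible, randomised decay
        improved = f(cand) < f(best)
        if f(cand) <= f(best):
            best = cand
        stall = 0 if improved else stall + 1
        if stall >= M:                       # reheating: raise the water level
            B += random.randint(1, 5)
            stall = 0
    return best

# Toy usage: minimise x^2 over the integers via +/-1 moves.
random.seed(0)
result = modified_gd(10, lambda x: x * x,
                     lambda x: x + random.choice([-1, 1]), 300, 0)
```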

5. TGH-mGD Algorithm

Algorithm 6 illustrates the TGH-mGD approach, implemented for comparison with PGH-mGD. TGH-mGD uses six graph heuristic orderings: LD, LWD, LE, SD-LD, SD-LWD and SD-LE. For each heuristic ordering, an initial feasible solution and its penalty cost are generated (lines 5–6). These steps are repeated 30 times per ordering; thirty iterations are chosen to generate a comparatively large pool of initial solutions for each instance, which assists in selecting a good-quality initial solution for the improvement phase. After the best initial solution is determined, an improvement phase is carried out using the same modified GD algorithm described in Algorithm 5 (lines 7–8). Finally, the penalty cost is calculated. Note that the only change when applying the modified GD in TGH-mGD is that a complete solution vector is used instead of a partial one. ExamResolutionManager (see Algorithm 4) is also used when examinations have difficulty finding appropriate time slots and rooms.
Algorithm 6. TGH-mGD approach.
Computation 08 00046 i006

6. Experimental Setup

Both PGH-mGD and TGH-mGD are validated on the Toronto and ITC2007 exam datasets. We examine the effect of the exam assignment value v, the graph heuristic orderings and the execution time on solution quality. Each experiment is repeated 30 times with different random seeds. The program is coded in Java (Java SE 7) and run on a PC with the following configuration: Intel Core i7 (3 GHz), 4 GB RAM and Windows 10.

6.1. Toronto Dataset

For the Toronto benchmark dataset, twelve instances are tested. Four different v values are considered: 10%, 25%, 50% and 75%. Six graph heuristic orderings, LD, LWD, LE, SD-LD, SD-LWD and SD-LE, are employed in the partial construction phases. The termination criterion for the Toronto dataset is execution time, and we investigate three durations: 600 s, 1800 s and 3600 s. For the Toronto dataset, three neighbourhood structures are considered:
  • N 1 : Move—an examination is randomly selected and moved to a random time slot.
  • N 2 : Swap—Two examinations are randomly selected, and their time slots are swapped.
  • N 3 : Swap time slot—Two time slots are randomly selected, and all examinations between the two time slots are swapped.
The above three neighbourhood structures are used during the improvement phase. However, we only accept neighbourhood moves that lead to an improvement.
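The three moves can be illustrated on a timetable represented as a dict from exam to time slot; these are simplified versions (copies are returned and feasibility checks are omitted), and the names are illustrative.

```python
import random

# N1: move a random examination to a random time slot.
def move(tt, slots):
    s = dict(tt)
    s[random.choice(list(s))] = random.choice(slots)
    return s

# N2: swap the time slots of two random examinations.
def swap(tt):
    s = dict(tt)
    a, b = random.sample(list(s), 2)
    s[a], s[b] = s[b], s[a]
    return s

# N3: exchange every examination between two time slots p and q.
def swap_timeslots(tt, p, q):
    return {e: (q if t == p else p if t == q else t) for e, t in tt.items()}
```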

6.2. ITC2007 Exam Dataset

We experiment with eight instances from the ITC2007 exam dataset. Six graph heuristic orderings (LD, LWD, LE, SD-LD, SD-LWD and SD-LE) are used to construct partial solutions. We use v = 5% and v = 10%, and experiment with two time durations: 600 s and 3600 s. For the ITC2007 exam dataset, the following neighbourhood operations are employed:
  • N 1 : An examination is randomly selected and is moved to a random time slot and room.
  • N 2 : Two examinations are randomly selected, and their time slots and rooms are swapped.
  • N 3 : After selecting an examination, it is moved to a different room within the same time slot.
  • N 4 : Two examinations are selected randomly and moved to different time slots and rooms.
During the improvement phase, a neighbourhood ($N_1$, $N_2$, $N_3$ or $N_4$) is selected randomly and applied only if the resulting solution is feasible; otherwise, a different neighbourhood structure is selected.
Note that, for both the Toronto and ITC2007 exam datasets, the total execution time is divided among all partial improvement phases in such a way that the complete examination schedule (i.e., the last partial improvement phase) has sufficient time for improvement.

7. Experimental Result

In this section, we describe the results obtained from the experiments, starting with the Toronto dataset and followed by the ITC2007 dataset.

7.1. Experimental Results with Toronto Dataset

Table 4a–d present the performance of TGH-mGD and PGH-mGD on the Toronto dataset for four v values (10%, 25%, 50% and 75%) and three termination criteria (600 s, 1800 s and 3600 s). We report the best and average values for all problem instances. It is clear from Table 4a–d that, for the same v value, longer runs tend to outperform shorter ones (which is not surprising); both TGH-mGD and PGH-mGD usually improve the solution quality as the run time increases. For the same run time, results with a lower v value are usually better than those with a larger v value; in general, v = 10% with 3600 s gives better-quality solutions than v = 75% with the same run time. It is also noticeable that, for every combination of v and running time, PGH-mGD outperforms TGH-mGD.
A comparative study of PGH-mGD and TGH-mGD is shown in Table 5. For all problem instances, PGH-mGD outperforms TGH-mGD. The maximum improvement of 17.92% is observed on car-s-91, followed by car-f-92 and rye-s-93 with around 16.50% and 14% improvement, respectively. For instances kfu-s-93, lse-f-91, tre-s-92 and ute-s-92, we obtain at least a 5% improvement, whereas for the other five instances the improvement is below 4%. For eight instances, SD-LD produces the best results, possibly because the more conflicted exams are prioritised. It is also observed that a lower v tends to produce the best results on most occasions: in nine of the twelve instances, the lowest v (10%) produces the best results.
For statistical analysis, we perform a t-test at a 0.05 significance level. The null hypothesis ($H_0$) is that there is no difference between TGH-mGD and PGH-mGD, whereas the alternative hypothesis ($H_1$) states that PGH-mGD performs better than TGH-mGD. Referring to Table 5, the p-values of all instances are below 0.05, so $H_0$ is rejected and $H_1$ is accepted. We therefore conclude that PGH-mGD performs significantly better than TGH-mGD on the Toronto dataset.

7.2. Experimental Results with ITC2007 Exam Dataset

Table 6a,b present the penalty values for the ITC2007 exam dataset with v = 5% and v = 10% under termination criteria of 600 s and 3600 s, giving four result sets. In all cases, we highlight the best and average penalty costs for all problem instances. Referring to Table 6a,b, increasing the termination criterion from 600 s to 3600 s with the same v value decreases the penalty costs for PGH-mGD (and TGH-mGD). However, when v is increased from 5% to 10% for the same time duration, the penalty values tend to be higher for PGH-mGD. PGH-mGD obtains the best results for most instances (6 out of 8) with v = 5% and a time duration of 3600 s.
A comparison between TGH-mGD and PGH-mGD for the ITC2007 exam dataset is shown in Table 7. For all eight instances, PGH-mGD produces better-quality solutions than TGH-mGD. The maximum improvement, 59.14%, is obtained on Exam_2; Exam_3 and Exam_4 show around 50% improvement. Three instances, Exam_1, Exam_5 and Exam_7, show an average improvement of around 40%, whereas Exam_8 and Exam_6 obtain 30.45% and 13.61%, respectively. PGH-mGD tends to improve solution quality with a lower v value, as it produces the best results with v = 5% in six of the eight problem instances. For all instances, the hybrid graph heuristic orderings produce the best results, as they handle the complex exams well. We also carried out a t-test at a 0.05 significance level. The null hypothesis ($H_0$) is that there is no difference between TGH-mGD and PGH-mGD, and the alternative hypothesis ($H_1$) is that PGH-mGD performs better than TGH-mGD. The p-values of all instances are smaller than 0.05, so $H_0$ is rejected and $H_1$ is accepted. Therefore, the difference between the two approaches is significant for the ITC2007 exam dataset, with PGH-mGD performing better than TGH-mGD.

7.3. Comparison with the State-of-the-Art Approaches

In this section, the proposed PGH-mGD is compared with other approaches presented in the scientific literature for the Toronto and ITC2007 exam datasets. For each instance, we take the best result obtained by PGH-mGD and compare it with the other reported results.
Rankings and average rankings are computed as follows. Suppose n approaches are compared on m given instances in terms of their best results. For a given instance K, each approach is given an ordinal value $O_K$, where $1 \le O_K \le n$ and $1 \le K \le m$, indicating its rank on that instance. Each approach then sums its $O_K$ values over the m instances and computes the average. Finally, all approaches are ranked according to their average values.
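The scheme above can be sketched as follows; the function name and the penalty data are made up for illustration, and ties are broken by ordinal position rather than shared ranks.

```python
# Rank the approaches per instance (1 = best penalty), then average each
# approach's ranks over all instances.
def average_ranks(results):
    """results[approach][instance] -> best penalty cost."""
    approaches = list(results)
    instances = list(next(iter(results.values())))
    ranks = {a: [] for a in approaches}
    for inst in instances:
        ordered = sorted(approaches, key=lambda a: results[a][inst])
        for pos, a in enumerate(ordered, start=1):
            ranks[a].append(pos)
    return {a: sum(r) / len(r) for a, r in ranks.items()}

results = {
    "A": {"i1": 4.5, "i2": 10.0},
    "B": {"i1": 5.0, "i2": 9.0},
    "C": {"i1": 6.0, "i2": 11.0},
}
print(average_ranks(results))  # {'A': 1.5, 'B': 1.5, 'C': 3.0}
```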

7.3.1. Toronto Dataset

Table 8 presents the best results of the proposed approach with state-of-the-art methods on the Toronto dataset. We have ranked all the approaches according to the penalty costs (shown in brackets) of each instance. For each approach, the average rank of all Toronto instances is computed.
It is observed from Table 8 that we are able to produce solutions for all twelve instances of the Toronto dataset. Our approach outperforms Sabar et al. [9] and Alinia Ahandani et al. [27] on all problem instances. Apart from instance rye-s-93, our approach achieves better results than the approaches proposed by Carter et al. [5], Abdul Rahman et al. [41], Pillay and Banzhaf [8], Qu et al. [55] and Pillay and Özcan [35]. Moreover, we obtain better results than Turabieh and Abdullah [56] on all instances except kfu-s-93, lse-f-91 and rye-s-93. For six instances (car-s-91, car-f-92, ear-f-83, rye-s-93, tre-s-92, uta-s-92), we produce better results than Burke et al. [57]. Comparing all approaches, ours produces the best results for five instances (car-s-91, car-f-92, ear-f-83, tre-s-92 and uta-s-92), the second-best results for hec-s-92, kfu-s-93, sta-f-83, ute-s-92 and yor-f-83, and the third-best result for lse-f-91. Finally, when the average ranks of the approaches are compared, our approach is ranked first on the Toronto dataset.

7.3.2. ITC2007 Exam Dataset

Table 9 highlights the comparison of our proposed approach with some results from the scientific literature on ITC2007 exam dataset. For each problem instance, we rank all of the approaches based on their penalty values. The rank value is shown next to the penalty value of each problem instance. We also compute the average rank for each approach for solving the overall ITC2007 exam dataset.
It is observed from Table 9 that Müller [14] produces the best results for five of the eight instances and attains the best average rank, with Alzaqebah and Abdullah [58] in second place. Our approach is ranked third among the ten approaches. We produce the best result for Exam_6, the second-best result for Exam_2, and the third-best results for Exam_4 and Exam_8. Overall, our approach is competitive with the other approaches across all instances of the ITC2007 exam dataset.

8. Discussion and Analysis

It is clear from the above experimental results that the PGH-mGD approach performs better than the TGH-mGD approach on both the Toronto and ITC2007 exam datasets. It is also observed that a lower v, a longer time duration and the hybrid graph heuristic orderings reduce the penalty costs (i.e., produce quality solutions) in the PGH-mGD approach. Our best results are comparable with those reported in the scientific literature (see Table 8 and Table 9), and in several cases are the best. The PGH-mGD approach produces feasible solutions for all datasets because the ExamResolutionManager procedure helps to resolve any deadlock faced during the scheduling process. As the ITC2007 exam dataset is a more heavily constrained problem than the Toronto dataset, the ExamResolutionManager procedure is used more often there. One limitation of the partial examination approach is that, for some examinations with high conflict densities and complex hard constraints, it occasionally struggles to obtain feasible solutions, which increases the computational time.
In TGH-mGD, a good initial solution is first chosen from the construction phase and then improved to obtain a good final solution. As Kahar and Kendall [16] observed, in a sequential (i.e., traditional) approach, good initial solutions tend to produce good improved solutions. With the partial examination assignment approach, however, the partial solution can be improved many times, especially with a small v, and this encourages the improvement phase to produce a good-quality solution. Additionally, the modified GD promotes exploration (via the reheating mechanism), allowing the search to escape from local optima. Hence, the partial examination assignment approach (PGH-mGD) is able to produce better-quality solutions for all instances than the traditional approach (TGH-mGD).

9. Conclusions and Future Work

In this paper, a partial exam assignment strategy has been proposed for the examination timetabling problem. In this approach, groups of exams selected according to a parameter called the exam assignment value v are constructed using graph heuristic orderings, and the solution quality is then improved using a modified GD algorithm, until all exams have been scheduled. Both TGH-mGD and PGH-mGD are tested on two benchmark datasets, Toronto and the ITC2007 exam track, each with its own distinctive features. Experimental results and statistical analysis indicate that PGH-mGD produces better results than TGH-mGD on all instances of both datasets. To study the effectiveness of the proposed approach, we experimented with different v values, termination criteria (time durations) and graph heuristic orderings; PGH-mGD tends to produce quality solutions with a smaller v and a longer time duration. In comparison with state-of-the-art methods, our approach produced the best results on five instances of the Toronto dataset, with promising results on the other seven instances. Although we produced only one best result on the ITC2007 exam dataset, our results on the other instances are competitive with those reported in the scientific literature. For future work, the following directions may be worth investigating.
  • The partial construction phase could be extended by introducing adaptive ordering strategies or fuzzy logic to select the appropriate heuristic (i.e., LD, LE, LWD) for exam ordering.
  • The partial improvement phase could be improved by incorporating other meta-heuristics, such as SA, tabu search and genetic algorithms, or by hybridising meta-heuristics.
  • This paper has studied the impact of different exam assignment values v on the penalty cost. It would also be worthwhile to study the effect of a dynamic or adaptive exam assignment value.

Author Contributions

Conceptualization, A.K.M. and M.N.M.K.; methodology, A.K.M.; software, A.K.M.; validation, A.K.M., M.N.M.K. and G.K.; formal analysis, A.K.M.; investigation, M.N.M.K. and G.K.; writing—original draft preparation, A.K.M.; writing—review and editing, M.N.M.K. and G.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was supported by University Malaysia Pahang (UMP), Malaysia.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PGH-mGD  Partial graph heuristic orderings with a modified great deluge
TGH-mGD  Traditional graph heuristic orderings with a modified great deluge
ITC2007  2nd International Timetable Competition
LD  Largest degree
LWD  Largest weighted degree
LE  Largest enrolment
SD  Saturation degree
GD  Great deluge
SD-LD  Saturation degree - Largest degree
SD-LWD  Saturation degree - Largest weighted degree
SD-LE  Saturation degree - Largest enrolment

References

1. Wren, A. Scheduling, timetabling and rostering—A special relationship. In Practice and Theory of Automated Timetabling; Springer: Berlin/Heidelberg, Germany, 1996; pp. 46–75.
2. Johnson, D. Timetabling university examinations. J. Oper. Res. Soc. 1990, 41, 39–47.
3. Burke, E.K.; Newall, J.P. Solving examination timetabling problems through adaption of heuristic orderings. Ann. Oper. Res. 2004, 129, 107–134.
4. Burke, E.K.; Petrovic, S.; Qu, R. Case-based heuristic selection for timetabling problems. J. Sched. 2006, 9, 115–132.
5. Carter, M.W.; Laporte, G.; Lee, S.Y. Examination timetabling: Algorithmic strategies and applications. J. Oper. Res. Soc. 1996, 47, 373–383.
6. Kahar, M.N.M.; Kendall, G. The examination timetabling problem at Universiti Malaysia Pahang: Comparison of a constructive heuristic with an existing software solution. Eur. J. Oper. Res. 2010, 207, 557–565.
7. Asmuni, H.; Burke, E.K.; Garibaldi, J.M.; McCollum, B.; Parkes, A.J. An investigation of fuzzy multiple heuristic orderings in the construction of university examination timetables. Comput. Oper. Res. 2009, 36, 981–1001.
8. Pillay, N.; Banzhaf, W. A study of heuristic combinations for hyper-heuristic systems for the uncapacitated examination timetabling problem. Eur. J. Oper. Res. 2009, 197, 482–491.
9. Sabar, N.R.; Ayob, M.; Qu, R.; Kendall, G. A graph coloring constructive hyper-heuristic for examination timetabling problems. Appl. Intell. 2012, 37, 1–11.
10. Dueck, G. New optimization heuristics: The great deluge algorithm and the record-to-record travel. J. Comput. Phys. 1993, 104, 86–92.
11. Burke, E.K.; Newall, J.P. Enhancing timetable solutions with local search methods. In Practice and Theory of Automated Timetabling IV; Springer: Berlin/Heidelberg, Germany, 2003; pp. 195–206.
12. Burke, E.K.; Bykov, Y. Solving exam timetabling problems with the flex-deluge algorithm. In Proceedings of the 6th International Conference on the Practice and Theory of Automated Timetabling (PATAT 2006), Brno, Czech Republic, 30 August–1 September 2006; pp. 370–372.
13. Landa-Silva, D.; Obit, J.H. Great deluge with non-linear decay rate for solving course timetabling problems. In Proceedings of the 4th International IEEE Conference on Intelligent Systems, Varna, Bulgaria, 6–8 September 2008; Volume 1, pp. 11–18.
14. Müller, T. ITC2007 solver description: A hybrid approach. Ann. Oper. Res. 2009, 172, 429–446.
15. McCollum, B.; McMullan, P.; Parkes, A.J.; Burke, E.K.; Abdullah, S. An extended great deluge approach to the examination timetabling problem. In Proceedings of the 4th Multidisciplinary International Scheduling Conference: Theory and Applications (MISTA 2009), Dublin, Ireland, 10–12 August 2009; pp. 424–434.
16. Kahar, M.N.M.; Kendall, G. A great deluge algorithm for a real-world examination timetabling problem. J. Oper. Res. Soc. 2015, 66, 116–133.
17. Abdullah, S.; Shaker, K.; McCollum, B.; McMullan, P. Construction of course timetables based on great deluge and tabu search. In Proceedings of the MIC 2009: VIII Metaheuristic International Conference, Hamburg, Germany, 13–16 July 2009; pp. 13–16.
18. Turabieh, H.; Abdullah, S. An integrated hybrid approach to the examination timetabling problem. Omega Int. J. Manag. Sci. 2011, 39, 598–607.
19. Fong, C.W.; Asmuni, H.; McCollum, B.; McMullan, P.; Omatu, S. A new hybrid imperialist swarm-based optimization algorithm for university timetabling problems. Inf. Sci. 2014, 283, 1–21.
20. Abuhamdah, A. Modified Great Deluge for Medical Clustering Problems. Int. J. Emerg. Sci. 2012, 2, 345–360.
21. Kifah, S.; Abdullah, S. An adaptive non-linear great deluge algorithm for the patient-admission problem. Inf. Sci. 2015, 295, 573–585.
22. Jaddi, N.S.; Abdullah, S. Nonlinear great deluge algorithm for rough set attribute reduction. J. Inf. Sci. Eng. 2013, 29, 49–62.
23. Burke, E.K.; Bykov, Y. The Late Acceptance Hill-Climbing Heuristic; Technical Report CSM-192; Computing Science and Mathematics, University of Stirling: Stirling, UK, 2012.
24. Burke, E.K.; Bykov, Y. An Adaptive Flex-Deluge Approach to University Exam Timetabling. INFORMS J. Comput. 2016, 28, 781–794.
25. Battistutta, M.; Schaerf, A.; Urli, T. Feature-based tuning of single-stage simulated annealing for examination timetabling. Ann. Oper. Res. 2017, 252, 239–254.
26. June, T.L.; Obit, J.H.; Leau, Y.B.; Bolongkikit, J. Implementation of Constraint Programming and Simulated Annealing for Examination Timetabling Problem. In Computational Science and Technology; Springer: Berlin/Heidelberg, Germany, 2019; pp. 175–184.
27. Alinia Ahandani, M.; Vakil Baghmisheh, M.T.; Badamchi Zadeh, M.A.; Ghaemi, S. Hybrid particle swarm optimization transplanted into a hyper-heuristic structure for solving examination timetabling problem. Swarm Evol. Comput. 2012, 7, 21–34.
28. Abayomi-Alli, O.; Abayomi-Alli, A.; Misra, S.; Damasevicius, R.; Maskeliunas, R. Automatic examination timetable scheduling using particle swarm optimization and local search algorithm. In Data, Engineering and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 119–130.
29. Bolaji, A.L.; Khader, A.T.; Al-Betar, M.A.; Awadallah, M.A. A hybrid nature-inspired artificial bee colony algorithm for uncapacitated examination timetabling problems. J. Intell. Syst. 2015, 24, 37–54.
30. Tilahun, S.L. Prey-predator algorithm for discrete problems: A case for examination timetabling problem. Turk. J. Electr. Eng. Comput. Sci. 2019, 27, 950–960.
31. Lei, Y.; Gong, M.; Jiao, L.; Shi, J.; Zhou, Y. An adaptive coevolutionary memetic algorithm for examination timetabling problems. Int. J. Bio-Inspired Comput. 2017, 10, 248–257.
32. Leite, N.; Fernandes, C.M.; Melicio, F.; Rosa, A.C. A cellular memetic algorithm for the examination timetabling problem. Comput. Oper. Res. 2018, 94, 118–138.
33. Demeester, P.; Bilgin, B.; De Causmaecker, P.; Vanden Berghe, G. A hyperheuristic approach to examination timetabling problems: Benchmarks and a new problem from practice. J. Sched. 2012, 15, 83–103.
34. Anwar, K.; Khader, A.T.; Al-Betar, M.A.; Awadallah, M.A. Harmony Search-based Hyper-heuristic for examination timetabling. In Proceedings of the 9th IEEE International Colloquium on Signal Processing and its Applications (CSPA), Kuala Lumpur, Malaysia, 8–10 March 2013; pp. 176–181.
35. Pillay, N.; Özcan, E. Automated generation of constructive ordering heuristics for educational timetabling. Ann. Oper. Res. 2019, 275, 181–208.
36. Gogos, C.; Alefragis, P.; Housos, E. An improved multi-staged algorithmic process for the solution of the examination timetabling problem. Ann. Oper. Res. 2012, 194, 203–221.
37. Burke, E.K.; Pham, N.; Qu, R.; Yellen, J. Linear combinations of heuristics for examination timetabling. Ann. Oper. Res. 2012, 194, 89–109.
38. Ei Shwe, S. Reinforcement learning with EGD based hyper heuristic system for exam timetabling problem. In Proceedings of the 2011 IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS), Beijing, China, 15–17 September 2011; pp. 462–466.
39. Sabar, N.R.; Ayob, M.; Kendall, G.; Qu, R. Automatic Design of a Hyper-Heuristic Framework With Gene Expression Programming for Combinatorial Optimization Problems. IEEE Trans. Evol. Comput. 2015, 19, 309–325.
40. Qu, R.; Burke, E.; McCollum, B.; Merlot, L.; Lee, S. A survey of search methodologies and automated system development for examination timetabling. J. Sched. 2009, 12, 55–89.
41. Abdul Rahman, S.; Bargiela, A.; Burke, E.K.; Özcan, E.; McCollum, B.; McMullan, P. Adaptive linear combination of heuristic orderings in constructing examination timetables. Eur. J. Oper. Res. 2014, 232, 287–297.
42. Soghier, A.; Qu, R. Adaptive selection of heuristics for assigning time slots and rooms in exam timetables. Appl. Intell. 2013, 39, 438–450.
43. Eng, K.; Muhammed, A.; Mohamed, M.A.; Hasan, S. A hybrid heuristic of Variable Neighbourhood Descent and Great Deluge algorithm for efficient task scheduling in Grid computing. Eur. J. Oper. Res. 2020, 284, 75–86.
44. Obit, J.H.; Landa-Silva, D. Computational study of non-linear great deluge for university course timetabling. In Intelligent Systems: From Theory to Practice; Springer: Berlin/Heidelberg, Germany, 2010; pp. 309–328.
45. Bagayoko, M.; Dao, T.M.; Ateme-Nguema, B.H. Optimization of forest vehicle routing using the metaheuristics: Reactive tabu search and extended great deluge. In Proceedings of the 2013 International Conference on Industrial Engineering and Systems Management (IESM), Rabat, Morocco, 28–30 October 2013; pp. 1–7.
46. Guha, R.; Ghosh, M.; Kapri, S.; Shaw, S.; Mutsuddi, S.; Bhateja, V.; Sarkar, R. Deluge based Genetic Algorithm for feature selection. Evol. Intell. 2019.
47. Mafarja, M.; Abdullah, S. Fuzzy modified great deluge algorithm for attribute reduction. In Recent Advances on Soft Computing and Data Mining; Springer: Berlin/Heidelberg, Germany, 2014; pp. 195–203.
48. Nahas, N.; Nourelfath, M. Iterated great deluge for the dynamic facility layout problem. In Metaheuristics for Production Systems; Springer: Berlin/Heidelberg, Germany, 2016; pp. 57–92.
49. Benchmark Data Sets in Exam Timetabling. Available online: http://www.asap.cs.nott.ac.uk/resources/data.shtml (accessed on 30 November 2019).
50. Examination Timetabling Track. Available online: http://www.cs.qub.ac.uk/itc2007/examtrack/ (accessed on 30 November 2019).
51. McCollum, B.; McMullan, P.; Parkes, A.J.; Burke, E.K.; Qu, R. A new model for automated examination timetabling. Ann. Oper. Res. 2012, 194, 291–315.
52. McCollum, B.; McMullan, P.; Burke, E.K.; Parkes, A.J.; Qu, R. The Second International Timetabling Competition: Examination Timetabling Track; Technical Report QUB/IEEE/Tech/ITC2007/Exam/v4.0/17; Queen's University: Belfast, UK, 2007.
53. Burke, E.K.; Kingston, J.; De Werra, D. Applications to Timetabling. In Handbook of Graph Theory; Rosen, K.H., Ed.; CRC Press: Boca Raton, FL, USA, 2004; pp. 445–474.
54. Abdullah, S.; Turabieh, H.; McCollum, B.; McMullan, P. A hybrid metaheuristic approach to the university course timetabling problem. J. Heuristics 2012, 18, 1–23.
55. Qu, R.; Pham, N.; Bai, R.B.; Kendall, G. Hybridising heuristics within an estimation distribution algorithm for examination timetabling. Appl. Intell. 2015, 42, 679–693.
56. Turabieh, H.; Abdullah, S. A Hybrid Fish Swarm Optimisation Algorithm for Solving Examination Timetabling Problems. In Learning and Intelligent Optimization; Lecture Notes in Computer Science; Coello, C.C., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6683, pp. 539–551.
57. Burke, E.K.; Eckersley, A.J.; McCollum, B.; Petrovic, S.; Qu, R. Hybrid variable neighbourhood approaches to university exam timetabling. Eur. J. Oper. Res. 2010, 206, 46–53.
58. Alzaqebah, M.; Abdullah, S. Hybrid bee colony optimization for examination timetabling problems. Comput. Oper. Res. 2015, 54, 142–154.
59. Gogos, C.; Alefragis, P.; Housos, E. A multi-staged algorithmic process for the solution of the examination timetabling problem. In Proceedings of the 7th International Conference on the Practice and Theory of Automated Timetabling (PATAT 2008), Montreal, QC, Canada, 18–22 August 2008; pp. 19–22.
60. Atsuta, M.; Nonobe, K.; Ibaraki, T. ITC-2007 Track2: An Approach Using General CSP Solver. In Proceedings of the 7th International Conference on the Practice and Theory of Automated Timetabling (PATAT 2008), Montreal, QC, Canada, 18–22 August 2008.
61. De Smet, G. ITC2007—Examination track. In Proceedings of the 7th International Conference on the Practice and Theory of Automated Timetabling (PATAT 2008), Montreal, QC, Canada, 18–22 August 2008.
62. Pillay, N.; Banzhaf, W. A Developmental Approach to the Uncapacitated Examination Timetabling Problem. In Parallel Problem Solving from Nature—PPSN X; Rudolph, G., Jansen, T., Beume, N., Lucas, S., Poloni, C., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5199.
63. Hamilton-Bryce, R.; McMullan, P.; McCollum, B. Directed selection using reinforcement learning for the examination timetabling problem. In Proceedings of the 10th International Conference on the Practice and Theory of Automated Timetabling (PATAT 2014), York, UK, 26–29 August 2014.
Figure 1. An illustration of the PGH-mGD examination timetabling procedure.
Table 1. Toronto benchmark dataset.

Instances | No. of Time Slots | No. of Exams | No. of Students | Conflict Density
car-s-91 | 35 | 682 | 16,925 | 0.13
car-f-92 | 32 | 543 | 18,419 | 0.14
ear-f-83 | 24 | 190 | 1125 | 0.27
hec-s-92 | 18 | 81 | 2823 | 0.42
kfu-s-93 | 20 | 461 | 5349 | 0.06
lse-f-91 | 18 | 381 | 2726 | 0.06
pur-s-93 | 42 | 2419 | 30,029 | 0.03
rye-s-93 | 23 | 486 | 11,483 | 0.07
sta-f-83 | 13 | 139 | 611 | 0.14
tre-s-92 | 23 | 261 | 4360 | 0.18
uta-s-92 | 35 | 622 | 21,267 | 0.13
ute-s-92 | 10 | 184 | 2750 | 0.08
yor-f-83 | 21 | 181 | 941 | 0.29
Table 2. ITC2007 exam dataset.

Instances | Number of Students | Number of Exams | Number of Time Slots | Number of Rooms | Period Hard Constraints | Room Hard Constraints | Conflict Density
Exam_1 | 7833 | 607 | 54 | 7 | 12 | 0 | 5.05%
Exam_2 | 12,484 | 870 | 40 | 49 | 12 | 2 | 1.17%
Exam_3 | 16,365 | 934 | 36 | 48 | 170 | 15 | 2.62%
Exam_4 | 4421 | 273 | 21 | 1 | 40 | 0 | 15.0%
Exam_5 | 8719 | 1018 | 42 | 3 | 27 | 0 | 0.87%
Exam_6 | 7909 | 242 | 16 | 8 | 23 | 0 | 6.16%
Exam_7 | 13,795 | 1096 | 80 | 15 | 28 | 0 | 1.93%
Exam_8 | 7718 | 598 | 80 | 8 | 20 | 1 | 4.55%
Table 3. Weights of the ITC2007 exam dataset.

Instances | Two in a Day (W_2D) | Two in a Row (W_2R) | Period Spread (W_PS) | No Mixed Duration (W_NMD) | Front Load Penalty (W_FL) | Period Penalty (W_pP) | Room Penalty (W_rR)
Exam_1 | 5 | 7 | 5 | 10 | 100 | 30 | 5
Exam_2 | 5 | 15 | 1 | 25 | 250 | 30 | 5
Exam_3 | 10 | 15 | 4 | 20 | 200 | 20 | 10
Exam_4 | 5 | 9 | 2 | 10 | 50 | 10 | 5
Exam_5 | 15 | 40 | 5 | 0 | 250 | 30 | 10
Exam_6 | 5 | 20 | 20 | 25 | 25 | 30 | 15
Exam_7 | 5 | 25 | 10 | 15 | 250 | 30 | 10
Exam_8 | 0 | 150 | 15 | 25 | 250 | 30 | 5
Table 4. Penalty values of the Toronto dataset for different termination criteria. Each cell shows Best/Avg penalty.

(a) v = 10% for PGH-mGD

Instances | TGH-mGD 600 s | PGH-mGD 600 s | TGH-mGD 1800 s | PGH-mGD 1800 s | TGH-mGD 3600 s | PGH-mGD 3600 s
car-s-91 | 5.98/6.19 | 5.69/5.83 | 5.78/5.99 | 5.60/5.69 | 5.58/5.70 | 4.58/4.72
car-f-92 | 4.95/5.11 | 4.66/4.78 | 4.80/4.95 | 4.57/4.69 | 4.58/4.66 | 3.82/3.93
ear-f-83 | 38.67/40.46 | 37.00/38.69 | 37.23/39.24 | 36.35/37.33 | 33.68/34.76 | 33.23/34.49
hec-s-92 | 11.92/12.19 | 11.25/11.91 | 11.44/11.92 | 11.18/11.75 | 10.53/11.02 | 10.32/11.09
kfu-s-93 | 15.65/16.80 | 14.61/15.43 | 15.37/16.09 | 14.44/14.91 | 14.79/15.10 | 13.34/13.97
lse-f-91 | 12.75/13.61 | 11.78/12.34 | 12.40/12.99 | 11.05/11.47 | 10.85/11.12 | 10.24/10.62
rye-s-93 | 11.90/12.24 | 11.44/11.89 | 11.71/11.88 | 11.25/11.71 | 11.39/11.54 | 9.79/10.29
sta-f-83 | 158.21/158.48 | 157.66/158.28 | 157.86/158.14 | 157.50/158.19 | 157.39/157.72 | 157.14/157.64
tre-s-92 | 9.42/9.80 | 8.58/8.88 | 9.21/9.49 | 8.20/8.44 | 8.24/8.50 | 7.74/8.03
uta-s-92 | 4.12/4.17 | 3.70/3.82 | 3.93/4.04 | 3.61/3.68 | 3.22/3.29 | 3.13/3.22
ute-s-92 | 28.49/29.47 | 27.38/28.61 | 28.04/28.82 | 27.08/27.83 | 26.96/27.79 | 25.28/26.04
yor-f-83 | 40.44/41.35 | 39.00/40.43 | 39.52/40.22 | 38.34/39.39 | 35.51/36.71 | 35.68/36.79

(b) v = 25% for PGH-mGD

Instances | TGH-mGD 600 s | PGH-mGD 600 s | TGH-mGD 1800 s | PGH-mGD 1800 s | TGH-mGD 3600 s | PGH-mGD 3600 s
car-s-91 | 5.98/6.19 | 5.63/5.82 | 5.78/5.99 | 5.57/5.67 | 5.58/5.70 | 4.60/4.68
car-f-92 | 4.95/5.11 | 4.68/4.85 | 4.80/4.95 | 4.63/4.71 | 4.58/4.66 | 3.89/4.04
ear-f-83 | 38.67/40.46 | 36.82/38.89 | 37.23/39.24 | 35.93/37.83 | 33.68/34.76 | 33.30/34.43
hec-s-92 | 11.92/12.19 | 11.31/12.05 | 11.44/11.92 | 11.23/11.78 | 10.53/11.02 | 10.49/11.09
kfu-s-93 | 15.65/16.80 | 14.71/15.57 | 15.37/16.09 | 14.60/15.27 | 14.79/15.10 | 13.55/14.34
lse-f-91 | 12.75/13.61 | 12.06/12.67 | 12.40/12.99 | 11.46/12.10 | 10.85/11.12 | 10.25/10.76
rye-s-93 | 11.90/12.24 | 11.40/11.86 | 11.71/11.88 | 11.38/11.71 | 11.39/11.54 | 10.68/11.20
sta-f-83 | 158.21/158.48 | 157.71/158.34 | 157.86/158.14 | 157.70/158.12 | 157.39/157.72 | 157.03/157.41
tre-s-92 | 9.42/9.80 | 8.79/9.06 | 9.21/9.49 | 8.56/8.85 | 8.24/8.50 | 7.94/8.15
uta-s-92 | 4.12/4.17 | 3.77/3.88 | 3.93/4.04 | 3.61/3.74 | 3.22/3.29 | 3.17/3.26
ute-s-92 | 28.49/29.47 | 27.97/28.85 | 28.04/28.82 | 27.56/28.53 | 26.96/27.79 | 25.47/25.95
yor-f-83 | 40.44/41.35 | 39.18/40.78 | 39.52/40.22 | 39.11/40.06 | 35.51/36.71 | 35.68/36.91

(c) v = 50% for PGH-mGD

Instances | TGH-mGD 600 s | PGH-mGD 600 s | TGH-mGD 1800 s | PGH-mGD 1800 s | TGH-mGD 3600 s | PGH-mGD 3600 s
car-s-91 | 5.98/6.19 | 5.82/6.05 | 5.78/5.99 | 4.63/4.80 | 5.58/5.70 | 4.09/4.41
car-f-92 | 4.95/5.11 | 4.81/5.00 | 4.80/4.95 | 5.56/5.72 | 4.58/4.66 | 4.63/4.74
ear-f-83 | 38.67/40.46 | 36.91/39.11 | 37.23/39.24 | 36.28/38.13 | 33.68/34.76 | 33.43/34.68
hec-s-92 | 11.92/12.19 | 11.70/12.13 | 11.44/11.92 | 11.41/11.92 | 10.53/11.02 | 10.59/11.17
kfu-s-93 | 15.65/16.80 | 15.12/15.86 | 15.37/16.09 | 14.78/15.38 | 14.79/15.10 | 13.51/14.46
lse-f-91 | 12.75/13.61 | 12.44/13.10 | 12.40/12.99 | 11.75/12.45 | 10.85/11.12 | 10.50/10.99
rye-s-93 | 11.90/12.24 | 11.62/12.08 | 11.71/11.88 | 11.47/11.86 | 11.39/11.54 | 11.24/11.55
sta-f-83 | 158.21/158.48 | 157.86/158.40 | 157.86/158.14 | 157.43/158.05 | 157.39/157.72 | 157.15/157.54
tre-s-92 | 9.42/9.80 | 9.00/9.20 | 9.21/9.49 | 8.77/8.99 | 8.24/8.50 | 8.03/8.22
uta-s-92 | 4.12/4.17 | 3.85/4.02 | 3.93/4.04 | 3.77/3.88 | 3.22/3.29 | 3.26/3.36
ute-s-92 | 28.49/29.47 | 28.26/29.00 | 28.04/28.82 | 27.68/28.40 | 26.96/27.79 | 25.52/26.57
yor-f-83 | 40.44/41.35 | 39.34/41.00 | 39.52/40.22 | 38.99/40.19 | 35.51/36.71 | 35.46/36.75

(d) v = 75% for PGH-mGD

Instances | TGH-mGD 600 s | PGH-mGD 600 s | TGH-mGD 1800 s | PGH-mGD 1800 s | TGH-mGD 3600 s | PGH-mGD 3600 s
car-s-91 | 5.98/6.19 | 4.82/5.02 | 5.78/5.99 | 4.70/4.85 | 5.58/5.70 | 4.83/5.06
car-f-92 | 4.95/5.11 | 5.92/6.15 | 4.80/4.95 | 5.68/5.91 | 4.58/4.66 | 3.96/4.11
ear-f-83 | 38.67/40.46 | 37.16/39.55 | 37.23/39.24 | 36.73/38.09 | 33.68/34.76 | 32.48/33.46
hec-s-92 | 11.92/12.19 | 11.55/12.19 | 11.44/11.92 | 11.33/11.85 | 10.53/11.02 | 10.52/10.99
kfu-s-93 | 15.65/16.80 | 15.35/16.14 | 15.37/16.09 | 14.76/15.47 | 14.79/15.10 | 13.51/14.43
lse-f-91 | 12.75/13.61 | 12.66/13.43 | 12.40/12.99 | 12.44/12.91 | 10.85/11.12 | 10.37/11.15
rye-s-93 | 11.90/12.24 | 11.59/12.16 | 11.71/11.88 | 11.57/11.82 | 11.39/11.54 | 10.53/11.09
sta-f-83 | 158.21/158.48 | 157.87/158.31 | 157.86/158.14 | 157.64/158.05 | 157.39/157.72 | 157.20/157.63
tre-s-92 | 9.42/9.80 | 9.05/9.33 | 9.21/9.49 | 8.68/8.98 | 8.24/8.50 | 8.04/8.25
uta-s-92 | 4.12/4.17 | 3.94/4.06 | 3.93/4.04 | 3.79/3.92 | 3.22/3.29 | 3.26/3.36
ute-s-92 | 28.49/29.47 | 27.94/28.97 | 28.04/28.82 | 27.69/28.48 | 26.96/27.79 | 25.48/26.12
yor-f-83 | 40.44/41.35 | 39.42/40.78 | 39.52/40.22 | 38.81/40.00 | 35.51/36.71 | 35.48/36.68
Table 5. Comparison of results between TGH-mGD and PGH-mGD for the Toronto dataset. %Improvement = (C - E)/C × 100%.

Instances (A) | Initial Solution with Graph Heuristics (B) | TGH-mGD Best (C) | TGH-mGD Avg (D) | PGH-mGD Best (E) | PGH-mGD Avg (F) | Graph Heuristic Orderings (G) | v (H) | %Improvement (I) | t-Test p-Value (J)
car-s-91 | 8.33 (LD) | 5.58 | 5.70 | 4.58 | 4.72 | SD-LD | 10% | 17.92 | 1.41 × 10^-54
car-f-92 | 7.00 (LD) | 4.58 | 4.66 | 3.82 | 3.93 | SD-LD | 10% | 16.59 | 1 × 10^-52
ear-f-83 | 52.35 (SD-LE) | 33.68 | 34.76 | 32.48 | 33.46 | SD-LD | 75% | 3.56 | 0.00047
hec-s-92 | 16.21 (SD-LWD) | 10.53 | 11.02 | 10.32 | 10.99 | SD-LD | 10% | 1.99 | 0.046692
kfu-s-93 | 23.68 (LD) | 14.79 | 15.10 | 13.34 | 13.97 | SD-LD | 10% | 9.80 | 1.84 × 10^-23
lse-f-91 | 18.83 (LE) | 10.85 | 11.12 | 10.24 | 10.62 | SD-LD | 10% | 5.62 | 1.45 × 10^-14
rye-s-93 | 18.28 (SD-LD) | 11.39 | 11.54 | 9.79 | 10.29 | SD-LD | 10% | 14.05 | 4.16 × 10^-29
sta-f-83 | 166.43 (SD-LE) | 157.39 | 157.72 | 157.03 | 157.41 | SD-LWD | 25% | 0.23 | 0.018815
tre-s-92 | 12.07 (SD-LE) | 8.24 | 8.50 | 7.74 | 8.03 | SD-LD | 10% | 6.07 | 9.53 × 10^-19
uta-s-92 | 5.53 (LE) | 3.22 | 3.29 | 3.13 | 3.22 | SD-LWD | 10% | 2.80 | 1.32 × 10^-8
ute-s-92 | 38.03 (SD-LD) | 26.96 | 27.79 | 25.28 | 26.08 | SD-LWD | 10% | 6.23 | 3.95 × 10^-20
yor-f-83 | 49.8 (LD) | 35.51 | 36.71 | 35.46 | 36.68 | SD-LWD | 50% | 0.14 | 0.043694
Column A shows the instances. Column B is the best initial solution produced by the graph heuristic orderings in the TGH-mGD approach. Columns C and D show the best and average costs after improvement with the modified GD algorithm. For PGH-mGD, columns E and F show the best and average costs, together with the graph heuristic orderings (column G) and the v value (column H) that produced them. Column I shows the improvement of PGH-mGD over TGH-mGD, comparing their best reported results. Column J shows the p-values of the t-test.
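The %Improvement figures in column I of Tables 5 and 7 follow the formula in the table header, (C - E)/C × 100%, where C and E are the best costs of TGH-mGD and PGH-mGD. A minimal Python sketch (the helper name `improvement` is ours, not from the paper) reproduces the reported values:

```python
# %Improvement of PGH-mGD over TGH-mGD, as defined in Tables 5 and 7:
# improvement = (C - E) / C * 100, where C is the TGH-mGD best cost
# and E is the PGH-mGD best cost.

def improvement(tgh_best: float, pgh_best: float) -> float:
    """Percentage cost reduction of PGH-mGD relative to TGH-mGD."""
    return (tgh_best - pgh_best) / tgh_best * 100.0

# Values taken from the car-s-91 row of Table 5 and the Exam_1 row of Table 7:
print(round(improvement(5.58, 4.58), 2))   # 17.92, as reported for car-s-91
print(round(improvement(9791, 5328), 2))   # 45.58, as reported for Exam_1
```

Each reported percentage in column I can be checked the same way from columns C and E.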
Table 6. Penalty values of the ITC2007 exam dataset for different termination criteria. Each cell shows Best/Avg penalty.

(a) v = 5% for PGH-mGD

Instances | TGH-mGD 600 s | PGH-mGD 600 s | TGH-mGD 3600 s | PGH-mGD 3600 s
Exam_1 | 12,096/12,599.26 | 7837/8687.40 | 9791/10,727.64 | 5328/5676.74
Exam_2 | 2114/3160.99 | 1448/2171.43 | 996/1170.77 | 407/653.49
Exam_3 | 34,333/41,308.88 | 19,119/22,332.93 | 23,052/27,456.67 | 11,692/13,461.00
Exam_4 | 33,910/34,693.51 | 19,988/26,017.32 | 33,621/34,323.08 | 16,204/19,005.79
Exam_5 | 10,118/11,006.88 | 8714/9711.07 | 7415/7877.17 | 4786/4993.68
Exam_6 | 28,150/30,506.54 | 26,350/26,754.63 | 29,960/30,941.25 | 25,880/26,085.05
Exam_7 | 16,572/18,459.67 | 11,081/11,904.77 | 9399/9757.41 | 6181/6834.67
Exam_8 | 15,366/17,662.61 | 11,863/19,124.70 | 11,845/12,847.23 | 8238/9708.57

(b) v = 10% for PGH-mGD

Instances | TGH-mGD 600 s | PGH-mGD 600 s | TGH-mGD 3600 s | PGH-mGD 3600 s
Exam_1 | 12,096/12,599.26 | 8322/9124.77 | 9791/10,727.64 | 5794/6039.93
Exam_2 | 2114/3160.99 | 1356/2623.73 | 996/1170.77 | 476/529.33
Exam_3 | 34,333/41,308.88 | 21,027/25,542.00 | 23,052/27,456.67 | 11,709/13,806.63
Exam_4 | 33,910/34,693.51 | 22,298/28,292.99 | 33,621/34,323.08 | 16,591/19,897.41
Exam_5 | 10,118/11,006.88 | 8557/10,233.27 | 7415/7877.17 | 4300/4941.47
Exam_6 | 28,150/30,506.54 | 26,510/29,284.95 | 29,960/30,941.25 | 26,245/26,839.18
Exam_7 | 16,572/18,459.67 | 11,276/12,289.63 | 9399/9757.41 | 5507/6119.60
Exam_8 | 15,366/17,662.61 | 12,414/22,078.43 | 11,845/12,847.23 | 9626/10,461.21
Table 7. Comparison of results between TGH-mGD and PGH-mGD for the ITC2007 exam dataset. %Improvement = (C - E)/C × 100%.

Instances (A) | Initial Solution with Graph Heuristics (B) | TGH-mGD Best (C) | TGH-mGD Avg (D) | PGH-mGD Best (E) | PGH-mGD Avg (F) | Graph Heuristic Orderings (G) | v (H) | %Improvement (I) | t-Test p-Value (J)
Exam_1 | 25,989 (SD-LE) | 9791 | 10,727.64 | 5328 | 5676.74 | SD-LWD | 5% | 45.58 | 5.82 × 10^-44
Exam_2 | 30,960 (SD-LE) | 996 | 1170.77 | 407 | 653.49 | SD-LD | 5% | 59.14 | 1.49 × 10^-37
Exam_3 | 85,356 (SD-LD) | 23,052 | 27,456.67 | 11,692 | 13,461.00 | SD-LE | 5% | 49.27 | 4.94 × 10^-29
Exam_4 | 41,702 (SD-LD) | 33,621 | 34,323.08 | 16,204 | 19,005.79 | SD-LD | 5% | 51.80 | 7.66 × 10^-26
Exam_5 | 132,953 (LD) | 7415 | 7877.17 | 4300 | 4941.47 | SD-LE | 10% | 42.00 | 4.24 × 10^-27
Exam_6 | 44,160 (SD-LE) | 29,960 | 30,941.25 | 25,880 | 26,085.05 | SD-LWD | 5% | 13.61 | 1.95 × 10^-39
Exam_7 | 53,405 (SD-LE) | 9399 | 9757.41 | 5507 | 6119.60 | SD-LWD | 10% | 41.41 | 1.8 × 10^-55
Exam_8 | 92,767 (SD-LE) | 11,845 | 12,847.23 | 8238 | 9708.57 | SD-LD | 5% | 30.45 | 1.7 × 10^-15
Column A shows the instances. Column B is the best initial solution produced by the graph heuristic orderings in the TGH-mGD approach. Columns C and D show the best and average costs after improvement with the modified GD algorithm. For PGH-mGD, columns E and F show the best and average costs, together with the graph heuristic orderings (column G) and the v value (column H) that produced them. Column I shows the improvement of PGH-mGD over TGH-mGD, comparing their best reported results. Column J shows the p-values of the t-test.
Table 8. Best results from our proposed approach (PGH-mGD) compared to results from the scientific literature for the Toronto dataset.

Instances | Carter et al. [5] | Alinia Ahandani et al. [27] | Abdul Rahman et al. [41] | Qu et al. [55] | Turabieh and Abdullah [56] | Burke et al. [57] | Pillay and Özcan [35] | Pillay and Banzhaf [8] | Sabar et al. [9] | Our Approach
car-s-91 | 7.1 (10) | 5.22 (9) | 5.12 (6) | 4.95 (4) | 4.8 (3) | 4.6 (2) | 5.16 (8) | 4.97 (5) | 5.14 (7) | 4.58 (1)
car-f-92 | 6.2 (10) | 4.67 (8) | 4.41 (7) | 4.09 (3) | 4.1 (4) | 3.9 (2) | 4.32 (6) | 4.28 (5) | 4.7 (9) | 3.82 (1)
ear-f-83 | 36.4 (7) | 35.74 (5) | 36.91 (9) | 34.97 (4) | 34.92 (3) | 32.8 (2) | 36.52 (8) | 35.86 (6) | 37.86 (10) | 32.48 (1)
hec-s-92 | 10.8 (5) | 10.74 (4) | 11.31 (7) | 11.11 (6) | 10.73 (3) | 10 (1) | 11.87 (9) | 11.85 (8) | 11.9 (10) | 10.32 (2)
kfu-s-93 | 14 (3) | 14.47 (5) | 14.75 (8) | 14.09 (4) | 13 (1) | 13 (1) | 14.67 (7) | 14.62 (6) | 15.3 (9) | 13.34 (2)
lse-f-91 | 10.5 (4) | 10.76 (6) | 11.41 (9) | 10.71 (5) | 10.01 (2) | 10 (1) | 10.81 (7) | 11.14 (8) | 12.33 (10) | 10.24 (3)
rye-s-93 | 7.3 (1) | 9.95 (7) | 9.61 (4) | 9.2 (2) | 9.65 (5) | - (9) | 9.48 (3) | 9.65 (5) | 10.71 (8) | 9.79 (6)
sta-f-83 | 161.5 (9) | 157.1 (3) | 157.52 (4) | 157.64 (5) | 158.26 (6) | 156.9 (1) | 157.64 (5) | 158.33 (7) | 160.12 (8) | 157.03 (2)
tre-s-92 | 9.6 (9) | 8.47 (6) | 8.76 (8) | 8.27 (4) | 7.88 (2) | 7.9 (3) | 8.48 (7) | 8.48 (7) | 8.32 (5) | 7.72 (1)
uta-s-92 | 3.5 (6) | 3.52 (7) | 3.54 (8) | 3.33 (3) | 3.2 (2) | 3.2 (2) | 3.35 (4) | 3.4 (5) | 3.88 (9) | 3.13 (1)
ute-s-92 | 25.8 (3) | 25.86 (4) | 26.25 (7) | 26.18 (6) | 26.11 (5) | 24.8 (1) | 27.16 (8) | 28.88 (9) | 32.67 (10) | 25.28 (2)
yor-f-83 | 41.7 (10) | 38.72 (5) | 39.67 (6) | 37.88 (4) | 36.22 (3) | 34.9 (1) | 41.31 (9) | 40.74 (8) | 40.53 (7) | 35.46 (2)
Avg. Rank | 6.41 (6) | 5.75 (5) | 6.91 (9) | 4.17 (4) | 3.25 (3) | 2.17 (2) | 6.75 (8) | 6.58 (7) | 8.50 (10) | 2.00 (1)
The best results obtained are highlighted in bold. The value in brackets shows the rank of the corresponding penalty value.
Table 9. Best results from our proposed approach (PGH-mGD) compared to results from the scientific literature for the ITC2007 exam dataset.

Instances | Müller [14] | Gogos et al. [59] | Atsuta et al. [60] | De Smet [61] | Pillay [62] | Leite et al. [32] | Abdul Rahman et al. [41] | Hamilton-Bryce et al. [63] | Alzaqebah and Abdullah [58] | Our Approach
Exam_1 | 4370 (1) | 5905 (6) | 8006 (9) | 6670 (8) | 12,035 (10) | 6207 (7) | 5231 (3) | 5517 (5) | 5154 (2) | 5328 (4)
Exam_2 | 400 (1) | 1008 (8) | 3470 (10) | 623 (7) | 3074 (9) | 535 (5) | 433 (4) | 538 (6) | 420 (3) | 407 (2)
Exam_3 | 10,049 (2) | 13,862 (7) | 18,622 (9) | - (10) | 15,917 (8) | 13,022 (6) | 9265 (1) | 10,325 (4) | 10,182 (3) | 11,692 (5)
Exam_4 | 18,141 (6) | 18,674 (7) | 22,559 (8) | - (10) | 23,582 (9) | 14,302 (1) | 17,787 (5) | 16,589 (4) | 15,716 (2) | 16,204 (3)
Exam_5 | 2988 (1) | 4139 (7) | 4714 (9) | 3847 (6) | 6860 (10) | 3829 (5) | 3083 (2) | 3632 (4) | 3350 (3) | 4300 (8)
Exam_6 | 26,950 (6) | 27,640 (7) | 29,155 (9) | 27,815 (8) | 32,250 (10) | 26,710 (5) | 26,060 (2) | 26,275 (4) | 26,160 (3) | 25,880 (1)
Exam_7 | 4213 (1) | 6683 (7) | 10,473 (8) | 5420 (4) | 17,666 (10) | 5508 (6) | 10,712 (9) | 4592 (3) | 4271 (2) | 5507 (5)
Exam_8 | 7861 (1) | 10,521 (6) | 14,317 (8) | - (10) | 16,184 (9) | 8716 (5) | 12,713 (7) | 8328 (4) | 7922 (2) | 8238 (3)
Avg. Rank | 2.38 (1) | 6.88 (7) | 8.75 (9) | 7.88 (8) | 9.38 (10) | 5.0 (6) | 4.13 (4) | 4.25 (5) | 2.50 (2) | 3.88 (3)
The best results obtained are highlighted in bold. The value in brackets shows the rank of the corresponding penalty value.

Mandal, A.K.; Kahar, M.N.M.; Kendall, G. Addressing Examination Timetabling Problem Using a Partial Exams Approach in Constructive and Improvement. Computation 2020, 8, 46. https://doi.org/10.3390/computation8020046