Introduction
Games are often considered a suitable test environment for artificial intelligence techniques, and game developers are increasingly interested in approaches that can automatically create game content and thereby reduce the development load (Zhang et al., 2022). Procedural content generation (PCG) can be used to build all types of game content, including levels, environments, and components. As such, it can be regarded as a tool that aids the designer, offering speed advantages over manual design and the potential to discover novel content. PCG can create a realistic and aesthetically pleasing user experience (Korn et al., 2017) and has been used effectively in a wide range of commercial games. In such games, content is generated algorithmically, either during the design process or, in some cases, at runtime, rather than being authored by hand. A wide range of approaches has been used (Barriga, 2019; Zhang et al., 2022), and there is growing interest in search-based approaches (Blasco et al., 2023; Skjærseth et al., 2021; Zafar et al., 2020; H. Zhang et al., 2018).
The literature shows that evolutionary approaches such as genetic algorithms (GAs) are among the most widely used metaheuristic methods in search-based PCG (SBPCG), and they have been shown to solve a variety of complicated problems successfully (Alyaseri et al., 2024). In other domains, however, comparisons of metaheuristic algorithm performance have shown that in many cases the popularity of GAs is not entirely justified. Although GAs excel at many tasks, in some contexts they can exhibit limitations such as slow convergence or becoming trapped in local optima. This has motivated the exploration of alternative algorithms (Connor & Tilley, 2016; Panda, 2018; Schmidt et al., 2018; Youssef et al., 2001).
Therefore, the main purpose of this study was to compare GAs with two other metaheuristic algorithms—artificial bee colony (ABC) and particle swarm optimization (PSO)—in the context of PCG for computer games. These algorithms have demonstrated promising characteristics in other domains, such as faster convergence for PSO and improved exploration for ABC. PSO in particular offers advantages because of its simplicity and ease of implementation. PSO also leverages a form of collective memory by considering both the best position experienced by each individual particle and the best position discovered by the entire swarm. This information sharing helps particles explore the search space more effectively and avoid becoming trapped in local optima. In contrast, the GA focuses primarily on the selection and crossover of individuals, potentially discarding valuable information from less successful individuals (Couceiro et al., 2016).
ABC possesses a distinct exploration strategy of its own. In the ABC algorithm, both employed bees and onlookers play a role in exploring the search space. Employed bees exploit the food sources (potential solutions) they discovered in previous iterations, while onlookers, guided by the employed bees’ dancing behavior (a form of information sharing), select promising food sources for further exploration. Furthermore, scout bees handle diversification by randomly generating new food sources, ensuring the algorithm does not become stuck in local optima. This combination of exploitation (employed bees), informed exploration (onlookers), and random exploration (scouts) contributes to ABC’s potential effectiveness in finding good solutions (Abu-Mouti & El-Hawary, 2012).
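For concreteness, the mechanisms described above correspond to the canonical update rules reported in the PSO and ABC literature; the notation below (the inertia weight \omega, acceleration coefficients c_1 and c_2, random factors r_1, r_2, and \phi_{ij}, and the number of food sources SN) follows these standard formulations and is not drawn from the present study's specific implementation. In PSO, each particle i updates its velocity v_i and position x_i using its own best position p_i and the swarm's best position g:

v_i^{t+1} = \omega\, v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g - x_i^{t}\right), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1},

where r_1 and r_2 are drawn uniformly from [0, 1]. In ABC, an employed or onlooker bee produces a candidate solution by perturbing food source x_i along a randomly chosen dimension j relative to another randomly selected source x_k, and onlookers pick sources with probability proportional to fitness:

v_{ij} = x_{ij} + \phi_{ij}\left(x_{ij} - x_{kj}\right), \qquad P_i = \frac{\mathrm{fit}_i}{\sum_{n=1}^{SN} \mathrm{fit}_n},

where \phi_{ij} is drawn uniformly from [-1, 1]; a scout bee replaces any food source that has not improved after a fixed number of trials with a new randomly generated one.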
The present study specifically provides an example of automatically generating race tracks for a racing game. The aim was to identify whether alternative algorithms such as ABC and PSO, with the potential advantages they have shown in other domains, can offer similar benefits when applied to the creation of game content such as race tracks.