Introduction
An Energy Management System (EMS) aims at monitoring, controlling, and optimizing the generation and transmission of electrical power through the execution of a variety of applications such as state estimation, contingency analysis, optimal power flow analysis, and unit commitment. The EMS architecture generally consists of two distinct parts: a server where the applications are executed, and a human-machine interface (HMI) where the results of the applications are displayed to the network controller.
With the evolution of the power system network into a smarter grid, both parts of the EMS will soon reach their computational limits. Indeed, EMSs are required (a) to handle larger data sets from additional measurement devices such as Phasor Measurement Units (PMUs) or smart meters; (b) to address larger, possibly continental-sized, networks so as to minimize the effects of approximations that would otherwise have to be made at the boundaries of interconnected network areas; and (c) to increase situational awareness by executing and reporting application results at a faster rate, as well as to obtain energy market pricing information as accurately as possible to boost the efficiency of such markets.
While many different approaches are envisioned for EMSs to deal with larger and more complex power system networks, e.g., HPC servers (Huang, 2008), clusters (Pourreza, 2010), or dedicated hardware (Shi, 2008), this paper investigates the potential of General Purpose Graphics Processing Units (GPGPUs) for the server and HMI parts of the EMS. Compared to other approaches, the main advantage of GPGPU is that it provides cost-efficient commodity hardware with computational power equivalent to that of HPC servers from the previous decade.
The HMI investigation focuses on the applicability and performance improvement of GPGPU for the scattered data interpolation algorithms typically used to visually represent the overall state of a power network. The approach chosen is to compare several interpolation algorithms implemented on GPGPU with their equivalent implementations on CPU using the CPU's full capabilities, i.e., multithreading and SIMD extensions (SSE). The evaluation, performed on multiple data sets, shows that while the CPU implementations are outperformed by the GPGPU ones, their performance can already be acceptable when the full potential of the CPU architecture is used.
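The paper does not name the specific interpolation algorithms that were compared. As a minimal sketch of the kind of scattered data interpolation used to contour network-wide quantities (e.g., bus voltages over a geographic map), the following implements Shepard's inverse-distance weighting; the function name and parameters are illustrative, not taken from the paper. Each output pixel is an independent weighted sum over all sample points, which is why this class of algorithm maps naturally onto both SIMD/SSE and GPGPU execution.

```python
def idw_interpolate(points, values, query, power=2.0):
    """Shepard's inverse-distance-weighting interpolation.

    points : list of (x, y) tuples where measurements exist
    values : measured value at each point (e.g., bus voltage magnitude)
    query  : (x, y) location to interpolate
    power  : distance exponent; larger values localize the influence
    """
    num = 0.0  # weighted sum of values
    den = 0.0  # sum of weights
    for (px, py), v in zip(points, values):
        d2 = (query[0] - px) ** 2 + (query[1] - py) ** 2
        if d2 == 0.0:
            return v  # query coincides with a sample point: exact value
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den
```

In an HMI rendering loop this function would be evaluated once per display pixel, with no data dependency between pixels, so the same kernel runs unchanged whether the parallelism comes from CPU threads, SSE lanes, or GPGPU threads.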
Because GPGPUs implement a Single Instruction Multiple Data (SIMD) programming paradigm, the server-side investigation focuses on fine-grained parallelization, i.e., the sparse linear solver, rather than a coarse-grained one, e.g., different applications executed in parallel. Sparse linear solvers are at the heart of most EMS applications, and their performance largely determines the overall application performance. Due to the limited success in implementing sparse direct solvers on GPGPU (Kerr, 2009) and the promising results in implementing iterative solvers on SIMD architectures (Huang, 2008), we focus on implementing iterative solvers for power system applications and comparing their performance against highly performant sparse direct solver libraries, namely SPQR (Davis, 2011) and PARDISO (Schenk, 2008). The performance measurements, executed on a typical set of matrices found in various EMS applications, e.g., Jacobian and gain matrices from state estimation and DC power flow analysis, representing a wide span of system sizes ranging from 400 to 40,000 buses, show the limits of GPGPU for this type of application.
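The paper does not state which iterative solvers were implemented. As an illustrative sketch of the class of method in question, the following is a conjugate gradient solver, a classic iterative method applicable to symmetric positive-definite systems such as the gain matrix in state estimation; a dense list-of-lists matrix is used here for brevity, whereas a real EMS solver would use a sparse storage format. The dominant cost per iteration is a matrix-vector product, which is the operation that maps well onto SIMD hardware.

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A.

    A : dense n-by-n matrix as a list of row lists (sparse in practice)
    b : right-hand-side vector of length n
    Returns the approximate solution vector x.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual r = b - A x, with initial guess x = 0
    p = r[:]          # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        # Matrix-vector product: the SIMD-friendly hot spot
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        beta = rs_new / rs_old
        p = [r[i] + beta * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In contrast to direct solvers such as SPQR or PARDISO, which factor the matrix once, an iterative solver's runtime depends on the iteration count and thus on matrix conditioning, which is one reason the comparison on real EMS matrices is informative.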