DTU Researchers Revolutionize Electric Ride-Sharing with AI

In the evolving landscape of urban mobility, researchers from the Technical University of Denmark, including Sten Elling Tingstad Jacobsen, Attila Lischka, Balázs Kulcsár, and Anders Lindman, are tackling the challenges posed by the shift towards electric, on-demand transportation services. Their work, published in the journal Transportation Research Part C: Emerging Technologies, focuses on optimizing fleet management for electric dial-a-ride services, which face unique constraints related to battery capacity and charging dynamics.

The Electric Dial-a-Ride Problem (E-DARP) is a complex challenge that extends the traditional dial-a-ride problem by incorporating the limitations of electric vehicles, such as battery capacity and nonlinear charging dynamics. These factors significantly increase the computational complexity, making it difficult to apply exact methods in real-time scenarios. To address this, the researchers proposed a deep reinforcement learning approach that utilizes a graph neural network encoder and an attention-driven route construction policy. This method operates directly on edge attributes like travel time and energy consumption, capturing the intricacies of real road networks, including non-Euclidean, asymmetric, and energy-dependent routing costs.
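The paper does not publish its architecture in this summary, but the core idea of an attention-driven construction policy that scores candidate stops using edge attributes can be sketched as follows. This is a minimal, hypothetical illustration: the function names, the two-feature edge representation (travel time, energy), and the fixed weight vector are all assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax; -inf entries get probability 0.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def select_next_stop(h_current, h_nodes, edge_feats, feasible, w_edge, temp=1.0):
    """One attention-style decoding step: score each candidate stop by the
    compatibility of its embedding with the current state, add a term driven
    by raw edge attributes (travel time, energy), mask stops that are
    infeasible (e.g. battery-infeasible), and return selection probabilities."""
    # Compatibility of the current embedding with every candidate embedding.
    scores = h_nodes @ h_current            # shape (n,)
    # Edge attributes enter the score directly, so asymmetric,
    # non-Euclidean, energy-dependent costs are respected.
    scores = scores + edge_feats @ w_edge   # shape (n,)
    scores = np.where(feasible, scores / temp, -np.inf)
    return softmax(scores)

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
n, d = 5, 8
h_cur = rng.normal(size=d)
h_nodes = rng.normal(size=(n, d))
edge_feats = rng.normal(size=(n, 2))       # [travel_time, energy] per outgoing edge
w_edge = np.array([-1.0, -0.5])            # penalize slow / energy-hungry edges
feasible = np.array([True, True, False, True, True])  # stop 2 masked out
probs = select_next_stop(h_cur, h_nodes, edge_feats, feasible, w_edge)
```

In a trained policy the embeddings would come from a graph neural network encoder and `w_edge` would be learned; here they are random stand-ins to show the masking and scoring mechanics.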

The proposed approach jointly optimizes routing, charging, and service quality without relying on Euclidean assumptions or handcrafted heuristics. The researchers evaluated their method in two case studies using ride-sharing data from San Francisco. In the first, the approach produced solutions within 0.4% of the best-known results while substantially reducing computation times. The second involved large-scale instances with up to 250 request pairs, realistic energy models, and nonlinear charging. Here, the learned policy outperformed an Adaptive Large Neighborhood Search (ALNS) metaheuristic by 9.5% in solution quality, achieving 100% service completion with sub-second inference times, versus hours for ALNS.

The research also included sensitivity analyses to quantify the impact of various factors such as battery capacity, fleet size, ride-sharing capacity, and reward weights. Robustness experiments demonstrated that deterministically trained policies could generalize effectively under stochastic conditions. This work highlights the potential of deep reinforcement learning to optimize fleet management in the context of electric, on-demand urban mobility services, offering practical applications for the energy sector and transportation industry.
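The "reward weights" varied in the sensitivity analyses typically scalarize competing objectives into a single training signal. A minimal sketch of such a weighted reward is below; the specific terms and weight values are illustrative assumptions, not the paper's actual reward function.

```python
def episode_reward(served, total_requests, travel_time_min, energy_kwh,
                   w_service=10.0, w_time=0.1, w_energy=0.05):
    """Hypothetical scalarized reward trading off service completion
    against travel time and energy use. The weights w_service, w_time,
    and w_energy are the kind of knobs a sensitivity analysis would vary
    to see how the learned policy's behavior shifts."""
    completion = served / total_requests          # in [0, 1]
    return (w_service * completion
            - w_time * travel_time_min            # penalize total drive time
            - w_energy * energy_kwh)              # penalize energy consumed

# Example: full service with some driving and charging cost.
r = episode_reward(served=10, total_requests=10,
                   travel_time_min=120.0, energy_kwh=40.0)
```

Raising `w_service` relative to the cost terms pushes the policy toward completing every request even at higher energy cost, which is consistent with the 100% service completion the authors report.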

This article is based on research available at arXiv.
