Markov-based Redistribution Policy Model for Future Urban Mobility Networks

From DRLWiki

Revision as of 22:58, 12 May 2013 by Mikhail

In this work we present a Markov-based urban transportation model that captures the operation of a fleet of taxis in response to incident customer arrivals throughout the city. We consider three evaluation criteria: (1) minimizing the number of transportation resources, for urban planning; (2) minimizing fuel consumption, for the drivers; and (3) minimizing customer waiting time, to increase the overall quality of service. We present a practical policy and evaluate it by comparing against the actual observed redistribution of taxi drivers in Singapore. We show through simulation that our proposed policy is stable and improves substantially upon the default unmanaged redistribution of taxi drivers in Singapore with respect to all three evaluation criteria.
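The abstract describes the model only at a high level. As a minimal illustrative sketch of the kind of Markov redistribution dynamics involved — using made-up arrival rates and a simple demand-proportional policy, not the paper's actual policy or the Singapore data — one can model the fleet as a Markov chain over city regions and check that the fleet distribution converges to a stationary distribution:

```python
import numpy as np

# Hypothetical 3-region city; arrival rates are illustrative only.
arrival_rates = np.array([5.0, 2.0, 3.0])  # customer arrivals per region
demand = arrival_rates / arrival_rates.sum()

# A simple demand-proportional redistribution policy: from any region,
# an idle taxi moves to region j with probability proportional to
# demand[j]. This gives a row-stochastic Markov transition matrix P.
n = len(demand)
P = np.tile(demand, (n, 1))

# Power iteration: the fleet distribution converges to the stationary
# distribution pi of the chain (pi = pi @ P).
fleet = np.full(n, 1.0 / n)
for _ in range(100):
    fleet = fleet @ P

# Under this policy the stationary fleet distribution matches demand,
# so no region is systematically over- or under-supplied.
assert np.allclose(fleet, demand)
print(fleet)  # → [0.5 0.2 0.3]
```

Evaluating such a policy against the criteria above would then amount to measuring fleet size, empty-cruising distance (fuel), and queueing delay under the simulated redistribution.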

People

Mikhail Volkov

Javed Aslam

Daniela Rus

References

Volkov, Mikhail; Aslam, Javed; Rus, Daniela. "Markov-based Redistribution Policy Model for Future Urban Mobility Networks." In Proceedings of the 15th International IEEE Conference on Intelligent Transportation Systems (ITSC), pp. 1906–1911, September 2012.