
Model-Based Multi-Objective Reinforcement Learning by a Reward Occurrence Probability Vector

Author(s): Tomohiro Yamaguchi (Nara College, National Institute of Technology (KOSEN), Japan), Shota Nagahama (Nara College, National Institute of Technology (KOSEN), Japan), Yoshihiro Ichikawa (Nara College, National Institute of Technology (KOSEN), Japan), Yoshimichi Honma (Nara College, National Institute of Technology (KOSEN), Japan) and Keiki Takadama (The University of Electro-Communications, Japan)
Copyright: 2020
Pages: 27
Source title: Advanced Robotics and Intelligent Automation in Manufacturing
Source Author(s)/Editor(s): Maki K. Habib (The American University in Cairo, Egypt)
DOI: 10.4018/978-1-7998-1382-8.ch010


Abstract

This chapter describes solving multi-objective reinforcement learning (MORL) problems in which there are multiple conflicting objectives with unknown weights. Previous model-free MORL methods require a large number of calculations to collect the Pareto optimal set of V/Q-value vectors. In contrast, model-based MORL can reduce this calculation cost compared with model-free MORL. However, the previous model-based MORL method handles only deterministic environments. To address these issues, this chapter proposes a novel model-based MORL method based on a reward occurrence probability (ROP) vector with unknown weights. Experimental results are reported for stochastic learning environments with up to 10 states, 3 actions, and 3 reward rules. They show that the proposed method collects all Pareto optimal policies, with a total learning time of about 214 seconds in the largest setting (10 states, 3 actions, 3 rewards). As future research directions, ways to speed up the method and how to use non-optimal policies are discussed.
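To make the core idea concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes each stationary policy is summarized by a reward occurrence probability (ROP) vector with one entry per reward rule, that a policy is Pareto optimal when no other policy's ROP vector dominates it, and that fixing a weight vector reduces policy value to an inner product with the ROP vector. The function names (dominates, pareto_front, scalarized_value) and the example numbers are illustrative assumptions, not taken from the chapter.

# Illustrative sketch only (not the chapter's code): policies summarized by
# reward occurrence probability (ROP) vectors, one entry per reward rule.

def dominates(a, b):
    """True if ROP vector `a` dominates `b`: at least as large in every
    component and strictly larger in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(rop_vectors):
    """Collect the Pareto optimal set as (policy_id, ROP vector) pairs."""
    return [
        (pid, v)
        for pid, v in rop_vectors.items()
        if not any(dominates(u, v) for qid, u in rop_vectors.items() if qid != pid)
    ]

def scalarized_value(rop, weights):
    """Value of a policy once a weight vector is fixed: the inner product
    of the weight vector and the ROP vector."""
    return sum(w * p for w, p in zip(weights, rop))

# Hypothetical example: three candidate policies over 3 reward rules.
policies = {
    "pi_1": (0.6, 0.1, 0.2),
    "pi_2": (0.3, 0.4, 0.3),
    "pi_3": (0.2, 0.1, 0.1),  # dominated by pi_2, so it drops out of the front
}
print(pareto_front(policies))                                # pi_1 and pi_2 survive
print(scalarized_value(policies["pi_1"], (0.5, 0.3, 0.2)))   # 0.37

Under this reading, the Pareto front is collected once from the learned model's ROP vectors, and any later choice of weights only requires the cheap scalarization step rather than relearning.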
