
Title:
Optimizing Markov decision process state design for deep reinforcement learning manufacturing scheduling using Bayesian optimization (Open Access)
Source:
Journal of Computational Design & Engineering, Vol. 12, Issue 10 (Oct 2025), pp. 154-175 (22 pp.)
Database:
Complementary Index


This study investigates the application of Bayesian optimization to feature selection in Markov decision processes for production scheduling problems. Traditional supervised-learning feature selection methods are unsuitable here because of the absence of explicit target values and the dynamic nature of scheduling environments. To address this, a bi-level optimization framework is proposed, with Bayesian optimization at the upper level performing feature selection and reinforcement learning at the lower level evaluating each candidate feature set. Experiments conducted in dynamic flexible job shop and thin-film transistor liquid-crystal display production scheduling environments demonstrate that the framework enhances efficiency by focusing on impactful features, reducing computational complexity, and improving decision-making. The findings highlight the significance of aligning state representations with scheduling dynamics and provide a foundation for future research on systematic feature selection in complex environments. [ABSTRACT FROM AUTHOR]
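The bi-level structure described in the abstract can be sketched as an outer Bayesian search over binary feature masks whose objective is the reward returned by an inner RL evaluation. The sketch below is illustrative only and is not the authors' implementation: the inner RL training is stubbed with a synthetic reward, the feature names and weights are invented for the demo, and a Thompson-sampling Bayesian linear surrogate stands in for a full Gaussian-process Bayesian optimizer.

```python
import math
import random

random.seed(0)

N_FEATURES = 8
# Hypothetical ground-truth usefulness of each candidate state feature.
# In a real setting this is unknown; the RL return reveals it noisily.
TRUE_WEIGHTS = [0.9, 0.0, 0.7, 0.1, 0.0, 0.8, 0.05, 0.0]
COST_PER_FEATURE = 0.15  # penalty for larger state vectors


def evaluate_with_rl(mask):
    """Stub for the lower level: 'train' an RL scheduler on the masked
    state and return a noisy average reward. A real implementation would
    run e.g. DQN/PPO episodes in the scheduling simulator instead."""
    signal = sum(w for w, m in zip(TRUE_WEIGHTS, mask) if m)
    cost = COST_PER_FEATURE * sum(mask)
    return signal - cost + random.gauss(0, 0.05)


# Upper level: independent Gaussian posteriors over per-feature value,
# explored with Thompson sampling (a lightweight stand-in for GP-based
# Bayesian optimization over feature masks).
mu = [0.0] * N_FEATURES   # posterior means
var = [1.0] * N_FEATURES  # posterior variances
OBS_VAR = 0.05 ** 2       # assumed observation noise

best_mask, best_reward = None, -math.inf
for trial in range(60):
    # Thompson-sample per-feature weights; include a feature when its
    # sampled value exceeds the cost of enlarging the state vector.
    sample = [random.gauss(m, v ** 0.5) for m, v in zip(mu, var)]
    mask = [1 if s > COST_PER_FEATURE else 0 for s in sample]
    if not any(mask):
        mask[random.randrange(N_FEATURES)] = 1

    reward = evaluate_with_rl(mask)
    if reward > best_reward:
        best_mask, best_reward = mask, reward

    # Sequential Bayesian update: for each included feature, regress on
    # the residual after subtracting the current estimates of the other
    # included features (a crude credit-assignment scheme).
    total = reward + COST_PER_FEATURE * sum(mask)
    for i, m in enumerate(mask):
        if m:
            residual = total - sum(mu[j] for j in range(N_FEATURES)
                                   if mask[j] and j != i)
            prec = 1.0 / var[i] + 1.0 / OBS_VAR
            mu[i] = (mu[i] / var[i] + residual / OBS_VAR) / prec
            var[i] = 1.0 / prec

print("best mask:", best_mask)
print("selected features:", [i for i, m in enumerate(best_mask) if m])
```

The key design point mirrored here is that the outer optimizer never sees gradients or labels, only the scalar return of the inner RL evaluation, which is why sample-efficient Bayesian search is attractive at the upper level.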

Copyright of Journal of Computational Design & Engineering is the property of Oxford University Press / USA and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)