Dynamic and Effect-Driven Output Service Selection for IoT Environments Using Deep Reinforcement Learning
Abstract:

With the recent emergence of the Internet of Things (IoT), human users and IoT-based services interact through physical effects such as light and sound. It is therefore necessary to consider the quality with which IoT devices deliver physical effects to users when selecting services in IoT environments. However, traditional service-selection algorithms focus primarily on network-level Quality of Service (QoS) metrics such as latency and throughput. In this study, we improve the visual-service effectiveness metric developed in our previous work, which measures how effectively the physical effects of visual services are delivered to individual users by accounting for user- and application-specific factors. We evaluate the metric through a user study, and the results show that it reflects users' perceived effectiveness with high accuracy. We also investigate the use of virtual reality (VR) to imitate physical environments for efficient evaluation of the metric. Based on this metric, we develop a dynamic effect-driven output-service selection agent (DEOSA) that selects output services dynamically according to the effectiveness of service-effect delivery. By adopting a state-of-the-art reinforcement-learning algorithm, DEOSA learns a policy for selecting output services that generalizes to various environments. We evaluate DEOSA in simulated IoT environments and show that it successfully learns the optimal policy; in randomly generated test environments, it generally outperforms traditional greedy algorithms in terms of both the visual-service effectiveness metric and replacement overhead.
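The abstract evaluates DEOSA on two criteria, effect-delivery effectiveness and replacement overhead, but does not specify how these are combined into a learning signal. As a purely illustrative sketch (the function name, the linear combination, and the penalty weight are all assumptions, not the paper's actual formulation), a reward for an effect-driven selection agent might trade measured effectiveness against a penalty for replacing the currently selected output service:

```python
def selection_reward(effectiveness, replaced, overhead_weight=0.1):
    """Hypothetical reward for an effect-driven service-selection agent.

    effectiveness: measured effect-delivery effectiveness, assumed in [0, 1]
    replaced: True if the agent switched to a different output service
    overhead_weight: assumed penalty weight modeling replacement overhead
    """
    penalty = overhead_weight if replaced else 0.0
    return effectiveness - penalty

# Under this sketch, keeping an already-effective service scores higher
# than switching to an equally effective one, discouraging churn.
keep = selection_reward(0.8, replaced=False)
switch = selection_reward(0.8, replaced=True)
```

Such a shaped reward would let a reinforcement-learning agent prefer stable selections unless a replacement yields an effectiveness gain that outweighs the switching cost, which is consistent with the greedy-baseline comparison described above.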