By Pedro Henrique Santos

User-Centric Markov Reward Model on the Example of Cloud Gaming


Markov reward models are commonly used in the analysis of systems by assigning a reward rate to each system state. Typically, rewards are defined based on system states and reflect the system's perspective. From a user's point of view, however, it is important to consider the changing system conditions and their dynamics while the user consumes a service. In this paper, we consider online cloud gaming as a use case. Cloud gaming essentially moves the processing power required to render a game away from the user into the cloud and streams the entire game experience to the user as a high-definition video. The video streaming bitrate is adapted according to the available network capacity. We conduct experiments on Google Stadia and provide a Markov model based on the measurement results to investigate a scenario where users share a bottleneck link. The key contributions are proper definitions of (i) the system-centric reward and (ii) the user-centric reward of the cloud gaming model, as well as (iii) an analysis of the relationships between those metrics. Our key result allows a simple computation of the user-centric rewards. We provide (iv) numerical results on the trade-off between user-centric rewards and the blocking probabilities for accessing the online cloud servers, using Kleinrock's power metric to identify operational points. This work provides relevant insights into how to integrate the user's perspective into the analysis of Markov reward models and serves as a blueprint for the analysis of services beyond cloud gaming.
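As background for the blocking analysis and operational-point selection mentioned above, here is a minimal illustrative sketch (an assumption for exposition, not the authors' actual model): the Erlang-B recursion gives the blocking probability at a shared bottleneck with a finite number of capacity units, and a normalized form of Kleinrock's power metric (throughput over delay, shown here for an M/M/1 queue) is maximized at a utilization of 0.5.

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Blocking probability B(a, n) via the standard Erlang-B recursion:
    B(a, 0) = 1,  B(a, n) = a*B(a, n-1) / (n + a*B(a, n-1))."""
    b = 1.0  # B(a, 0) = 1: with no capacity, every request is blocked
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def mm1_power(rho: float) -> float:
    """Normalized Kleinrock power for an M/M/1 queue: throughput divided by
    mean response time, up to a constant factor this is rho * (1 - rho)."""
    return rho * (1 - rho)

# Example: offered load a = 5 Erlangs on a link with n = 10 capacity units
print(erlang_b(5.0, 10))
# Power is maximized at rho = 0.5, a classic operational point
print(max(mm1_power(r / 100) for r in range(101)))
```

The recursion avoids the numerical overflow of the closed-form Erlang-B expression for large `n`, which is why it is the usual way to evaluate blocking probabilities in practice.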

In Proceedings of ITC 2022, Shenzhen, China, September 2022