diff --git "a/DtFRT4oBgHgl3EQfxjhb/content/tmp_files/2301.13642v1.pdf.txt" "b/DtFRT4oBgHgl3EQfxjhb/content/tmp_files/2301.13642v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/DtFRT4oBgHgl3EQfxjhb/content/tmp_files/2301.13642v1.pdf.txt" @@ -0,0 +1,4596 @@ +An Efficient Solution to s-Rectangular Robust Markov +Decision Processes +Navdeep Kumar1, Kfir Levy1, Kaixin Wang2, and Shie Mannor1 +1Technion +2National University of Singapore +February 1, 2023 +Abstract +We present an efficient robust value iteration for s-rectangular robust Markov +Decision Processes (MDPs) with a time complexity comparable to standard (non- +robust) MDPs which is significantly faster than any existing method. We do so by +deriving the optimal robust Bellman operator in concrete forms using our Lp water +filling lemma. We unveil the exact form of the optimal policies, which turn out to be +novel threshold policies with the probability of playing an action proportional to its +advantage. +1 +Introduction +In Markov Decision Processes (MDPs), an agent interacts with the environment and learns +to optimally behave in it [28]. However, the MDP solution may be very sensitive to little +changes in the model parameters [23]. Hence we should be cautious applying the solution of +the MDP, when the model is changing or when there is uncertainty in the model parameters. +Robust MDPs provide a way to address this issue, where an agent can learn to optimally +behave even when the model parameters are uncertain [15, 29, 18]. Another motivation to +study robust MDPs is that they can lead to better generalization [33, 34, 25] compared to +non-robust solutions. +Unfortunately, solving robust MDPs is proven to be NP-hard for general uncertainty +sets [32]. As a result, the uncertainty set is often assumed to be rectangular, which enables +the existence of a contractive robust Bellman operators to obtain the optimal robust value +function [24, 18, 22, 12, 32]. Recently, there has been progress in solving robust MDPs +for some sa-rectangular uncertainty sets via both value-based and policy-based methods +[30, 31]. An uncertainty set is said to be sa-rectangular if it can be expressed as a Cartesian +product of the uncertainty in all states and actions. It can be further generalized to a +s-rectangular uncertainty set if it can be expressed as a Cartesian product of the uncertainty +in all states only. Compared to sa-rectangular robust MDPs, s-rectangular robust MDPs +are less conservative and hence more desirable; however, they are also much more difficult +and poorly understood [32]. Currently, there are few works that consider s-rectangular Lp +robust MDPs where uncertainty set is further constrained by Lp norm, but they rely on +1 +arXiv:2301.13642v1 [cs.LG] 31 Jan 2023 + +black box methods which limits its applicability and offers little insights [7, 16, 9, 32]. No +effective value or policy based methods exist for solving any s-robust MDPs. Moreover, it is +known that optimal policies in s-rectangular robust MDPs can be stochastic, in contrast to +sa-rectangular robust MDPs and non-robust MDPs [32]. However, so far, nothing is known +about the stochastic nature of the optimal policies in s-rectangular MDPs. +In this work, we mainly focus on s-rectangular Lp robust MDPs. We first revise the +unrealistic assumptions made in the noise transition kernel in [9] and introduce forbidden +transitions, which leads to novel regularizers. 
Then we derive robust Bellman operator +(policy evaluation) for a s-rectangular robust MDPs in closed form which is equivalent to +reward-value-policy-regularized non-robust Bellman operator without radius assumption 5.1 +in [9]. We exploit this equivalence to derive an optimal robust Bellman operator in concrete +forms using our Lp-water pouring lemma which generalizes existing water pouring lemma for +L2 case [1]. We can compute these operators in closed form for p = 1, ∞ and exactly by a +simple algorithm for p = 2, and approximately by binary search for general p. We show that +the time complexity of robust value iteration for p = 1, 2 is the same as that of non-robust +value iteration. For general p, the complexity includes some additional log-factors due to +binary searches. +In addition, we derive a complete characterization of the stochastic nature of optimal +policies in s-rectangular robust MDPs. The optimal policies in this case, are threshold +policies, that plays only actions with positive advantage with probability proportional to +(p − 1)-th power to its advantage. +Related Work. For sa-rectangular R-contamination robust MDPs, [30] derived robust +Bellman operators which are equivalent to value-regularized-non-robust Bellman operators, +enabling efficient robust value iteration. Building upon this work, [31] derived robust policy +gradient which is equivalvent to non-robust policy gradient with regularizer and correction +terms. Unfortunately, these methods can’t be naturally generalized to s-rectangular robust +MDPs. +For s-rectangular robust MDPs, methods such as robust value iteration [6, 32], robust +modified policy iteration [19], partial robust policy iteration [16] etc tries to approximately +evaluate robust Bellman operators using variety of tools to estimate optimal robust value +function. The scalability of these methods has been limited due to their reliance on an +external black-box solver such as Linear Programming. +Previous works have explored robust MDPs from a regularization perspective [9, 10, 17, 11]. +Specifically, [9] showed that s-rectangular robust MDPs is equivalent to reward-value-policy +regularized MDPs, and proposed a gradient based policy iteration for s-rectangular Lp +robust MDPs ( where uncertainty set is s-rectangular and constrained by Lp norm). But +this gradient based policy improvement relies on black box simplex projection, hence very +slow and not scalable. +The detailed discussion of the above works can be found in the appendix. +2 +Preliminary +2.1 +Notations +For a set S, |S| denotes its cardinality. ⟨u, v⟩ := � +s∈S u(s)v(s) denotes the dot product +between functions u, v : S → R. +∥v∥q +p := (� +s|v(s)|p) +q +p denotes the q-th power of Lp +norm of function v, and we use ∥v∥p := ∥v∥1 +p and ∥v∥ := ∥v∥2 as shorthand. +For a +2 + +set C, ∆C := {a : C → R|a(c) ≥ 0, ∀c, � +c∈C ac = 1} is the probability simplex over +C. 0, 1 denotes all zero vector and all ones vector/function respectively of appropriate +dimension/domain. 1(a = b) := 1 if a = b, 0 otherwise, is the indicator function. For +vectors u, v, 1(u ≥ v) is component wise indicator vector, i.e. 1(u ≥ v)(x) = 1(u(x) ≥ v(x)). +A × B = {(a, b) | a ∈ A, b ∈ B} is cartesain product between set A and B. 
+2.2 +Markov Decision Processes +A Markov Decision Process (MDP) can be described as a tuple (S, A, P, R, γ, µ), where S is +the state space, A is the action space, P is a transition kernel mapping S × A to ∆S, R is a +reward function mapping S × A to R, µ is an initial distribution over states in S, and γ is a +discount factor in [0, 1). The expected discounted cumulative reward (return) is defined as +ρπ +(P,R) :=E +� ∞ +� +n=0 +γnR(sn, an) +��� s0 ∼ µ, π, P +� +. +The return can be written compactly as +ρπ +(P,R) = ⟨µ, vπ +(P,R)⟩, +(1) +[26] where vπ +(P,R) is the value function , defined as +vπ +(P,R)(s) := E +� ∞ +� +n=0 +γnR(sn, an) +��� s0 = s, π, P +� +. +(2) +Our objective is to find an optimal policy π∗ +(P,R) that maximizes the performance ρπ +(P,R). +This performance can be written as : +ρ∗ +(P,R) := max +π +ρπ +(P,R) = ⟨µ, v∗ +(P,R)⟩, +(3) +where v∗ +(P,R) := maxπ vπ +(P,R) is the optimal value function [26]. +The value function vπ +(P,R) and the optimal value function v∗ +(P,R) are the fixed points of the +Bellman operator T π +(P,R) and the robust Bellman operator T ∗ +(P,R), respectively [28]. These +γ-contraction operators are defined as follows: For any vector v, and state s ∈ S, +(T π +(P,R)v)(s) := +� +a +π(a|s) +� +R(s, a) + γ +� +s′ +P(s′|s, a)v(s′) +� +, +and +T ∗ +(P,R)v := max +π +T π +(P,R)v. +Therefore, the value iteration vn+1 := T ∗ +(P,R)vn converges linearly to the optimal value +function v∗ +(P,R). Given this optimal value function, the optimal policy can be computed as: +π∗ +(P,R) ∈ arg maxπ T π +(P,R)v∗ +(P,R). +Remark 1. The vector minimum of a set U of vectors is defined component wise, i.e. +(minu∈U u)(i) := minu∈U u(i). +This operation is well-defined only when there exists a +minimal vector u∗ ∈ U such that u∗ ⪯ u, ∀u ∈ U. The same holds for other operations such +as maximum, argmin, argmax, etc. +3 + +2.3 +Robust Markov Decision Processes +A robust Markov Decision Process (MDP) is a tuple (S, A, P, R, γ, µ) which generalizes the +standard MDP by containing a set of transition kernels P and set of reward functions R. +Let uncertainty set U = P × R be set of tuples of transition kernels and reward functions +[18, 24]. The robust performance ρπ +U of a policy π is defined to be its worst performance on +the entire uncertainty set U as +ρπ +U := +min +(P,R)∈U ρπ +(P,R). +(4) +Our objective is to find an optimal robust policy π∗ +U that maximizes the robust performance +ρπ +U, defined as +ρ∗ +U := max +π +ρπ +U. +(5) +Solving the above robust objectives 4 and 5 are strongly NP-hard for general uncertainty +sets, even if they are convex [32]. Hence, the uncertainty set U = P × R is commonly +assumed to be s-rectangular, meaning that R and P can be decomposed state-wise as +R = ×s∈SRs and P = ×s∈SPs. For further simplification, U = P × R is assumed to +decompose state-action-wise as R = ×(s,a)∈S×ARs,a and P = ×(s,a)∈S×APs,a, known as +sa-rectangular uncertainty set. Throughout the paper, the uncertainty set is assumed to +be s-rectangular (or sa-rectangular) unless stated otherwise. Under the s-rectangularity +assumption, for every policy π, there exists a robust value function vπ +U which is the minimum +of vπ +(P,R) for all (P, R) ∈ U, and the optimal robust value function v∗ +U which is the maximum +of vπ +U for all policies π [32], that is +vπ +U := +min +(P,R)∈U vπ +(P,R), +and +v∗ +U := max +π +vπ +U. +This implies, robust policy performance can be rewritten as +ρπ +U = ⟨µ, v�� +U⟩, +and +ρ∗ +U = ⟨µ, v∗ +U⟩. 
+Furthermore, the robust value function vπ +U is the fixed point of the robust Bellmen operator +T π +U [32, 18], defined as +(T π +U v)(s) := +min +(P,R)∈U +� +a +π(a|s) +� +R(s, a) + γ +� +s′ +P(s′|s, a)v(s′) +� +, +and the optimal robust value function v∗ +U is the fixed point of the optimal robust Bellman +operator T ∗ +U [18, 32], defined as +T ∗ +U v := max +π +T π +U v. +The optimal robust Bellman operator T ∗ +U and robust Bellman operators T π +U are γ contraction +maps for all policy π [32], that is +∥T ∗ +U v − T ∗ +U u∥∞ ≤ γ∥u − v∥∞, +∥T π +U v − T π +U u∥∞ ≤ γ∥u − v∥∞, +∀π, u, v. +So for all initial values vπ +0 , v∗ +0, sequences defined as +vπ +n+1 := T π +U vπ +n, +v∗ +n+1 := T ∗ +U v∗ +n +(6) +converges linearly to their respective fixed points, that is vπ +n → vπ +U and v∗ +n → v∗ +U. Given +this optimal robust value function, the optimal robust policy can be computed as: π∗ +U ∈ +arg maxπ T π +U v∗ +U [32]. This makes the robust value iteration an attractive method for solving +s-rectangular robust MDPs. +4 + +Table 1: p-variance +x +κx(v) +Remark +p +minω∈R∥v − ω1∥p +Binary search +∞ +maxs v(s)−mins v(s) +2 +Semi-norm +2 +�� +s +� +v(s) − +� +s v(s) +S +�2 +Variance +1 +�⌊(S+1)/2⌋ +i=1 +v(si) +Top half - lower half +− �S +i=⌈(S+1)/2⌉ v(si) +where v is sorted, i.e. v(si) ≥ v(si+1) +∀i. +3 +Method +In this section, we consider constraining the uncertainty set around nominal values by the Lp +norm, which is a natural way of limiting the broad class of s (or sa)-rectangular uncertainty +sets [9, 16, 3]. We will then derive robust Bellman operators for these uncertainty sets, which +can be used to obtain robust value functions. This will be done separately for sa-rectangular +in Subsection 3.1 and s-rectangular case in Section 3.2. +We begin by making a few useful definitions. We reserve q for Holder conjugate of p, i.e. +1 +p + 1 +q = 1. Let p-variance function κp : S → R be defined as +κp(v) := min +ω∈R∥v − ω1∥p. +(7) +For p = 1, 2, ∞, the p-variance function κp has intuitive closed forms as summarized in Table +1. For general p, it can be calculated by binary search in the range [mins v(s), maxs v(s)] ( +see appendix I for proofs). +3.1 +(Sa)-rectangular Lp robust Markov Decision Processes +In accordance with [9], we define sa-rectangular Lp constrained uncertainty set Usa +p +as +Usa +p := (P0 + P) × (R0 + R) +where P, R are noise sets around nominal kernel P0 and nominal reward R0 respectively. +Furthermore, noise sets are sa-rectangular, that is +P = ×s∈S,a∈APs,a, +and +R = ×s∈S,a∈ARs,a, +and each component are bounded by Lp norm that is +Rs,a = +� +Rs,a ∈ R +��� |Rs,a| ≤ αs,a +� +, +and +Ps,a = {Ps,a : S → R +��� +� +s′ +Ps,a(s′) = 0 +� +�� +� +simplex condition +, ∥Ps,a∥p ≤ βs,a} +5 + +with radius vector α and β. Radius vector β is chosen small enough so that all the transition +kernels in (P0 + P) are well defined. Further, all transition kernels in (P0 + P) must have +the sum of each row equal to one, with P0 being a valid transition kernel satisfying this +requirement. This implies that the elements of P must have a sum of zero across each row +as ensured by simplex condition above. +Our setting differs from [9] as they didn’t impose this simplex condition on the kernel +noise, which renders their setting unrealistic as not all transition kernels in their uncertainty +set satisfy the properties of transition kernels. This makes our reward regularizer dependent +on the q-variance of the value function κq(v), instead of the q-th norm of value function ∥v∥q +in [9]. 
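The p-variance κp defined in (7) is the only non-standard quantity entering the operators below, and the closed forms of Table 1 are immediate to implement. The following is a minimal numpy sketch of κp(v) for p = 1, 2, ∞; the helper name kappa_p and the vector layout are our own illustration rather than the paper's code.

```python
import numpy as np

def kappa_p(v, p):
    """p-variance kappa_p(v) = min_w ||v - w*1||_p, closed forms of Table 1."""
    v = np.asarray(v, dtype=float)
    if p == np.inf:
        # distance to the midpoint of the peaks: (max v - min v) / 2
        return (v.max() - v.min()) / 2.0
    if p == 2:
        # Euclidean deviation around the mean of v
        return np.sqrt(np.sum((v - v.mean()) ** 2))
    if p == 1:
        # "top half minus lower half" of the sorted values, i.e. sum_s |v(s) - median(v)|
        return np.sum(np.abs(v - np.median(v)))
    raise NotImplementedError("general p: binary search over w in [min v, max v] (Appendix I)")
```

For general p, the same quantity is obtained by a scalar binary search over ω ∈ [mins v(s), maxs v(s)], as detailed in Appendix I.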
+The main result of this subsection below states that robust Bellman operators can be +evaluated using only nominal values and regularizers. +Theorem 1. sa-rectangular Lp robust Bellman operators are equivalent to reward-value +regularized (non-robust) Bellman operators: +(T π +Usa +p v)(s) = +� +a +π(a|s) +� +−αs,a − γβs,aκq(v) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� +, +and +(T ∗ +Usa +p v)(s) = max +a∈A +� +−αs,a − γβs,aκq(v) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� +. +Proof. The proof in appendix, it mainly consists of two parts: a) Separating the noise from +nominal values. b) The reward noise to yields the term −αs,a and noise in kernel yields +−γβs,aκq(v). +Note, the reward penalty is proportional to both the uncertainty radiuses and a novel +variance function κp(v). +We recover non-robust value iteration by putting uncertainty radiuses (i.e. αs,a, βs,a) to +zero, in the above results. Furthermore, the same is true for all subsequent robust results in +this paper. +Q-Learning +The above result immediately implies the robust value iteration, and also suggests the Q-value +iteration of the following form +Qn+1(s, a) = max +a +� +R0(s, a) − αs,a − γβs,aκq(vn) + +� +s′ +P0(s′|s, a) max +a +Qn(s′, a′) +� +, +where vn(s) = maxa Qn(s, a), which is further discussed in appendix E. +Observe that value-variance κp(v) can be estimated online, using batches or other more +sophisticated methods. This paves the path for generalizing to a model-free setting similar +to [30]. +Forbidden Transitions +Now, we focus on the cases where P0(s′|s, a) = 0 for some states s′, that is, forbidden +transitions. In many practical situations, for a given state, many transitions are impossible. +For example, consider a grid world example where only a single-step jumps (left, right, up, +down) are allowed, so in this case, the probability of making a multi-step jump is impossible. +6 + +Table 2: Optimal robust Bellman operator evaluation +U +(T ∗ +U v)(s) +remark +Us +p +min x +s.t. +��� +� +Qs − x1 +� +◦1 +� +Qs ≥ x +���� +p= σq(v, s) +Solve by binary search +Us +1 +maxk +�k +i=1 Q(s,ai)−σ∞(v,s) +k +Highest penalized average +Us +2 +By algorithm 1 +High mean and variance +Us +∞ +maxa∈A Q(s, a) − σ1(v, s) +Best action +Usa +p +maxa∈A +� +Q(s, a) − αsa − γβsaκq(v) +� +Best penalized action +nr +maxa Q(s, a) +Best action +where nr stands for Non-Robust MDP, +Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′), +sorted Q-value: Q(s, a1) ≥ · · · ≥ Q(s, aA) +, σq(v, s) = αs + γβsκq(v), Qs = Q(s, ·), +and ◦ is Hadamard product. +So upon adding noise to the kernel, the system should not start making impossible transitions. +Therefore, noise set P must satisfy additional constraint: For any (s, a) if P0(s′|s, a) = 0 +then +P(s′|s, a) = 0, +∀P ∈ P. +Incorporating this constraint without much change in the theory is one of our novel contri- +bution, and is discussed in the appendix C. +3.2 +S-rectangular Lp robust Markov Decision Processes +In this subsection, we discuss the core contribution of this paper: the evaluation of robust +Bellman operators for the s-rectangular uncertainty set. +We begin by defining s-rectangular Lp constrained uncertainty set Us +p as +Us +p := (P0 + P) × (R0 + R) +where noise sets are s-rectangular, +P = ×s∈SPs, +and +R = ×s∈SRs, +and each component are bounded by Lp norm, +Rs = +� +Rs : A → R +��� ∥Rs∥p ≤ αs +� +, +and +Ps = +� +Ps : S × A → R +��� ∥Ps∥p ≤ βs, +� +s′ +Ps(s′, a) = 0, ∀a +� +, +with radius vectors α and small enough β. 
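Before treating this richer set, it is instructive to see how little the sa-rectangular operator of Theorem 1 adds on top of standard value iteration. The sketch below is our own illustration (kappa_p is the helper sketched earlier; the nominal kernel P0 is stored as an (S, A, S) array and R0, α, β as (S, A) arrays) and is restricted to p ∈ {1, 2, ∞}.

```python
import numpy as np

def sa_robust_bellman(v, P0, R0, alpha, beta, gamma, kappa_q_v):
    """One application of the sa-rectangular Lp optimal robust Bellman operator (Theorem 1).
    kappa_q_v is the scalar kappa_q(v) for the Holder conjugate q of p."""
    Q = R0 - alpha - gamma * beta * kappa_q_v + gamma * (P0 @ v)  # shape (S, A)
    return Q.max(axis=1)

def sa_robust_value_iteration(P0, R0, alpha, beta, gamma, p, tol=1e-6):
    """Robust value iteration v_{n+1} = T* v_n for U^sa_p, here with p in {1, 2, inf}."""
    q = np.inf if p == 1 else (1 if p == np.inf else p / (p - 1))  # Holder conjugate of p
    v = np.zeros(R0.shape[0])
    while True:
        v_new = sa_robust_bellman(v, P0, R0, alpha, beta, gamma, kappa_p(v, q))
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
```

The s-rectangular operator derived next differs mainly in the greedy step: the inner maximum over actions is replaced by the solution of a scalar equation.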
+The result below shows that, compared to the sa-rectangular case, the policy evaluation +for the s-rectangular case has an extra dependence on the policy. +7 + +Theorem 2. (Policy Evaluation) S-rectangular Lp robust Bellman operator is equivalent to +reward-value-policy regularized (non-robust) Bellman operator: +(T π +Uspv)(s) = − +� +αs + γβsκq(v) +� +∥πs∥q + +� +a +π(a|s) +� +R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� +, +where ∥πs∥q is q-norm of the vector π(·|s) ∈ ∆A. +Proof. The proof in the appendix: the techniques are similar to as its sa-rectangular +counterpart. +The reward penalty in this case has an additional dependence on the norm of the policy +(∥πs∥q). This norm is conceptually similar to entropy regularization � +a π(a|s) ln(π(a|s)), +which is widely studied in the literature [21, 13, 20, 14, 27], and other regularizers such as +� +a π(a|s)tsallis( 1−π(a|s) +2 +), � +a π(a|s)cos(cos( π(a|s) +2 +)), etc. +Note: These regularizers, which are convex functions, are often used to promote stochas- +ticity in the policy and thus improve exploration during learning. However, the above result +shows another benefit of these regularizers: they can improve robustness, which in turn can +lead to better generalization. +In literature, the above regularizers are scaled with arbitrary chosen constant, here we +have the different constant αs + γβsκq(v) for different states. +This extra dependence makes the policy improvement a more challenging task and thus, +presents a richer theory. +Theorem 3. (Policy improvement) For any vector v and state s, (T ∗ +Uspv)(s) is the minimum +value of x that satisfies +� � +a +� +Q(s, a) − x +�p +1 +� +Q(s, a) ≥ x +�� 1 +p = σ, +(8) +where Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′), and σ = αs + γβsκq(v). +Proof. The proof is in the appendix; the main steps are: +(T ∗ +Uspv)(s) = max +π (T π +Uspv)(s), +(from definition) +( Using policy evaluation Theorem 2) += max +π +� +(T π +(P0,R0)v)(s)− +� +αs + γβsκq(v) +� +∥πs∥q +� += max +πs∈∆A⟨πs, Qs⟩ − σ∥πs∥q +( where Qs = Q(·|s)). +The solution to the above optimization problem is technically complex. Specifically, for +p = 2, the solution is known as the water filling/pouring lemma [1], we generalize it to the +Lp case, in the appendix. +To better understand the nature of (8), lets look at the ’sub-optimality distance’ function +g, +g(x) := +� � +a +� +Q(s, a) − x +�p +1 +� +Q(s, a) ≥ x +�� 1 +p . +8 + +Algorithm 1 s-rectangular L2 robust Bellman operator +(see algorithm 1 of [1] +Input: v, s, x = Q(s, ·), and σ = αs + γβsκq(v) +Output: (T ∗ +Uspv)(s) +1: Sort x such that x1 ≥ x2, · · · ≥ xA. +2: Set k = 0 and λ = x1 − σ +3: while k ≤ A − 1 and λ ≤ xk do +4: +k = k + 1 +5: +λ = 1 +k +� +k +� +i=1 +xi − +� +� +� +�kσ2 + ( +k +� +i=1 +x2 +i − k +k +� +i=1 +xi)2 +� +6: end while +7: return λ +The g(x) is the cumulative difference between x and the Q-values of actions whose +Q-value is greater than x. The function is monotonically decreasing, with a lower bound +of σ at x = maxa Q(s, a) − σ and a value of zero for all x ≥ maxa Q(s, a). Since, (T ∗ +Uspv)(s) +is the value of x at which the "sub-optimality distance" g(x) is equal to the "uncertainty +penalty" σ. Hence, (8) can be approximately solved using a binary search between the +interval [maxa Q(s, a) − σ, +maxa Q(s, a)]. +We invite the readers to consider the dependence of (T ∗ +Uspv)(s) on p, αs, and βs, specifically: +1. If αs = βs = 0 then σ = 0 which implies (T ∗ +Uspv)(s) = maxa Q(s, a), same as non-robust +case. +2. 
If p = ∞ then (T ∗ +Uspv)(s) = maxa Q(s, a) − σ, as in the sa-rectangular case. +3. For p = 1, 2, (8) becomes linear and quadratic equation respectively, hence can be +solved exactly. +4. As αs and βs increase, σ increases, resulting in a decrease in (T ∗ +Uspv)(s) at a rate that +becomes smaller as σ increases. When σ is sufficiently small, (T ∗ +Uspv)(s) = maxa Q(s, a)− +σ. +Solution to (8) can be obtained in closed form for the cases of p = 1, ∞, exactly by +algorithm 1 for p = 2, and approximately by binary search for general p, as summarized in +table 2. +In this section, we have demonstrated that robust Bellman operators can be efficiently +evaluated for both sa and s rectangular Lp robust MDPs, thus enabling efficient robust +value iteration. In the following sections, we discuss the nature of optimal policies and the +time complexity of robust value iteration. Finally, we present experiments validating the +time complexity of robust value iteration. +4 +Optimal Policies +In the previous sections, we discussed how to efficiently obtain the optimal robust value +functions. This section focuses on utilizing these optimal robust value functions to derive +9 + +Table 3: Optimal Policy +U +π∗ +U(a|s) ∝ +Remark +Us +p +A(s, a)p−11(A(s, a) ≥ 0) +Top actions proportional to +(p − 1)-th power of advantage +Us +1 +1(A(s, a) ≥ 0) +Top actions with uniform probability +Us +2 +A(s, a)1(A(s, a) ≥ 0) +Top actions proportion to advantage +Us +∞ +1(A(s, a) = 0) +Best action +Usa +p +1(A(s, a) = maxa A(s, a)) +Best regularized action +(P0, R0) +1(A(s, a) = 0) +Non-robust MDP: Best action +where Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v∗ +U(s′), and A(s, a) = Q(s, a) − v∗ +U(s). +the optimal robust policy using +π∗ +U ∈ arg max +π +T π +U v∗ +U. +This implies, the robust optimal policy π∗ +U(·|s) at state s, is the policy π that maximizes +� +a +π(a|s) +min +(P,R)∈U +� +R(s, a) + γ +� +s′ +P(s′|s, a)v∗ +U(s′) +� +. +Non-robust MDP admits a deterministic optimal policy that maximizes the optimal +Q-value Q(s, a) := R(s, a) + γ � +s′ P(s′|s, a)v∗ +(P,R)(s′). +sa-rectangular robust MDPs are known to admit a deterministic optimal robust +policy [18, 24]. Moreover, from Theorem 1, it clear that a sa-rectangular Lp robust MDP +has a deterministic optimal robust policy that maximizes the regularized Q-value Q(s, a) = +−αs,a − γβs,aκq(v) + R0(s, a) + γ � +s′ P0(s′|s, a)v∗ +Usa +p (s′). +s-rectangular robust MDPs: For this case, it is known that all optimal robust +policies can be stochastic [32], however, it was not previously known what the nature of +this stochasticity was. The result below provides the first explicit characterization of robust +optimal policies. +Theorem 4. The optimal robust policy π∗ +Usp can be computed using optimal robust value +function as: +π∗ +Usp(a|s) ∝ [Q(s, a) − v∗ +Usp(s)]p−11 +� +Q(s, a) ≥ v∗ +Usp(s) +� +where Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v∗ +Us +p(s). +The above policy is a threshold policy that takes actions with a positive advantage, which +is proportional to the advantage function, while giving more weight to actions with higher +advantages and avoiding playing actions that are not useful. This policy is different from +the optimal policy in soft-Q learning with entropy regularization, which is a softmax policy +10 + +Algorithm 2 Online s-rectangular Lp robust value iteration +Input: Initialize Q, v randomly, s0 ∼ µ, and n = 0. +Output: v = v∗ +Usp. +1: while not converged; n = n + 1 do +2: +Estimate κp(v) using table 1. 
+3: +Approximate (T ∗ +Uspv)(sn) using table 2 and update +v(sn) = v(sn) + ηn[(T ∗ +Uspv)(sn) − v(sn)]. +4: +Play action an = a with probability proportional to +[Q(sn, a) − v(sn)]p−11(Q(sn, a) ≥ v(sn)), +and get next state sn+1 from the environment. +5: +Update Q-value: +Q(sn, an) =Q(sn, an) + η′ +n[R(sn, an) + γv(sn+1) − Q(sn, an)]. +6: end while +Table 4: Relative running cost (time) for value iteration +S +A +nr +Usa +1 +LP +Us +1 LP +Usa +1 +Usa +2 +Usa +∞ +Us +1 +Us +2 +Us +∞ +Usa +10 +Us +10 +10 +10 +1 +1438 +72625 +1.7 +1.5 +1.5 +1.4 +2.6 +1.4 +5.5 +33 +30 +10 +1 +6616 +629890 +1.3 +1.4 +1.4 +1.5 +2.8 +3.0 +5.2 +78 +50 +10 +1 +6622 +4904004 +1.5 +1.9 +1.3 +1.2 +2.4 +2.2 +4.1 +41 +100 +20 +1 +16714 +NA +1.4 +1.5 +1.5 +1.1 +2.1 +1.5 +3.2 +41 +nr stands for Non-robust MDP +of the form π(a|s) ∝ eη(Q(a|s)−v(s)) [14, 21, 27]. To the best of our knowledge, this type of +policy has not been presented in literature before. +The special cases of the above theorem for p = 1, 2, ∞ along with others are summarized +in table 3. +5 +Time complexity +In this section, we examine the time complexity of robust value iteration: +vn+1 := T ∗ +U vn +for different Lp robust MDPs assuming the knowledge of nominal values (P0, R0). Since, the +optimal robust Bellman operator T ∗ +U is γ-contraction operator [32], meaning that it requires +only O(log( 1 +ϵ )) iterations to obtain an ϵ-close approximation of the optimal robust value. +The main challenge is to calculate the cost of one iteration. +The evaluation of the optimal robust Bellman operators in Theorem 1 and Theorem 3 +has three main components. A) Computing κp(v), which can be done differently depending +11 + +Table 5: Time complexity +Total cost O +Non-Robust MDP +log(1/ϵ)S2A +Usa +1 +log(1/ϵ)S2A +Usa +2 +log(1/ϵ)S2A +Usa +∞ +log(1/ϵ)S2A +Us +1 +log(1/ϵ)(S2A + SA log(A)) +Us +2 +log(1/ϵ)(S2A + SA log(A)) +Us +∞ +log(1/ϵ)S2A +Usa +p +log(1/ϵ) +� +S2A + S log(S/ϵ) +� +Us +p +log(1/ϵ) +� +S2A + SA log(A/ϵ) +� +Convex U +Strongly NP Hard +on the value of p, as shown in table 1. B) Computing the Q-value from v, which requires +O(S2A) in all cases. And finally, C) Evaluating optimal robust Bellman operators from +Q-values, which requires different operations such as sorting of the Q-value, calculating the +best action, and performing a binary search, etc., as shown in table 2. The overall complexity +of the evaluation is presented in table 5, with the proofs provided in appendix L. +We can observe that when the state space S is large, the complexity of the robust MDPs +is the same as that of the non-robust MDPs, as the complexity of all robust MDPs is the +same as non-robust MDPs at the limit S → ∞ (keeping action space A and tolerance ϵ +constant). This is verified by our experiments, thus concluding that the Lp robust MDPs +are as easy as non-robust MDPs. +6 +Experiments +In this section, we present numerical results that demonstrate the effectiveness of our methods, +verifying our theoretical claims. +Table 4 and Figure 1 demonstrate the relative cost (time) of robust value iteration +compared to non-robust MDP, for randomly generated kernel and reward functions with +varying numbers of states S and actions A. The results show that s and sa-rectangular +MDPs are indeed costly to solve using numerical methods such as Linear Programming (LP). +Our methods perform similarly to non-robust MDPs, especially for p = 1, 2, ∞. 
For general +p, binary search is required for acceptable tolerance, which requires 30−50 iterations, leading +to a little longer computation time. +As our complexity analysis shows, value iteration’s relative cost converges to 1 as the +number of states increases while keeping the number of actions fixed. This is confirmed by +Figure 1. +The rate of convergence for all the settings tested was the same as that of the non-robust +setting, as predicted by the theory. The experiments ran a few times, resulting in some +stochasticity in the results, but the trend is clear. Further details can be found in section G. +12 + +Figure 1: Relative cost of value iteration w.r.t. non-robust MDP at different S with fixed +A = 10. +13 + +Relative cost of value iteration +U^sa 1 +1.5 +U^s 1 +relative cost to non-robust MDPs +non-robust +1.4 +L.3 +1.0 +0 +200 +600 +800 +400 +1000 +1200 +1400 +number of states7 +Conclusion and future work +We present an efficient robust value iteration for s-rectangular Lp-robust MDPs. Our method +can be easily adapted to an online setting, as shown in Algorithm 2 for s-rectangular Lp-robust +MDPs. Algorithm 2 is a two-time-scale algorithm, where the Q-values are approximated at +a faster time scale and the value function is approximated from the Q-values at a slower +time scale. The p-variance function κp can be estimated in an online fashion using batches +or other sophisticated methods. The convergence of the algorithm can be guaranteed from +[8]; however, its analysis is left for future work. +Additionally, we introduce a novel value regularizer (κp) and a novel threshold policy +which may help to obtain more robust and generalizable policies. +Further research could focus on other types of uncertainty sets, potentially resulting in +different kinds of regularizers and optimal policies. +References +[1] Oren Anava and Kfir Levy. k*-nearest neighbors: From global to local. Advances in +neural information processing systems, 29, 2016. +[2] Mahsa Asadi, Mohammad Sadegh Talebi, Hippolyte Bourel, and Odalric-Ambrym +Maillard. Model-based reinforcement learning exploiting state-action equivalence. CoRR, +abs/1910.04077, 2019. +[3] Peter Auer, Thomas Jaksch, and Ronald Ortner. +Near-optimal regret bounds for +reinforcement learning. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, +Advances in Neural Information Processing Systems, volume 21. Curran Associates, Inc., +2008. +[4] Peter Auer and Ronald Ortner. Logarithmic online regret bounds for undiscounted +reinforcement learning. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in +Neural Information Processing Systems, volume 19. MIT Press, 2006. +[5] Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds +for reinforcement learning. In Doina Precup and Yee Whye Teh, editors, Proceedings of +the 34th International Conference on Machine Learning, volume 70 of Proceedings of +Machine Learning Research, pages 263–272. PMLR, 06–11 Aug 2017. +[6] J. Andrew Bagnell, Andrew Y. Ng, and Jeff G. Schneider. Solving uncertain markov +decision processes. Technical report, Carnegie Mellon University, 2001. +[7] Bahram Behzadian, Marek Petrik, and Chin Pang Ho. Fast algorithms for l_\infty- +constrained s-rectangular robust mdps. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. +Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing +Systems, volume 34, pages 25982–25992. Curran Associates, Inc., 2021. +[8] Vivek Borkar. 
Stochastic Approximation: A Dynamical Systems Viewpoint. 01 2008. +[9] Esther Derman, Matthieu Geist, and Shie Mannor. Twice regularized mdps and the +equivalence between robustness and regularization, 2021. +14 + +[10] Esther Derman and Shie Mannor. Distributional robustness and regularization in +reinforcement learning, 2020. +[11] Benjamin Eysenbach and Sergey Levine. Maximum entropy rl (provably) solves some +robust rl problems, 2021. +[12] Vineet Goyal and Julien Grand-Clément. Robust markov decision process: Beyond +rectangularity, 2018. +[13] Jean-Bastien Grill, Omar Darwiche Domingues, Pierre Menard, Remi Munos, and +Michal Valko. Planning in entropy-regularized markov decision processes and games. +In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Gar- +nett, editors, Advances in Neural Information Processing Systems, volume 32. Curran +Associates, Inc., 2019. +[14] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement +learning with deep energy-based policies, 2017. +[15] Grani Adiwena Hanasusanto and Daniel Kuhn. Robust data-driven dynamic program- +ming. In C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, +editors, Advances in Neural Information Processing Systems, volume 26. Curran Asso- +ciates, Inc., 2013. +[16] Chin Pang Ho, Marek Petrik, and Wolfram Wiesemann. Partial policy iteration for +l1-robust markov decision processes, 2020. +[17] Hisham Husain, Kamil Ciosek, and Ryota Tomioka. Regularized policies are reward +robust, 2021. +[18] Garud N. Iyengar. Robust dynamic programming. Mathematics of Operations Research, +30(2):257–280, May 2005. +[19] David L. Kaufman and Andrew J. Schaefer. Robust modified policy iteration. INFORMS +J. Comput., 25:396–410, 2013. +[20] Xiang Li, Wenhao Yang, and Zhihua Zhang. A Regularized Approach to Sparse Optimal +Policy in Reinforcement Learning. Curran Associates Inc., Red Hook, NY, USA, 2019. +[21] Tien Mai and Patrick Jaillet. Robust entropy-regularized markov decision processes, +2021. +[22] Shie Mannor, Ofir Mebel, and Huan Xu. Robust mdps with k-rectangular uncertainty. +Math. Oper. Res., 41(4):1484–1509, nov 2016. +[23] Shie Mannor, Duncan Simester, Peng Sun, and John N. Tsitsiklis. Bias and variance in +value function estimation. In Proceedings of the Twenty-First International Conference +on Machine Learning, ICML ’04, page 72, New York, NY, USA, 2004. Association for +Computing Machinery. +[24] Arnab Nilim and Laurent El Ghaoui. Robust control of markov decision processes with +uncertain transition matrices. Oper. Res., 53:780–798, 2005. +[25] Charles Packer, Katelyn Gao, Jernej Kos, Philipp Krähenbühl, Vladlen Koltun, and +Dawn Song. Assessing generalization in deep reinforcement learning, 2018. +15 + +[26] Martin L. Puterman. Markov decision processes: Discrete stochastic dynamic program- +ming. In Wiley Series in Probability and Statistics, 1994. +[27] John Schulman, Xi Chen, and Pieter Abbeel. Equivalence between policy gradients and +soft q-learning, 2017. +[28] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. +The MIT Press, second edition, 2018. +[29] Aviv Tamar, Shie Mannor, and Huan Xu. Scaling up robust mdps using function +approximation. In Eric P. Xing and Tony Jebara, editors, Proceedings of the 31st +International Conference on Machine Learning, volume 32 of Proceedings of Machine +Learning Research, pages 181–189, Bejing, China, 22–24 Jun 2014. PMLR. +[30] Yue Wang and Shaofeng Zou. 
Online robust reinforcement learning with model uncer- +tainty, 2021. +[31] Yue Wang and Shaofeng Zou. Policy gradient method for robust reinforcement learning, +2022. +[32] Wolfram Wiesemann, Daniel Kuhn, and Breç Rustem. Robust markov decision processes. +Mathematics of Operations Research, 38(1):153–183, 2013. +[33] Huan Xu and Shie Mannor. Robustness and generalization, 2010. +[34] Chenyang Zhao, Olivier Sigaud, Freek Stulp, and Timothy M. Hospedales. Investigating +generalisation in continuous deep reinforcement learning, 2019. +How to read appendix +1. Section A contains related work. +2. Section B contains additional properties and results that couldn’t be included in the +main section for the sake of clarity and space. Many of the results in the main paper +is special cases of the results in this section. +3. Section C contains the discussion on zero transition kernel (forbidden transitions). +4. Section D contains a possible connection this work to UCRL. +5. Section G contains additional experimental results and a detailed discussion. +6. All the proofs of the main body of the paper is presented in the section K and L. +7. Section I contains helper results for section K. Particularly, it discusses p-mean function +ωp and p-variance function κp. +8. Section J contains helper results for section K. Particularly, it discusses Lp water +pouring lemma, necessary to evaluate robust optimal Bellman operator (learning) for +s-rectangular Lp robust MDPs. +9. Section L contains time complexity proof for model based algorithms. +16 + +10. Section E develops Q-learning machinery for (sa)-rectangular Lp robust MDPs based +on the results in the main section. It is not used in the main body or anywhere +else, but this provides a good understanding for algorithms proposed in section F for +(sa)-rectangular case. +11. Section F contains model-based algorithms for s and (sa)-rectangular Lp robust MDPs. +It also contains, remarks for special cases for p = 1, 2, ∞. +A +Related Work +R-Contamination Uncertainty Robust MDPs +The paper [30] considers the following uncertainty set for some fixed constant 0 ≤ R ≤ 1, +Psa = {(1 − R)(P0)(·|s, a) + RP | P ∈ ∆S}, +s ∈ S, a ∈ A, +(9) +and P = ⊗s,aPs,a, +U = {R0} × P. The robust value function vπ +U is the fixed point of +the robust Bellman operator defined as +(T π +U v)(s) := min +P ∈P +� +a +π(a|s)[R0(s, a) + γ +� +s′ +P(s′|s, a)v(s′)], +(10) += +� +a +π(a|s)[R0(s, a) − γR max +s +v(s) + (1 − R)γ +� +s′ +P0(s′|s, a)v(s′)]. +(11) +And the optimal robust value function v∗ +U⊣ is the fixed point of the optimal robust Bellman +operator defined as +(T ∗ +U v)(s) := max +π +min +P ∈P +� +a +π(a|s)[R0(s, a) + γ(1 − R) +� +s′ +P(s′|s, a)v(s′)], +(12) += max +a [R0(s, a) − γR max +s +v(s) + γ(1 − R) +� +s′ +P0(s′|s, a)v(s′)]. +(13) +Since, the uncertainty set is sa-rectangular, hence the map is a contraction [24], so the +robust value iteration here, will also converge linearly similar to non-robust MDPs. It is also +possible to obtain Q-learning as following +Qn+1(s, a) = R0(s, a) − γR max +s,a Qn(s, a) + γ(1 − R) +� +s′ +P0(s′|s, a) max +s′ +Qn(s′, a′). +(14) +Convergence of the above Q-learning follows from the contraction of robust value iteration. +Further, it is easy to see that model-free Q-learning can be obtained from the above. +A follow-up work [31] proposes a policy gradient method for the same. +Proposition 1. (Theorem 3.3 of [31]) Consider a class of policies Π satisfying Assumption +3.2 of [31]. 
The gradient of the robust return is given by +∇ρπθ = +γR +(1 − γ)(1 − γ + γR) +� +s,a +dπθ +µ (s, a)∇πθ(a|s)Qπθ +U (s, a) ++ +1 +1 − γ + γR +� +s,a +dπθ +sθ (s, a)∇πθ(a|s)Qπθ +U (s, a), +where sθ ∈ arg max vπθ +U (s), and Qπ +U(s, a) = � +a π(a|s) +� +R0(s, a) − γR maxs vπ +U(s) + γ(1 − +R) � +s′ P0(s′|s, a)vπ +U(s′) +� +. +17 + +The work shows that the proposed robust policy gradient method converges to the global +optimum asymptotically under direct policy parameterization. +The uncertainty set considered here, is sa-rectangular, as uncertainty in each state-action +is independent, hence the regularizer term (γR maxs v(s)) is independent of policy, and the +optimal (and greedy) policy is deterministic. It is unclear, how the uncertainty set can be +generalized to the s-rectangular case. Observe that the above results resemble very closely +our sa-rectangular L1 robust MDPs results. +Twice Regularized MDPs +The paper [9] converts robust MDPs to twice regularized MDPs, and proposes a gradient +based policy iteration method for solving them. +Proposition 2. (corollary 3.1 of [9]) (s-rectangular reward robust policy evaluation) Let the +uncertainty set be U = (R0 + R) × {P0}, where Rs = {rs ∈ RA | ∥rs∥ ≤ αs} for all s ∈ S. +Then the robust value function vπ +U is the optimal solution to the convex optimization problem: +max +v∈RA⟨µ, v⟩ +s.t. +v(s) ≤ (T π +R0,P0v)(s) − αs∥πs∥, +∀s ∈ S. +It derives the policy gradient for reward robust MDPs to obtain the optimal robust policy +π∗ +U. +Proposition 3. (Proposition 3.2 of [9]) (s-rectangular reward robust policy gradient) Let +the uncertainty set be U = (R0 + R) × {P0}, where Rs = {rs ∈ RA | ∥rs∥ ≤ αs} for all +s ∈ S. Then the gradient of the reward robust objective ρπ +U := ⟨µ, vπ +U⟩ is given by +∇ρπ +U = E(s,a)∼dπ +P0 +� +∇ ln(π(a|s)) +� +Qπ +U(s, a) − αs +π(a|s) +∥πs∥ +�� +, +where Qπ +U(s, a) := min(R,P )∈U[R(s, a) + γ � +s′ P(s′|s, a)vπ +U(s′)]. +Proposition 4. (Corollary 4.1 of [9]) (s-rectangular general robust policy evaluation) Let +the uncertainty set be U = (R0 + R) × {P0 + P}, where Rs = {rs ∈ RA | ∥rs∥ ≤ αs} and +Ps = {Ps ∈ RS×A | ∥Ps∥ ≤ βs} for all s ∈ S. Then the robust value function vπ +U is the +optimal solution to the convex optimization problem: +max +v∈RA⟨µ, v⟩ +s.t. +v(s) ≤ (T π +R0,P0v)(s) − αs∥πs∥ − γβs∥v∥∥πs∥, +∀s ∈ S. +Same as the reward robust case, the paper tries to find a policy gradient method to +obtain the optimal robust policy. Unfortunately, the dependence of regularizer terms on +value makes it a very difficult task. Hence it proposes the R2MPI algorithm (algorithm 1 of +[9]) for the purpose that optimizing the greedy step via projection onto the simplex using +a black box solver. Note that the above proposition is not same as our policy evaluation +(although it looks similar), it requires some extra assumptions (assumption 5.1 [9]) and lot +of work ensure R2 Bellman operator is contraction etc. In our case, we directly evaluate +robust Bellman operator that has already proven to be a contraction, hence we don’t require +any extra assumption nor any other work as [9]. +Our work makes improvements over this work by explicitly solving both policy evaluation +and policy improvement in general robust MDPs. It also makes more realistic assumptions +on the transition kernel uncertainty set. +18 + +Regularizer solves Robust MDPs +The work [11] looks in the opposite direction than we do. It investigates the impact of the +popularly used entropy regularizer on robustness. 
It finds that MaxEnt can be used to +maximize a lower bound on a certain robust RL objective (reward robust). +As we noticed that ∥πs∥q behaves like entropy in our regularization. Further, our work +also deals with uncertainty in transition kernel in addition to the uncertainty in reward +function. +Upper Confidence RL +The upper confidence setting in [4, 3] is very similar to our Lp robust setting. We refer to +this discussion in section D. +B +S-rectangular: More Properties +Definition 1. We begin with the following notational definitions. +1. Q-value at value function v is defined as +Qv(s, a) := R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′). +2. Optimal Q-value is defined as +Q∗ +U(s, a) = R0(s, a) + γ +� +s′ +P0(s′|s, a)v∗ +U(s′) +3. With little abuse of notation, Q(s, ai) shall denote the ith best value in state s, that is +Q(s, a1) ≥ Q(s, a2) ≥, · · · , ≥ Q(s, aA). +4. πv +U denotes the greedy policy at value function v, that is +T ∗ +U v = T πv +U +U +v. +5. χp(s) denotes the number of active actions in state s in s-rectangular Lp robust MDPs, +defined as +χp(s) := +�� {a | π∗ +Us +p(a|s) ≥ 0} +�� . +6. χp(p, s) denotes the number of active actions in state s at value function v in s- +rectangular Lp robust MDPs, defined as +χp(v, s) := +�� {a | πv +Us +p(a|s) ≥ 0} +�� . +We saw above that optimal policy in s-rectangular robust MDPs may be stochastic. The +action that has a positive advantage is active and the rest are inactive. Let χp(s) be the +number of active actions in state s, defined as +χp(s) := +�� {a | π∗ +Us +p(a|s) ≥ 0} +��= +�� {a | Q∗ +Us +p(s, a) ≥ v∗ +Us +p(s)} +�� . +(15) +19 + +Last equality comes from Theorem 4. One direct relation between Q-value and value function +is given by +v∗ +Us +p(s) = +� +a +π∗ +Us +p(a|s) +� +− +� +αs + γβsκq(v) +� +∥π∗ +Us +p(·|s)∥q + Q∗ +Us +p(s, a) +� +. +(16) +The above relation is very convoluted compared to non-robust and sa-rectangular robust +cases. The property below illuminates an interesting relation. +Property 1. (Optimal Value vs Q-value) v∗ +Us +p(s) is bounded by the Q-value of χp(s)th and +(χp(s) + 1)th actions, that is , +Q∗ +Us +p(s, aχp(s)+1) < v∗ +Us +p(s) ≤ Q∗ +Us +p(s, aχp(s)). +This special case of the property 2, similarly table 6 is special case of table 8. +Table 6: Optimal value function and Q-value +v∗(s) = maxa Q∗(s, a) +Best value +v∗ +Usa +p (s) = maxa[αs,a − γβs,aκq(v∗ +Usa +p ) − Q∗ +Usa +p (s, a)] +Best regularized value +Q∗ +Us +p(s, aχp(s)+1) < v∗ +Us +p(s) ≤ Q∗ +Us +p(s, aχp(s)) +Sandwich! +where v∗, Q∗ is the optimal value function and Q-value respectively +of non-robust MDP. +The same is true for the non-optimal Q-value and value function. +Theorem 5. (Greedy policy) The greedy policy πv +Us +p is a threshold policy, that is proportional +to the advantage function, that is +πv +Us +p(a|s) ∝ +� +Qv(s, a) − (T ∗ +Us +pv)(s) +�p−1 1 +� +Qv(s, a) ≥ (T ∗ +Us +pv)(s) +� +. +The above theorem is proved in the appendix, and Theorem 4 is its special case. So is +table 3 special case of table 7. 
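To make the greedy step concrete, the sketch below (our own illustration, not the paper's code) bisects equation (8) on [maxa Qv(s, a) − σ, maxa Qv(s, a)] to obtain (T ∗v)(s) and then forms the threshold policy of Theorem 5; the special cases it reproduces are collected in Table 7, which follows. It assumes finite p ≥ 1.

```python
import numpy as np

def s_robust_greedy(Qs, sigma, p, tol=1e-8):
    """Given Qs = Q^v(s, .) and sigma = alpha_s + gamma * beta_s * kappa_q(v), return
    the greedy value (T* v)(s) solving eq. (8) and the greedy threshold policy of
    Theorem 5, with weights proportional to advantage^(p-1) over active actions."""
    Qs = np.asarray(Qs, dtype=float)

    def g(x):
        # cumulative "sub-optimality distance" above level x; monotonically decreasing in x
        gap = np.maximum(Qs - x, 0.0)
        return np.sum(gap ** p) ** (1.0 / p)

    lo, hi = Qs.max() - sigma, Qs.max()       # g(lo) >= sigma and g(hi) = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > sigma:                    # the root of g(x) = sigma lies to the right
            lo = mid
        else:
            hi = mid
    value = 0.5 * (lo + hi)

    adv = Qs - value                          # advantage at the greedy value
    active = adv >= 0.0
    weights = np.where(active, np.maximum(adv, 0.0) ** (p - 1), 0.0)
    if weights.sum() == 0.0:                  # sigma = 0 with p > 1: fall back to the argmax
        weights = (Qs == Qs.max()).astype(float)
    return value, weights / weights.sum()
```

For p = 2 this reproduces the water-filling solution of Algorithm 1, and for p = ∞ the bisection is unnecessary since (T ∗v)(s) = maxa Qv(s, a) − σ.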
+Table 7: Greedy policy at value function v +U +πv +U(a|s) ∝ +remark +Us +p +(Qv(s, a) − (T ∗ +U v)(s))p−11(Av +U(s, a) ≥ 0) +top actions proportional to +(p − 1)th power of its advantage +Us +1 +1(Av +U(s,a)≥0) +� +a 1(Av +U(s,a)≥0) +top actions with uniform probability +Us +2 +Av +U(s,a)1Av +U(s,a)≥0) +� +a Av +U(s,a)1(Av +U(s,a)≥0) +top actions proportion to advantage +Us +∞ +arg maxa∈A Qv(s, a) +best action +Usa +p +arg maxa[−αsa − γβsaκq(v) + Qv(s, a)] +best action +where Av +U(s, a) = Qv(s, a) − (T ∗ +U v)(s) and Qv(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′). +20 + +The above result states that the greedy policy takes actions that have a positive advantage, +so we have. +χp(v, s) := +�� {a | πv +Us +p(a|s) ≥ 0} +��= +�� {a | Qv(s, a) ≥ (T ∗ +Us +p)v(s)} +�� . +(17) +Property 2. (Greedy Value vs Q-value) (T ∗ +Us +pv)(s) is bounded by the Q-value of χp(v, s)th +and (χp(v, s) + 1)th actions, that is , +Qv(s, aχp(v,s)+1) < (T ∗ +Us +pv)(s) ≤ Qv(s, aχp(v,s)). +Table 8: +Greedy value function and Q-value +(T ∗v)(s) = maxa Qv(s, a) +Best value +(T ∗ +Usa +p )v(s) = maxa[αs,a − γβs,aκq(v) − Qv(s, a)] +Best regularized value +Qv(s, aχp(v,s)+1) < (T ∗ +Us +p)v(s) ≤ Qv(s, aχp(v,s)) +Sandwich! +where Qv(s, a1) ≥, · · · , ≥ Qv(s, aA). +The property below states that we can compute the number of active actions χp(v, s) +(and χp(s)) directly without computing greedy (optimal) policy. +Property 3. χp(v, s) is number of actions that has positive advantage, that is +χp(v, s) := max{k | +k +� +i=1 +� +Qv(s, ai) − Qv(s, ak) +�p≤ σp}, +where σ = αs + γβsκq(v), and Qv(s, a1) ≥ Qv(s, a2), ≥ · · · ≥ Q(s, aA). +When uncertainty radiuses (αs, βs) are zero (essentially σ = 0 ), then χp(v, s) = 1, ∀v, s, +that means, greedy policy taking the best action. In other words, all the robust results +reduce to non-robust results as discussed in section 2.2 as the uncertainty radius becomes +zero. +Algorithm 3 Algorithm to compute s-rectangular Lp robust optimal Bellman Operator +1: Input: σ = αs + γβsκq(v), +Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′). +2: Output (T ∗ +Us +pv)(s), χp(v, s) +3: Sort Q(s, ·) and label actions such that Q(s, a1) ≥ Q(s, a2), · · · . +4: Set initial value guess λ1 = Q(s, a1) − σ and counter k = 1. +5: while k ≤ A − 1 and λk ≤ Q(s, ak) do +6: +Increment counter: k = k + 1 +7: +Take λk to be a solution of the following +k +� +i=1 +� +Q(s, ai) − x +�p= σp, +and +x ≤ Q(s, ak). +(18) +8: end while +9: Return: λk, k +21 + +C +Revisiting kernel noise assumption +Sa-Rectangular Uncertainty +Suppose at state s, we know that it is impossible to have transition (next) to some states +(forbidden states Fs,a) under some action. That is, we have the transition uncertainty set P +and nominal kernel P0 such that +P0(s′|s, a) = P(s′|s, a) = 0, +∀P ∈ P, ∀s′ ∈ Fs,a. +(19) +Then we define, the kernel noise as +Ps,a = {P | ∥P∥p = βs,a, +� +s′ +P(s′) = 0, +P(s”) = 0, ∀s” ∈ Fs,a}. +(20) +In this case, our p-variance function is redefined as +κp(v, s, a) = +min +∥P ∥p=βs,a, +� +s′ P (s′)=0, +P (s”)=0, +∀s”∈Fs,a⟨P, v⟩ +(21) += min +ω∈R∥u − ω1∥p, +where u(s) = v(s)1(s /∈ Fs,a). +(22) +=κp(u) +(23) +This basically says, we consider value of only those states that is allowed (not forbidden) in +calculation of p-variance. For example, we have +κ∞(v, s, a) = maxs/∈Fs,a v(s) − mins/∈Fs,a v(s) +2 +. +(24) +(25) +So theorem 1 of the main paper can be re-stated as +Theorem 6. (Restated) (Sa)-rectangular Lp robust Bellman operator is equivalent to reward +regularized (non-robust) Bellman operator. 
That is, using κp above, we have +(T π +Usa +p v)(s) = +� +a +π(a|s)[−αs,a − γβs,aκq(v, s, a) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′)], +(T ∗ +Usa +p v)(s) = max +a∈A[−αs,a − γβs,aκq(v, s, a) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′)]. +S-Rectangular Uncertainty +This notion can also be applied to s-rectanular uncertainty, but with little caution. Here, we +define forbidden states in state s to be Fs (state dependent) instead of state-action dependent +in sa-rectangular case. Here, we define p-variance as +κp(v, s) = κp(u), +where u(s) = v(s)1(s /∈ Fs). +(26) +So the theorem 2 can be restated as +Theorem 7. (restated) (Policy Evaluation) S-rectangular Lp robust Bellman operator is +equivalent to reward regularized (non-robust) Bellman operator, that is +(T π +Us +pv)(s) = − +� +αs +γβsκq(v, s) +� +∥π(·|s)∥q + +� +a +π(a|s) +� +R0(s, a)+γ +� +s′ +P0(s′|s, a)v(s′) +� +where κp is defined above and ∥π(·|s)∥q is q-norm of the vector π(·|s) ∈ ∆A. +22 + +All the other results (including theorem 4), we just need to replace the old p-variance +function with new p-variance function appropriately. +D +Application to UCRL +In robust MDPs, we consider the minimization over uncertainty set to avoid risk. When we +want to discover the underlying kernel by exploration, then we seek optimistic policy, then +we consider the maximization over uncertainty set [4, 3, 2]. We refer the reader to the step 3 +of the UCRL algorithm [4], which seeks to find +arg max +π +max +R,P ∈U⟨µ, vπ +P,R⟩, +(27) +where +U = {(R, P) | |R(s, a) − R0(s, a)| ≤ αs,a, |P(s′|s, a) − P0(s′|s, a)| ≤ βs,a,s′, P ∈ (∆S)S×A} +for current estimated kernel P0 and reward function R0. We refer section 3.1.1 and step 4 of +the UCRL 2 algorithm of [3], which seeks to find +arg max +π +max +R,P ∈U⟨µ, vπ +P,R⟩, +(28) +where +U ={(R, P) | |R(s, a) − R0(s, a)| ≤ αs,a, +∥P(·|s, a) − P0(·|s, a)∥1 ≤ βs,a, P ∈ (∆S)S×A} +The uncertainty radius α, β depends on the number of samples of different transitions and +observations of the reward. The paper [4] doesn’t explain any method to solve the above +problem. UCRL 2 algorithm [3], suggests to solve it by linear programming that can be very +slow. We show that it can be solved by our methods. +The above problem can be tackled as following +max +π +max +R,P ∈Usa +p +⟨µ, vπ +P,R⟩. +(29) +We can define, optimistic Bellman operators as +ˆT π +U v := max +R,P ∈U vπ +P,R, +ˆT ∗ +U v := max +π +max +R,P ∈U vπ +P,R. +(30) +The well definition and contraction of the above optimistic operators may follow directly +from their pessimistic (robust) counterparts. We can evaluate above optimistic operators as +( ˆT π +Usa +p v)(s) = +� +a +π(a|s) +� +R0(s, a) + αs,a + βs,aγκq(v) + +� +s′ +P0(s′|s, a)v(s′) +� +, +(31) +( ˆT ∗ +Usa +p v)(s) = max +a +� +R0(s, a) + αs,a + βs,aγκq(v) + +� +s′ +P0(s′|s, a)v(s′) +� +. +(32) +The uncertainty radiuses α, β and nominal values P0, R0 can be found by similar analysis by +[4, 3]. We can get the Q-learning from the above results as +Q(s, a) → R0(s, a) − αs,a − γβs,aκq(v) + γ +� +s′ +P0(s′|s, a) max +a′ Q(s′, a′), +(33) +23 + +where v(s) = maxa Q(s, a). From law of large numbers, we know that uncertainty radiuses +αs,a, βs,a behaves as O( 1 +√n) asymptotically with number of iteration n. This resembles very +closely to UCB VI algorithm [5]. We emphasize that similar optimistic operators can be +defined and evaluated for s-rectangular uncertainty sets too. 
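The resulting optimistic iteration is a one-line change to the robust one. Below is a minimal sketch, ours rather than the UCRL implementations cited above, of an optimistic Q-value iteration in the spirit of (31)–(33): kappa_p is the helper sketched earlier, q is the Hölder conjugate of p, and the array shapes are assumptions.

```python
import numpy as np

def optimistic_q_iteration(P0, R0, alpha, beta, gamma, q, n_iters=200):
    """Optimistic Q-iteration for the sa-rectangular Lp set: the radii alpha, beta
    (estimated from visit counts) enter as an exploration bonus instead of a penalty.
    P0: empirical kernel of shape (S, A, S); R0, alpha, beta: shape (S, A)."""
    Q = np.zeros_like(R0)
    for _ in range(n_iters):
        v = Q.max(axis=1)                              # v(s) = max_a Q(s, a)
        bonus = alpha + gamma * beta * kappa_p(v, q)   # optimism in the face of uncertainty
        Q = R0 + bonus + gamma * (P0 @ v)
    return Q
```

Since α and β shrink as O(1/√n) with the number of observations, the bonus vanishes over time and the iteration approaches standard Q-value iteration.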
+E +Q-Learning for sa-rectangular MDPs +In view of Theorem 1, we can define Qπ +Usa +p , the robust Q-values under policy π for (sa)- +rectangular Lp constrained uncertainty set Usa +p +as +Qπ +Usa +p (s, a) := −αs,a − γβs,aκq(vπ +Usa +p ) + R0(s, a) + γ +� +s′ +P0(s′|s, a)vπ +Usa +p (s′). +(34) +This implies that we have the following relation between robust Q-values and robust value +function, same as its non-robust counterparts, +vπ +Usa +p (s) = +� +a +π(a|s)Qπ +Usa +p (s, a). +(35) +Let Q∗ +Usa +p denote the optimal robust Q-values associated with optimal robust value v∗ +Usa +p , given +as +Q∗ +Usa +p (s, a) := −αs,a − γβs,aκq(v∗ +Usa +p ) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v∗ +Usa +p (s′). +(36) +It is evident from Theorem 1 that optimal robust value and optimal robust Q-values satisfies +the following relation, same as its non-robust counterparts, +v∗ +Usa +p (s′) = max +a∈A Q∗ +Usa +p (s, a). +(37) +Combining 37 and 36, we have optimal robust Q-value recursion as follows +Q∗ +Usa +p (s, a) = −αs,a − γβs,aκq(v∗ +Usa +p ) + R0(s, a) + γ +� +s′ +P0(s′|s, a) max +a∈A Q∗ +Usa +p (s, a). +(38) +The above robust Q-value recursion enjoys similar properties as its non-robust counterparts. +Corollary 1. ((sa)-rectangular Lp regularized Q-learning) Let +Qn+1(s, a) = R0(s, a) − αsa − γβsaκq(vn) + γ +� +s′ +P0(s′|s, a) max +a∈A Qn(s′, a), +where vn(s) = maxa∈A Qn(s, a), then Qn converges to Q∗ +Usa +p linearly. +Observe that the above Q-learning equation is exactly the same as non-robust MDP +except the reward penalty. Recall that κ1(v) = 0.5(maxs v(s) − mins v(s)) is difference +between peak to peak values and κ2(v) is variance of v, that can be easily estimated. Hence, +model free algorithms for (sa)-rectangular Lp robust MDPs for p = 1, 2, can be derived +easily from the above results. This implies that (sa)-rectangular L1 and L2 robust MDPs +are as easy as non-robust MDPs. +24 + +F +Model Based Algorithms +In this section, we assume that we know the nominal transitional kernel and nominal reward +function. Algorithm 4, algorithm 5 is model based algorithm for (sa)-rectangular and s +rectangular Lp robust MDPs respectively. It is explained in the algorithms, how to get deal +with specail cases (p = 1, 2, ∞) in a easy way. +Algorithm 4 Model Based Q-Learning Algorithm for SA Rectangular Lp Robust MDP +1: Input: αs,a, βs,a are uncertainty radius in reward and transition kernel respectively in +state s and action a. Transition kernel P and reward vector R. Take initial Q-values Q0 +randomly and v0(s) = maxa Q0(s, a). +2: while not converged do +3: +Do binary search in [mins vn(s), maxs vn(s)] to get q-mean ωn, such that +� +s +(vn(s) − ωn) +|vn(s) − ωn| |vn(s) − ωn| +1 +p−1 = 0. +(39) +4: +Compute q-variance: +κn = ∥v − ωn∥q. +5: +Note: For p = 1, 2, ∞, we can compute κn exactly in closed from, see table 1. +6: +for s ∈ S do +7: +for a ∈ A do +8: +Update Q-value as +Qn+1(s, a) = R0(s, a) − αsa − γβsaκn + γ +� +s′ +P0(s′|s, a) max +a +Qn(s′, a). +9: +end for +10: +Update value as +vn+1(s) = max +a +Qn+1(s, a). +11: +end for +n → n + 1 +12: end while +25 + +Algorithm 5 Model Based Algorithm for S Rectangular Lp Robust MDP +1: Take initial Q-values Q0 and value function v0 randomly. +2: Input: αs, βs are uncertainty radius in reward and transition kernel respectively in state +s. +3: while not converged do +4: +Do binary search in [mins vn(s), maxs vn(s)] to get q-mean ωn, such that +� +s +(vn(s) − ωn) +|vn(s) − ωn| |vn(s) − ωn| +1 +p−1 = 0. +(40) +5: +Compute q-variance: +κn = ∥v − ωn∥q. 
+6: +Note: For p = 1, 2, ∞, we can compute κn exactly in closed from, see table 1. +7: +for s ∈ S do +8: +for a ∈ A do +9: +Update Q-value as +Qn+1(s, a) = R0(s, a) + γ +� +s′ +P0(s′|s, a)vn+1(s′). +(41) +10: +end for +11: +Sort actions in decreasing order of the Q-value, that is +Qn+1(s, ai) ≥ Qn+1(s, ai+1). +(42) +12: +Value evaluation: +vn+1(s) = x +such that +(αs + γβsκn)p = +� +Qn+1(s,ai)≥x +|Qn+1(s, ai) − x|p. +(43) +13: +Note: We can compute vn+1(s) exactly in closed from for p = ∞ and for p = 1, 2, +we can do the same using algorithm 8,7 respectively, see table 2. +14: +end for +n → n + 1 +15: end while +26 + +Algorithm 6 Model based algorithm for s-recantangular L1 robust MDPs +1: Take initial value function v0 randomly and start the counter n = 0. +2: while not converged do +3: +Calculate q-variance: +κn = 1 +2 +� +maxs vn(s) − mins vn(s) +� +4: +for s ∈ S do +5: +for a ∈ A do +6: +Update Q-value as +Qn(s, a) = R0(s, a) + γ +� +s′ +P0(s′|s, a)vn(s′). +(44) +7: +end for +8: +Sort actions in state s, in decreasing order of the Q-value, that is +Qn(s, a1) ≥ Qn(s, a2), · · · ≥ Qn(s, aA). +(45) +9: +Value evaluation: +vn+1(s) = max +m +�m +i=1 Qn(s, ai) − αs − βsγκn +m +. +(46) +10: +Value evaluation can also be done using algorithm 8. +11: +end for +n → n + 1 +12: end while +G +Experiments +The table 4 contains relative cost (time) of robust value iteration w.r.t. non-robust MDP, +for randomly generated kernel and reward function with the number of states S and the +number of action A. +Notations +S : number of state, A: number of actions, Usa +p +LP: Sa rectangular Lp robust MPDs by Linear +Programming, Us +p LP: S rectangular Lp robust MPDs by Linear Programming and other +numerical methods, Usa/s +p=1,2,∞ : Sa/S rectangular L1/L2/L∞ robust MDPs by closed form +method (see table 2, theorem 3) Usa/s +p=5,10 : Sa/S rectangular L5/L10 robust MDPs by binary +search (see table 2, theorem 3 of the paper) +Observations +1. Our method for s/sa rectangular L1/L2/L∞ robust MDPs takes almost same (1-3 times) +the time as non-robust MDP for one iteration of value iteration. This confirms our complexity +analysis (see table 4 of the paper) 2. Our binary search method for sa rectangular L5/L10 +robust MDPs takes around 4 − 6 times more time than non-robust counterpart. 
This is due to the extra iterations required to find the p-variance function κp(v) through binary search.
3. Our binary search method for s-rectangular L5/L10 robust MDPs takes around 30−100 times more time than the non-robust counterpart. This is due to the extra iterations required both to find the p-variance function κp(v) and to evaluate the Bellman operator by binary search.
4. A common feature of our methods is that the time complexity scales moderately, as guaranteed by our complexity analysis.
5. Linear programming methods for sa-rectangular L1/L∞ robust MDPs take at least 1000 times more time than our methods for small state-action spaces, and the gap grows very fast with the problem size.
6. Numerical methods (linear programming for the minimization over the uncertainty set and scipy.optimize.minimize for the maximization over the policy) for s-rectangular L1 robust MDPs take 4-5 orders of magnitude more time than our methods (and than non-robust MDPs) even for very small state-action spaces, and scale up too fast. The reason is clear: two optimization problems must be solved, a minimization over the uncertainty set and a maximization over the policy, whereas in the sa-rectangular case only the minimization over the uncertainty set is required. This confirms that the s-rectangular uncertainty set is much more challenging.

Table 9: Relative running cost (time) for value iteration
U                 S=10,A=10   S=30,A=10   S=50,A=10   S=100,A=20   remark
non-robust        1           1           1           1
U^sa_∞ by LP      1374        2282        2848        6930         lp
U^sa_1 by LP      1438        6616        6622        16714        lp
U^s_1 by LP       72625       629890      4904004     NA           lp/minimize
U^sa_1            1.77        1.38        1.54        1.45         closed form
U^sa_2            1.51        1.43        1.91        1.59         closed form
U^sa_∞            1.58        1.48        1.37        1.58         closed form
U^s_1             1.41        1.58        1.20        1.16         closed form
U^s_2             2.63        2.82        2.49        2.18         closed form
U^s_∞             1.41        3.04        2.25        1.50         closed form
U^sa_5            5.4         4.91        4.14        4.06         binary search
U^sa_10           5.56        5.29        4.15        3.26         binary search
U^s_5             33.30       89.23       40.22       41.22        binary search
U^s_10            33.59       78.17       41.07       41.10        binary search
(lp stands for scipy.optimize.linprog)

Table 10: Relative rate of convergence for value iteration
U                 S=10,A=10   S=100,A=20   remark
non-robust        1           1
U^sa_1            0.999       0.999        closed form
U^sa_2            0.999       0.999        closed form
U^sa_∞            1.000       0.998        closed form
U^s_1             0.999       0.999        closed form
U^s_2             0.999       0.999        closed form
U^s_∞             1.000       0.998        closed form
U^sa_5            0.999       0.995        binary search
U^sa_10           1.000       0.999        binary search
U^s_5             1.000       0.999        binary search
U^s_10            1.000       0.995        binary search

Rate of convergence
The rate of convergence was approximately the same for all methods, about 0.9 = γ, as predicted by theory. This is illustrated by the relative rates of convergence with respect to the non-robust baseline in Table 10.
In the above experiments, Bellman updates for sa/s-rectangular L1/L2/L∞ robust MDPs were done in closed form, and for L5/L10 by binary search, as suggested by Table 2 and Theorem 3.
Note: The above results are from a few runs, hence they contain some stochasticity, but the general trend is clear. In the final version we will average over many runs to reduce this stochasticity. Results for many different runs can be found at https://github.com/******.
Note also that the above experiments were done without much parallelization; there is ample scope to fine-tune and improve the performance of robust MDPs. The above experiments confirm the theoretical complexity provided in Table 4 of the paper. The code and results can be found at https://github.com/******.
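To make the relative-cost comparison concrete, the following is a minimal sketch (not the authors' benchmark code) of how a single entry of Table 9 could be reproduced for the sa-rectangular L2 case, where the reward penalty κ2(v) is available in closed form. The random kernel/reward generation, discount factor and radius 0.1 follow the experiment parameters listed below; the function names and structure are our own.

import time
import numpy as np

def non_robust_vi(P, R, gamma, iters=100):
    # P: (S, A, S) transition kernel, R: (S, A) reward; standard value iteration.
    v = np.zeros(P.shape[0])
    for _ in range(iters):
        v = (R + gamma * P @ v).max(axis=1)
    return v

def sa_rect_l2_robust_vi(P, R, gamma, alpha=0.1, beta=0.1, iters=100):
    # Robust value iteration in the reward-penalty form of Algorithm 4: the only
    # extra work per sweep is the penalty kappa_2(v), computed in closed form.
    v = np.zeros(P.shape[0])
    for _ in range(iters):
        kappa = np.linalg.norm(v - v.mean())          # kappa_2(v)
        v = (R - alpha - gamma * beta * kappa + gamma * P @ v).max(axis=1)
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, gamma = 30, 10, 0.9
    P = rng.random((S, A, S)); P /= P.sum(axis=-1, keepdims=True)
    R = rng.random((S, A))
    t0 = time.perf_counter(); non_robust_vi(P, R, gamma)
    t1 = time.perf_counter(); sa_rect_l2_robust_vi(P, R, gamma)
    t2 = time.perf_counter()
    print("relative cost (robust / non-robust):", (t2 - t1) / (t1 - t0))

The other closed-form rows are obtained by swapping in the corresponding κ computation, and the binary-search rows by replacing the closed form with the search described in Section I.2.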
Experiments parameters
Number of states S (variable); number of actions A (variable); transition kernel and reward function generated randomly; discount factor 0.9; uncertainty radii 0.1 (the same for all states and actions, for convenience); number of iterations 100; tolerance for binary search 0.00001.

Hardware
The experiments were run on the following hardware: Intel(R) Core(TM) i5-4300U CPU @ 1.90GHz, 64 bits, 7862 MiB of memory. Software: the experiments were implemented in Python using numpy, with scipy.optimize.linprog for the linear programs in policy evaluation for s/sa-rectangular robust MDPs, and scipy.optimize.minimize with scipy.optimize.LinearConstraint for policy improvement in s-rectangular L1 robust MDPs.

H    Extension to Model Free Settings
The Q-learning scheme for sa-rectangular MDPs (Section E) can be extended to the model-free setting along the lines of [30], and a policy gradient method can be obtained as in [31]. The only additional requirement is the ability to compute or estimate κq online. It can be estimated from an ensemble of samples; in particular, κ2 can be estimated from the estimated mean and second moment, and κ∞ can be estimated by tracking the maximum and minimum values.
For the s-rectangular case too, model-free algorithms can be obtained by estimating κq online while keeping track of the Q-values and the value function. The convergence analysis should be similar to [30], especially in the sa-rectangular case; for the s-rectangular case it would be a two-time-scale analysis, which can be handled with the techniques of [8]. We leave this for future work. It would also be interesting to obtain policy gradient methods for this setting, which we believe can be derived from the policy evaluation theorem.

I    p-variance
Recall that κp is defined as
\[
\kappa_p(v) \;=\; \min_{\omega \in \mathbb{R}} \|v - \omega \mathbf{1}\|_p \;=\; \|v - \omega_p(v)\mathbf{1}\|_p .
\]
Now, observe that
\[
\frac{\partial \|v - \omega\mathbf{1}\|_p}{\partial \omega} = 0
\;\Longrightarrow\; \sum_s \operatorname{sign}\!\big(v(s) - \omega\big)\,\big|v(s) - \omega\big|^{p-1} = 0
\;\Longrightarrow\; \sum_s \operatorname{sign}\!\big(v(s) - \omega_p(v)\big)\,\big|v(s) - \omega_p(v)\big|^{p-1} = 0. \qquad (47)
\]
For p = ∞, consider the limit
\[
\lim_{p \to \infty}\Big|\sum_s \operatorname{sign}\!\big(v(s) - \omega_\infty(v)\big)\,\big|v(s) - \omega_\infty(v)\big|^{p}\Big|^{\frac{1}{p}} = 0,
\]
and factor out the largest deviation:
\[
\Big(\max_s \big|v(s) - \omega_\infty(v)\big|\Big)\,
\lim_{p \to \infty}\Big|\sum_s \operatorname{sign}\!\big(v(s) - \omega_\infty(v)\big)\Big(\frac{|v(s) - \omega_\infty(v)|}{\max_{s'} |v(s') - \omega_\infty(v)|}\Big)^{p}\Big|^{\frac{1}{p}} = 0 .
\]
Assume maxs |v(s) − ω∞(v)| ≠ 0 (otherwise v is constant and ω∞(v) = v(s) for all s), so that
\[
\lim_{p \to \infty}\Big|\sum_s \operatorname{sign}\!\big(v(s) - \omega_\infty(v)\big)\Big(\frac{|v(s) - \omega_\infty(v)|}{\max_{s'} |v(s') - \omega_\infty(v)|}\Big)^{p}\Big|^{\frac{1}{p}} = 0 .
\]
In the limit, only the states attaining the largest deviation from ω∞(v) contribute, and their signed contributions must cancel; to avoid technical complications, assume the maximum and the minimum of v are attained at unique states. Then
\[
\big|\max_s v(s) - \omega_\infty(v)\big| = \big|\min_s v(s) - \omega_\infty(v)\big|
\;\Longrightarrow\;
\max_s v(s) - \omega_\infty(v) = -\big(\min_s v(s) - \omega_\infty(v)\big)
\;\Longrightarrow\;
\omega_\infty(v) = \frac{\max_s v(s) + \min_s v(s)}{2}. \qquad (48)
\]
Consequently,
\[
\kappa_\infty(v) = \|v - \omega_\infty(v)\mathbf{1}\|_\infty
= \Big\|v - \frac{\max_s v(s) + \min_s v(s)}{2}\,\mathbf{1}\Big\|_\infty
= \frac{\max_s v(s) - \min_s v(s)}{2}. \qquad (49)
\]
For p = 2, we have
\[
\kappa_2(v) = \|v - \omega_2(v)\mathbf{1}\|_2
= \Big\|v - \frac{\sum_s v(s)}{S}\,\mathbf{1}\Big\|_2
= \sqrt{\sum_s \Big(v(s) - \frac{\sum_{s'} v(s')}{S}\Big)^{2}}. \qquad (50)
\]
For p = 1, the optimality condition reads
\[
\sum_{s \in \mathcal{S}} \operatorname{sign}\!\big(v(s) - \omega_1(v)\big) = 0. \qquad (51)
\]
Note that more than one value of ω1(v) may satisfy this equation, and every solution does an equally good job (as we will see later). We therefore pick one, the median of v, for convenience:
\[
\omega_1(v) = \frac{v(s_{\lfloor (S+1)/2 \rfloor}) + v(s_{\lceil (S+1)/2 \rceil})}{2},
\qquad \text{where } v(s_i) \ge v(s_{i+1}) \;\; \forall i .
\]

Table 11: p-mean ωx(v), where v(si) ≥ v(si+1) for all i.
x      ωx(v)                                                        remark
p      root of Σs sign(v(s) − ωp(v)) |v(s) − ωp(v)|^(p−1) = 0        solve by binary search
1      (v(s⌊(S+1)/2⌋) + v(s⌈(S+1)/2⌉)) / 2                           median
2      Σs v(s) / S                                                   mean
∞      (maxs v(s) + mins v(s)) / 2                                   average of peaks
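The closed-form rows of Table 11, and the corresponding p-variances κp(v) = ∥v − ωp(v)1∥p, are cheap to compute. The short sketch below (with names of our own choosing) evaluates them and checks the result against a brute-force one-dimensional grid search over ω.

import numpy as np

def omega_p(v, p):
    # Closed-form p-means of Table 11 (v sorted in descending order for the median).
    v_sorted = np.sort(v)[::-1]
    S = len(v)
    if p == 1:
        i_lo, i_hi = (S + 1) // 2, (S + 2) // 2       # floor/ceil of (S+1)/2, 1-based
        return 0.5 * (v_sorted[i_lo - 1] + v_sorted[i_hi - 1])    # median
    if p == 2:
        return v.mean()                               # mean
    if p == np.inf:
        return 0.5 * (v.max() + v.min())              # average of peaks
    raise ValueError("closed form only for p in {1, 2, inf}")

def kappa_p(v, p):
    return np.linalg.norm(v - omega_p(v, p), ord=p)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    v = rng.normal(size=7)
    for p in (1, 2, np.inf):
        grid = np.linspace(v.min(), v.max(), 10001)   # the minimiser lies in this range
        brute = min(np.linalg.norm(v - w, ord=p) for w in grid)
        print(p, kappa_p(v, p), brute)                # both numbers should agree up to grid resolution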
+κ1(v) =∥v − ω11∥1 +=∥v − med(v)1∥1, +(putting in value of ω0, see table 11) += +� +s +|v(s) − med(v)| += +⌊(S+1)/2⌋ +� +i=1 +(v(s) − med(v)) + +S +� +⌈(S+1)/2⌉ +(med(v) − v(s)) += +⌊(S+1)/2⌋ +� +i=1 +v(s) − +S +� +⌈(S+1)/2⌉ +v(s) +(52) +where med(v) := +v(s⌊(S+1)/2⌋)+v(s⌈(S+1)/2⌉) +2 +where +v(si) ≥ v(si+1) +∀i is median of v. The +results are summarized in table 1 and 11. +I.1 +p-variance function and kernel noise +Lemma 1. q-variance function κq is the solution of the following optimization problem +(kernel noise), +κq(v) = −1 +ϵ min +c ⟨c, v⟩, +∥c∥p ≤ ϵ, +� +s +c(s) = 0. +Proof. Writing Lagrangian L, as +L := +� +s +c(s)v(s) + λ +� +s +c(s) + µ( +� +s +|c(s)|p − ϵp), +where λ ∈ R is the multiplier for the constraint � +s c(s) = 0 and µ ≥ 0 is the multiplier for +the inequality constraint ∥c∥q≤ ϵ. Taking its derivative, we have +∂L +∂c(s) = v(s) + λ + µp|c(s)|p−1 c(s) +|c(s)| +(53) +32 + +From the KKT (stationarity) condition, the solution c∗ has zero derivative, that is +v(s) + λ + µp|c∗(s)|p−1 c∗(s) +|c∗(s)| = 0, +∀s ∈ S. +(54) +Using Lagrangian derivative equation (54), we have +v(s) + λ + µp|c∗(s)|p−1 c∗(s) +|c∗(s)| = 0 +=⇒ +� +s +c∗(s)[v(s) + λ + µp|c∗(s)|p−1 c∗(s) +|c∗(s)|] = 0, +(multiply with c∗(s) and summing ) +=⇒ +� +s +c∗(s)v(s) + λ +� +s +c∗(s) + µp +� +s +|c∗(s)|p−1 (c∗(s))2 +|c∗(s)| = 0 +=⇒ ⟨c∗, v⟩ + µp +� +s +|c∗(s)|p = 0 +(using +� +s +c∗(s) = 0 and (c∗(s))2 = |c∗(s)|2 ) +=⇒ ⟨c∗, v⟩ = −µpϵp, +(using +� +s +|c∗(s)|p = ϵp ). +(55) +It is easy to see that µ ≥ 0, as minimum value of the objective must not be positive ( at +c = 0, the objective value is zero). Again we use Lagrangian derivative (54) and try to get +the objective value (−µpϵp) in terms of λ, as +v(s) + λ + µp|c∗(s)|p−1 c∗(s) +|c∗(s)| = 0 +=⇒ |c∗(s)|p−2c∗(s) = −v(s) + λ +µp +, +(re-arranging terms) +=⇒ +� +s +|(|c∗(s)|p−2c∗(s))| +p +p−1 = +� +s +| − v(s) + λ +µp +| +p +p−1 , +(doing +� +s +|·| +p +p−1 ) +=⇒ ∥c∗∥p +p = +� +s +| − v(s) + λ +µp +| +p +p−1 = +� +s +|v(s) + λ +µp +|q = ∥v + λ∥q +q +|µp|q +=⇒ |µp|q∥c∗∥p +p = ∥v + λ∥q +q, +(re-arranging terms) +=⇒ |µp|qϵp = ∥v + λ∥q +q, +(using +� +s +|c∗(s)|p = ϵp ) +=⇒ ϵ(µpϵp/q) = ϵ∥v + λ∥q +(taking 1 +q the power then multiplying with ϵ) +=⇒ µpϵp = ϵ∥v + λ∥q. +(56) +33 + +Again, using Lagrangian derivative (54) to solve for λ, we have +v(s) + λ + µp|c∗(s)|p−1 c∗(s) +|c∗(s)| = 0 +=⇒ |c∗(s)|p−2c∗(s) = −v(s) + λ +µp +, +(re-arranging terms) +=⇒ |c∗(s)| = |v(s) + λ +µp +| +1 +p−1 , +(looking at absolute value) +and +c∗(s) +|c∗(s)| = − v(s) + λ +|v(s) + λ|, +(looking at sign: and note µ, p ≥ 0) +=⇒ +� +s +c∗(s) +|c∗(s)||c∗(s)| = − +� +s +v(s) + λ +|v(s) + λ||v(s) + λ +µp +| +1 +p−1 , +(putting back) +=⇒ +� +s +c∗(s) = − +� +s +v(s) + λ +|v(s) + λ||v(s) + λ +µp +| +1 +p−1 , +=⇒ +� +s +v(s) + λ +|v(s) + λ||v(s) + λ| +1 +p−1 = 0, +( using +� +i +c∗(s) = 0) +(57) +Combining everything, we have +− 1 +ϵ min +c ⟨c, v⟩, +∥c∥p ≤ ϵ, +� +s +c(s) = 0 +=∥v − λ∥q, +such that +� +s +sign(v(s) − λ)|v(s) − λ| +1 +p−1 = 0. +(58) +Now, observe that +∂∥v − λ∥q +∂λ += 0 +=⇒ +� +s +sign(v(s) − λ)|v(s) − λ| +1 +p−1 = 0, +=⇒ κq(v) = ∥v − λ∥q, +such that +� +s +sign(v(s) − λ)|v(s) − λ| +1 +p−1 = 0. +(59) +The last equality follows from the convexity of p-norm ∥·∥q, where every local minima is +global minima. +For the sanity check, we re-derive things for p = 1 from scratch. For p = 1, we have +− 1 +ϵ min +c ⟨c, v⟩, +∥c∥1 ≤ ϵ, +� +s +c(s) = 0. += − 1 +2(min +s +v(s) − max +s +v(s)) +=κ1(v). +(60) +It is easy to see the above result, just by inspection. 
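As a numerical sanity check of Lemma 1 beyond the p = 1 inspection above, one can solve the kernel-noise problem directly with a generic constrained optimizer and compare the result with κq(v). The sketch below does this with scipy.optimize.minimize; the starting point, tolerances and names are arbitrary choices of ours, so it only illustrates the identity rather than proving it.

import numpy as np
from scipy.optimize import minimize, minimize_scalar

def kappa(v, q):
    # kappa_q(v) = min_omega ||v - omega 1||_q, via a bounded scalar minimisation.
    res = minimize_scalar(lambda w: np.linalg.norm(v - w, ord=q),
                          bounds=(v.min(), v.max()), method="bounded")
    return res.fun

def kernel_noise_value(v, p, eps):
    # min <c, v>  subject to  ||c||_p <= eps  and  sum_s c(s) = 0  (SLSQP, dict constraints).
    c0 = v.mean() - v                                  # sums to zero by construction
    c0 *= 0.5 * eps / np.linalg.norm(c0, ord=p)        # strictly feasible starting point
    cons = [{"type": "eq",   "fun": lambda c: np.sum(c)},
            {"type": "ineq", "fun": lambda c: eps - np.linalg.norm(c, ord=p)}]
    return minimize(lambda c: c @ v, c0, constraints=cons, method="SLSQP").fun

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    v, eps, p = rng.normal(size=6), 0.3, 1.5
    q = p / (p - 1.0)
    print(-kernel_noise_value(v, p, eps) / eps, kappa(v, q))   # the two values should roughly match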
+I.2 +Binary search for p-mean and estimation of p-variance +If the function f : [−B/2, B/2] → R, B ∈ R is monotonic (WLOG let it be monotonically +decreasing) in a bounded domain, and it has a unique root x∗ s.t. f(x∗) = 0. Then we can +34 + +find x that is an ϵ-approximation x∗ (i.e. ∥x − x∗∥ ≤ ϵ ) in O(B/ϵ) iterations. Why? Let +x0 = 0 and +xn+1 := +� +� +� +� +� +−B+xn +2 +if +f(xn) > 0 +B+xn +2 +if +f(xn) < 0 +xn +if +f(xn) = 0 +. +It is easy to observe that ∥xn − x∗∥ ≤ B(1/2)n. This is proves the above claim. This +observation will be referred to many times. +Now, we move to the main claims of the section. +Proposition 5. The function +hp(λ) := +� +s +sign +� +v(s) − λ +��� v(s) − λ +��p +is monotonically strictly decreasing and also has a root in the range [mins v(s), maxs v(s)]. +Proof. +hp(λ) = +� +s +v(s) − λ +|v(s) − λ||v(s) − λ|p +dhp +dλ (λ) = −p +� +s +|v(s) − λ|p−1 ≤ 0, +∀p ≥ 0. +(61) +Now, observe that hp(maxs v(s)) ≤ 0 and hp(mins v(s)) ≥ 0, hence by hp must have a root +in the range [mins v(s), maxs v(s)] as the function is continuous. +The above proposition ensures that a root ωp(v) can be easily found by binary search +between [mins v(s), maxs v(s)]. +Precisely, ϵ approximation of ωp(v) can be found in O(log( maxs v(s)−mins v(s) +ϵ +)) number of +iterations of binary search. And one evaluation of the function hp requires O(S) iterations. +And we have finite state-action space and bounded reward hence WLOG we can assume +|maxs v(s)|, |mins v(s)| are bounded by a constant. Hence, the complexity to approximate +ωp is O(S log( 1 +ϵ )). +Let ˆωp(v) be an ϵ-approximation of ωp(v), that is +�� ωp(v) − ˆωp(v) +��≤ ϵ. +And let ˆκp(v) be approximation of κp(v) using approximated mean, that is, +ˆκp(v) := ∥v − ˆωp(v)1∥p. +Now we will show that ϵ error in calculation of p-mean ωp, induces O(ϵ) error in estimation +of p-variance κp. Precisely, +��� κp(v) − ˆκp(v) +���= +��� +�� v − ωp(v)1 +�� +p − +�� v − ˆωp(v)1 +�� +p +��� +≤ +�� ωp(v)1 − ˆωp(v)1 +�� +p, +(reverse triangle inequality) += +�� 1 +�� +p +�� ωp(v) − ˆωp(v) +�� +≤ +�� 1 +�� +p ϵ +=S +1 +p ϵ ≤ Sϵ. +(62) +35 + +For general p, an ϵ approximation of κp(v) can be calculated in O(S log( S +ϵ ) iterations. Why? +We will estimate mean ωp to an ϵ/S tolerance (with cost O(S log( S +ϵ ) ) and then approximate +the κp with this approximated mean (cost O(S)). +J +Lp Water Filling/Pouring lemma +In this section, we are going to discuss the following optimization problem, +max +c +−α∥c∥q + ⟨c, b⟩ +such that +A +� +i=1 +ci = 1, +ci ≥ 0, +∀i +where α ≥ 0, referred as Lp-water pouring problem. We are going to assume WLOG that b +is sorted component wise, that is b1 ≥ b2, · · · ≥ bA. The above problem for p = 2, is studied +in [1]. The approach we are going to solve the problem is as follows: a) Write Lagrangian b) +Since the problem is convex, any solutions of KKT condition is global maximum. c) Obtain +conditions using KKT conditions. +Lemma 2. Let b ∈ RA be such that its components are in decreasing order (i,e bi ≥ bi+1), +α ≥ 0 be any non-negative constant, and +ζp := max +c +−α∥c∥q + ⟨c, b⟩ +such that +A +� +i=1 +ci = 1, +ci ≥ 0, +∀i, +(63) +and let c∗ be a solution to the above problem. Then +1. Higher components of b, gets higher weight in c∗. In other words, c∗ is also sorted +component wise in descending order, that is +c∗ +1 ≥ c∗ +2, · · · , ≥ c∗ +A. +2. The value ζp satisfies the following equation +αp = +� +bi≥ζp +(bi − ζp)p +3. 
The solution c of (63), is related to ζp as +ci = +(bi − ζp)p−11(bi ≥ ζp) +� +s(bi − ζp)p−11(bi ≥ ζp) +4. Observe that the top χp := max{i|bi ≥ ζp} actions are active and rest are passive. The +number of active actions can be calculated as +{k|αp ≥ +k +� +i=1 +(bi − bk)p} = {1, 2, · · · , χp}. +5. Things can be re-written as +ci ∝ +� +(bi − ζp)p−1 +if +i ≤ χp +0 +else +and +αp = +χp +� +i=1 +(bi − ζp)p +36 + +6. The function � +bi≥x(bi − x)p is monotonically decreasing in x, hence the root ζp can +be calculated efficiently by binary search between [b1 − α, b1]. +7. Solution is sandwiched as follows +bχp+1 ≤ ζp ≤ bχp +8. k ≤ χp if and only if there exist the solution of the following, +k +� +i=1 +(bi − x)p = αp +and +x ≤ bk. +9. If action k is active and there is greedy increment hope then action k + 1 is also active. +That is +k ≤ χp +and +λk ≤ bk+1 =⇒ k + 1 ≤ χp, +where +k +� +i=1 +(bi − λk)p = αp +and +λk ≤ bk. +10. If action k is active, and there is no greedy hope and then action k + 1 is not active. +That is, +k ≤ χp +and +λk > bk+1 =⇒ k + 1 > χp, +where +k +� +i=1 +(bi − λk)p = αp +and +λk ≤ bk. +And this implies k = χp. +Proof. +1. Let +f(c) := −α∥c∥q + ⟨b, c⟩. +Let c be any vector, and c′ be rearrangement c in descending order. Precisely, +c′ +k := cik, +where +ci1 ≥ ci2, · · · , ≥ ciA. +Then it is easy to see that f(c′) ≥ f(c). And the claim follows. +2. Writting Lagrangian of the optimization problem, and its derivative, +L = −α∥c∥q + ⟨c, b⟩ + λ( +� +i +ci − 1) + θici +∂L +∂ci += −α∥c∥1−q +q +|ci|q−2ci + bi + λ + θi, +(64) +λ ∈ R is multiplier for equality constraint � +i ci = 1 and θ1, · · · , θA ≥ 0 are multipliers +for inequality constraints ci ≥ 0, +∀i ∈ [A]. Using KKT (stationarity) condition, we +have +−α∥c∗∥1−q +q +|c∗ +i |q−2c∗ +i + bi + λ + θi = 0 +(65) +37 + +Let B := {i|c∗ +i > 0}, then +� +i∈B +c∗ +i [−α∥c∗∥1−q +q +|c∗ +i |q−2c∗ +i + bi + λ] = 0 +=⇒ − α∥c∗∥1−q +q +∥c∗∥q +q + ⟨c∗, b⟩ + λ = 0, +(using +� +i +c∗ +i = 1 and (c∗ +i )2 = |c∗ +i |2) +=⇒ − α∥c∗∥q + ⟨c∗, b⟩ + λ = 0 +=⇒ − α∥c∗∥q + ⟨c∗, b⟩ = −λ, +(re-arranging) +(66) +Now again using (65), we have +− α∥c∗∥1−q +q +|c∗ +i |q−2c∗ +i + bi + λ + θi = 0 +=⇒ α∥c∗∥1−q +q +|c∗ +i |q−2c∗ +i = bi + λ + θi, +∀i, +(re-arranging) +(67) +Now, if i ∈ B then θi = 0 from complimentry slackness, so we have +α∥c∗∥1−q +q +|c∗ +i |q−2c∗ +i = bi + λ > 0, +∀i ∈ B +by definition of B. Now, if for some i, bi + λ > 0 then bi + λ + θi > 0 as θi ≥ 0, that +implies +α∥c∗∥1−q +q +|c∗ +i |q−2c∗ +i = bi + λ + θi > 0 +=⇒ c∗ +i > 0 =⇒ i ∈ B. +So, we have, +i ∈ B ⇐⇒ bi + λ > 0. +To summarize, we have +α∥c∗∥1−q +q +|c∗ +i |q−2c∗ +i = (bi + λ)1(bi ≥ −λ), +∀i, +(68) +=⇒ +� +i +α +q +q−1 ∥c∗∥−q +q (c∗ +i )q = +� +i +(bi + λ) +q +q−1 1(bi ≥ −λ), +(taking q/(q − 1)th power and summing) +=⇒ αp = +A +� +i=1 +(bi + λ)p1(bi ≥ −λ). +(69) +So, we have, +ζp = −λ +such that +αp = +� +bi≥λ +(bi + λ)p. +=⇒ αp = +� +bi≥ζp +(bi − ζp)p +(70) +3. Furthermore, using (68), we have +α∥c∗∥1−q +q +|c∗ +i |q−2c∗ +i = (bi + λ)1(bi ≥ −λ) = (bi − ζp)1(bi ≥ ζp) +∀i, +=⇒ c∗ +i ∝ (bi − ζp) +1 +q−1 1(bi ≥ ζp) = +(bi − ζp)p−11(bi ≥ ζp) +� +i(bi − ζp)p−11(bi ≥ ζp), +(using +� +i +c∗ +i = 1). +(71) +38 + +4. Now, we move on to calculate the number of active actions χp. Observe that the +function +f(λ) := +A +� +i=1 +(bi − λ)p1(bi ≥ λ) − αp +(72) +is monotonically decreasing in λ and ζp is a root of f. This implies +f(x) ≤ 0 ⇐⇒ x ≥ ζp +=⇒ f(bi) ≤ 0 ⇐⇒ bi ≥ ζp +=⇒ {i|bi ≥ ζp} = {i|f(bi) ≤ 0} +=⇒ χp = max{i|bi ≥ ζp} = max{i|f(bi) ≤ 0}. 
+(73) +Hence, things follows by putting back in the definition of f. +5. We have, +αp = +A +� +i=1 +(bi − ζp)p1(bi ≥ ζp), +and +χp = max{i|bi ≥ ζp}. +Combining both we have +αp = +χp +� +i=1 +(bi − ζp)p. +And the other part follows directly. +6. Continuity and montonocity of the function � +bi≥x(bi − x)p is trivial. Now observe +that � +bi≥b1(bi − b1)p = 0 and � +bi≥b1−α(bi − (b1 − α))p ≥ αp, so it implies that it is +equal to αp in the range [b1 − α, b1]. +7. Recall that the ζp is the solution to the following equation +αp = +� +bi≥x +(bi − x)p. +And from the definition of χp, we have +αp < +χp+1 +� +i=1 +(bi − bχp+1)p = +� +bi≥bχp+1 +(bi − bχp+1)p, +and +αp ≥ +χp +� +i=1 +(bi − bχp)p = +� +bi≥bχp +(bi − bχp)p. +So from continuity, we infer the root ζp must lie between [bχp+1, bχ]. +8. We prove the first direction, and assume we have +k ≤ χp +=⇒ +k +� +i=1 +(bi − bk)p ≤ αp +(from definition of χp). +(74) +39 + +Observe the function f(x) := �k +i=1(bi − x)p is monotically decreasing in the range +(−∞, bk]. +Further, f(bk) ≤ αp and limx→−∞ f(x) = ∞, so from the continuity +argument there must exist a value y ∈ (−∞, bk] such that f(y) = αp. This implies that +k +� +i=1 +(bi − y)p ≤ αp, +and +y ≤ bk. +Hence, explicitly showed the existence of the solution. Now, we move on to the second +direction, and assume there exist x such that +k +� +i=1 +(bi − x)p = αp, +and +x ≤ bk. +=⇒ +k +� +i=1 +(bi − bk)p ≤ αp, +(as x ≤ bk ≤ bk−1 · · · ≤ b1) +=⇒ k ≤ χp. +9. We have k ≤ χp and λk such that +αp = +k +� +i=1 +(bi − λk)p, +and +λk ≤ bk, +(from above item) +≥ +k +� +i=1 +(bi − bk+1)p, +(as λk ≤ bk+1 ≤ bk) +≥ +k+1 +� +i=1 +(bi − bk+1)p, +(addition of 0). +(75) +From the definition of χp, we get k + 1 ≤ χp. +10. We are given +k +� +i=1 +(bi − λk)p = αp +=⇒ +k +� +i=1 +(bi − bk+1)p > αp, +(as λk > bk+1) +=⇒ +k+1 +� +i=1 +(bi − bk+1)p > αp, +(addition of zero) +=⇒ k + 1 > χp. +40 + +J.0.1 +Special case: L1 +For p = 1, by definition, we have +ζ1 = max +c +−α∥c∥∞ + ⟨c, b⟩ +such that +� +a∈A +ca = 1, +c ⪰ 0. +(76) +And χ1 is the optimal number of actions, that is +α = +χ1 +� +i=1 +(bi − ζ1) +=⇒ ζ1 = +�χ1 +i=1 bi − α +χ1 +. +Let λk be the such that +α = +k +� +i=1 +(bi − λk) +=⇒ λk = +�k +i=1 bi − α +k +. +Proposition 6. +ζ1 = max +k +λk +Proof. From lemma 2, we have +λ1 ≤ λ2 · · · ≤ λχ1. +Now, we have +λk − λk+m = +�k +i=1 bi − α +k +− +�k+m +i=1 bi − α +k + m += +�k +i=1 bi − α +k +− +�k +i=1 bi − α +k + m +− +�m +i=1 bk+i +k + m += m(�k +i=1 bi − α +k(k + m)) +− +�m +i=1 bk+i +k + m += +m +k + m( +�k +i=1 bi − α +k +− +�m +i=1 bk+i +m +) += +m +k + m(λk − +�m +i=1 bk+i +m +) +(77) +From lemma 2, we also know the stopping criteria for χ1, that is +λχ1 > bχ1+1 +=⇒ λχ1 > bχ1+i, +i ≥ 1, +(as bi are in descending order) +=⇒ λχ1 > +�m +i=1 bχ1+i +m +, +∀m ≥ 1. +41 + +Combining it with the (77), for all m ≥ 0 , we get +λχ1 − λχ1+m = +m +χ1 + m(λχ1 − +�m +i=1 bχ1+i +m +) +≥ 0 +=⇒ λχ1 ≥ λχ1+m +(78) +Hence, we get the desired result, +ζ1 = λχ1 = max +k +λk. +J.0.2 +Special case: max norm +For p = ∞, by definition, we have +ζ∞(b) = max +c +−α∥c∥1 + ⟨c, b⟩ +such that +� +a∈A +ca = 1, +c ⪰ 0. += max +c +−α + ⟨c, b⟩ +such that +� +a∈A +ca = 1, +c ⪰ 0. += − α + max +i +bi +(79) +J.0.3 +Special case: L2 +The problem is discussed in great details in [1], here we outline the proof. For p = 2, we +have +ζ2 = max +c +−α∥c∥2 + ⟨c, b⟩ +such that +� +a∈A +ca = 1, +c ⪰ 0. +(80) +Let λk be the solution of the following equation +α2 = +k +� +i=1 +(bi − λ)2, +λ ≤ bk += kλ2 − 2 +k +� +i=1 +λbi. 
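The lemma translates directly into a short solver: ζp is found by bisection on the monotone function of point 6, and the maximizer is read off from point 3. The sketch below (our own, with illustrative names) does exactly this and cross-checks the optimal value, which by (63) equals ζp, against a generic simplex-constrained optimizer.

import numpy as np
from scipy.optimize import minimize

def lp_water_filling(b, alpha, p, tol=1e-10):
    # Solve  max_c -alpha ||c||_q + <c, b>  over the simplex; return (zeta_p, c*).
    b = np.sort(b)[::-1]                               # b_1 >= b_2 >= ... as in the lemma
    lo, hi = b[0] - alpha, b[0]                        # zeta_p lies in this interval (point 6)
    f = lambda x: np.sum(np.clip(b - x, 0.0, None) ** p) - alpha ** p   # decreasing in x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    zeta = 0.5 * (lo + hi)
    w = np.clip(b - zeta, 0.0, None) ** (p - 1.0)      # active actions get weight (b_i - zeta)^(p-1)
    return zeta, w / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    b = np.sort(rng.normal(size=5))[::-1]
    alpha, p = 0.7, 3.0
    q = p / (p - 1.0)
    zeta, c = lp_water_filling(b, alpha, p)
    value = -alpha * np.linalg.norm(c, ord=q) + c @ b
    res = minimize(lambda c: alpha * np.linalg.norm(c, ord=q) - b @ c,
                   np.full(len(b), 1.0 / len(b)), method="SLSQP",
                   bounds=[(0.0, 1.0)] * len(b),
                   constraints=[{"type": "eq", "fun": lambda c: np.sum(c) - 1.0}])
    print(zeta, value, -res.fun)                       # all three should roughly agree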
+ +k +� +i=1 +(bi)2, +λ ≤ bk +=⇒ λk = +�k +i=1 bi ± +� +(�k +i=1 bi)2 − k(�k +i=1(bi)2 − α2) +k +, +and +λk ≤ bk += +�k +i=1 bi − +� +(�k +i=1 bi)2 − k(�k +i=1(bi)2 − α2) +k += +�k +i=1 bi +k +− +� +� +� +�α2 − +k +� +i=1 +(bi − +�k +i=1 bi +k +)2 +(81) +From lemma 2, we know +λ1 ≤ λ2 · · · ≤ λχ2 = ζ2 +42 + +where χ2 calculated in two ways: a) +χ2 = max +m {m| +m +� +i=1 +(bi − bm)2 ≤ α2} +b) +χ2 = min +m {m|λm ≤ bm+1} +We proceed greedily until stopping condition is met in lemma 2. Concretely, it is illustrated +in algorithm 7. +J.1 +L1 Water Pouring lemma +In this section, we re-derive the above water pouring lemma for p = 1 from scratch, just for +sanity check. As in the above proof, there is a possibility of some breakdown, as we had take +limits q → ∞. We will see that all the above results for p = 1 too. +Let b ∈ RA be such that its components are in decreasing order, i,e bi ≥ bi+1 and +ζ1 := max +c +−α∥c∥∞ + ⟨c, b⟩ +such that +A +� +i=1 +ci = 1, +ci ≥ 0, +∀i. +(82) +Lets fix any vector c ∈ RA, and let k1 := ⌊ +1 +maxi ci ⌋ and let +c1 +i = +� +� +� +� +� +maxi ci +if +i ≤ k1 +1 − k1 maxi ci +if +i = k1 + 1 +0 +else +Then we have, +−α∥c∥∞ + ⟨c, b⟩ = − α max +i +ci + +A +� +i=1 +cibi +≤ − α max +i +ci + +A +� +i=1 +c1 +i bi, +(recall bi is in decreasing order) += − α∥c1∥∞ + ⟨c1, b⟩ +(83) +Now, lets define c2 ∈ RA. Let +k2 = +� +k1 + 1 +if +�k1 +i=1 bi−α +k1 +≤ bk+1 +k1 +else +43 + +and let c2 +i = 1(i≤k2) +k2 +. Then we have, +−α∥c1∥∞ + ⟨c1, b⟩ = − α max +i +ci + +A +� +i=1 +c1 +i bi += − α max +i +ci + +k1 +� +i=1 +max +i +cibi + (1 − k1 max +i +ci)bk1+1, +(definition of c1) +=(−α + �k1 +i=1 bi +k1 +)k1 max +i +ci + bk1+1(1 − k1 max +i +ci), +(re-arranging) +≤−α + �k2 +i=1 bi +k2 += − α∥c2∥∞ + ⟨c2, b⟩ +(84) +The last inequality comes from the definition of k2 and c2. So we conclude that a optimal +solution is uniform over some actions, that is +ζ1 = max +c∈C −α∥c∥∞ + ⟨c, b⟩ += max +k +� −α + �k +i=1 bi +k +� +(85) +where C := {ck ∈ RA|ck +i = 1(i≤k) +k +} is set of uniform actions. Rest all the properties follows +same as Lp water pouring lemma. +K +Robust Value Iteration (Main) +In this section, we will discuss the main results from the paper except for time complexity +results. It contains the proofs of the results presented in the main body and also some other +corollaries/special cases. +K.1 +sa-rectangular robust policy evaluation and improvement +Theorem 8. (sa)-rectangular Lp robust Bellman operator is equivalent to reward regularized +(non-robust) Bellman operator, that is +(T π +Usa +p v)(s) = +� +a +π(a|s)[−αs,a − γβs,aκq(v) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′)], +and +(T ∗ +Usa +p v)(s) = max +a∈A[−αs,a − γβs,aκq(v) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′)], +where κp is defined in (7). +44 + +Proof. 
From definition robust Bellman operator and Usa +p = (R0 + R) × (P0 + P), we have, +(T π +Usa +p v)(s) = +min +R,P ∈Usa +p +� +a +π(a|s) +� +R(s, a) + γ +� +s′ +P(s′|s, a)v(s′) +� += +� +a +π(a|s) +� +R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� ++ +min +p∈P,r∈R +� +a +π(a|s) +� +r(s, a) + γ +� +s′ +p(s′|s, a)v(s′) +� +, +(from (sa)-rectangularity, we get) += +� +a +π(a|s) +� +R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� ++ +� +a +π(a|s) +min +ps,a∈Psa,rs,a∈Rs,a +� +rs,a + γ +� +s′ +ps,a(s′)v(s′) +� +� +�� +� +:=Ωsa(v) +(86) +Now we focus on regularizer function Ω, as follows +Ωsa(v) = +min +ps,a∈Ps,a,rs,a∈Rs,a +� +rs,a + γ +� +s′ +ps,a(s′)v(s′) +� += +min +rs,a∈Rs,a rs,a + γ +min +ps,a∈Psa +� +s′ +ps,a(s′)v(s′) += −αs,a + γ +min +∥psa∥p≤βs,a,� +s′ psa(s′)=0⟨ps,a, v⟩, += − αs,a − γβs,aκq(v), +(from lemma 1). +(87) +Putting back, we have +(T π +Usa +p v)(s) = +� +a +π(a|s) +� +−αs,a − γβs,aκq(v) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� +Again, reusing above results in optimal robust operator, we have +(T ∗ +Usa +p v)(s) = max +πs∈∆A +min +R,P ∈Usa +p +� +a +πs(a) +� +R(s, a) + γ +� +s′ +P(s′|s, a)v(s′) +� += max +πs∈∆A +� +a +πs(a) +� +−αs,a − γβs,aκp(v) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� += max +a∈A +� +−αs,a − γβs,aκq(v) + R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� +(88) +The claim is proved. +K.2 +S-rectangular robust policy evaluation +Theorem 9. S-rectangular Lp robust Bellman operator is equivalent to reward regularized +(non-robust) Bellman operator, that is +(T π +Us +pv)(s) = − +� +αs + γβsκq(v) +� +∥π(·|s)∥q + +� +a +π(a|s) +� +R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� +45 + +where κp is defined in (7) and ∥π(·|s)∥q is q-norm of the vector π(·|s) ∈ ∆A. +Proof. From definition of robust Bellman operator and Us +p = (R0 + R) × (P0 + P), we have +(T π +Us +pv)(s) = +min +R,P ∈Us +p +� +a +π(a|s) +� +R(s, a) + γ +� +s′ +P(s′|s, a)v(s′) +� += +� +a +π(a|s) +� +R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� +�� +� +nominal values +� ++ +min +p∈P,r∈R +� +a +π(a|s) +� +r(s, a) + γ +� +s′ +p(s′|s, a)v(s′) +� +(from s-rectangularity we have) += +� +a +π(a|s) +� +R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� ++ +min +ps∈Ps,rs∈Rs +� +a +π(a|s) +� +rs(a) + γ +� +s′ +ps(s′|a)v(s′) +� +� +�� +� +:=Ωs(πs,v) +(89) +where we denote πs(a) = π(a|s) as a shorthand. Now we calculate the regularizer function +as follows +Ωs(πs, v) := +min +rs∈Rs,ps∈Ps⟨rs + γvT ps, πs⟩ = min +rs∈Rs⟨rs, πs⟩ + γ min +ps∈Ps vT psπs += −αs∥πs∥q + γ min +ps∈Ps vT psπs, +(using 1 +p + 1 +q = 1 ) += − αs∥πs∥q + γ min +ps∈Ps +� +a +πs(a)⟨ps,a, v⟩ += − αs∥πs∥q + γ +min +� +a(βs,a)p≤(βs)p +min +∥psa∥p≤βs,a,� +s′ psa(s′)=0 +� +a +πs(a)⟨ps,a, v⟩ += − αs∥πs∥q + γ +min +� +a(βs,a)p≤(βs)p +� +a +πs(a) +min +∥psa∥p≤βs,a,� +s′ psa(s′)=0 +⟨ps,a, v⟩ += − αs∥πs∥q + γ +min +� +a(βsa)p≤(βs)p +� +a +πs(a)(−βsaκp(v)) +( from lemma 1) += − αs∥πs∥q − γκq(v) +max +� +a(βsa)p≤(βs)p +� +a +πs(a)βsa += − αs∥πs∥q − γκp(v)∥πs∥qβs +(using Holders) += − (αs + γβsκq(v))∥πs∥q. +(90) +Now putting above values in robust operator, we have +(T π +Us +pv)(s) = − +� +αs + γβsκq(v) +� +∥π(·|s)∥q+ +� +a +π(a|s) +� +R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +� +. +46 + +K.3 +s-rectangular robust policy improvement +Reusing robust policy evaluation results in section K.2, we have +(T ∗ +Uspv)(s) = max +πs∈∆A +min +R,P ∈Usa +p +� +a +πs(a) +� +R(s, a) + γ +� +s′ +P(s′|s, a)v(s′) +� += max +πs∈∆A +� +−(αs + γβsκq(v))∥πs∥q + +� +a +πs(a)(R(s, a) + γ +� +s′ +P(s′|s, a)v(s′)) +� +. 
+(91) +Observe that, we have the following form +(T ∗ +Uspv)(s) = max +c +−α∥c∥q + ⟨c, b⟩ +such that +A +� +i=1 +ci = 1, +c ⪰ 0, +(92) +where α = αs +γβsκq(v) and bi = R(s, ai)+γ � +s′ P(s′|s, ai)v(s′). Now all the results below, +follows from water pouring lemma ( lemma 2). +Theorem 10. (Policy improvement) The optimal robust Bellman operator can be evaluated +in following ways. +1. (T ∗ +Us +pv)(s) is the solution of the following equation that can be found using binary search +between +� +maxa Q(s, a) − σ, maxa Q(s, a) +� +, +� +a +� +Q(s, a) − x +�p 1 +� +Q(s, a) ≥ x +� += σp. +(93) +2. (T ∗ +Us +pv)(s) and χp(v, s) can also be computed through algorithm 3. +where σ = αs + γβsκq(v), and Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′). +Proof. The first part follows from lemma 2, point 2. The second part follows from lemma 2, +point 9 (greedy inclusion ) and point 10 (stopping condition). +Theorem 11. (Go To Policy) The greedy policy π w.r.t. value function v, defined as +T ∗ +Us +pv = T π +Us +pv is a threshold policy. It takes only those actions that has positive advantage, +with probability proportional to (p − 1)th power of its advantage. That is +π(a|s) ∝ (A(s, a))p−11(A(s, a) ≥ 0), +where A(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′) − (T ∗ +Us +pv)(s). +Proof. Follows from lemma 2, point 3. +Property 4. χp(v, s) is number of actions that has positive advantage, that is +χp(v, s) = +���� +� +a | (T ∗ +Us +pv)(s) ≤ R0(s, a) + γ +� +s′ +P0(s′|s, a)v(s′) +���� . +Proof. Follows from lemma 2, point 4. +47 + +Property 5. ( Value vs Q-value) (T ∗ +Us +pv)(s) is bounded by the Q-value of χth and (χ + 1)th +actions. That is +Q(s, aχ+1) < (T ∗ +Us +pv)(s) ≤ Q(s, aχ), +where +χ = χp(v, s), +Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′), and Q(s, a1) ≥ Q(s, a2), · · · Q(s, aA). +Proof. Follows from lemma 2, point 7. +Corollary 2. For p = 1, the optimal policy π1 w.r.t. value function v and uncertainty set +Us +1, can be computed directly using χ1(s) without calculating advantage function. That is +π1(as +i|s) = 1(i ≤ χ1(s)) +χ1(s) +. +Proof. Follows from Theorem 11 by putting p = 1. Note that it can be directly obtained +using L1 water pouring lemma (see section J.1) +Corollary 3. (For p = ∞) The optimal policy π w.r.t. value function v and uncertainty set +Us +∞ (precisely T ∗ +Us +∞v = T π +Us +∞v), is to play the best response, that is +π(a|s) = 1(a ∈ arg maxa Q(s, a)) +�� arg maxa Q(s, a) +�� +. +In case of tie in the best response, it is optimal to play any of the best responses with any +probability. +Proof. Follows from Theorem 11 by taking limit p → ∞. +Corollary 4. For p = ∞, T ∗ +Us +pv, the robust optimal Bellman operator evaluation can be +obtained in closed form. That is +(T ∗ +Us +∞v)(s) = max +a +Q(s, a) − σ, +where σ = αs + γβsκ1(v), Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′). +Proof. Let π be such that +T ∗ +Us +∞v = T π +Us +∞v. +This implies +(T ∗ +Uspv)(s) = +min +R,P ∈Usa +p +� +a +π(a|s) +� +R(s, a) + γ +� +s′ +P(s′|s, a)v(s′) +� += −(αs + γβsκp(v))∥π(·|s)∥q + +� +a +π(a|s)(R(s, a) + γ +� +s′ +P(s′|s, a)v(s′)). +(94) +From corollary 3, we know the that π is deterministic best response policy. Putting this we +get the desired result. +There is a another way of proving this, using Theorem 3 by taking limit p → ∞ carefully as +lim +p→∞ +� +a +� +Q(s, a) − T ∗ +Uspv)(s) +�p +1 +� +Q(s, a) ≥ T ∗ +Uspv)(s) +� +) +1 +p = σ, +(95) +where σ = αs + γβsκ1(v). +48 + +Corollary 5. For p = 1, the robust optimal Bellman operator T ∗ +Us +p, can be computed in +closed form. 
That is +(T ∗ +Us +pv)(s) = max +k +�k +i=1 Q(s, ai) − σ +k +, +where σ = αs + γβsκ∞(v), Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′), and Q(s, a1) ≥ +Q(s, a2), ≥ · · · ≥ Q(s, aA). +Proof. Follows from section J.0.1. +Corollary 6. The s rectangular Lp robust Bellman operator can be evaluated for p = 1, 2 +by algorithm 8 and algorithm 7 respectively. +Proof. It follows from the algorithm 3, where we solve the linear equation and quadratic +equation for p = 1, 2 respectively. For p = 2, it can be found in [1]. +Algorithm 7 Algorithm to compute S-rectangular L2 robust optimal Bellman Operator +1: Input: σ = αs + γβsκ2(v), +Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′). +2: Output (T ∗ +Us +2 v)(s), χ2(v, s) +3: Sort Q(s, ·) and label actions such that Q(s, a1) ≥ Q(s, a2), · · · . +4: Set initial value guess λ1 = Q(s, a1) − σ and counter k = 1. +5: while k ≤ A − 1 and λk ≤ Q(s, ak) do +6: +Increment counter: k = k + 1 +7: +Update value estimate: +λk = 1 +k +� +k +� +i=1 +Q(s, ai) − +� +� +� +�kσ2 + ( +k +� +i=1 +Q(s, ai))2 − k +k +� +i=1 +(Q(s, ai))2 +� +8: end while +9: Return: λk, k +Algorithm 8 Algorithm to compute S-rectangular L1 robust optimal Bellman Operator +1: Input: σ = αs + γβsκ∞(v), +Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)v(s′). +2: Output (T ∗ +Us +1 v)(s), χ1(v, s) +3: Sort Q(s, ·) and label actions such that Q(s, a1) ≥ Q(s, a2), · · · . +4: Set initial value guess λ1 = Q(s, a1) − σ and counter k = 1. +5: while k ≤ A − 1 and λk ≤ Q(s, ak) do +6: +Increment counter: k = k + 1 +7: +Update value estimate: +λk = 1 +k +� +k +� +i=1 +Q(s, ai) − σ +� +8: end while +9: Return: λk, k +49 + +L +Time Complexity +In this section, we will discuss time complexity of various robust MDPs and compare it with +time complexity of non-robust MDPs. We assume that we have the knowledge of nominal +transition kernel and nominal reward function for robust MDPs, and in case of non-robust +MDPs, we assume the knowledge of the transition kernel and reward function. We divide +the discussion into various parts depending upon their similarity. +L.1 +Exact Value Iteration: Best Response +In this section, we will discuss non-robust MDPs, (sa)-rectangular L1/L2/L∞ robust MDPs +and s-rectangular L∞ robust MDPs. They all have a common theme for value iteration as +follows, for the value function v, their Bellman operator ( T ) evaluation is done as +(T v)(s) = +max +a +���� +action cost +� +R(s, a) + αs,a +κ(v) +���� +reward penalty/cost ++γ +� +s′ +P(s′|s, a)v(s′) +� +�� +� +sweep +� +. +(96) +’Sweep’ requires O(S) iterations and ’action cost’ requires O(A) iterations. Note that the +reward penalty κ(v) doesn’t depend on state and action. It is calculated only once for value +iteration for all states. The above value update has to be done for each states , so one full +update requires +O +� +S(action cost)(sweep cost +� ++reward cost +� += O +� +S2A + reward cost +� +Since the value iteration is a contraction map, so to get ϵ-close to the optimal value, it +requires O(log( 1 +ϵ )) full value update, so the complexity is +O +� +log(1 +ϵ ) +� +S2A + reward cost +�� +. +1. Non-robust MDPs: The cost of ’reward is zero as there is no regularizer to compute. +The total complexity is +O +� +log(1 +ϵ ) +� +S2A + 0 +�� += O +� +log(1 +ϵ )S2A +� +. +2. (sa)-rectangular L1/L2/L∞ and s-rectangular L∞ robust MDPs: We need +to calculate the reward penalty (κ1(v)/κ2(v)/κ∞) that takes O(S) iterations. As +calculation of mean, variance and median, all are linear time compute. 
Hence the +complexity is +O +� +log(1 +ϵ ) +� +S2A + S +�� += O +� +log(1 +ϵ )S2A +� +. +L.2 +Exact Value iteration: Top k response +In this section, we discuss the time complexity of s-rectangular L1/L2 robust MDPs as in +algorithm 5. We need to calculate the reward penalty (κ∞(v)/κ2(v) in (40)) that takes O(S) +iterations. Then for each state we do: sorting of Q-values in (45), value evaluation in (46), +50 + +update Q-value in (44) that takes O(A log(A)), O(A), O(SA) iterations respectively. Hence +the complexity is +total iteration(reward cost (40) + S( sorting (45) + value evaluation (46) +Q-value(44)) += log(1 +ϵ )(S + S(A log(A) + A + SA) +O +� +log(1 +ϵ ) +� +S2A + SA log(A) +�� +. +For general p, we need little caution as kp(v) can’t be calculated exactly but approximately +by binary search. And it is the subject of discussion for the next sections. +L.3 +Inexact Value Iteration: sa-rectangular Lp robust MDPs (U sa +p ) +In this section, we will study the time complexity for robust value iteration for (sa)-rectangular +Lp robust MDPs for general p. Recall, that value iteration takes best penalized action, that +is easy to compute. But reward penalization depends on p-variance measure κp(v), that we +will estimate by ˆκp(v) through binary search. We have inexact value iterations as +vn+1(s) := max +a∈A[αsa − γβsaˆκq(vn) + R0(s, a) + γ +� +s′ +P0(s′|s, a)vn(s′)] +where ˆκq(vn) is a ϵ1 approximation of κq(vn), that is |ˆκq(vn) − κq(vn)| ≤ ϵ1. Then it is easy +to see that we have bounded error in robust value iteration, that is +∥vn+1 − T ∗ +Usa +p vn∥∞ ≤ γβmaxϵ1 +where βmax := maxs,a βs,a +Proposition 7. Let T ∗ +U be a γ contraction map, and v∗ be its fixed point. And let {vn, n ≥ 0} +be approximate value iteration, that is +∥vn+1 − T ∗ +U vn∥∞ ≤ ϵ +then +lim +n→∞∥vn − v∗∥∞ ≤ +ϵ +1 − γ +moreover, it converges to the +ϵ +1−γ radius ball linearly, that is +∥vn − v∗∥∞ − +ϵ +1 − γ ≤ cγn +where c = +1 +1−γ ϵ + ∥v0 − v∗∥∞. +51 + +Proof. +∥vn+1 − v∗∥∞ =∥vn+1 − T ∗ +U v∗∥∞ +=∥vn+1 − T ∗ +U vn + T ∗ +U vn − T ∗ +U v∗∥∞ +≤∥vn+1 − T ∗ +U vn∥∞ + ∥T ∗ +U vn − T ∗ +U v∗∥∞ +≤∥vn+1 − T ∗ +U vn∥∞ + γ∥vn − v∗∥∞, +(contraction) +≤ϵ + γ∥vn − v∗∥∞, +(approximate value iteration) +=⇒ ∥vn − v∗∥∞ = +n−1 +� +k=0 +γkϵ + γn∥v0 − v∗∥∞, +(unrolling above recursion) +=1 − γn +1 − γ ϵ + γn∥v0 − v∗∥∞ +=γn[ +1 +1 − γ ϵ + ∥v0 − v∗∥∞] + +ϵ +1 − γ +(97) +Taking limit n → ∞ both sides, we get +lim +n→∞∥vn − v∗∥∞ ≤ +ϵ +1 − γ . +Lemma 3. For Usa +p , the total iteration cost is log( 1 +ϵ )S2A + (log( 1 +ϵ ))2 to get ϵ close to the +optimal robust value function. +Proof. We calculate κq(v) with ϵ1 = (1−γ)ϵ +3 +tolerance that takes O(S log( S +ϵ1 )) using binary +search (see section I.2). Now, we do approximate value iteration for n = log( 3∥v0−v∗∥∞ +ϵ +). +Using the above lemma, we have +∥vn − v∗ +Usa +p ∥∞ =γn[ +1 +1 − γ ϵ1 + ∥v0 − v∗ +Usa +p ∥∞] + +ϵ1 +1 − γ +≤γn[ ϵ +3 + ∥v0 − v∗ +Usa +p ∥∞] + ϵ +3 +≤γn ϵ +3 + ϵ +3 + ϵ +3 ≤ ϵ. +(98) +In summary, we have action cost O(A), reward cost O(S log( S +ϵ )), sweep cost O(S) and total +number of iterations O(log( 1 +ϵ )). So the complexity is +(number of iterations) +� +S(actions cost) (sweep cost) + reward cost +� += log(1 +ϵ ) +� +S2A + S log(S +ϵ ) +� += log(1 +ϵ )(S2A + S log(1 +ϵ ) + S log(S)) += log(1 +ϵ )S2A + S(log(1 +ϵ ))2 +52 + +L.4 +Inexact Value Iteration: s-rectangular Lp robust MDPs +In this section, we study the time complexity for robust value iteration for s-rectangular +Lp robust MDPs for general p ( algorithm 4). 
Recall, that value iteration takes regularized +actions and penalized reward. And reward penalization depends on q-variance measure κq(v), +that we will estimate by ˆκq(v) through binary search, then again we will calculate T ∗ +Usa +p by +binary search with approximated κq(v). Here, we have two error sources ((40), (46)) as +contrast to (sa)-rectangular cases, where there was only one error source from the estimation +of κq. +First, we account for the error caused by the first source (κq). Here we do value iteration +with approximated q-variance ˆκq, and exact action regularizer. We have +vn+1(s) := λ +s.t. +αs + γβsˆκq(v) = ( +� +Q(s,a)≥λ +(Q(s, a) − λ)p) +1 +p +where Q(s, a) = R0(s, a) + γ � +s′ P0(s′|s, a)vn(s′), and |ˆκq(vn) − κq(vn)| ≤ ϵ1. Then from +the next result (proposition 8), we get +∥vn+1 − T ∗ +Usa +p vn∥∞ ≤ γβmaxϵ1 +where βmax := maxs,a βs,a +Proposition 8. Let ˆκ be an an ϵ-approximation of κ, that is |ˆκ − κ| ≤ ϵ, and let b ∈ RA +be sorted component wise, that is, b1 ≥, · · · , ≥ bA. Let λ be the solution to the following +equation with exact parameter κ, +α + γβκ = ( +� +bi≥λ +|bi − λ|p) +1 +p +and let ˆλ be the solution of the following equation with approximated parameter ˆκ, +α + γβˆκ = ( +� +bi≥ˆλ +|bi − ˆλ|p) +1 +p , +then ˆλ is an O(ϵ)-approximation of λ, that is +|λ − ˆλ| ≤ γβϵ. +Proof. Let the function f : [bA, b1] → R be defined as +f(x) := ( +� +bi≥x +|bi − x|p) +1 +p . +We will show that derivative of f is bounded, implying its inverse is bounded and hence +Lipschitz, that will prove the claim. Let proceed +df(x) +dx += −( +� +bi≥x +|bi − x|p) +1 +p −1 � +bi≥x +|bi − x|p−1 += − +� +bi≥x |bi − x|p−1 +(� +bi≥x |bi − x|p) +p−1 +p += − +� (� +bi≥x |bi − x|p−1) +1 +p−1 +(� +bi≥x |bi − x|p) +1 +p +�p−1 +≤ −1. +(99) +53 + +The inequality follows from the following relation between Lp norm, +∥x∥a ≥ ∥x∥b, +∀0 ≤ a ≤ b. +It is easy to see that the function f is strictly monotone in the range bA, b1], so its inverse is +well defined in the same range. Then derivative of the inverse of the function f is bounded as +0 ≥ d +dxf −(x) ≥ −1. +Now, observe that λ = f −(α + γβκ) and ˆλ = f −(α + γβˆκ), then by Lipschitzcity, we have +|λ − ˆλ| = |f −(α + γβκ) − f −(α + γβˆκ)| ≤ γβ| − κ − ˆκ)| ≤ γβϵ. +Lemma 4. For Us +p, the total iteration cost is O +� +log( 1 +ϵ ) +� +S2A + SA log( A +ϵ ) +�� +to get ϵ +close to the optimal robust value function. +Proof. We calculate κq(v) in (40) with ϵ1 = (1−γ)ϵ +6 +tolerance that takes O(S log( S +ϵ1 )) iterations +using binary search (see section I.2). Then for every state, we sort the Q values (as in (45)) +that costs O(A log(A)) iterations. In each state, to update value, we do again binary search +with approximate κq(v) upto ϵ2 := (1−γ)ϵ +6 +tolerance, that takes O(log( 1 +ϵ2 )) search iterations +and each iteration cost O(A), altogether it costs O(A log( 1 +ϵ2 )) iterations. Sorting of actions +and binary search adds upto O(A log( A +ϵ )) iterations (action cost). So we have (doubly) +approximated value iteration as following, +|vn+1(s) − ˆλ| ≤ ϵ1 +(100) +where +(αs + γβsˆκq(vn))p = +� +Qn(s,a)≥ˆλ +(Qn(s, a) − ˆλ)p +and +Qn(s, a) = R0(s, a) + γ +� +s′ +P0(s′|s, a)vn(s′), +|ˆκq(vn) − κq(vn)| ≤ ϵ1. +And we do this approximate value iteration for n = log( 3∥v0−v∗∥∞ +ϵ +). Now, we do error +analysis. By accumulating error, we have +|vn+1(s) − (T ∗ +Us +pvn)(s)| ≤|vn+1(s) − ˆλ| + |ˆλ − (T ∗ +Us +pvn)(s)| +≤ϵ1 + |ˆλ − (T ∗ +Us +pvn)(s)|, +(by definition) +≤ϵ1 + γβmaxϵ1, +(from proposition 8) +≤2ϵ1. +(101) +where βmax := maxs βs, γ ≤ 1. 
Now we do approximate value iteration, and from Proposition 7 we get
\[
\|v_n - v^*_{\mathcal{U}^s_p}\|_\infty \;\le\; \frac{2\epsilon_1}{1-\gamma} + \gamma^n\Big[\frac{2\epsilon_1}{1-\gamma} + \|v_0 - v^*_{\mathcal{U}^s_p}\|_\infty\Big]. \qquad (102)
\]
Now, putting in the value of n, we have
\[
\|v_n - v^*_{\mathcal{U}^s_p}\|_\infty
\;\le\; \gamma^n\Big[\frac{2\epsilon_1}{1-\gamma} + \|v_0 - v^*_{\mathcal{U}^s_p}\|_\infty\Big] + \frac{2\epsilon_1}{1-\gamma}
\;\le\; \gamma^n\Big[\frac{\epsilon}{3} + \|v_0 - v^*_{\mathcal{U}^s_p}\|_\infty\Big] + \frac{\epsilon}{3}
\;\le\; \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} \;=\; \epsilon. \qquad (103)
\]
To summarize, we perform O(log(1/ϵ)) full value iterations. The cost of evaluating the reward penalty is O(S log(S/ϵ)). For each state, evaluating the Q-values from the value function requires O(SA) iterations, sorting the actions according to their Q-values requires O(A log A) iterations, and the binary search for the value update requires O(A log(1/ϵ)) iterations. So the complexity is
\[
O\Big(\text{(total iterations)}\big(\text{reward cost} + S(\text{Q-value} + \text{sorting} + \text{binary search for value})\big)\Big)
= O\Big(\log\tfrac{1}{\epsilon}\Big(S\log\tfrac{S}{\epsilon} + S\big(SA + A\log A + A\log\tfrac{1}{\epsilon}\big)\Big)\Big)
= O\Big(\log\tfrac{1}{\epsilon}\Big(S^2A + SA\log A + SA\log\tfrac{1}{\epsilon}\Big)\Big)
= O\Big(\log\tfrac{1}{\epsilon}\Big(S^2A + SA\log\tfrac{A}{\epsilon}\Big)\Big).
\]
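Putting the pieces of this section together, the following is an illustrative end-to-end sketch (ours, not the paper's code) of s-rectangular Lp robust value iteration for a general p, exactly along the lines analyzed above: one binary search per sweep for the q-mean/q-variance, and one binary search per state for the value update of Theorem 10. The radii αs = βs = 0.1 and all names are placeholders.

import numpy as np

def q_variance(v, q, tol=1e-8):
    # Binary search for the q-mean (root of the decreasing function of Proposition 5),
    # then kappa_q(v) = ||v - omega_q 1||_q.
    lo, hi = v.min(), v.max()
    h = lambda w: np.sum(np.sign(v - w) * np.abs(v - w) ** (q - 1.0))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    return np.linalg.norm(v - 0.5 * (lo + hi), ord=q)

def s_rect_value_update(q_row, sigma, p, tol=1e-8):
    # Solve  sigma^p = sum_{Q(s,a) >= x} (Q(s,a) - x)^p  for x, as in equation (93).
    lo, hi = q_row.max() - sigma, q_row.max()
    f = lambda x: np.sum(np.clip(q_row - x, 0.0, None) ** p) - sigma ** p
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def s_rect_lp_robust_vi(P, R, gamma, p, alpha=0.1, beta=0.1, iters=100):
    S, A, _ = P.shape
    q = p / (p - 1.0)
    v = np.zeros(S)
    for _ in range(iters):
        kappa = q_variance(v, q)                      # reward/kernel penalty, once per sweep
        Q = R + gamma * P @ v                         # nominal Q-values, shape (S, A)
        sigma = alpha + gamma * beta * kappa
        v = np.array([s_rect_value_update(Q[s], sigma, p) for s in range(S)])
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    S, A, gamma, p = 20, 5, 0.9, 5.0
    P = rng.random((S, A, S)); P /= P.sum(axis=-1, keepdims=True)
    R = rng.random((S, A))
    print(s_rect_lp_robust_vi(P, R, gamma, p)[:5])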