Monday, April 7, 2014

Research Paper on Discrete Distributions

RESEARCH REPORT

ON CONVEX PROBABILISTIC PROGRAMMING
WITH DISCRETE DISTRIBUTIONS

Darinka Dentcheva, András Prékopa, Andrzej Ruszczyński

RRR 49-2000, SEPTEMBER, 2000

RUTCOR
Rutgers Center for Operations Research
Rutgers University
640 Bartholomew Road
Piscataway, New Jersey 08854-8003, USA
Telephone: 732-445-3804
Telefax: 732-445-5472
Email: rrr@rutcor.rutgers.edu

Authors' address: RUTCOR, Rutgers University, 640 Bartholomew Road, Piscataway, NJ 08854-8003, USA





RUTCOR RESEARCH REPORT
RRR 49-2000, SEPTEMBER, 2000

ON CONVEX PROBABILISTIC PROGRAMMING
WITH DISCRETE DISTRIBUTIONS
Darinka Dentcheva, András Prékopa, Andrzej Ruszczyński
Abstract. We consider convex stochastic programming problems with probabilistic constraints involving integer-valued random variables. The concept of a p-efficient point of a probability distribution is used to derive various equivalent problem formulations. Next we introduce the concept of r-concave discrete probability distributions and analyse its relevance for problems under consideration. These notions are used to derive lower and upper bounds for the optimal value of probabilistically constrained convex stochastic programming problems with discrete random variables.



1 Introduction

Let f: R^n → R and g: R^n → R^s be concave functions, and let D be a closed convex set. If in the convex program

max f(x)
subject to g(x) ≥ ξ,
x ∈ D,

the vector ξ is random, we require that g(x) ≥ ξ shall hold at least with some prescribed probability p ∈ (0, 1), rather than for all possible realizations of the right hand side. This leads to the convex programming problem with probabilistic constraints:

max f(x)
subject to P{g(x) ≥ ξ} ≥ p, (1)
x ∈ D,

where the symbol P denotes probability. Programming under probabilistic constraints was initiated by Charnes, Cooper and Symonds in [4]. They formulated probabilistic constraints individually for each stochastic constraint. Joint probabilistic constraints for independent random variables were used first by Miller and Wagner in [10]. The general case was introduced and first studied by the second author of the present paper in [12, 15].

Much is known about problem (1) in the case where ξ has a continuous probability distribution (see [17] and the references therein). However, the convex case with a discrete distribution has not been addressed yet.

Although we concentrate on integer random variables, all our results easily extend to other discrete distributions with non-uniform grids, under the condition that a uniform lower bound on the distance of grid points in each coordinate can be found. We use Z and Z_+ to denote the set of integers and nonnegative integers, respectively. The inequality ≥ for vectors is always understood coordinate-wise.
2 p-Efficient Points
Let us define the set

Z_p = {y ∈ R^s : P(ξ ≤ y) ≥ p}. (2)

Clearly, problem (1) can be compactly rewritten as

max f(x)
subject to g(x) ∈ Z_p, (3)
x ∈ D.

The structure of Z_p needs to be analysed in more detail. Let F denote the probability distribution function of ξ, and F_i the marginal probability distribution function of the ith component ξ_i. By assumption, the set Z of all possible values of the random vector ξ is included in Z^s. We shall use the concept of a p-efficient point, introduced in [16].



PAGE 2 RRR 49-2000
Definition 2.1 Let p ∈ [0, 1]. A point v ∈ R^s is called a p-efficient point of the probability distribution function F, if F(v) ≥ p and there is no y ≤ v, y ≠ v, such that F(y) ≥ p.

Obviously, for a scalar random variable ξ and for every p ∈ (0, 1) there is exactly one p-efficient point: the smallest v such that F(v) ≥ p. Since F(v) ≤ F_i(v_i) for every v ∈ R^s and i = 1, ..., s, we have the following result.
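In the scalar case the unique p-efficient point is therefore just a quantile: the smallest support value at which the distribution function reaches p. A minimal sketch (the helper name and the two-dice example are ours, not the paper's):

```python
from fractions import Fraction

def scalar_p_efficient(pmf, p):
    """Smallest v with F(v) >= p, for a pmf given as {value: probability}.
    This is the unique p-efficient point of a scalar distribution."""
    total = Fraction(0)
    for v in sorted(pmf):
        total += pmf[v]
        if total >= p:
            return v
    raise ValueError("p exceeds total probability mass")

# Example: sum of two fair dice.
dice = {s: Fraction(sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == s), 36)
        for s in range(2, 13)}
print(scalar_p_efficient(dice, Fraction(9, 10)))   # smallest v with F(v) >= 0.9 -> 10
```

Exact rational arithmetic via `Fraction` avoids any floating-point tie issues at the threshold.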
Lemma 2.2 Let p ∈ (0, 1) and let l_i be the p-efficient point of the one-dimensional marginal distribution F_i, i = 1, ..., s. Then every v ∈ R^s such that F(v) ≥ p must satisfy the inequality v ≥ l = (l_1, ..., l_s).

Rounding down to the nearest integer does not change the value of the distribution function, so p-efficient points of a random vector with all integer components (shortly, integer random vector) must be integer. We can thus use Lemma 2.2 to get the following fact.
Theorem 2.3 For each p ∈ (0, 1) the set of p-efficient points of an integer random vector is nonempty and finite.

Proof. The result follows from Dickson's Lemma [1, Cor. 4.48] and Lemma 2.2. □

Let p ∈ (0, 1) and let v^j, j ∈ J, be all p-efficient points of ξ. By Theorem 2.3, J is a finite set. Let us define the cones

K_j = v^j + R^s_+, j ∈ J.
Remark 2.4 Z_p = ∪_{j∈J} K_j.

Proof. If y ∈ Z_p, then either ⌊y⌋ is p-efficient or there exists an integer v ≤ ⌊y⌋, v ≠ ⌊y⌋, v ∈ Z_p. By Lemma 2.2 one must have l ≤ v. Since there are only finitely many integer points l ≤ v ≤ ⌊y⌋, one of them, v^j, must be p-efficient, and so y ∈ K_j. □

Thus, we obtain (for 0 < p < 1) the following disjunctive formulation of (3):

max f(x)
subject to g(x) ∈ ∪_{j∈J} K_j, (4)
x ∈ D.

Its main advantage is an insight into the nature of the non-convexity of the feasible set.
A straightforward way to solve (1) is to find all p-efficient points v^j, j ∈ J, and to process all convex programming problems

max f(x)
subject to g(x) ≥ v^j, (5)
x ∈ D.

Specialized bounding-pruning techniques can be used to avoid solving all of them. For multi-dimensional random vectors the number of p-efficient points can be very large and their straightforward enumeration very difficult. It would be desirable, therefore, to avoid the complete enumeration and to search for promising p-efficient points only. We shall return to this issue in Section 5.
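For small supports, the enumeration discussed above can be carried out directly from Definition 2.1, using Lemma 2.2 to bound the search box from below. The sketch below is our own illustration (the function names and the toy two-coin distribution are assumptions, not from the paper):

```python
import itertools
from fractions import Fraction

def cdf(pmf, v):
    """F(v) = P(xi <= v) coordinate-wise; pmf maps integer tuples to probabilities."""
    return sum(q for pt, q in pmf.items() if all(a <= b for a, b in zip(pt, v)))

def p_efficient_points(pmf, p, lo, hi):
    """Brute-force search on the integer box [lo, hi]; Lemma 2.2 justifies
    starting the box at the vector of marginal p-efficient points."""
    box = itertools.product(*(range(l, h + 1) for l, h in zip(lo, hi)))
    candidates = [v for v in box if cdf(pmf, v) >= p]
    # keep v only if no other candidate y <= v, y != v (minimality of Definition 2.1)
    return [v for v in candidates
            if not any(y != v and all(a <= b for a, b in zip(y, v)) for y in candidates)]

# Toy example: two independent fair 0/1 coins (an assumed illustrative distribution).
pmf = {(a, b): Fraction(1, 4) for a in (0, 1) for b in (0, 1)}
print(p_efficient_points(pmf, Fraction(1, 2), (0, 0), (1, 1)))  # -> [(0, 1), (1, 0)]
```

The example already shows the non-convexity: for p = 1/2 there are two incomparable p-efficient points, and Z_p is the union of the two cones they generate.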



3 r-Concave Discrete Distribution Functions
Since the set Z_p need not be convex, it is essential to analyse its properties and to find equivalent formulations with more convenient structures. To this end we shall recall and adapt the notion of r-concavity of a distribution function. It uses the generalized mean function m_r: R_+ × R_+ × [0, 1] → R defined as follows:

m_r(a, b, λ) = 0 for ab = 0,

and if a > 0, b > 0, 0 ≤ λ ≤ 1, then

m_r(a, b, λ) = a^λ b^(1−λ)                if r = 0,
               max{a, b}                  if r = ∞,
               min{a, b}                  if r = −∞,
               (λa^r + (1−λ)b^r)^(1/r)    otherwise.
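The case analysis above translates directly into code. This is a small helper we add for illustration only (the name `m_r` and the numeric example are our assumptions):

```python
import math

def m_r(a, b, lam, r):
    """Generalized mean m_r(a, b, lam) for a, b >= 0, lam in [0, 1]."""
    if a == 0 or b == 0:
        return 0.0
    if r == 0:
        return a ** lam * b ** (1 - lam)   # geometric (log-concave) case
    if r == math.inf:
        return max(a, b)
    if r == -math.inf:
        return min(a, b)
    return (lam * a ** r + (1 - lam) * b ** r) ** (1 / r)

# Sanity check: for fixed a, b, lam the mean is nondecreasing in r,
# which is why r-concavity implies rho-concavity for all rho <= r.
vals = [m_r(0.2, 0.8, 0.5, r) for r in (-math.inf, -1, 0, 1, math.inf)]
print(vals)   # approximately 0.2, 0.32, 0.4, 0.5, 0.8
```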
Definition 3.1 A distribution function F: R^s → [0, 1] is called r-concave, where r ∈ [−∞, ∞], if

F(λx + (1−λ)y) ≥ m_r(F(x), F(y), λ)

for all x, y ∈ R^s and all λ ∈ [0, 1].

If r = −∞ we call F quasi-concave, for r = 0 it is known as log-concave, and for r = 1 the function F is concave in the usual sense. The concept of a log-concave probability measure (the case r = 0) was introduced and studied in [13, 14]. The notion of r-concavity and corresponding results were given in [2, 3]. For a detailed description and proofs, see [17].

By monotonicity, r-concavity of a distribution function is equivalent to the inequality

F(z) ≥ m_r(F(x), F(y), λ)

for all z ≥ λx + (1−λ)y. Clearly, distribution functions of integer random variables are not continuous, and cannot be r-concave in the sense of the above definition. Therefore, we relax Definition 3.1 in the following way.
Definition 3.2 A distribution function F is called r-concave on the set A ⊂ R^s, with r ∈ [−∞, ∞], if

F(z) ≥ m_r(F(x), F(y), λ)

for all z, x, y ∈ A and λ ∈ (0, 1) such that z ≥ λx + (1−λ)y.

To illustrate the relation between the two definitions let us consider the case of integer random vectors which are roundups of continuously distributed random vectors.



Remark 3.3 If the distribution function of a random vector η is r-concave on R^s, then the distribution function of ξ = ⌈η⌉ is r-concave on Z^s.

The last property follows from the observation that at integer points both distribution functions coincide. For the relations between the r-concavity of the distribution function of η and the r-concavity of its density the reader is referred to [2, 3, 19]. The concept of r-concavity on a set can be used to find an equivalent representation of the set Z_p given by (2).
Theorem 3.4 Let Z be the set of all possible values of an integer random vector ξ. If the distribution function F of ξ is r-concave on Z + Z^s_+ for some r ∈ [−∞, ∞], then for every p ∈ (0, 1) one has

Z_p = {y ∈ R^s : y ≥ z, z ≥ Σ_{j∈J} λ_j v^j, Σ_{j∈J} λ_j = 1, λ ≥ 0, z ∈ Z^s},

where v^j, j ∈ J, are the p-efficient points of F.
Proof. By the monotonicity of F we have F(y) ≥ F(z) if y ≥ z. It is, therefore, sufficient to show that P(ξ ≤ z) ≥ p for all z ∈ Z^s such that z ≥ Σ_{j∈J} λ_j v^j with λ ≥ 0, Σ_{j∈J} λ_j = 1. We consider five cases with respect to r.

Case 1: r = ∞. It follows from the definition of r-concavity that F(z) ≥ max{F(v^j), j ∈ J : λ_j ≠ 0} ≥ p.

Case 2: r = −∞. Since F(v^j) ≥ p for each index j ∈ J such that λ_j ≠ 0, the assertion follows as in Case 1.

Case 3: r = 0. By the definition of r-concavity,

F(z) ≥ Π_{j∈J} [F(v^j)]^{λ_j} ≥ Π_{j∈J} p^{λ_j} = p.

Case 4: r ∈ (−∞, 0). By the definition of r-concavity,

[F(z)]^r ≤ Σ_{j∈J} λ_j [F(v^j)]^r ≤ Σ_{j∈J} λ_j p^r = p^r.

Since r < 0, we obtain F(z) ≥ p.

Case 5: r ∈ (0, ∞). By the definition of r-concavity,

[F(z)]^r ≥ Σ_{j∈J} λ_j [F(v^j)]^r ≥ Σ_{j∈J} λ_j p^r = p^r. □



Under the conditions of Theorem 3.4, problem (4) can be formulated in the following equivalent way:

max f(x) (6)
subject to x ∈ D, (7)
g(x) ≥ z, (8)
z ∈ Z^s, (9)
z ≥ Σ_{j∈J} λ_j v^j, (10)
Σ_{j∈J} λ_j = 1, (11)
λ_j ≥ 0, j ∈ J. (12)

So, the probabilistic constraint has been replaced by linear equations and inequalities, together with the integrality requirement (9). This condition cannot be dropped, in general. If ξ takes values on a non-uniform grid, condition (9) should be replaced by the requirement that z is a grid point. The difficulty comes from the implicitly given p-efficient points v^j, j ∈ J. Our objective will be to avoid their enumeration and to develop an approach that generates them only when needed. An obvious question arises: which distributions are r-concave in our sense? We devote the remaining part of this section to some useful observations on this topic.
Directly from the definition and Holder’s inequality we obtain the following property.
Remark 3.5 If a distribution function F is r-concave on the set A ⊂ R^s with some r ∈ [−∞, ∞], then it is ρ-concave on A for all ρ ∈ [−∞, r].
For binary random vectors we have the strongest possible property.
Proposition 3.6 Every distribution function of an s-dimensional binary random vector is r-concave on Z^s for all r ∈ [−∞, ∞].

Proof. Let x, y ∈ Z^s, λ ∈ (0, 1) and let z ≥ λx + (1−λ)y. By projecting x and y on {0, 1}^s we get some x′ and y′ such that F(x′) = F(x), F(y′) = F(y) and z ≥ λx′ + (1−λ)y′. Since z is integer and x′ and y′ binary, then z ≥ x′ and z ≥ y′. Thus F(z) ≥ max(F(x′), F(y′)) = max(F(x), F(y)). Consequently, F is ∞-concave and the result follows from Remark 3.5. □

For scalar integer random variables our definition of r-concavity is related to log-concavity of sequences. A sequence p_k, k = ..., −1, 0, 1, ..., is called log-concave, if p_k^2 ≥ p_{k−1} p_{k+1} for all k. By [6] (see also [17, Thm. 4.7.2]) and Remark 3.3, we have the following property.
Proposition 3.7 Suppose that for a scalar integer random variable ξ the probabilities p_k = P{ξ = k}, k = ..., −1, 0, 1, ..., form a log-concave sequence. Then the distribution function of ξ is r-concave on Z for every r ∈ [−∞, 0].



Many well-known one-dimensional discrete distributions satisfy the conditions of Proposition 3.7: the Poisson distribution, the geometrical distribution, the binomial distribution [17, p. 109].
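For the Poisson case the log-concavity of the sequence p_k can be checked directly: p_k^2 / (p_{k−1} p_{k+1}) = (k+1)/k ≥ 1. A quick numerical confirmation (the parameter value 3.5 is an arbitrary assumption for illustration):

```python
import math

def poisson_pmf(lam, k):
    """p_k = exp(-lam) * lam^k / k! for k = 0, 1, 2, ..."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 3.5
# Verify p_k^2 >= p_{k-1} p_{k+1} over a range of k, as required by Proposition 3.7.
ok = all(poisson_pmf(lam, k) ** 2 >= poisson_pmf(lam, k - 1) * poisson_pmf(lam, k + 1)
         for k in range(1, 30))
print(ok)   # True
```

The ratio (k+1)/k stays comfortably above 1, so floating-point rounding does not affect the comparison.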

We end this section with sufficient conditions for the r-concavity of the joint distribution function in the case of integer-valued independent subvectors. Our assertion, presented in the next proposition, is the discrete version of an observation from [11]. The same proof, using Hölder's inequality, works in our case as well.

Proposition 3.8 Assume that ξ = (ξ_1, ..., ξ_L), where the s_i-dimensional subvectors ξ_i, i = 1, ..., L, are independent (Σ_{i=1}^L s_i = s). Furthermore, let the marginal distribution functions F_i: R^{s_i} → [0, 1] be r_i-concave on sets A_i ⊂ R^{s_i}.

(i) If r_i > 0, i = 1, ..., L, then F is r-concave on A = A_1 × ... × A_L with r = (Σ_{i=1}^L r_i^{−1})^{−1};

(ii) If r_i = 0, i = 1, ..., L, then F is log-concave on A = A_1 × ... × A_L.
4 Lagrangian Relaxation
Let us split variables in problem (3):

max f(x)
subject to g(x) ≥ z, (13)
x ∈ D,
z ∈ Z_p.

Associating Lagrange multipliers u ∈ R^s_+ with the constraint (13) we obtain the Lagrangian function:

L(x, z, u) = f(x) + u^T(g(x) − z).

The dual functional has the form

Ψ(u) = sup_{(x,z) ∈ D×Z_p} L(x, z, u) = h(u) − d(u),

where

h(u) = sup{f(x) + u^T g(x) : x ∈ D}, (14)
d(u) = inf{u^T z : z ∈ Z_p}. (15)

For any u ∈ R^s_+ the value of Ψ(u) is an upper bound on the optimal value F* of the original problem. This is true irrespective of whether the distribution function of ξ is or is not r-concave.



The best Lagrangian upper bound will be given by

Ψ* = inf_{u≥0} Ψ(u). (16)

By Remark 2.4, for u ≥ 0 the minimization in (15) may be restricted to finitely many p-efficient points v^j, j ∈ J. For u ≱ 0 one has d(u) = −∞. Therefore, d(·) is concave and polyhedral. We also have

d(u) = inf{u^T z : z ∈ co Z_p}. (17)

Let us consider the convex hull problem:

max f(x) (18)
subject to g(x) ≥ z, (19)
x ∈ D, (20)
z ∈ co Z_p. (21)
We shall make the following assumption.

Constraint Qualification Condition. There exist points x^0 ∈ ri D and z^0 ∈ co Z_p such that g(x^0) > z^0.

If the Constraint Qualification Condition is satisfied, from the duality theory in convex programming [9, Sec. 1.1.2] we know that there exists û ≥ 0 at which the infimum in (16) is attained, and Ψ* = Ψ(û) is the optimal value of the convex hull problem (18)–(21).
We now study in detail the structure of the dual functional Ψ. Properties of d(·) can be analysed in a more explicit way. Define

V(u) = {v ∈ R^s : u^T v = d(u) and v is a p-efficient point}, (22)
C(u) = {d ∈ R^s_+ : d_i = 0 if u_i > 0, i = 1, ..., s}. (23)

Lemma 4.1 For every u ≥ 0 the solution set of (15) is nonempty and has the following form: Ẑ(u) = V(u) + C(u).

Proof. The result follows from Remark 2.4. Let us at first consider the case u > 0. Suppose that a solution ẑ to (15) is not a p-efficient point. Then there is a p-efficient v ∈ Z_p such that v ≤ ẑ, v ≠ ẑ, so u^T v < u^T ẑ, a contradiction. Thus, for all u > 0 all solutions to (15) are p-efficient. In the general case u ≥ 0, if a solution ẑ is not p-efficient, we must have u^T v = u^T ẑ for some p-efficient v ≤ ẑ. This is equivalent to ẑ ∈ {v} + C(u), as required. □

The last result allows us to calculate the subdifferential of d in a closed form.

Lemma 4.2 For every u ≥ 0 one has ∂d(u) = co V(u) + C(u).


Proof. From (15) it follows that d(u) = −δ*(−u), where δ*(·) is the support function of Z_p and, consequently, of co Z_p. This fact follows from the structure of Z_p (Remark 2.4) by virtue of Corollary 16.5.1 in [20]. By [20, Thm. 23.5], g ∈ ∂δ*(−u) if and only if δ*(−u) + δ_{co Z_p}(g) = −g^T u, where δ_{co Z_p}(·) is the indicator function of co Z_p. It follows that g ∈ co Z_p and d(u) = g^T u. Thus, g is a convex combination of solutions to (15) and the result follows from Lemma 4.1. □

Let us turn now to the function h(·). Define the set of maximizers in (14):

X(u) = {x ∈ D : f(x) + u^T g(x) = h(u)}.

Lemma 4.3 Assume that the set D is compact. Then the function h is convex on R^s and for every u ∈ R^s,

∂h(u) = co{g(x) : x ∈ X(u)}.
Therefore the following necessary and sufficient optimality conditions for problem (16) can be formulated.

Theorem 4.4 Assume that the Constraint Qualification Condition is satisfied and the set D is compact. A vector û ≥ 0 is an optimal solution of (16) if and only if there exist a point x ∈ X(û), points v^1, ..., v^{m+1} ∈ V(û) and scalars β_1, ..., β_{m+1} ≥ 0 with Σ_{j=1}^{m+1} β_j = 1, such that

g(x) − Σ_{j=1}^{m+1} β_j v^j ∈ C(û), (24)

where C(û) is given by (23).

Proof. Since −C(û) is the normal cone to the nonnegative orthant at û ≥ 0, the necessary and sufficient condition for (16) has the form

∂Ψ(û) ∩ C(û) ≠ ∅

(cf. [9, Sec. 1.1.2]). Using Lemma 4.2 and Lemma 4.3, we conclude that there exist p-efficient points

v^j ∈ V(û), j = 1, ..., m+1,

scalars

β_j ≥ 0, j = 1, ..., m+1, Σ_{j=1}^{m+1} β_j = 1,

points

x^j ∈ X(û), j = 1, ..., m+1, (25)

and scalars

α_j ≥ 0, j = 1, ..., m+1, Σ_{j=1}^{m+1} α_j = 1,



such that

Σ_{j=1}^{m+1} α_j g(x^j) − Σ_{j=1}^{m+1} β_j v^j ∈ C(û). (26)

Let us define

x = Σ_{j=1}^{m+1} α_j x^j.

We have

f(x) + Σ_{i=1}^s û_i g_i(x) = f(x^j) + Σ_{i=1}^s û_i g_i(x^j), j = 1, ..., m+1. (27)

Indeed, the inequality ≥ above follows from the concavity of f and the g_i, and the inequality ≤ is implied by (25). By the concavity, g_i(x) ≥ Σ_{j=1}^{m+1} α_j g_i(x^j). Suppose that û_i > 0 for some i at which this inequality is strict. Then the strict inequality contradicts (27). It follows that

g(x) − Σ_{j=1}^{m+1} α_j g(x^j) ∈ C(û).

Therefore relation (26) can be simplified as (24), as required. □
Since the set of p-efficient points is not known, we need a numerical method for solving the convex hull problem (18)–(21) or its dual (16).
5 The cone generation method
The idea of a numerical method for calculating Lagrangian bounds is embedded in the convex hull formulation (18)–(21). We shall develop for it a new specialized method, which separates the generation of p-efficient points and the solution of the approximation of the original problem using these points. It is related to column generation methods, which have been known since the classical work [7] as extremely useful tools of large scale linear and integer programming.
The Method
Step 0: Select a p-efficient point v^0. Set J_0 = {0}, k = 0.


Step 1: Solve the master problem

max f(x) (28)
subject to g(x) ≥ Σ_{j∈J_k} λ_j v^j, (29)
Σ_{j∈J_k} λ_j = 1, (30)
x ∈ D, λ ≥ 0. (31)

Let u^k be the vector of simplex multipliers associated with the constraint (29).

Step 2: Calculate d̄(u^k) = min_{j∈J_k} (u^k)^T v^j.

Step 3: Find a p-efficient solution v^{k+1} of the subproblem: min_{z∈Z_p} (u^k)^T z, and calculate d(u^k) = (v^{k+1})^T u^k.

Step 4: If d(u^k) = d̄(u^k) then stop; otherwise set J_{k+1} = J_k ∪ {k+1}, increase k by one and go to Step 1.
A few comments are in order. The first p-efficient point can be found by solving the subproblem at Step 3 for an arbitrary u ≥ 0. All master problems will be solvable if the first one is solvable, i.e., if the set {x ∈ D : g(x) ≥ v^0} is nonempty. If not, adding a penalty term −M·1^T t to the objective, and replacing (29) by

g(x) + t ≥ Σ_{j∈J_k} λ_j v^j,

with t ≥ 0 and a very large M, is the usual remedy (1^T = [1 1 ... 1]). The calculation of the upper bound at Step 2 is easy, because one can simply select j_k ∈ J_k with λ_{j_k} > 0 and set d̄(u^k) = (u^k)^T v^{j_k}. At Step 3 one may search for p-efficient solutions only, due to Lemma 4.1.

The algorithm is finite. Indeed, the set J_k cannot grow indefinitely, because there are finitely many p-efficient points (Theorem 2.3). If the stopping test of Step 4 is satisfied, the optimality conditions of Theorem 4.4 hold.
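By Remark 2.4 and Lemma 4.1, the subproblem of Step 3 may be solved over the finite set of p-efficient points. The sketch below illustrates that step only, with the point list given explicitly as an assumption (in the actual method the points are generated on demand rather than listed up front):

```python
def subproblem(u, p_efficient):
    """Step 3 of the cone generation method, restricted to an explicit list:
    minimize u^T z over the p-efficient points (valid for u >= 0 by Lemma 4.1)."""
    def cost(v):
        return sum(ui * vi for ui, vi in zip(u, v))
    best = min(p_efficient, key=cost)
    return best, cost(best)

# Hypothetical p-efficient points of some 2-dimensional integer random vector:
points = [(0, 3), (1, 1), (3, 0)]
print(subproblem((1.0, 1.5), points))   # -> ((1, 1), 2.5)
```

In the full method this value d(u^k) is compared with the restricted minimum d̄(u^k) over the points already collected in J_k; equality triggers the stopping test of Step 4.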
6 Primal feasible solution and upper bounds
Let us consider the optimal solution x̂ of the convex hull problem (18)–(21) and the corresponding multipliers λ̂_j. Define J^low = {j ∈ J : λ̂_j > 0}.

If J^low contains only one element, the point x̂ is feasible and therefore optimal for the disjunctive formulation (4). If, however, there are more positive λ's, we need to generate a feasible point. A natural possibility is to consider the restricted disjunctive formulation:

max f(x)
subject to g(x) ∈ ∪_{j∈J^low} K_j, (32)
x ∈ D.



It can be solved by simple enumeration of all cases for j ∈ J^low:

max f(x)
subject to g(x) ≥ v^j, (33)
x ∈ D.

An alternative strategy would be to solve the corresponding upper bounding problem (33) every time a new p-efficient point is generated. If U_j denotes the optimal value of (33), the upper bound at iteration k is

U^k = min_{0≤j≤k} U_j. (34)
If the distribution function of ξ is r-concave on the set of possible values of ξ, Theorem 3.4 provides an alternative formulation of the upper bound problem (32):

max f(x)
subject to x ∈ D,
g(x) ≥ z,
z ∈ Z^s, (35)
z ≥ Σ_{j∈J_k} λ_j v^j,
Σ_{j∈J_k} λ_j = 1,
λ_j ≥ 0, j ∈ J_k.

Problem (35) is more accurate than the bound (34), because the set of integer z dominated by convex combinations of p-efficient points in J_k is not smaller than J_k. In fact, we need to solve this problem only at the end, with J_k replaced by J^low.
References
[1] T. Becker and V. Weispfenning, Gröbner Bases, Springer-Verlag, New York, 1993.
[2] C. Borell, Convex Set Functions in d-Space, Periodica Mathematica Hungarica 6 (1975) 111–136.
[3] H.J. Brascamp and E.H. Lieb, On Extensions of the Brunn–Minkowski and Prékopa–Leindler Theorems, Including Inequalities for Log-Concave Functions and with an Application to the Diffusion Equation, Journal of Functional Analysis 22 (1976) 366–389.
[4] A. Charnes, W.W. Cooper and G.H. Symonds, Cost Horizons and Certainty Equivalents: An Approach to Stochastic Programming of Heating Oil, Management Science 4 (1958) 235–263.



[5] D. Dentcheva, A. Prékopa and A. Ruszczyński, Concavity and p-efficient points of discrete distributions in probabilistic programming, RUTCOR Research Report 9-2000, Rutgers University Center for Operations Research.
[6] M. Fekete and G. Pólya, Über ein Problem von Laguerre, Rendiconti del Circolo Matematico di Palermo 23 (1912) 89–120.
[7] L.R. Ford and D.R. Fulkerson, A Suggested Computation for Maximal Multicommodity Network Flows, Management Science 5 (1958) 97–101.
[8] J.-B. Hiriart-Urruty and C. Lemaréchal,
Convex Analysis and Minimization Algorithms I and II, Springer-Verlag, Berlin 1993.
[9] A.D. Ioffe and V.M. Tikhomirov, Theory of Extremal Problems, Nauka, Moscow, 1974.
[10] L.B. Miller and H. Wagner, Chance-Constrained Programming with Joint Constraints, Operations Research 13 (1965) 930–945.
[11] V.I. Norkin and N.V. Roenko, α-Concave Functions and Measures and Their Applications, Kibernet. Sistem. Anal. 189 (1991) 77–88 (in Russian); translation in: Cybernet. Systems Anal. 27 (1991) 860–869.
[12] A. Prékopa, On Probabilistic Constrained Programming, Proceedings of the Princeton Symposium on Mathematical Programming, Princeton University Press, 1970, pp. 113–138.
[13] A. Prékopa, Logarithmic Concave Functions with Applications to Stochastic Programming,
Acta Sci. Math. (Szeged) 32 (1971) 301—316.
[14] A. Prékopa, On Logarithmic Concave Measures and Functions,
Acta Sci. Math. (Szeged) 34 (1973) 339—343.
[15] A. Prékopa, Contributions to the Theory of Stochastic Programming, Mathematical Programming 4 (1973) 202–221.
[16] A. Prékopa, Dual Method for a One-Stage Stochastic Programming Problem with Random RHS Obeying a Discrete Probability Distribution,
Zeitschrift für Operations Research 34
(1990) 441—461.

[17] A. Prékopa, Stochastic Programming, Kluwer, Dordrecht, Boston, 1995.
[18] A. Prékopa, B. Vizvári and T. Badics, Programming Under Probabilistic Constraint with Discrete Random Variable, in: New Trends in Mathematical Programming, L. Grandinetti et al. (Eds.), Kluwer, Dordrecht, Boston, 1998, pp. 235–255.
[19] Y. Rinott, On Convexity of Measures,
Annals of Probability 4 (1976) 1020—1026.
[20] R. T. Rockafellar,
Convex Analysis, Princeton University Press, Princeton, 1970.

  




Discipline summary

Discrete mathematics is a branch of mathematics that studies structures that are fundamentally discrete. Whereas most branches of mathematics study objects that are "smooth" or "continuous" (e.g., the real numbers), discrete mathematics is interested in objects that are each distinct (discrete) from all other similar objects (e.g., integers, graphs or statements in logic).


Random Variables Discrete Distributions
A discrete probability distribution function is completely described by the set of possible values the random variable can take and by the probabilities assigned to each value. On this page we describe the general features of discrete distributions. On the following pages we describe a variety of named distributions. All are available from the Random Variables add-in. We use the triangular distribution pictured below for an example.

The mathematical notation for a discrete distribution is shown at the left. The values associated with a distribution are often integer, but in general they need not be. For a discrete distribution, only the values in the set X have nonzero probabilities and these must be nonnegative. For a valid distribution, summing the probabilities over the set X must yield the value 1.
The number of values in X may be finite or infinite. When the number is infinite, the set must be countably infinite. An example is the set of all nonnegative integers.
When the possible values are integers, we will often use k rather than x as the notation for the values. We use PDF to refer to the Probability Distribution Function.
The example experiment involves throwing a pair of standard dice. Each die has the numbers {1,2,3,4,5,6}, so the sum of the two dice ranges from 2 through 12. The value with the greatest probability is called the mode, so 7 is the mode of this distribution. The probabilities sum to 1 and all probabilities are nonnegative, so this is a valid distribution.
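The validity checks described above (nonnegative probabilities summing to 1, mode at 7) can be verified directly. The closed-form pmf below, P(s) = (6 − |s − 7|)/36, is our own compact expression for the two-dice sum:

```python
from fractions import Fraction

# PDF of the sum of two fair dice (the "triangular" distribution of the text).
pmf = {s: Fraction(6 - abs(s - 7), 36) for s in range(2, 13)}

assert sum(pmf.values()) == 1                 # probabilities sum to 1
assert all(q >= 0 for q in pmf.values())      # all probabilities nonnegative
mode = max(pmf, key=pmf.get)                  # value with the greatest probability
print(mode)                                   # 7
```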
The Random Variables add-in defines distributions using named ranges on the worksheet. For the example, the range B2:B5 has the name Dice. The Mean and Variance in B6 and B7, as well as the probabilities in B9 through B19, are computed with user-defined functions provided by the add-in.

Moments
Several quantities can be computed from the PDF that describe simple characteristics of the distribution. These are called moments. The most common is the mean, the first moment about the origin, and the variance, the second moment about the mean. The mean is a measure of the centrality of the distribution and the variance is a measure of the spread of the distribution about the mean.
The skewness is computed from the third moment about the mean. This quantity can be positive or negative. We normalize the measure by squaring the third moment and dividing it by the third power of the variance. To recover the sign of the third moment, we multiply this ratio by the sign of the third moment. The skewness indicates whether the distribution has a long tail to the right of the mean (positive) or to the left (negative). The skewness is 0 for a symmetric distribution.
The kurtosis is a measure of the thickness of the tails of the distribution. The use of this measure is not obvious in most cases, but it is included for completeness. The formula for this measure subtracts 3 from the ratio of the fourth moment about the mean and the square of the variance. The Normal distribution has a kurtosis of 3, so this normalization provides a value relative to the value for the Normal distribution. It can be positive (greater than the Normal) or negative (less than the Normal).
The moments for the dice example are computed with user-defined functions provided by the add-in.


We use distributions to answer questions about situations that involve random variables. We use the game of Craps to illustrate the use of the triangular distribution. In this game, the player rolls a pair of dice. We assume the player is female. If on the first roll of the dice the player throws a 7 or 11, she wins. If she throws 2, 3 or 12, she loses. If she throws a number other than 2, 3, 7, 11, or 12, the number thrown is called the point. If the player does not win or lose on the first roll, she must roll the dice again and continue to roll until she throws either the point (and wins) or a 7 (and loses). The triangular distribution describes a single roll of the dice. Since the alternatives are mutually exclusive, probabilities of an event involving several different results are obtained by summing. We compute the probabilities associated with the first throw at the left. We use the Craps game as an example for several other distributions on the following pages.
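The first-throw probabilities described above follow by summing the relevant pmf values; a short sketch (the closed-form pmf expression is our own shorthand for the two-dice sum):

```python
from fractions import Fraction

# Triangular pmf of the sum of two fair dice.
pmf = {s: Fraction(6 - abs(s - 7), 36) for s in range(2, 13)}

p_win   = pmf[7] + pmf[11]            # win on the first roll: 8/36 = 2/9
p_lose  = pmf[2] + pmf[3] + pmf[12]   # lose on the first roll: 4/36 = 1/9
p_point = 1 - p_win - p_lose          # a point is established: 24/36 = 2/3
print(p_win, p_lose, p_point)         # 2/9 1/9 2/3
```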

Named Distributions
It is useful for modeling purposes to know about the named discrete distributions. When an experiment on which a random variable is based satisfies the logical conditions associated with a named distribution, the probability values for the random variable are immediately determined. Then we can use the distribution without extensive experimentation to answer decision questions about the situation. We consider a number of named distributions on the following pages. Click on a link at the far left for descriptions and examples.
