# Exercise Solution 5.8

- Change of variables [5.97] defines a function **x** = *g*(**u**). We calculate the determinant of its Jacobian as

[s1]

Applying change of variables formula [2.168] to [5.96], we obtain:

[s2]

[s3]

[s4]

- Exhibit s2 depicts the integrand *f* of [s4] over the region of integration. Clearly, *f* is most variable over subregion Ω_{3}. For a given sample size, a Monte Carlo analysis in that subregion will have a higher standard error than would a Monte Carlo analysis in either of subregions Ω_{1} or Ω_{2}. We can compensate by applying the method of stratified sampling based upon these three regions. This allows us to cluster more realizations in Ω_{3} than in Ω_{1} or Ω_{2}. If we were employing a crude Monte Carlo estimator, many realizations would be “wasted” in subregions Ω_{1} and Ω_{2}.

- Means of *f*(**y**_{j}) are estimated as .930, 4.119, and 18.040 for *j* = 1, 2, and 3, respectively. Standard deviations are estimated as .281, .893, and 4.478. Optimal sample sizes are estimated as 43, 273, and 684. Because these results were obtained with a preliminary Monte Carlo analysis, which entails its own standard error, your results may differ slightly.
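The optimal sample sizes above are consistent with the standard Neyman allocation rule, under which each stratum receives realizations in proportion to *p*_{j}σ_{j}. A minimal sketch, assuming that rule and the rounded estimates above, reproduces them:

```python
# Optimal (Neyman) allocation check: sample sizes proportional to p_j * sigma_j,
# using stratum probabilities .25, .50, .25 and the standard deviations
# estimated in part (e).
p = [0.25, 0.50, 0.25]
sigma = [0.281, 0.893, 4.478]
total = 1000

weights = [pj * sj for pj, sj in zip(p, sigma)]
n = [round(total * w / sum(weights)) for w in weights]
print(n)  # -> [43, 273, 684]
```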

- On Ω_{1}, Ω_{2} and Ω_{3}, a crude Monte Carlo analysis would use approximately 250, 500, and 250 realizations, respectively. This compares to the 43, 273, and 684 realizations that we shall use on those regions in our stratified sampling Monte Carlo analysis.

- As defined in item (e), **Y**_{j} ~ *U*_{2}(Ω_{j}) for *j* = 1, 2, 3. Each **Y**_{j} is a 2-dimensional random vector, so **Y**_{j} = (*Y*_{j,1}, *Y*_{j,2}). Applying [5.93], we construct our stratified sampling estimator as

[s5]

[s6]

[s7]
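An estimator of this form can be sketched in code. The integrand and subregions below are hypothetical stand-ins (the exercise's actual *f* and Ω_{j} come from [5.96] and [5.97]): we integrate *f*(*y*₁, *y*₂) = *y*₁*y*₂ over the unit square split into three horizontal strips of area .25, .50, and .25, weighting each strip's sample mean by its probability as in [5.93].

```python
import random

# Hypothetical stand-in for the exercise's integrand and subregions.
random.seed(0)

def f(y1, y2):
    return y1 * y2

# (probability weight p_j, bounds of the strip for Y_{j,2}, sample size n_j)
strata = [(0.25, (0.00, 0.25), 43),
          (0.50, (0.25, 0.75), 273),
          (0.25, (0.75, 1.00), 684)]

estimate = 0.0
for p, (lo, hi), n in strata:
    # Y_j ~ U_2(Omega_j): uniform on the strip
    total = sum(f(random.random(), random.uniform(lo, hi)) for _ in range(n))
    estimate += p * (total / n)  # weighted strip mean, as in [5.93]

print(round(estimate, 3))  # exact value of the hypothetical integral is 0.25
```

With 1000 realizations the estimate lands close to the exact value .25, illustrating the mechanics without reproducing the exercise's particular integrand.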

- Consider first our stratified sampling estimator. Let’s work with the estimator as it is expressed in [s6]. This is just a weighted sum of 1000 independent random variables. Stated another way, it is a linear polynomial of those independent random variables. This means that we can apply [3.28] to obtain the variance of that (random) linear polynomial. Taking the square root yields its standard deviation, which is the standard error of the estimator:

[s8]

[s9]

[s10]

[s11]

[s12]
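Assuming the variance from [3.28] takes the familiar form Σ_{j} *p*_{j}²σ_{j}²/*n*_{j}, plugging in the rounded part (e) estimates gives a standard error of roughly .0517, consistent with the .0512 reported below (the small difference reflects rounding of the preliminary estimates):

```python
import math

# Standard error of the stratified estimator: variance of the weighted sum is
# sum_j p_j^2 * sigma_j^2 / n_j (per [3.28]), using the part (e) estimates.
p = [0.25, 0.50, 0.25]
sigma = [0.281, 0.893, 4.478]
n = [43, 273, 684]

var = sum((pj * sj) ** 2 / nj for pj, sj, nj in zip(p, sigma, n))
se = math.sqrt(var)
print(round(se, 4))  # -> 0.0517
```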

To estimate the standard error of the crude Monte Carlo estimator, we first need to estimate the standard deviation σ of the random variable *f*(**U**). A simple solution would have been to directly estimate this with a Monte Carlo analysis in part (e). However, this would be a needless computational expense. If we treat *f*(**U**) as a mixture of the distributions of the three *f*(**Y**_{j}), we can directly apply [3.129] to the estimated means and standard deviations we obtained for the *f*(**Y**_{j}) in part (e). To do so, we first need to estimate the mean μ of *f*(**U**). By [3.128]

[s13]

so our estimated mean is

[s14]

[s15]

[s16]

By [3.129]

[s17]

Substituting in probabilities .25, .50 and .25, the estimated means and standard deviations from part (e), and the estimated mean [s16], we estimate the standard deviation of *f*(**U**) as 7.016.
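Assuming [3.128] and [3.129] are the usual mixture-moment formulas, μ = Σ_{j} *p*_{j}μ_{j} and σ² = Σ_{j} *p*_{j}(σ_{j}² + μ_{j}²) − μ², this computation can be reproduced from the part (e) estimates:

```python
import math

# Mean and standard deviation of f(U) treated as a probability-weighted
# mixture of the three f(Y_j), per [3.128] and [3.129].
p = [0.25, 0.50, 0.25]
mean = [0.930, 4.119, 18.040]
sd = [0.281, 0.893, 4.478]

mu = sum(pj * mj for pj, mj in zip(p, mean))  # [3.128]
second_moment = sum(pj * (sj**2 + mj**2) for pj, sj, mj in zip(p, sd, mean))
sigma = math.sqrt(second_moment - mu**2)      # [3.129]
print(round(mu, 3), round(sigma, 3))  # -> 6.802 7.016
```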

The crude Monte Carlo estimator for [s4] is

[s18]

This is a linear polynomial of 1000 independent random variables, so we can apply [3.28] to obtain the variance of that linear polynomial. Taking the square root yields its standard deviation, which is the standard error of the estimator:

[s19]

Substituting in our estimate 7.016 for σ, we estimate the standard error of crude Monte Carlo estimator [s18] as .222.

- With a sample size of 1000, our stratified sampling estimator has a standard error of .0512 and the crude Monte Carlo estimator has a standard error of .222. Using 7.016 as an estimate for the standard deviation σ of *f*(**U**), we apply [5.38] and solve for the sample size *m* that will yield a standard error of .0512. The result is 18,800. Our stratified sampling estimator accomplishes with 1000 realizations what it would take 18,800 realizations to accomplish with the crude Monte Carlo estimator.

- See another spreadsheet. Based upon its computations, we estimate the value of integral [5.96] as 6.928.
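Assuming [5.38] is the usual σ/√*m* standard error formula for a crude Monte Carlo estimator, both the .222 standard error and the 18,800 sample size can be checked in a few lines:

```python
import math

# Crude Monte Carlo: standard error sigma/sqrt(m) (per [5.38]), and the sample
# size m needed to match the stratified estimator's .0512 standard error.
sigma = 7.016
se_crude = sigma / math.sqrt(1000)
m = (sigma / 0.0512) ** 2
print(round(se_crude, 3), round(m, -2))  # -> 0.222 18800.0
```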