Intro

A simulation model is little more than a mimicry of a real-world process. In a Monte Carlo simulation we repeat a random process many times to estimate its expected value, which we then use to forecast the quantity of interest. As usual, we'll start slow before diving into the theory behind it.

In [1]:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()  # for plot styling
%matplotlib inline

Ex1 - Finding Pi

Recall

  1. A unit circle (radius $r = 1$) is described by the equation $x^2 + y^2 = 1$
  2. The area of a circle is given by $\pi r^2$; taking $r = 1$ gives us an area of $\pi$

So the area of the unit circle is $\pi$. Pretty simple. So how do we find that area? We're going to randomly choose points in the unit square, keeping a total count of the points drawn as well as a count of those that land inside the circle (each of which counts as a success). Because the points are uniform, the fraction that lands inside approximates the ratio of the two areas, so the more points we choose the better our approximation will be.

To make life easier we'll only look at 1/4 of the circle, whose area is $\pi/4$; multiplying the success fraction by 4 recovers $\pi$. Feel free to copy, paste and extend the code to your liking.

In [2]:
# Get an idea of what we're doing
c = plt.Circle((0, 0), radius=1, color='blue')
ax = plt.gca()
ax.set_aspect('equal')   # keep the circle round
ax.add_patch(c)
Out[2]:
<matplotlib.patches.Circle at 0x1373505c160>
In [3]:
# Estimates seen for increasing sample sizes:
# 3.188      1000
# 3.1672     10000
# 3.1472     100000
# 3.144552   1000000
np.random.seed(0)

sim_size = 100000        # The number of points to choose
x = np.random.random(sim_size)
y = np.random.random(sim_size)
z = x**2 + y**2
# Determine the successes: points inside the quarter circle
success = z[z < 1].size

print('Success', success)
print(4 * (success / sim_size))

# If you set the marker size s too big, the points merge and are difficult to tell apart
plt.scatter(x, y, s=1, color='black')    
#plt.plot(x,np.sqrt(1-x**2) ,'r-',alpha=0.3)

# Shade the success region under the quarter circle
a = np.linspace(0, 1.0, 50)
t = np.sqrt(1 - a**2)
plt.fill_between(a, t, color='yellow', alpha=0.3)
plt.show()

Success 78341
3.13364
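As a side note (not part of the original code), we can quantify how good a single run is. Each point is a Bernoulli trial with success probability $\pi/4$, so the standard error of the estimate $4\hat{p}$ is $4\sqrt{\hat{p}(1-\hat{p})/N}$, shrinking like $1/\sqrt{N}$, which matches the slow improvement in the comment table above. A minimal sketch (the function name and the use of `default_rng` are my own choices):

```python
import numpy as np

def pi_estimate_with_error(n, seed=0):
    # Draw n uniform points in the unit square and count hits
    # inside the quarter circle x^2 + y^2 < 1.
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    y = rng.random(n)
    hits = np.count_nonzero(x**2 + y**2 < 1)
    p = hits / n                         # estimates pi/4
    se = 4 * np.sqrt(p * (1 - p) / n)   # standard error of 4*p
    return 4 * p, se

est, se = pi_estimate_with_error(100_000)
print(f"pi ~ {est:.4f} +/- {se:.4f}")
```

Note the $1/\sqrt{N}$ rate: to gain one extra digit of accuracy you need roughly 100 times more points.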

The above example can't really be called a Monte Carlo simulation, since it's only run once. So we'll extend it slightly and run it many times to see if we can get a better estimate of pi.

In [4]:
#np.random.seed(0)                  #Uncomment when debugging

class miniMCS:
    def __init__(self, trials):
        self.trial_size = trials             # Previously called sim_size
        self.x = np.random.random(trials)
        self.y = np.random.random(trials)
        self.z = self.x**2 + self.y**2

    def __call__(self):
        # Determine the successes and return the pi estimate
        self.success = self.z[self.z < 1].size
        return 4 * (self.success / self.trial_size)

results = []
meanList = []
for i in range(100):                 # Run the simulation 100 times
    t = miniMCS(10000)
    results.append(t())
    meanList.append(np.mean(results))   # running mean of the estimates

fig = plt.figure()
ax = plt.axes()

print(meanList[-1])

x = np.linspace(0, 100, 1000)
pi_line = np.full_like(x, np.pi)   # reference line at pi
ax.plot(x, pi_line)
ax.plot(meanList);
#print(meanList)

3.1403799999999995

That should be better. We'll see if we can improve this at a later date.
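One straightforward tidy-up worth sketching (my own variation, not the author's planned improvement) is to vectorise the 100 repeated runs into a single 2-D array, so NumPy does all the work without a Python loop. Each row plays the role of one `miniMCS` instance; the function name and use of `default_rng` are assumptions:

```python
import numpy as np

def pi_mc_batched(runs, trials_per_run, seed=0):
    # Each row is one independent run; averaging the per-run
    # estimates mirrors the loop over miniMCS above.
    rng = np.random.default_rng(seed)
    x = rng.random((runs, trials_per_run))
    y = rng.random((runs, trials_per_run))
    hits = np.count_nonzero(x**2 + y**2 < 1, axis=1)
    estimates = 4 * hits / trials_per_run
    return estimates.mean(), estimates.std(ddof=1)

mean, spread = pi_mc_batched(100, 10_000)
print(mean, spread)
```

The returned spread (the sample standard deviation across runs) gives a direct, empirical measure of run-to-run variability to compare against the analytic standard error.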