Micro Quiz 1

| Question 1 | Question 2 | Question 3 | Question 4 | Total |
|---|---|---|---|---|
| - | - | - | - | - |
# Staff Answer for Question 1: (correct answers in green)
• create_1 and create_2 have the same time complexity
• For all values of L and n, the expression 0 if create_1(L, n) == create_2(L, n) else 1 evaluates to 0
• It is possible to write code that can detect whether a list was created using create_1 or create_2
• None of the above.
# Staff Answer for Question 2: (correct answers in green)
• A brute force solution to the 0/1 knapsack problem will always produce an optimal solution.
• The complexity of the 0/1 knapsack problem (the kind of knapsack problem described in lecture) is O(2**n) where n = number_of_items * maximum_weight_allowed.
• None of the above.
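As context for the brute-force option above, here is a minimal brute-force 0/1 knapsack sketch (all names are illustrative, not from the quiz): it enumerates every one of the 2**n subsets of the n items, which is why brute force is guaranteed to find an optimum but takes exponential time in the number of items.

from itertools import combinations

def brute_force_knapsack(items, max_weight):
    # items: list of (value, weight) pairs; tries all 2**n subsets
    best_value, best_subset = 0, ()
    for k in range(len(items) + 1):
        for subset in combinations(items, k):
            weight = sum(w for v, w in subset)
            value = sum(v for v, w in subset)
            if weight <= max_weight and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset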
# Staff Answer for Question 3: (correct answers in green)
• Dynamic programming can be used to reduce the asymptotic time complexity of some inherently exponential problems to polynomial time.
• Dynamic programming can be productively applied to the problem of sorting a list of integers.
• Dynamic programming is useful only when the constraint of an optimization problem can be checked in linear time.
• None of the above.
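To make the first option concrete, here is the standard memoized-Fibonacci sketch (illustrative, not quiz code): the naive recursion takes exponential time in n, while caching subproblem results makes it linear.

def fib_memo(n, memo=None):
    # Naive recursive fib is O(2**n); memoization makes it O(n).
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memo(n-1, memo) + fib_memo(n-2, memo)
    return memo[n]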
# Staff Answer for Question 4:
def fact_table(L):
    d_memo = {1: 1}
    for i in range(2, max(L) + 1):
        d_memo[i] = d_memo[i-1]*i
    result = {}
    for e in L:
        result[e] = d_memo[e]
    return result
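For example, fact_table([1, 3, 5]) evaluates to {1: 1, 3: 6, 5: 120}: the factorials up to max(L) are built once in d_memo and then looked up for each element of L.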
Micro Quiz 2

| Question 1 | Question 2 | Question 3 | Question 4 | Total |
|---|---|---|---|---|
| - | - | - | - | - |
# Staff Answer for Question 1: (correct answers in green)
• G has O(n**2) edges.
• On average, the time required by breadth-first search to find the shortest path between a pair of nodes in G will be less than linear in the number of edges in G.
• On average, breadth-first search and depth-first search will take the same amount of time to find the shortest path.
• None of the above.
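For reference, a minimal breadth-first-search shortest-path sketch on an unweighted adjacency-list graph (names are illustrative, not from the quiz); because BFS explores nodes in order of distance from the source, the first path it finds to the destination is a shortest one.

from collections import deque

def bfs_shortest_path(graph, start, goal):
    # graph: dict mapping each node to a list of neighboring nodes
    paths = {start: [start]}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return paths[node]
        for neighbor in graph[node]:
            if neighbor not in paths:
                paths[neighbor] = paths[node] + [neighbor]
                queue.append(neighbor)
    return None  # no path exists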
# Staff Answer for Question 2:
9/64
# Staff Answer for Question 3:
0.568
# Staff Answer for Question 4:
import random

def quiz_average(trials, low, high):
    s = 0
    for t in range(trials):
        r1 = random.gauss(70, 10)
        r2 = random.gauss(80, 12)
        r3 = random.randint(low, high)
        if 70 <= (r1 + r2 + r3)/3 <= 75:
            s += 1
    return s/trials
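The returned value is the fraction of trials in which the average of the three scores lies between 70 and 75 inclusive, i.e., a Monte Carlo estimate of that probability; for example, quiz_average(100000, 50, 100) estimates it with the third score drawn uniformly from the integers 50 through 100.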
Micro Quiz 3

| Question 1 | Question 2 | Question 3 | Question 4 | Question 5 | Total |
|---|---|---|---|---|---|
| - | - | - | - | - | - |
# Staff Answer for Question 1:
import random

def f():  ## deterministic: a fixed seed makes every call return the same list
    random.seed(0)
    L = []
    for i in range(10000000):
        r = random.random()
        if r < 0.00001:
            L.append(i)
    return L

def g():  ## stochastic: reseeding from system state makes results vary between calls
    L = []
    random.seed()
    for i in range(10000000):
        r = random.random()
        if r < 0.00001:
            L.append(i)
    return L

def h():  ## deterministic: randint(1, 10) never returns 0, so nothing is ever printed
    r = random.randint(1, 10)
    if r == 0:
        print("Done")
# Staff Answer for Question 2:
• Given a sufficiently large set of samples drawn randomly from the same population, the means of the samples (the sample means) will be approximately uniformly distributed.
• Given a sufficiently large set of samples drawn randomly from the same population, the means of the samples (the sample means) will be approximately normally distributed.
• Given a sufficiently large set of samples drawn randomly from the same population, the mean of the sample means will be close to the mean of the population.
• Given a sufficiently large set of samples drawn randomly from the same population, the variance of the sample means will be close to the variance of the population.
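A small simulation sketch (illustrative, not quiz code) of the relevant Central Limit Theorem facts: sample means drawn from a non-normal population are approximately normally distributed, their mean is close to the population mean, and their variance is close to the population variance divided by the sample size rather than the population variance itself.

import random

def sample_mean_demo(num_samples=10000, sample_size=100):
    # Population: uniform on [0, 1]; population mean is 0.5, population variance is 1/12.
    means = []
    for _ in range(num_samples):
        sample = [random.random() for _ in range(sample_size)]
        means.append(sum(sample)/sample_size)
    mean_of_means = sum(means)/len(means)
    var_of_means = sum((m - mean_of_means)**2 for m in means)/len(means)
    # mean_of_means comes out close to 0.5; var_of_means close to (1/12)/sample_size
    return mean_of_means, var_of_means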
# Staff Answer for Question 3:
• If the simulation were run again, with a probability greater than 0.9 the estimate of K would be between 9 and 13.
• With a probability of approximately 0.9, the true value of K is between 11 and 13.
• With a probability of approximately 0.95, the true value of K is between 11 and 13.
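(For reference: for a normal distribution, roughly 90% of the probability mass lies within 1.645 standard deviations of the mean and roughly 95% within 1.96 standard deviations, which is the usual source of the 0.9 and 0.95 confidence levels referenced in these options.)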
# Staff Answer for Question 4:
• It is not possible to tell which of the lines was generated by ADrunk or BDrunk.
# Staff Answer for Question 5:
import random

def ta_activities(trials, grading, teaching, attending):
    '''
    trials: integer, number of trials to run
    grading: probability a TA is grading, 0 <= p <= 1
    teaching: probability a TA is teaching, 0 <= p <= 1
    attending: probability a TA is attending class, 0 <= p <= 1
    Runs a Monte Carlo simulation 'trials' times. Returns a tuple of
    (1) a float representing the mean number of days it takes to have a day on
        which all 3 actions take place
    (2) the total width of the 95% confidence interval around that mean
        (using stddev)
    '''
    days_list = []
    for trial in range(trials):
        days = 1
        while (random.random() > grading) or \
              (random.random() > teaching) or \
              (random.random() > attending):
            days += 1
        days_list.append(days)
    (mean, std) = get_mean_and_stddev(days_list)
    return (mean, 1.96*std*2)
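The staff answer relies on a helper get_mean_and_stddev that is not shown here; a minimal sketch of what it presumably computes (the mean and population standard deviation of the trial results) is:

def get_mean_and_stddev(X):
    # Assumed helper, not part of the staff answer as given.
    mean = sum(X)/len(X)
    std = (sum((x - mean)**2 for x in X)/len(X))**0.5
    return (mean, std)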
Micro Quiz 4

| Question 1 | Question 2 | Question 3 | Total |
|---|---|---|---|
| - | - | - | - |
# Staff Answer for Question 1: (correct answers marked with **)
1-1. John ran a single trial of 10,000 Monte Carlo simulations of a game with a binary outcome. He won 1,000 times and lost 9,000 times.
• ** The best estimate of the probability of winning is 0.1.
• It is appropriate to compute a confidence interval using SD.
• None of the above.
1-2. John ran a single trial of 10,000 Monte Carlo simulations of a game with a continuous outcome between 0 and 100. The average score was 50.
• ** The best estimate of the expected score is 50.
• It is appropriate to compute a confidence interval using SD.
• ** It is appropriate to compute a confidence interval using SE.
• None of the above.
1-3. D is a normal distribution with a mean of 0 and a standard deviation of 1.
• More than half the values in D are between 0 and 1.
• ** The median value of D is 0.
• ** The probability of drawing the value 0 from D is less than 0.0001.
• None of the above.
1-4. Consider the following code:
import random
import pylab

def rSquared(m, p):
    eErr = ((p - m)**2).sum()
    mean = m.sum()/len(m)
    var = ((m - mean)**2).sum()
    return 1 - eErr/var

def f(X, epsilon):
    Y = []
    for x in X:
        Y.append(x**2 + random.gauss(0, epsilon))
    return pylab.array(Y)

X = range(1, 100)
data1 = (X, f(X, 1000))
data2 = (X, f(X, 10))
model1 = pylab.polyfit(data1[0], data1[1], 2)
model2 = pylab.polyfit(data1[0], data1[1], 3)
model3 = pylab.polyfit(data2[0], data2[1], 2)
• ** R-squared for model2 should be better than for model1.
• ** R-squared for model1 and for model2 should be close to the same.
• ** R-squared for model3 will be larger than R-squared for model2.
• None of the above.
1-5. Which of the following is true about k-means clustering?
• ** Once the initial centroids have been chosen, the algorithm is deterministic.
• One problem with k-means clustering is that for small k it often takes a long time to converge.
• ** One problem with k-means clustering is that it can generate an empty cluster.
• As k grows, the average intra-cluster distance tends to grow.
• The clustering found is independent of the distance metric used.
• None of the above.
1-6. Which of the following are true?
• ** Z-scaling ensures that the values for each feature will have a mean of 0 and a standard deviation of 1.
• Linear interpolation ensures that the values for each feature will lie between 0 and 1 with a mean of 0.5.
• None of the above.
1-7. Which of the following is true about KNN classification?
• The larger k, the more accurate the classification.
• The larger k, the longer classification takes.
• When k=1, KNN is the same as linear regression.
• KNN tends to work poorly when classes are reasonably well balanced.
• ** None of the above.
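As a reference for 1-7, a minimal k-nearest-neighbors classification sketch (names are illustrative, not course code): the query point is labeled by a majority vote of its k closest training examples under Euclidean distance.

def knn_classify(train, query, k):
    # train: list of (features, label) pairs; features are equal-length lists of numbers
    def distance(a, b):
        return sum((x - y)**2 for x, y in zip(a, b))**0.5
    nearest = sorted(train, key=lambda pair: distance(pair[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)  # majority vote; ties broken arbitrarily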
# Staff Answer for Question 2:
def optimize(s):
    """
    s: positive integer, what the sum should add up to
    Solves the following optimization problem:
        x1 + x2 + x3 + x4 is minimized
        subject to the constraint x1*25 + x2*10 + x3*5 + x4 = s
        and that x1, x2, x3, x4 are non-negative integers.
    Returns a list of the coefficients x1, x2, x3, x4 in that order
    """
    denom = [25, 10, 5, 1]
    result = []
    for i in denom:
        div = s//i
        s -= div*i
        result.append(div)
    return result
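For example, optimize(67) returns [2, 1, 1, 2], since the greedy choices give 2*25 + 1*10 + 1*5 + 2*1 = 67 using six coins.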
# Staff Answer for Question 3:
import numpy as np

def estimate_g(times, velocities, planet):
    model = np.polyfit(times, velocities, 1)
    estVals = np.polyval(model, times)
    r2 = rSquared(velocities, estVals)
    return (model[0], model[1], r2)
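Since the velocity of a dropped object grows roughly linearly with time (v ≈ g*t + v0), the slope model[0] of the degree-1 fit serves as the estimate of g for the given planet, model[1] is the intercept, and r2 measures how well the linear model fits the data.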
Finger Exercise Grade: 0.0%
Final Exam

| Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q8-2 | Bonus | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| - | - | - | - | - | - | - | - | - | 2 | - |
# Staff Answer for Question 1:
c = a_str + b_str
s = 0
for i in c:
    s += int(i)
print(s)
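For example, with a_str = '12' and b_str = '34', c is '1234' and the code prints 10, the sum of the digits in the two strings.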
# Staff Answer for Question 2:
for i in range(n+1):
    if i%r == 0:
        print(i)
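For example, with n = 10 and r = 3 this prints 0, 3, 6, and 9, the multiples of r from 0 through n.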
# Staff Answer for Question 1:
def times_n(L, a):
    result = True
    for i in range(len(L)):
        if L[i] >= a:
            result = False
        L[i] = L[i]*a
    return result
# Staff Answer for Question 2:
def true_in_L(f, L):
    Lnew = []
    for e in L:
        if f(e) or e < 0:
            Lnew.append(e)
    return Lnew
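For example, true_in_L(lambda x: x > 2, [-1, 1, 3]) returns [-1, 3]: an element is kept when f returns True for it or when it is negative.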
# Staff Answer for Question 1:
def sum_or_not(d):
    d_new = {}
    for k in d:
        if type(k) != int:
            d_new[k] = sum(d[k])
        else:
            d_new[k] = k
    return d_new
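For example, sum_or_not({'a': [1, 2, 3], 4: [5, 6]}) returns {'a': 6, 4: 4}: values stored under non-integer keys are summed, while integer keys simply map to themselves.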
# Staff Answer for Question 2:
class Circle():
    def __init__(self, radius):
        self.radius = radius
        self.name = None

    def area(self):
        return 3.14*self.radius**2

    def get_name(self):
        return self.name

    def set_name(self, name):
        self.name = name


def build_two(r1, r2):
    c1 = Circle(r1)
    c2 = Circle(r2)
    return c1.area() + c2.area()
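For example, build_two(1, 2) returns approximately 15.7, i.e. 3.14*1**2 + 3.14*2**2; the name attribute is not used by build_two but can be set and read through set_name and get_name.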
Finger Exercise Grade: 0.0%