FINAL

  1. This problem aims to review several ideas – the Ax = b problem, penalty methods, quadratic optimization, basis functions, positive definite matrices, projection – within the context of Fourier transforms. The essential idea is that every function f(x) defined on some interval [0,L] can be written in terms of an infinite number of cosine basis functions as follows:

f(x) = \sum_{j=0}^{\infty} a_j \cos(\pi j x / L)                                                        (1)

where x ∈ [0,L]. Sauer alludes to this in section 11.1 where he discusses the discrete cosine transform. Ascher discusses Fourier transforms in chapter 13. If you want to read more, the attached notes of David Wilkins (especially sections 8.2 and 8.4) are nice.

In this problem, we will not deal with an infinite expansion, but will instead consider a basis composed of n of the cosine basis functions.
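As a quick illustration of what these basis functions look like (not part of the problem statement), here is a minimal R sketch that evaluates and plots the first few cosine basis functions; the interval length L = 1 is an arbitrary choice made only for the picture.

    # Illustration only: the cosine basis b_j(x) = cos(pi*j*x/L) on [0, L], with L = 1 chosen arbitrarily.
    L <- 1
    x <- seq(0, L, length.out = 200)
    bj <- function(j, x) cos(pi * j * x / L)          # b_0(x) is identically 1
    plot(x, bj(0, x), type = "l", ylim = c(-1, 1), ylab = "b_j(x)")
    for (j in 1:4) lines(x, bj(j, x), col = j + 1)
    legend("bottomleft", legend = paste0("j = ", 0:4), col = 1:5, lty = 1, cex = 0.8)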

Consider, as in previous homeworks, the female samples of the BoneMass dataset. Let N be the number of samples, i.e., N =

  1. Let b_j(x) = cos(πj(x − x_min)/(x_max − x_min)), where x_min and x_max are the minimum and maximum age values in the dataset. Let b_0(x) = 1. Consider the problem of modeling the data using a linear space of functions F containing functions f(x) of the form:

f(x) = \sum_{j=0}^{n} \alpha_j b_j(x)                                                                       (2)

  • Consider the case where n = 5. Find f(x) ∈ F that best fits the data using least squares as the loss function. Justify each step in your computation. Plot the data and the resulting fit. Explain how determining f(x) corresponds to (i) a linear regression and (ii) a quadratic optimization.
  • Consider n = 1000. Show that the best fit is now not well defined. To make the fit well defined, add the following penalty term to your optimization,

\lambda \sum_{j=1}^{n} j^2 \alpha_j^2 ,                                                                        (3)

where λ is a penalty parameter that we can vary (as we did for the spline smoothing in hw 10). Compute the best fit α, which determines f(x), for any λ > 0. What type of solutions does the penalty term penalize most? What happens to the fit as λ increases? Show plots of the data and fit for different λ. (A minimal code sketch of both the unpenalized and the penalized fits appears below.)

(In Fourier terminology, b_j(x) is referred to as the jth harmonic. Penalizing fast-oscillating harmonics (i.e. b_j(x) with large j) creates a smoother function since there is less "wiggle". This reflects a profound connection between the derivatives of a function f(x) and its Fourier decomposition.)
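The following is a minimal R sketch of how the two fits might be set up, not a worked solution. It assumes the female BoneMass samples sit in a data frame bone with columns age (the x values) and spnbmd (the response); those column names are assumptions made for illustration, so adjust them to the actual data.

    # Minimal sketch (assumed column names: age = predictor, spnbmd = response).
    # Design matrix of the cosine basis functions b_0, ..., b_n evaluated at the ages.
    cosine_design <- function(x, n, xmin = min(x), xmax = max(x)) {
      u <- (x - xmin) / (xmax - xmin)
      sapply(0:n, function(j) cos(pi * j * u))   # column j+1 holds b_j(x)
    }

    x <- bone$age
    y <- bone$spnbmd

    # n = 5: ordinary least squares, alpha = (B'B)^{-1} B'y via the normal equations
    B5     <- cosine_design(x, 5)
    alpha5 <- solve(t(B5) %*% B5, t(B5) %*% y)   # same fit as lm(y ~ B5 - 1)

    # n = 1000: B'B is singular (far more coefficients than data points), so add
    # the penalty lambda * sum_j j^2 alpha_j^2, i.e. solve (B'B + lambda*D) alpha = B'y
    n      <- 1000
    Bn     <- cosine_design(x, n)
    lambda <- 1                                  # vary this and re-fit
    D      <- diag((0:n)^2)
    alphan <- solve(t(Bn) %*% Bn + lambda * D, t(Bn) %*% y)

    # plot the data and the two fits on a fine grid
    grid <- seq(min(x), max(x), length.out = 400)
    plot(x, y, pch = 16, cex = 0.5, xlab = "age", ylab = "response")
    lines(grid, cosine_design(grid, 5, min(x), max(x)) %*% alpha5, col = "red")
    lines(grid, cosine_design(grid, n, min(x), max(x)) %*% alphan, col = "blue")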

  1. The idea and references for this problem were suggested by Prof. Bansal, who is in the GU biology department and works on network structure. For this problem, you will need a package for networks; igraph and statnet are two choices, and there are many others. igraph is available in both R and Python. A good intro to igraph and statnet in R can be found here: google.com/site/daishizuka/toolkits/sna/igraph-vs-statnet

Also attached, just to get you started, is an R script to plot a graph in igraph.
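For orientation, such a script might look roughly like the sketch below; the file name karate.txt is a placeholder for however the attached adjacency matrix is actually named.

    # Sketch only; "karate.txt" is a placeholder name for the attached adjacency matrix.
    library(igraph)
    A <- as.matrix(read.table("karate.txt"))
    g <- graph_from_adjacency_matrix(A, mode = "undirected")
    plot(g, vertex.size = 8, vertex.label.cex = 0.7)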

Attached you will find an article by M. Newman published in PNAS in 2006. Read the article. Much of the beginning is introduction; the key quantitative ideas start on p. 8578, centering on equations [1]–[4]. Explain Newman's idea of detecting community structure. Your explanation should walk through the equations and justify each step. Apply Newman's ideas to the Karate Network (attached as an adjacency matrix) mentioned in the paper.
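As a hedged sketch of the core computation (the first two-way split only, and assuming the adjacency matrix A has already been loaded as above): form the modularity matrix B = A − k kᵀ/(2m), where k is the degree vector and m the number of edges, and divide the vertices by the sign of the leading eigenvector of B. In R this might look like:

    # Sketch of the leading-eigenvector split described in the paper (first division only).
    # A is the symmetric adjacency matrix of the karate club network.
    k  <- rowSums(A)                              # vertex degrees
    m  <- sum(A) / 2                              # number of edges
    B  <- A - outer(k, k) / (2 * m)               # modularity matrix
    ev <- eigen(B, symmetric = TRUE)
    s  <- ifelse(ev$vectors[, 1] >= 0, 1, -1)     # group labels from the leading eigenvector
    Q  <- as.numeric(t(s) %*% B %*% s) / (4 * m)  # modularity of this split
    split(seq_len(nrow(A)), s)                    # vertex indices in each community

Repeating the split within each group (with the appropriate modularity matrix for a subgraph, as the paper describes) gives the full algorithm; your write-up should walk through that reasoning.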

  1. This problem will explore penalty methods in the context of signal processing. Suppose that over times t = 0, 1, 2, …, 100 we are trying to measure some signal (i.e. response). For example, we could be trying to measure GDP in different weeks, the temperature on different days, etc. At each time t, let ȳ(t) be the true value of the signal (i.e. response). Let y(t) be the output of some very noisy detector that is attempting to measure the true signal at time t. To deal with the noise, suppose that the detector estimates ȳ(t) one hundred times for each t = 0, 1, 2, …, 100, i.e. we draw 100 values of y(t) for each t. Let y_{t,j} be the jth sample taken at time t, with j = 1, 2, …, 100. The resulting (t, y_{t,j}) measurements (of which there are 101 * 100 = 10,100) are given in the file noise.txt. In this problem, we will do some signal processing to attempt to extract the true signal (or response). The true signal, which typically is unknown, is given by ȳ(t) = sin(2πt/100). Let α_t for t = 0, 1, 2, …, 100 be our estimate for ȳ(t).

Suppose that we know that the signal changes slowly. For example, we may know that in the period of time considered, GDP did not undergo huge changes, or the weather did not vary much day to day, etc. This leads us to consider two objective functions:

  • f_1(\alpha) = \sum_{t=0}^{100} \sum_{j=1}^{100} (y_{t,j} - \alpha_t)^2
  • f_2(\alpha) = \sum_{t=0}^{99} (\alpha_{t+1} - \alpha_t)^2

The first objective tries to fit the α_t to the data. The second objective tries to extract a signal that changes slowly. Indeed, notice that the second objective is minimized if the signal does not change at all.
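For orientation (the full justification is part of the problem below), combining the objectives as f(α) = f_1(α) + λ f_2(α) and differentiating with respect to an interior α_t gives

\frac{\partial f}{\partial \alpha_t} = -2 \sum_{j=1}^{100} (y_{t,j} - \alpha_t) + 2\lambda \, (2\alpha_t - \alpha_{t-1} - \alpha_{t+1}) = 0, \qquad t = 1, \dots, 99,

so each equation couples α_t only to its neighbors α_{t−1} and α_{t+1} (the t = 0 and t = 100 equations have a single neighbor); the resulting matrix M is therefore tridiagonal, with b_t = \sum_j y_{t,j}.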

  • Try first to solve this problem by computing the mean value of the signal at each t. Comment on the resultant plot.
  • Consider combining the two objectives in the following way, which is similar to the penalized regressions we have considered: f(α) = f_1(α) + λ f_2(α). Explain why f(α) is a quadratic function and then use this fact to solve min_α f(α) by solving ∇f(α) = 0. Your solution should be of the form Mα = b, where M is a matrix and b a vector. Write down expressions for M and b. (A code sketch appears after this list.)
  • Using λ = 0, calculate α. Do you recover the sinusoidal behavior? Try using other λ values and calculate α. Do you recover sinusoidal behavior? Do you recover the true signal? What is the best λ?
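Below is a minimal R sketch of the computation referenced in the second and third bullets. It assumes noise.txt has two columns, t and y, with one row per measurement; the column layout and the specific λ value shown are assumptions to adapt to the actual file.

    # Sketch only. Assumes noise.txt has two columns (t in 0..100, then the sampled y),
    # one row per measurement; adjust the read step to the actual file layout.
    dat <- read.table("noise.txt", header = FALSE, col.names = c("t", "y"))
    Tn <- 101                                     # number of times t = 0, ..., 100
    nj <- 100                                     # samples per time
    b  <- tapply(dat$y, dat$t, sum)               # b_t = sum_j y_{t,j}
    mean_est <- b / nj                            # per-time means (first bullet)

    solve_alpha <- function(lambda) {
      # From the gradient: M is tridiagonal with nj + 2*lambda on the diagonal
      # (nj + lambda at the two endpoints) and -lambda on the off-diagonals.
      M <- diag(nj + 2 * lambda, Tn)
      M[1, 1] <- M[Tn, Tn] <- nj + lambda
      for (i in 1:(Tn - 1)) M[i, i + 1] <- M[i + 1, i] <- -lambda
      solve(M, b)
    }

    times <- 0:100
    plot(times, mean_est, pch = 16, cex = 0.6, xlab = "t", ylab = "estimate")
    lines(times, solve_alpha(100), col = "red")                     # one lambda; try several
    lines(times, sin(2 * pi * times / 100), col = "blue", lty = 2)  # true signal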