Linear Regression Tutorial

First, something about how the training works

  • Training set –> learning algorithm –> hypothesis –> estimated output
  • And we will use multiple features.

What is linear regression

  • To put the idea informally: we have n features for each example.
  • We observe them and assume that they relate to the output in a linear way.
  • We then need a way to find the parameters that make the hypothesis function fit the data, as written out in the formula after this list.
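
In formulas, with x_0 = 1 as the intercept term (this matches the x[i][0] = 1 line in the C++ code further down):

    h_\theta(x) = \theta_0 x_0 + \theta_1 x_1 + \cdots + \theta_n x_n = \theta^T x

and the goal is to choose the parameter vector \theta so that h_\theta(x) comes close to the observed y.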

The first method to solve the problem (gradient descent)

How it works

  • There are several key ideas (they are collected into formulas right after this list)
    • Learning rate
      • if it is too small, gradient descent will be slow
      • if it is too large, gradient descent will overshoot the minimum
        • it may then fail to converge, or even diverge
    • Cost function (the J function)
      • once the training data is fixed, the cost function no longer varies with x and y
      • it depends only on the parameters theta
      • the ½ in front is there to make the derivative cleaner
      • the cost function measures the error between the hypothesis and the real data, and our job is to make that error smaller
    • Hypothesis
      • this is the function we are trying to make good
      • good enough that the y it calculates comes close to the real y
      • we use the error between them to judge whether it is good enough
      • and we sum that error over all the training data
    • theta
      • picture it like this:
        • suppose we are walking down a mountain, and everything around us is higher than where we stand
        • updating theta means stepping downhill: we reduce the cost by moving each theta value in its own direction
        • the partial derivative of the cost function with respect to theta_i gives the distance we move in the theta_i direction
  • Basic algorithm
    • start with some theta0, theta1, theta2, …
    • keep changing theta0, theta1, theta2, … to reduce J(theta0, theta1, theta2, …) until we hopefully end up at a minimum
    • this is the key idea of the gradient descent method
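
Putting those pieces into formulas (this is the standard statement of batch gradient descent for linear regression with m training examples, and it is exactly what calculateJ() and updateTheta() compute in the code below):

    J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2

    \theta_j := \theta_j - \alpha \frac{\partial J(\theta)}{\partial \theta_j} = \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}

Note that every theta_j is updated from the same old theta: the code precomputes the per-example errors (diff[]) before any theta[j] changes.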

The C++ code for it

#include<stdlib.h>
#include<stdio.h>
#include<string.h>
#include<math.h>

#define OUTPUTID 10001
#define BUFFERSIZE 50000
#define ROWNUM 10000
#define COLNUM 385

double alpha = 0.1;
char buffer[BUFFERSIZE];
const char *delim = ",";
double x[ROWNUM][COLNUM];
double y[ROWNUM];
double result[ROWNUM];
double diff[ROWNUM];
double theta[COLNUM];
double temp[COLNUM];

void readdata(const char *, bool);
void writedata(const char *);
void test();
void gradient_descend_train();

int main(){
  readdata("train.csv", true);
  gradient_descend_train();
  readdata("test.csv", false);
  test();
  writedata("predict.csv");
  return 0;
}

void readdata(const char *filename, bool haspredicted){
  FILE *inputfile = fopen(filename, "r");

  if(inputfile == NULL){
      system("PAUSE");
      exit(1);
  }
  //drop the first line
  fscanf(inputfile, "%s", buffer);
  //read all lines each
  char *s;
  for(int i = 0; i < ROWNUM; i++){
      
      fscanf(inputfile, "%s", buffer);
      //drop the first column
      strtok(buffer, delim);
      //read the label y (only present in the training file)
      if(haspredicted){
          s = strtok(NULL, delim);
          sscanf(s, "%lf", &y[i]);
      }
      //init x0
      x[i][0] = 1;
      //read the matrix
      for(int j = 1; j < COLNUM; j++){
          s = strtok(NULL, delim);
          sscanf(s, "%lf", &x[i][j]);
      }
  }
  fclose(inputfile);
}

void writedata(const char *filename){
  FILE *outputfile = fopen(filename, "w");
  
  if(outputfile == NULL){
      system("pause");
      exit(1);
  }

  fprintf(outputfile, "%s,%s\n", "Id", "reference");
  //write the result into file
  for(int i = 0, id = OUTPUTID; i < ROWNUM; i++, id++){
      //cout<<"write the line"<<i + 1<<endl;
      fprintf(outputfile, "%d,%.6lf\n", id, result[i]);
  }
  fclose(outputfile);
}

void initTheta(){
  //resume from a previously saved theta if theta.dat exists,
  //otherwise start the parameters from zero
  FILE *f = fopen("theta.dat", "r");
  if(f != NULL){
      for(int j = 0; j < COLNUM; j++)
          fscanf(f, "%lf", &theta[j]);
      fclose(f);
  }
  else{
      for(int j = 0; j < COLNUM; j++)
          theta[j] = 0;
  }
}

void saveTheta(){   //save the theta
  FILE *f = fopen("theta.dat", "w");
  for(int j = 0; j < COLNUM; j++)
      fprintf(f, "%lf\n", theta[j]);
  fclose(f);
}

void calculateResult(){
  for(int i = 0; i < ROWNUM; i++){
      result[i] = 0;
      for(int j = 0; j < COLNUM; j++){
          result[i] += theta[j] * x[i][j];
      }
  }
}

double calculateJ(){
  static int turn = 0;  //iteration counter; static so it keeps counting across calls
  double cost = 0;
  for(int i = 0; i < ROWNUM; i++){
      diff[i] = result[i] - y[i];
      cost += diff[i]*diff[i];
  }
  cost /= (ROWNUM * 2);
  printf("%5d: J(theta) = %.6lf\n", ++turn, cost);
  return cost;
}

void updateTheta(){
  double sum;
  for(int j = 0 ; j < COLNUM; j++){
      sum = 0;
      for(int i = 0; i < ROWNUM; i++)
          sum += diff[i] * x[i][j];
      theta[j] -= alpha * sum / ROWNUM;
  }
}

void gradient_descend_train(){
  initTheta();
  alpha = 0.1001;      //learning rate tuned by hand for this data set
  double cost = 1000;
  while(cost > 26.4){  //stopping threshold tuned by hand for this data set
      calculateResult();
      cost = calculateJ();
      updateTheta();
  }
  saveTheta();
}

void test(){
  calculateResult();
}

The second method to solve the problem (normal equation)

  • This method comes from statistical learning.
  • It uses least squares to do a regression analysis on the data.
    • in terms of the matrix dimensions of this data set, the process is: XᵀX is (385×10000)(10000×385) = 385×385, and Xᵀy is (385×10000)(10000×1) = 385×1, so theta comes out as a 385×1 vector
  • and the regularization term w * eye(size(x, 2)) can be seen in the normaleqn function, as spelled out in the formulas after this list
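
In formulas, with design matrix X and target vector y, the plain and the ridge-regularized normal equations are:

    \theta = (X^T X)^{-1} X^T y

    \theta = (X^T X + w I)^{-1} X^T y

The second, regularized form is exactly what the normaleqn function below computes; the weight w both damps the coefficients and keeps X^T X + wI invertible even when X^T X alone is singular.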

The MATLAB code for it

  • The main function
%load the train data
data = load('train.txt');
X = data(:, 3:386);
y = data(:, 2);
m = length(y);

%load the test data
data2 = load('test.txt');
feat = data2(:, 2:385);
m3 = size(feat);

%regularization weight; 3.3 and 4000.3 are the values tried in normaleqn
w = 3.3;

%use the normal equation to calculate theta
theta = normaleqn(X, y, w);

%calculate the result for the test set
result = feat * theta;

%output ids start at 10001, matching OUTPUTID in the C++ code above
linen = (10001:10000 + m3(1))';
csvwrite('aaa_ver3.csv', [linen result]);
  • The normal equation function
function [theta] = normaleqn(x, y, w)
    %regularized normal equation; 4000.3 and 3.3 were earlier hand-tuned choices of w
    %theta = pinv(x' * x + 4000.3 * eye(size(x, 2))) * x' * y;
    %theta = pinv(x' * x + 3.3 * eye(size(x, 2))) * x' * y;
    theta = pinv(x' * x + w * eye(size(x, 2))) * x' * y;
end

The C++ code for it
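
A minimal sketch of the same computation in C++, reusing the globals x, y, theta, ROWNUM, and COLNUM from the gradient descent program above (math.h is already included there for fabs). Instead of MATLAB's pinv, it forms A = XᵀX + wI and b = Xᵀy, then solves A·theta = b by Gaussian elimination with partial pivoting; the helper names solve_linear and normal_equation_train are illustrative choices, not from the original program.

//solve A * t = b in place by Gaussian elimination with partial pivoting;
//A is n x n in row-major order, and the solution is written back into b
int solve_linear(double *A, double *b, int n){
  for(int k = 0; k < n; k++){
      //pick the largest pivot in column k to keep the elimination stable
      int p = k;
      for(int i = k + 1; i < n; i++)
          if(fabs(A[i*n + k]) > fabs(A[p*n + k])) p = i;
      if(fabs(A[p*n + k]) < 1e-12) return -1;  //matrix is (near) singular
      if(p != k){
          for(int j = k; j < n; j++){
              double t = A[k*n + j]; A[k*n + j] = A[p*n + j]; A[p*n + j] = t;
          }
          double t = b[k]; b[k] = b[p]; b[p] = t;
      }
      //eliminate column k below the pivot row
      for(int i = k + 1; i < n; i++){
          double f = A[i*n + k] / A[k*n + k];
          for(int j = k; j < n; j++) A[i*n + j] -= f * A[k*n + j];
          b[i] -= f * b[k];
      }
  }
  //back substitution, from the last row upward
  for(int k = n - 1; k >= 0; k--){
      for(int j = k + 1; j < n; j++) b[k] -= A[k*n + j] * b[j];
      b[k] /= A[k*n + k];
  }
  return 0;
}

//build A = X'X + w*I and b = X'y from the x and y arrays filled by
//readdata(), then solve for theta; call this instead of gradient_descend_train()
void normal_equation_train(double w){
  static double A[COLNUM][COLNUM];
  static double b[COLNUM];
  for(int j = 0; j < COLNUM; j++){
      for(int k = 0; k < COLNUM; k++){
          double s = 0;
          for(int i = 0; i < ROWNUM; i++) s += x[i][j] * x[i][k];
          A[j][k] = s + (j == k ? w : 0);
      }
      double s = 0;
      for(int i = 0; i < ROWNUM; i++) s += x[i][j] * y[i];
      b[j] = s;
  }
  if(solve_linear(&A[0][0], b, COLNUM) == 0)
      for(int j = 0; j < COLNUM; j++) theta[j] = b[j];
}

With this in place, replacing the gradient_descend_train() call in main with normal_equation_train(3.3) produces theta in one shot, at the cost of an O(COLNUM³) solve instead of an iterative loop.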
