Newton Polynomial from Divided Differences Table in C++

Background

I am attempting to write a program that builds a fully simplified (expanded) Newton polynomial from a divided-differences table. I already have all of the pieces I need, but I am confused about how to get my loops working.

Concept

Conceptually, I work from a [50][51] floating-point matrix, sized so that it can hold up to 50 points for the interpolation. My program reads in the points and stores them in the first two columns of this matrix, which I call difference_matrix.

Once the table is populated, I take the Newton coefficients (the divided differences) from the first row, reading from the second column (columns numbered from 1) out to the final column, and put them in a vector named coefficients. Within my function that builds the polynomial, I copy the x-values from the first column into another vector called matrix_numbers.

In my function, I also have two binomials that can be multiplied together to form a new polynomial. There is also a function to multiply them, along with a vector to store the intermediate polynomial of one higher degree.
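
To make the layout concrete, here is a minimal sketch of how I picture the table and the two extraction steps. This is only an illustration, not my real code: extractFromTable is a made-up name, n is the number of points, and it assumes the layout described above (x-values in column 0, f(x) in column 1, higher-order divided differences in the columns after that, using 0-based indices as in the actual code).

#include <vector>
using std::vector;

//Illustrative layout: column 0 holds the x-values, column 1 holds f(x),
//and each later column holds the next order of divided differences.
float difference_matrix[50][51];

//Pull the Newton coefficients out of the first row and the x-values out of
//the first column, assuming n points have been stored and the table filled in.
void extractFromTable(int n, vector<float>& coefficients, vector<float>& matrix_numbers)
{
    for (int col = 1; col <= n; ++col)      //f(x0), f[x0,x1], f[x0,x1,x2], ...
        coefficients.push_back(difference_matrix[0][col]);
    for (int row = 0; row < n; ++row)       //x0, x1, ..., x(n-1)
        matrix_numbers.push_back(difference_matrix[row][0]);
}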

Algorithm

Here were my thoughts on the algorithm to be used:

//Initialize all of the stuff needed
coefficients[0] -> final_poly[0]
for i = 1 to coefficients.size() - 1
   when i = 1
      set the 0th entry of a binomial initialized to {1, 1} to -difference_matrix[1][1]
      multiply that binomial by the coefficients[1] value   // ...+ c(-a1 + x) +...
      add the result to final_poly
   else
      when i = 2
         multiply binomial_1 with binomial_2, whose 0th entry is set to -difference_matrix[2][1]
         multiply the result by coefficients[2] and add it to final_poly   // ...+ c(-a1 + x)(-a2 + x) +...
      else
         multiply the result of the previous step by a new binomial binomial_2, where binomial_2[0] = -difference_matrix[i][1]
         multiply by coefficients[i]   // ...+ c(-a1 + x)(-a2 + x)(-a3 + x) +... + c(-a1 + x)...(-an + x)
         add the result to final_poly

print final_poly
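
To illustrate what that loop should produce, here is a small standalone sketch of the expansion (not my real program: expandNewton, coeffs and xs are placeholder names, and it carries its own copy of the multiplication helper so it compiles on its own). It assumes the Newton coefficients and the x-values have already been pulled out of the table.

#include <cstddef>
#include <cstdio>
#include <vector>
using std::vector;

//Same idea as my multiplyPolynomial below: coefficients stored lowest degree first.
vector<float> multiplyPoly(const vector<float>& a, const vector<float>& b)
{
    vector<float> product(a.size() + b.size() - 1, 0.0f);
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
            product[i + j] += a[i] * b[j];
    return product;
}

//Expand c0 + c1(x - x0) + c2(x - x0)(x - x1) + ... into one coefficient vector.
vector<float> expandNewton(const vector<float>& coeffs, const vector<float>& xs)
{
    vector<float> final_poly(coeffs.size(), 0.0f);
    final_poly[0] = coeffs[0];

    vector<float> basis = { 1.0f };                     //running product, starts as the constant 1
    for (std::size_t i = 1; i < coeffs.size(); ++i)
    {
        vector<float> binomial = { -xs[i - 1], 1.0f };  //(x - x_(i-1)) stored as {-x_(i-1), 1}
        basis = multiplyPoly(basis, binomial);          //extend the running product by one factor
        for (std::size_t j = 0; j < basis.size(); ++j)
            final_poly[j] += coeffs[i] * basis[j];      //add c_i * (running product)
    }
    return final_poly;
}

int main()
{
    //Placeholder numbers purely to exercise the expansion: 2 + 3(x - 1) + 0.5(x - 1)(x - 4)
    vector<float> coeffs = { 2.0f, 3.0f, 0.5f };
    vector<float> xs     = { 1.0f, 4.0f };
    vector<float> p = expandNewton(coeffs, xs);         //expect {1, 0.5, 0.5}
    for (std::size_t i = 0; i < p.size(); ++i)
        std::printf("%g x^%zu%s", p[i], i, i + 1 < p.size() ? " + " : "\n");
}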

Code

Here is the function I wrote to build and print the final polynomial.

void makeSmallPoly()
{
    const int poly_length = k - 1;

    vector<float> binomial_1 = { 1, 1 };  //binomials to store the linear factors
    vector<float> binomial_2 = { 1, 1 };
    vector<float> matrix_numbers;
    vector<float> new_poly; //intermediate polynomial
    int size = 2;
    vector<float> final_poly; //what we print

    for (int i = 0; i < coefficients.size(); ++i)
        final_poly.push_back(0);
    for (int i = 1; i < k-1; ++i)
        matrix_numbers.push_back(difference_matrix[i][0]);
    final_poly[0] = coefficients[0];

    for (int i = 1; i < coefficients.size(); ++i)
    {
        if (i == 1)
        {
            binomial_1[0] = -matrix_numbers[i]; //c(1+x)
            final_poly[0] += coefficients[i] * binomial_1[0], final_poly[1] += coefficients[i] * binomial_1[1];
        }

        else
        {

            binomial_2[0] = -matrix_numbers[i];
            if (i == 2)
                new_poly = multiplyPolynomial(binomial_1, binomial_2);
            else
                new_poly = multiplyPolynomial(new_poly, binomial_2);
            ++size;
            for (int j = 0; j < new_poly.size(); ++j)
                final_poly[j] += coefficients[i] * new_poly[j];

        }


    }

    printVector(final_poly);

}

Now here is the code I used to carry out the multiplication as well as do the printing:

vector<float> multiplyPolynomial(vector<float> poly_1, vector<float> poly_2)
{
    vector<float> product(poly_1.size() + poly_2.size() - 1, 0);

    // Multiply two polynomials term by term

    // Take every term of the first polynomial
    for (int i = 0; i < poly_1.size(); i++)
    {
        // Multiply the current term of first polynomial
        // with every term of second polynomial.
        for (int j = 0; j < poly_2.size(); j++)
            product[i + j] += poly_1[i] * poly_2[j];
    }

    return product;
}

void printVector(vector<float> in_vec)
{
    for (int i = in_vec.size() - 1; i >= 0; --i)
    {
        if (i > 1)
            cout << in_vec[i] << "x^" << i << " + ";
        else if (i == 1)
        {
            cout << in_vec[i] << "x" << " + ";
        }
        else
            cout << in_vec[i] << "\n";

    }
}
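
For reference, here is roughly how I check these two helpers on their own. It is meant to be pasted into the same file as the functions above (so it relies on the usual #include <iostream>, #include <vector> and using namespace std; that the rest of my program already has), and the numbers are placeholders rather than my interpolation data.

int main()
{
    vector<float> a = { -1.0f, 1.0f };                  //x - 1, stored lowest degree first
    vector<float> b = { -1.5f, 1.0f };                  //x - 1.5
    vector<float> product = multiplyPolynomial(a, b);   //expect {1.5, -2.5, 1}
    printVector(product);                               //prints 1x^2 + -2.5x + 1.5
    return 0;
}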

The Results

Using the points (1, 3), (1.5, 3.25), (0, 3), (2, 1.66667), I get the polynomial -2x^3 + 3.333333x^2 + 2.25. It is supposed to be -2x^3 + 5.325x^2 -3.32833x + 3.

How may I go about fixing this issue with my program? Any suggestions will help immensely!

–Thank you!
