SymCalc C++ Examples
On this page, you will see some potential use cases for SymCalc and more ways to work with it.
Let's get started!
Newton-Raphson method for solving roots
The Newton-Raphson method is an iterative algorithm that finds a zero of a function, and is used by computers to solve equations and estimate constants.
The method itself can be described like this: x[n] = x[n-1] - f(x[n-1]) / f'(x[n-1])
We'll implement the algorithm to estimate the square root of two with the highest precision a double variable can hold.
For this, let's start with the equation x = sqrt(2), rewrite it as x^2 = 2, and then as x^2 - 2 = 0. Now we can apply the Newton-Raphson method to f(x) = x^2 - 2 with SymCalc.
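To get a feel for how fast it converges, start from x[0] = 1: the first iteration gives x[1] = 1 - (-1)/2 = 1.5, and the second gives x[2] = 1.5 - 0.25/3 ≈ 1.4167, already close to the true value.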
#include <symcalc/symcalc.hpp>
#include <iostream>
#include <iomanip>

using namespace symcalc;

int main(){
	
	Equation x("x"); // Declare the variable
	
	Equation fx = pow(x,2) - 2; // Declare the function
	
	// Find the derivative of f
	Equation dfdx = fx.derivative();
	
	// Pick the initial value, here - 1
	double x_estimate = 1.0;
	
	// Pick a number of iterations
	int n_iterations = 10;
	
	for(int i = 0; i < n_iterations; i++){
		x_estimate = x_estimate - fx.eval({{x, x_estimate}}) / dfdx.eval({{x, x_estimate}});
	}
	
	std::cout << std::setprecision(15) << "Square root of two estimate: " << x_estimate << std::endl;
	
	return 0;
}
This code outputs "1.41421356237309", which is quite a precise value for the square root of two, and a great demonstration of how SymCalc can be used!
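A small variation you might try: instead of a fixed number of iterations, keep iterating until the Newton step becomes negligibly small. Here is a minimal sketch of just that loop, reusing the x, fx and dfdx objects from the program above and assuming an extra #include <cmath> for std::abs:

	double x_estimate = 1.0;
	double step = 1.0;
	double tolerance = 1e-12;
	
	// Repeat the Newton-Raphson update until it stops changing the estimate
	while(std::abs(step) > tolerance){
		step = fx.eval({{x, x_estimate}}) / dfdx.eval({{x, x_estimate}});
		x_estimate -= step;
	}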
Gradient descent for machine learning models
One more algorithm we can implement with SymCalc is gradient descent.
Gradient descent is a crucial algorithm that lies at the heart of training deep learning models.
Let's implement it with SymCalc for a simple one-unit network.
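At its core, every training step nudges each parameter against its gradient: w[n] = w[n-1] - learning_rate * d(error)/dw, and the same for the bias b. The nice part is that SymCalc computes d(error)/dw and d(error)/db symbolically, so we never have to derive them by hand.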
#include <symcalc/symcalc.hpp>
#include <iostream>
#include <vector>
#include <map>

using namespace symcalc;

int main(){
	
	// Declare needed variables
	Equation input("x");
	Equation answer("y");
	
	Equation weight("w");
	Equation bias("b");
	
	
	// Define the prediction as wx + b (unit logic in deep learning)
	Equation prediction_func = weight * input + bias;
	
	// Define error as the mean squared error (MSE)
	// error = (correct answer - prediction) ^ 2
	Equation error_func = pow((answer - prediction_func), 2);
	
	
	// Create a sample dataset, where the answer is input * 3 - 5, plus some noise
	std::vector<double> inputs = {1,2,3,4,5};
	std::vector<double> answers = {-2.01, 0.98, 4.02, 6.99, 10.02};
	
	// Define a number of epochs and the learning rate
	int epochs = 500;
	double learning_rate = 0.08;
	
	
	// Initialize starting weight and bias
	double curr_weight = 1.0;
	double curr_bias = 1.0;
	
	
	// Find derivatives of the error function with respect to weight and bias
	Equation weight_deriv_func = error_func.derivative(weight);
	Equation bias_deriv_func = error_func.derivative(bias);
	
	for(int epoch = 1; epoch <= epochs; epoch++){
		if(epoch % 50 == 0 || epoch == 1) std::cout << "Epoch: " << epoch << std::endl;
		
		// Declare error sum to calculate the average error
		double error_sum = 0.0;
		
		// Sum weight and bias derivatives over each data entry to apply gradient descent later
		double weight_derivatives = 0.0;
		double bias_derivatives = 0.0;
		
		// Calculate errors
		for(size_t i = 0; i < inputs.size(); i++){
			std::map<Equation, double> params {{weight, curr_weight}, {bias, curr_bias}, {input, inputs[i]}, {answer, answers[i]}};
			double error_value = error_func.eval(params);
			error_sum += error_value;
			
			weight_derivatives += weight_deriv_func.eval(params);
			bias_derivatives += bias_deriv_func.eval(params);
		}
		
		double avg_error = error_sum / inputs.size();
		
		if(epoch % 50 == 0 || epoch == 1) std::cout << "Error: " << avg_error << std::endl;
		
		// Calculate new weight and bias with gradient descent
		curr_weight -= weight_derivatives / inputs.size() * learning_rate;
		curr_bias -= bias_derivatives / inputs.size() * learning_rate;
		
	}
	
	std::cout << "Final weight: " << curr_weight << std::endl;
	std::cout << "Final bias: " << curr_bias << std::endl;
	
	return 0;
}
As you can see, gradient descent reaches an average error of just 0.000182, with a final weight of about 3.007 and a final bias of about -5.02, which is really good considering the noise added to the dataset!
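As a quick sanity check, you can evaluate the trained model on an unseen input. Here is a minimal sketch of a hypothetical extra step, reusing the prediction_func, weight, bias and input objects from the program above (place it right before return 0):

	// Evaluate the trained model on x = 10; the underlying rule 3x - 5 would give 25
	double test_prediction = prediction_func.eval({{weight, curr_weight}, {bias, curr_bias}, {input, 10.0}});
	std::cout << "Prediction for x = 10: " << test_prediction << std::endl;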
Experiment yourself!
Don't hesitate to just jump into a code editor and start experimenting yourself!
I'm sure you can find your own ways to use SymCalc, and that its use cases are limited only by your imagination!
Check out the documentation page to learn about the "insides" of SymCalc and how it works!
If you have any questions or suggestions, you can contact me by email at kyryloshy@gmail.com, or on the portfolio website, kyrylshyshko.me in the "Let's talk" section.
SymCalc is licensed under the Apache 2.0 license. You can read more on the about page.