SymCalc Ruby Examples
On this page, you'll find some potential use cases for SymCalc and more ways to work with it.
Let's get started!
Newton-Raphson method for solving roots
The Newton-Raphson method is an iterative algorithm that finds a zero of a function, and is used by computers to solve equations and estimate constants.
Each iteration refines the current estimate like this: x[n] = x[n-1] - f(x[n-1]) / f'(x[n-1])
We'll implement the algorithm to estimate the square root of two to the full precision of a floating-point variable.
For this, let's start with the equation x = sqrt(2), rewrite it as x^2 = 2, and then as x^2 - 2 = 0. For example, starting from x = 1, the first iteration gives 1 - (1^2 - 2) / (2 * 1) = 1.5, which is already closer to the true value. Now we can implement the Newton-Raphson method with SymCalc.
require 'symcalc'

x = SymCalc.var("x") # Declare the variable

fx = x ** 2 - 2 # Declare the function

# Find the derivative of fx with respect to x
dfdx = fx.derivative

# Pick the initial value, here 1.0
x_estimate = 1.0

# Pick a number of iterations
n_iterations = 10

n_iterations.times do
	x_estimate = x_estimate - fx.eval(x: x_estimate) / dfdx.eval(x: x_estimate)
end

puts "Square root of two estimate: #{x_estimate}"
This code outputs "1.41421356237309", which is quite a precise value for the square root of two, and a great demonstration of how SymCalc can be used!
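The same approach works for any equation you can rewrite as f(x) = 0. As a minimal sketch along the same lines (reusing only the SymCalc calls shown above), here is the cube root of five estimated by solving x^3 - 5 = 0:
require 'symcalc'

x = SymCalc.var("x") # Declare the variable

fx = x ** 3 - 5 # The cube root of five is the zero of this function

# Find the derivative of fx with respect to x
dfdx = fx.derivative

# Pick the initial value and iterate
x_estimate = 1.0

10.times do
	x_estimate = x_estimate - fx.eval(x: x_estimate) / dfdx.eval(x: x_estimate)
end

puts "Cube root of five estimate: #{x_estimate}"
With 10 iterations this should settle near 1.70998, the actual cube root of five.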
Gradient descent for machine learning models
One more algorithm we can implement with SymCalc is gradient descent.
Gradient descent is a crucial algorithm and lies at the core of training deep learning models: on every step, each parameter p is nudged against the gradient of the error, p = p - learning_rate * dE/dp.
Let's implement it with SymCalc for a simple one-unit network.
require 'symcalc'

# Declare needed variables
input = SymCalc.var "x"
answer = SymCalc.var "y"

weight = SymCalc.var "w"
bias = SymCalc.var "b"

# Declare the prediction as w*x + b (the logic of a single unit in a neural network)
prediction_func = weight * input + bias

# Define error as mean squared error (MSE)
# error = (correct answer - prediction) ^ 2
error_func = (answer - prediction_func) ** 2

# Create a sample dataset, where the answer is input * 3 - 5, plus some noise
inputs = [1,2,3,4,5]
answers = [-2.01, 0.98, 4.02, 6.99, 10.02]

# Define a number of epochs and the learning rate
epochs = 500
learning_rate = 0.08

# Initialize starting weight and bias
curr_weight = 1.0
curr_bias = 1.0

# Find derivatives of the error function with respect to weight and bias
weight_deriv_func = error_func.derivative(variable: weight)
bias_deriv_func = error_func.derivative(variable: bias)

(1..epochs).each do |epoch|
	
	puts "Epoch #{epoch}" if epoch % 50 == 0 || epoch == 1
	
	# Collect errors in an array
	errors = []
	
	# Collect weight and bias derivatives per data entry in an array
	weight_derivatives = []
	bias_derivatives = []
	
	inputs.size.times do |i|
		params = {w: curr_weight, b: curr_bias, x: inputs[i], y: answers[i]}
		error = error_func.eval(**params)
		errors << error
		
		weight_derivatives << weight_deriv_func.eval(**params)
		bias_derivatives << bias_deriv_func.eval(**params)
	end
	
	avg_error = errors.sum / errors.size
	
	puts "Error: #{avg_error}" if epoch % 50 == 0 || epoch == 1
	
	curr_weight -= weight_derivatives.sum / inputs.size * learning_rate
	curr_bias -= bias_derivatives.sum / inputs.size * learning_rate
end

puts "Final weight: #{curr_weight}"
puts "Final bias: #{curr_bias}"
As you can see, the gradient descent algorithm reaches an error of just 0.000182, with the final weight and bias being 3.007 and -5.02, close to the true values of 3 and -5, which is really good considering the noise added to the dataset!
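To sanity-check the trained parameters, you can plug them back into prediction_func and evaluate it at a new input. This small sketch assumes the training script above has already run, so prediction_func, curr_weight and curr_bias are defined:
# Predict the answer for a new input, e.g. x = 6
# The dataset was generated as x * 3 - 5, so the expected value is about 13
new_prediction = prediction_func.eval(w: curr_weight, b: curr_bias, x: 6)
puts "Prediction for x = 6: #{new_prediction}"
If the model has trained well, the printed value should be close to 13.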
Experiment yourself!
Don't hesitate to just jump into a code editor and start experimenting yourself!
I'm sure you can find your own ways to use SymCalc, and that its use cases are limited only by your imagination!
Check out the documentation page to learn about the internals of SymCalc and how it works!
If you have any questions or suggestions, you can contact me by email at kyryloshy@gmail.com, or on the portfolio website, kyrylshyshko.me in the "Let's talk" section.
SymCalc is licensed under the Apache 2.0 license. You can read more on the about page.