CS130: A Different Take on Math
I was never one to shy away from math. In fact, the old me would probably say that he loved it. Back in grade school, math always felt like a big game to me, and before you start thinking I’m a very self-indulgent person, you can’t deny that a part of you felt the same way too as a kid. Every day, I was met with story-like puzzles, improbable situations, and interesting problems where it was all up to me to figure out the answer.
I felt like a real detective back then — at least, back when it was so simple. I could figure out the radius of a circle, or maybe even the perimeter of a rectangle given the length of one side, and it would make me feel so smart. Counting angles, finding variables, drawing shapes… It was all so simple.
That feeling never lasted. As time went on, topics started to become harder and harder. What used to be so easy for me to visualize slowly became more and more abstract. It was as if the numbers started to look less like numbers and more like letters, because they did.
I hated it. Algebra made me lose so much of my imagination for math. My whole approach before was to imagine myself in the situation: what would I do if I needed to count apples or open boxes of marbles? Suddenly it became all about memorization and properties. There were no real visual aids either, only letters that messed up my interpretation of the equations.
Long story short, from high school until now, math has never been my strong suit. Unlike others who seem to just look at equations and instantly know the answer, I simply couldn’t do it, so I did the next best thing — to memorize everything. Not the most pleasant experience, but it was enough to get me through the years.
Fast forward to now and it’s more or less the same thing, except now, I think I’ve gotten used to it. I don’t even notice when I memorize things for math lessons anymore because I’ve grown so accustomed to just making sure that whatever formula or equation is given sticks in my head. It never occurred to me that I didn’t understand why the formulas worked, since the grading systems of schools never really take that into account when grading students. In hindsight, it didn’t really matter that I didn’t understand most of what was going on, especially since I was still able to reach the grade I was aiming for, and if that was right, who’s to say I’m wrong for thinking this way?
That was until CS130 came along. I was fresh out of Math 55, so naive and happy because I thought that was the last math-heavy course we had to take. Sadly, it wasn’t. For me, the transition from Math 55 to CS130 was like getting fished up from a shark tank and thrown straight into the ocean. There was no hesitation whatsoever when they bombarded us with terms we’d never heard of before.
Lessons that made absolutely no sense to me plagued my first week. Numbers seemed non-existent in a course that was supposed to be about numbers, and everything that came out of my professor’s mouth bounced off of my head like it was afraid of getting in. This was when I realized: this whole strategy I’d been doing my whole life — it wasn’t going to work anymore.
I started to learn more about the topics we were doing, seeing what made each so important. For me, the easiest way to do that was to constantly ask questions. Whether those questions were directed at the professor, the internet, or just myself, I tried to make it a point to understand. It was very slow, but eventually I began to absorb information. What sir was saying and writing on the board was making sense now, and I think I was actually able to understand what the hell was happening.
I’m not gonna lie, I think I actually had fun. It started to feel like I was back in grade school again, figuring out things because I understood what was happening. The way this course tackled various problems, some of which we’d encountered previously, was very different. This allowed me to absorb it even quicker, and math became less of a memorizing hassle and went back to being a freewheeling, solution-finding experience. Overall, I’d say that CS130 reignited my spirit for math, and although I still wouldn’t say that I like the subject (I don’t think I’ll ever confidently say that I do), at least now, I can learn to hate it a bit less.
The Door Opener: Matrices and Gauss-Jordan Elimination
Imagine learning something back when you were a kid, like, let’s say, riding a bike. You’d know a certain set of steps needed to accomplish the task, such as sitting on the bike or pushing the pedals. You’d also know how difficult or easy it is since that’s what you’ve been doing your whole life up to that point.
Suddenly, let’s say somebody discovers a new way of riding this bike. Out of nowhere they tell you that you don’t actually need to sit on it before you start pedalling. Maybe they tell you that you can skip the whole “sitting down” part entirely; basically reinventing the very thought of riding this bike.
It may seem like a bit of an exaggeration, but this is how I felt when learning about Gaussian Elimination, and by extension, Gauss-Jordan Elimination for the first time. It provided a way for me to solve systems of equations in a much more creative way compared to what I was previously accustomed to. This lesson introduced the idea of matrices, and although we have had matrices before in previous courses, CS130 was the first course to fully dive into the idea of what exactly they are and how they can be applied.
Before we can get to Gauss-Jordan Elimination however, there are several things that have to be clarified first for those who are unfamiliar. Namely, matrices, systems of equations, and the concept of linear transformations.
What is a matrix?
Real number matrices can be described as rectangular arrays consisting of real numbers. These two-dimensional arrays consist of rows and columns arranged in such a way that every single number within the matrix can be accessed by referring to its position in the matrix (ith row, jth column).
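As a quick illustration (in Python with NumPy, my own choice of tooling rather than anything from the course), a matrix is just a two-dimensional array whose entries are looked up by row and column:

```python
import numpy as np

# A 2x3 matrix: 2 rows, 3 columns
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)   # (2, 3)
print(A[0, 2])   # entry in the 1st row, 3rd column: 3
print(A[1, 0])   # entry in the 2nd row, 1st column: 4
```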
What is a System of Equations?
A system of equations is exactly what its name implies: a set of one or more equations that contain a number of unknown variables. The goal of these kinds of problems is to look for all possible values of the unknown variables such that they satisfy all the equations that were given.
What is a Linear Transformation?
I’ll be talking a lot about Linear Transformations later on in the blog, but for now, here’s a small smidgen so we have an idea of what’s going on.
A Linear Transformation can be described as a change in the direction and/or magnitude of a vector in a given space. In layman’s terms, this means that if we have a given vector and subject this to a transformation, as shown below, it resolves to become a completely new vector.
In order to solve for these, we use the concept of matrix multiplication, where we multiply the transformation matrix by the vector in order to find its result. I won’t go into much detail about matrix multiplication, but here’s a really good video explaining how it works and how to do it.
What we need to take note of, however, is the idea of matrix multiplication on identity matrices.
If a vector containing n elements is multiplied by an n by n matrix whose values are zero everywhere except on its diagonal (where i = j, with i the row and j the column), where they are 1, the result will just be the vector itself. A matrix that meets these conditions is called the identity matrix.
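Here’s that property as a small NumPy sketch (the vector values are chosen arbitrarily):

```python
import numpy as np

v = np.array([7.0, -2.0, 3.5])
I = np.eye(3)     # 3x3 identity: 1s on the diagonal (i == j), 0s elsewhere

print(I @ v)      # multiplying by the identity gives the vector back
```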
Now that we have those ideas out of the way, let’s dive into the process of how exactly Gauss-Jordan Elimination works. The easiest way to learn this, in my opinion, is by example, so here’s one for us to work with:
2x + 3y = 10
3x + 2y = 5
The first thing we can do here is to represent all of this in terms of matrices and vectors. Just by looking at the two equations, we can see that we are looking for the values of 2 variables (x and y). Notice how x and y are both present in each equation, but their respective coefficients are different. It’s as if the coefficients of x and y have a direct effect on the answer itself, and this is true! This is similar to thinking that the coefficients “transformed” the values into what you see on the right hand side of each equation.
Given that, we can rewrite the given in this form:
Notice how matrix A contains the coefficients of x and y respectively, with the first column representing the coefficients of x in each equation, and the second column for y. To prove that this is the same as the given, notice that by multiplying A by c, the result will actually be the same as the original set of equations.
By breaking this up into matrices, what we’re doing now is changing how we think of this problem. In terms of Linear Transformation as stated earlier, we are applying a transformation to the vector c, which results in the vector b.
But what if we had a way of manipulating matrix A such that it turns into the identity matrix? Wouldn’t that mean that we would instantly know the values of x and y?
Yes! That is exactly the idea behind Gauss-Jordan Elimination. By manipulating the matrices such that we end up with an identity matrix, we will know what x and y are! In order to do this, we introduce another property of matrices called Augmentation. Here’s what it looks like:
Matrix Augmentation simply means appending the columns of two matrices together, still separated by a line so we know which matrix is which. This is the first step of our process. Remember, we want to turn that first matrix into an identity matrix, and in order to do that, we introduce another set of things to know: Elementary Row Operations (EROs).
What are EROs?
In total, there are three EROs, each represented by e or epsilon.
- e1: Swapping two rows.
- e2: Multiplying each number of a row by a non-zero number.
- e3: Multiplying each number of a row by a non-zero number, then adding it to another row.
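To make these concrete, here’s each ERO applied once to an arbitrary 2 × 2 matrix (a NumPy sketch of my own, not course material):

```python
import numpy as np

M = np.array([[2.0, 3.0],
              [3.0, 2.0]])

# e1: swap two rows
M[[0, 1]] = M[[1, 0]]            # M is now [[3, 2], [2, 3]]

# e2: multiply each number of a row by a non-zero number
M[0] = 2 * M[0]                  # M is now [[6, 4], [2, 3]]

# e3: multiply a row by a non-zero number, then add it to another row
M[1] = M[1] + (-0.5) * M[0]      # M is now [[6, 4], [-1, 1]]

print(M)
```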
EROs are needed for Gauss-Jordan to succeed, as these are the tools that will be used to transform the matrices into the forms that we need them to be in.
With that out of the way, there’s one more thing we need to have: a strategy. We can’t just simply transform what we have into an identity matrix without a solid game plan unless we plan on just going in circles.
First, we have to transform the matrix into its Row Echelon Form (REF). A matrix is in REF if the leading (leftmost non-zero) element of each row is a 1, and every entry below that leading 1 in its column is zero. Let’s use our current example to demonstrate what this means. I encourage you to try this out yourself as well in order to fully grasp what is happening.
There are a few letters to keep in mind in the following steps: e represents which ERO is being used, and each row is represented by r, with r1 meaning the first row and so on. Now, let’s apply the following steps to this matrix:
- e2: r2 = (2)r2
- e3: r2 = r2 + (-3)r1
- e2: r1 = (½) r1
- e2: r2 = -(⅕) r2
If done right, our matrix should now look like this:
All rows contain a leading 1, and every entry below each leading 1 is zero. Therefore, we can say that this matrix is in REF.
With that out of the way, notice how we are almost on our way to our goal. If we were doing Gaussian Elimination, we’d actually already be done, but that’s not what we’re doing.
What we need to do now is to express this matrix in yet another form: its Reduced Row Echelon Form (RREF). The definition of RREF is similar to that of REF, except now, the leading 1 in each row should be the ONLY element in its column with a non-zero value.
We can accomplish this form with one final step:
- e3: r1 = r1 + (-3/2) r2
If followed correctly, we get this:
All leading 1s are the only non-zero values in their respective columns. Therefore, we can say that this matrix is in RREF. Notice how by turning it into its RREF, we also accomplished our initial goal! The matrix on the left has become the identity matrix.
Let’s work our way backwards in order to see what exactly happened. We can do that by un-augmenting (not sure if that’s actually a term) the matrix.
Notice the magic? If not, try performing matrix multiplication on the left hand side. Either way, I’ll spoil it now and say that we have now found the values of x and y, with x = -1 and y = 4. With that, we have just done the Gauss-Jordan Elimination method for solving a system of equations!
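To double-check the arithmetic, the whole sequence of EROs above can be replayed in code (a NumPy sketch; the matrix and the steps are exactly the ones from the example):

```python
import numpy as np

# Augmented matrix [A | b] for 2x + 3y = 10 and 3x + 2y = 5
M = np.array([[2.0, 3.0, 10.0],
              [3.0, 2.0,  5.0]])

# To REF:
M[1] = 2 * M[1]            # e2: r2 = (2)r2
M[1] = M[1] - 3 * M[0]     # e3: r2 = r2 + (-3)r1
M[0] = 0.5 * M[0]          # e2: r1 = (1/2)r1
M[1] = (-1 / 5) * M[1]     # e2: r2 = -(1/5)r2

# To RREF:
M[0] = M[0] - 1.5 * M[1]   # e3: r1 = r1 + (-3/2)r2

print(M)                   # left block is the identity; last column holds x and y
x, y = M[0, 2], M[1, 2]
print(x, y)                # -1.0 4.0
```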
Why this over solving it normally?
Before we get into technicalities such as optimization and the like, I’d personally like to say that I find this way of solving it to be more fun. With this, I actually understand what’s happening and the reasoning behind it. I also get to draw matrices, which for me is way more exciting than doing some algebraic substitutions and whatnot.
In terms of actual practicality, this may not have been the best example to use, since we only had two equations and two unknowns. The time it took you to read my first set of steps would have been ample time to solve this problem the normal way. However, if we escalate things to 5 or more unknowns and many more equations, the algebraic solution becomes incredibly tedious and disorganized, whereas Gauss-Jordan always follows the same systematic procedure.
Why Gauss-Jordan over other methods?
Gauss-Jordan is not the only strategy of Linear Algebra introduced to us in CS130. We were actually taught Gaussian Elimination right before this, and LU Decomposition right after. I won’t discuss the other two methods here since that would take too much time, but these methods also solve for the same type of problems that Gauss-Jordan Elimination solves for.
Gaussian Elimination was taught first because it stops at the REF step. It no longer needs to transform the augmented matrix into its RREF because the system can already be solved from REF, albeit with some extra algebraic back-substitution.
LU Decomposition is different in that, although it also gets the job done, it does not completely transform the original matrix into its RREF like Gauss-Jordan does. In fact, at scale, Gauss-Jordan is actually considerably more expensive than LU Decomposition because of the computations needed to transform matrix A into its RREF.
What also gives LU Decomposition the upper hand is when we have to reuse the same left-hand-side coefficient/transformation matrix on different resulting vectors. While Gauss-Jordan works on the pairing of the coefficient matrix and the resulting vector, LU depends only on the former. Once the matrix is factored, the same factorization can be reused to solve for many different resulting vectors far more quickly.
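Here’s a rough sketch of that reuse (my own bare-bones Doolittle factorization without pivoting, so only a toy; real implementations pivot for numerical stability):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, 2.0]])

# Doolittle LU factorization without pivoting (fine here since A[0,0] != 0)
n = len(A)
L, U = np.eye(n), A.copy()
for k in range(n - 1):
    for i in range(k + 1, n):
        L[i, k] = U[i, k] / U[k, k]
        U[i] = U[i] - L[i, k] * U[k]

def solve_with_lu(b):
    # Forward substitution (L y = b), then back substitution (U x = y)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# The same factorization serves many right-hand sides:
print(solve_with_lu(np.array([10.0, 5.0])))   # the system from before
print(solve_with_lu(np.array([5.0, 10.0])))   # a different b, no refactoring needed
```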
Personally, however, when it comes down to just one problem that I’d have to answer by hand, I’d much rather do Gauss-Jordan because of the simplicity of its steps.
Real World Examples
Since we are solving systems of equations, these methods have instant use in the real world because of how prevalent such problems are. Various problems are solved using systems of equations, such as chemical balancing, traffic flow, and most notably circuit problems that solve for voltage, current, and the like.
However, Gauss-Jordan Elimination, or any method that operates on matrices, is not limited to just systems of equations. Matrices are also sometimes used as encoders for secret keys and passwords. A phrase would first be translated into numerical form (ASCII, 1–26, etc.), then “transformed” using the transformation matrix, resulting in a completely new output. Decoding the original phrase is then just a matter of solving the system back.
Visualization: A Deeper Dive Into Linear Transformations
Earlier I said that we’d go deeper into Linear Transformations, so here we are. The reason I find this topic in particular to be so engaging is because it’s so visual. You can literally see the transformations happening to vectors when you graph them, and seeing it happen makes things way easier to understand.
For a more formal explanation, we can represent a function, F, of a vector in a vector space as a linear transformation if, and only if:
- F: V → W, where V and W are vector spaces (not necessarily different ones)
- F(u + v) = F(u) + F(v), where u and v are in the same vector space
- F(au) = a · F(u), where a is a real number
This is a lot to take in at first, with some possibly unknown terms (like vector spaces) rearing their heads for the first time. Fear not, as we will go through each one by one.
What is a vector space?
A vector space is, in its simplest terms, a set of objects, or vectors, that is closed under the normal rules of addition and scalar multiplication. That is to say, vectors within a vector space follow the associative and commutative rules of addition, as well as the usual rules of scalar multiplication, and the results of those operations stay inside the space.
Vector spaces describe the dimension their vectors are in, as well as the type of numbers they contain. Therefore, if I say I have a two-dimensional vector space of real numbers, this means that all my vectors within it are two-dimensional and comprised of real numbers.
A vector space must also be bound by its type. This means that if I had a vector space of real numbers, all vectors in it must contain real-number values. Together with this, all addition and scalar multiplication operations I do with these vectors must also result in a vector of real numbers.
The reason why we talk about vector spaces here is to give clarity to what linear transformations actually do: transform the vector. This is not limited to just changing the values of the vectors’ elements, but also changing its vector space if the transformation allows for it. It is possible to have transformations that turn two-dimensional vectors into three-dimensional ones.
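Before moving on, the two linearity conditions above can be checked numerically for any matrix map. Here’s a quick NumPy sketch of my own, with an arbitrarily chosen matrix and vectors:

```python
import numpy as np

# Any fixed matrix A gives a linear map F(v) = A v
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
F = lambda x: A @ x

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
a = 4.0

print(np.allclose(F(u + v), F(u) + F(v)))   # additivity holds
print(np.allclose(F(a * u), a * F(u)))      # homogeneity holds
```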
How exactly does that work?
Examples are always helpful, so here’s another one to help us out.
Here, we have an example of a function that acts as a transformation for vectors in two-dimensional space. If we were to plug in any random vector, like [1 2], it would go through this and end up as a different vector, as shown below.
Most of the time, vectors become completely different when subject to linear transformations, and if we were to graph our original vector and compare it to our new one, the difference is clear.
Making things a bit more “matrix”-like
Linear transformations are not limited to the form of functions. In fact, every matrix, when acting as a multiplier on a vector, is a linear transformation as well! That is, if we have a linear transformation f(v), where v is in the n-dimensional vector space of real numbers, then f(v) = Av, where A is the matrix representation of the function.
We can use our earlier example to show this:
Wait, what’s actually happening?
A key thing to note here is that when we are transforming a vector, we aren’t actually manipulating the vector itself. You can think of it more as manipulating the space the vector lives in, and what we’re solving for is what the vector would look like after that space has been manipulated.
Here’s an example of what we’re actually doing based on our previous example:
Our original vector is based on a two-dimensional format represented by i-hat and j-hat, also known as the natural basis for two dimensions, with i-hat and j-hat set at (1, 0) and (0, 1) respectively. Therefore, when a vector is represented as an array, such as [1 2], it is the same as saying that it is of the form i + 2j.
As we perform linear transformations, we are manipulating the natural basis to become what we want it to be. Each column of our transformation is actually kind of like a target, telling each component of the natural basis where to go.
By looking at the matrix representation of our transformation, we see that the first column of the matrix contains (1, 1), which is the location of the new i-hat. The same can be said for the second column, where the values (1, -1) are the location of the new j-hat.
When we plug vectors into these transformations, we are just adjusting its orientation to fit the new basis that we set. This is what our whole drawing looks like when we combine the adjusted basis and the resulting vectors from earlier.
The vectors, with respect to their basis, are exactly the same. Since it was the basis that was transformed, the vectors adjusted to reflect this change.
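Putting this together in code (assuming, from the description of the figures, that the new i-hat is (1, 1) and the new j-hat is (1, -1)):

```python
import numpy as np

# Columns are where the basis vectors land: i-hat -> (1, 1), j-hat -> (1, -1)
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])

i_hat, j_hat = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(A @ i_hat)   # equals the first column: the new i-hat
print(A @ j_hat)   # equals the second column: the new j-hat

v = np.array([1.0, 2.0])   # v = i + 2j
print(A @ v)               # = (new i-hat) + 2 (new j-hat)
```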
Application in 2D graphics
Before I continue with this, I’m not saying that linear transformations are limited to 2D graphics. We established earlier that this works with any n-dimensional vector space as long as the rules are followed. I just want to use 2D since it is easier to draw.
When diving into 2D graphics, the most important part is making sure you know how to influence what you’re making. What if you were working on a graphic, and suddenly you wanted to scale it to 5 times its size? Nowadays, we have the necessary tools to accomplish these tasks, but at their core, they still make use of transformations to do their job.
While we see images on our computer as shapes and colors, computers actually see them in terms of pixels. Images to them are rectangular arrays with numerical values in each slot, representing the colors to be shown in each pixel. Therefore, if we wanted to change the orientation of these images, we’d have to “transform” these rectangular arrays, and that’s where everything we’ve been talking about is leading up to.
I’ll be showing how scaling works as an example, but keep in mind that many other transformations, such as reflection and rotation, are also done with the use of Linear Transformation.
If we wanted to scale a vector to something that’s twice, thrice, or however much of its size, we’d have to multiply its coordinates by our scaling constant. In its functional and matrix forms, that would look like this:
It is also cool to note that if we were to scale something by 1, that would give us a matrix representation akin to that of an identity matrix, which makes a lot of sense knowing what we talked about earlier. Scaling a vector by 1 results in the exact same vector the same way that multiplying a vector by its identity matrix results in the vector itself.
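Here’s that observation as a tiny NumPy sketch (the vector is arbitrary):

```python
import numpy as np

def scale(s):
    # Scaling every coordinate by s is the matrix s * I
    return s * np.eye(2)

v = np.array([3.0, -2.0])
print(scale(2) @ v)   # twice the size
print(scale(1))       # scaling by 1 is just the identity matrix
```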
Overall, there isn’t much to compare and contrast this topic with others as I didn’t really talk about a solution like I did with Gauss-Jordan. However, I really liked this topic because of the idea of transformation represented through matrices. The idea of warping space around using boxes of numbers is really cool, and animating it through tools online makes it even better to watch.
I could have also talked about the Eigenvalue-Eigenvector problem, which involves finding all possible vectors that do not change orientation or direction at all when a specific linear transformation is applied, with the only thing that may change being their length. That topic honestly was also interesting to me since it was really weird seeing vectors stay the same when the rest of space changes entirely.
I just felt as if I would have said way less if I talked about just that instead of the concept of transformation itself.
A Sudden Shift: Exploring Love with Differential Equations
Though I would have loved to talk more about the topics in the previous unit, I can confidently say that the same cannot be said for this next set of topics haha :D
After we finished the lessons regarding linear transformation, eigenvalues and such, our class took a big turn and suddenly started to talk about differential equations. In hindsight, it did make sense since this lesson still fell under the course umbrella. However, it was still a big change from what we were studying previously.
Though the topics were different, they still remained interesting to me, albeit in a different way. To be honest, there was really only this one specific problem that got me hooked. Because of this, I will be sharing this problem to all of you so that you can all be hooked as well!
Romeo and Juliet
In one of the samplexes we had before our long exam, the following problem was presented:
This is an interesting problem because it deals with love — something I never expected to see in a math course. Specifically, the problem breaks down the level of affection Romeo and Juliet have for each other, and how much their affection grows or decays with respect to themselves, each other, and through time. The constants that are given (a, b, c, d) are the only factors that can affect this change in affection, and our goal, as seen in number 6, is to find the values of these constants that will satisfy:
a) steady periodic affection,
b) decaying periodic affection, and
c) breakup :(
Take some time to read the problem and you’ll notice that if we take away all the noise like the names and representations, what we are left with is a set of two differential equations. In the problem, it is stated that the last two variables of each equation, the ones corresponding to the two’s appeal for each other over time, are set to zero. Therefore, we are left with this:
There are actually a lot of things we have to learn before we continue since this is a big change of tone from the previous topic. These new lessons include First and Second Order Differential Equations (F/SODEs), Systems of FODEs, and the concept of harmonic motion, as well as critically damped, overdamped, and underdamped differential equations. Do not worry though, as I will try to walk us through each step, one by one.
What is a First or Second Order Differential Equation?
Differential equations are something that everyone reading this must have tackled at some point in their college life. These are equations that relate one or more variables to their rates of change, that is, to their derivatives.
The order of a differential equation refers to the highest level of derivation that appears in it. In this case, “First Order” would mean that the most number of times any variable in the equation has been differentiated is once. The same goes for “Second Order” meaning differentiated twice, and so on.
In general, Linear FODEs and SODEs come in these forms, respectively:
DEs can be further classified into homogeneous and non-homogeneous differential equations. Based on the picture above, an FODE is homogeneous if q(x) = 0, and a SODE is homogeneous if r(x) = 0.
Linear and Non-linear DEs
One concept that has to be brought up as well is the case of Linear and Non-linear DEs. I won’t be going into a detailed explanation of the exact difference between the two since that is outside the scope of this problem. All we need to keep in mind is that from this point on, when I refer to DEs, I mean Linear DEs since that is what is presented to us.
How to solve Homogeneous DEs
Now that we’ve gotten past that, we can move on to actually solving these kinds of equations. In order to do this, let’s have another example, like this one:
The first thing we can do is to represent this in terms of its auxiliary equation. This means that I want to represent the level of derivation of each term as a power of m. Knowing this, if I see a y’’, that is the same as saying m², while y’ is m, and y by itself is m⁰, which is 1. If done right, our equation now appears in this format:
We can now solve for the roots of this auxiliary equation however you like. You can use the quadratic formula, or if you’re seasoned enough to do it mentally then you can do that as well. Either way, you should end up with roots 5 and -1. With this, we can finally know our solution for y.
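The example equation itself was shown as an image, but one that matches these roots is y’’ - 4y’ - 5y = 0, whose auxiliary equation is m² - 4m - 5 = 0 (this reconstruction is my own assumption based on the stated roots). NumPy can confirm the roots:

```python
import numpy as np

# Coefficients of m^2, m, and the constant in m^2 - 4m - 5 = 0
roots = np.roots([1, -4, -5])
print(sorted(roots))   # the roots 5 and -1 from the example
```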
But how exactly do we get y from these roots? Well, it actually depends on the format of your roots. Here’s a quick diagram to show all possible roots you can get and what your y should look like based off of these:
With this guide, we now know that our answer should be
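The answer itself was shown as an image in the original post; reconstructing it from the real-and-distinct-roots row of the guide, with roots 5 and -1, it should come out to:

```latex
y = C_1 e^{5x} + C_2 e^{-x}
```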
Systems of FODEs
Now that we know what FODEs are, it becomes a bit clearer to picture what a system of FODEs is as well. Systems of FODEs are normally given in the following form:
Similar to a regular system of equations, our goal is to find the values of x and y. What makes this “different” (pun intended) is the inclusion of another variable, t, and the presence of x’ and y’, where x’ is the derivative of x with respect to t, and so on for y.
However, the overall goal remains the same: to find solutions for x and y. Before that, let’s first discuss what exactly the question is asking for.
The first question asks us to find the values of a, b, c, and d such that the two will experience steady periodic affection, but what does this mean? To demonstrate, let’s use a different example.
If you were to recall previous lessons in physics, we all came across the spring-mass system, where a block of mass m is attached to a spring with a spring constant of k. The motion of the block is governed by the following equation:
Given that we know that acceleration is the second derivative of displacement, we can actually rewrite this formula in terms of x, giving us the following equation:
If we rewrite this by moving all terms to one side, then dividing everything by m, we get this:
This is now in the form of a linear homogeneous SODE, and by following the guide we encountered earlier, and by realizing that the roots of our auxiliary equation here will be purely imaginary, we can get our final answer.
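Written out (using λ for the auxiliary variable so it doesn’t clash with the mass m), the derivation goes:

```latex
m\ddot{x} + kx = 0
\;\Rightarrow\; \ddot{x} + \frac{k}{m}x = 0
\;\Rightarrow\; \lambda^2 + \frac{k}{m} = 0
\;\Rightarrow\; \lambda = \pm i\sqrt{\frac{k}{m}}
```

Since the roots are purely imaginary, the guide gives x(t) = C₁ cos(ωt) + C₂ sin(ωt), with ω = √(k/m).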
Finally, notice that if you try graphing our answer, you’ll see something that looks like this:
The graph consistently goes up and down with the same amplitude throughout, meaning that there is nothing interfering with its movement, which is why it is called harmonic motion.
Looking back at our guide, you’ll see that we have one of the forms graphed (the case where roots are purely imaginary), but how about the other three?
Overdamped, Underdamped, and Critically Damped
Let’s go back to our spring-mass system, but this time, I’m going to be introducing a damper.
Similar to earlier, we can recall our physics lessons to know that we can solve this system with the following equation:
Once more, by moving every term to one side and dividing every term by m, we arrive at this:
Finally, by using the quadratic formula on this, we get the roots of its auxiliary equation, which are equal to the following:
In order to get the final three cases, all we need to do is to set conditionals. In the case where we want real, distinct roots, B² - 4km must be positive, which corresponds to overdamped. If we want the same roots, we’d need B² - 4km to be zero, representing critically damped. Finally, if we wanted complex conjugate roots, we’d need B² - 4km to be negative, resulting in underdamped or damped harmonic.
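The three conditionals above can be sketched as a small classifier (plain Python, my own illustration; the equation is m·x’’ + B·x’ + k·x = 0):

```python
def classify(m, B, k):
    # Classify m x'' + B x' + k x = 0 by the sign of the discriminant B^2 - 4km
    disc = B**2 - 4 * k * m
    if disc > 0:
        return "overdamped"         # real, distinct roots
    if disc == 0:
        return "critically damped"  # repeated real root
    return "underdamped"            # complex conjugate roots

print(classify(m=1, B=5, k=1))   # 25 - 4 > 0
print(classify(m=1, B=2, k=1))   # 4 - 4 = 0
print(classify(m=1, B=1, k=1))   # 1 - 4 < 0
```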
We can see the results more clearly if we were to graph it, as seen here:
Back to the Romeo and Juliet Problem
With all of that done, we can finally solve this problem! After knowing all of this, it’s actually simple to solve for the answer. All we’d have to do is to
- transform one of the equations into the form that we are looking for.
- solve for the constants.
Without further ado, let’s begin!
Finding Steady Periodic Affection
In order to solve for this, we’d have to choose one of the equations we have, and manipulate it in such a way that we get the form we are looking for. In this case, steady periodic affection would mean that their love is continuous, and based on our graphs, that probably means Harmonic Motion! Knowing that, I will try to isolate and work solely on J. I actually wrote my solution to this before, and to save time, let me just take a screenshot of it and post it here.
Now that we have our form at line 492, we’ll need it to be in the form that will make us achieve harmonic motion. Looking back at our guide, we’ll see that for harmonic motion, the J’, which acts as the damper, shouldn’t be there. Again, I’ve already written this down separately, so here’s another screenshot.
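I can’t reproduce the screenshots here, but assuming the reduced system reads J’ = aJ + bR and R’ = cJ + dR (my reading of the problem; the actual labeling of the constants may differ), the algebra behind them likely goes along these lines, taking b ≠ 0:

```latex
J' = aJ + bR \;\Rightarrow\; R = \frac{J' - aJ}{b}, \qquad
J'' = aJ' + bR' = aJ' + bcJ + d(J' - aJ)
\;\Rightarrow\; J'' - (a + d)J' + (ad - bc)J = 0
```

Matching this against the harmonic-motion form means the J’ term must vanish and the constant term must be positive, i.e. a + d = 0 and ad - bc > 0.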
…and there you have it! Given that all of this is now here, my challenge to those reading would be to answer b and c yourselves. As a hint, I can say that the next two questions also require you to first look for which form you want the equation to take, then solve for the constants.
That was…a lot of work
Indeed it was. If you were to ask me, I would always graph the equations that I’m working with, since that really makes the picture way clearer. If I were to give pros and cons as to doing this with or without graphing, I’d choose to graph it every single time. The only problem would be that it is tedious to draw, but with the internet, even graphing becomes easy to do!
Overall, when I said that I didn’t really like this unit as much as the first two, I meant it, but that didn’t mean I didn’t enjoy it. Seeing how the equations can be manipulated in such a way that graphing it will show you all those curved lines is still pretty amazing, and not to mention very handy!
We’ve finally reached the end of my blog. To be honest, I’ve completely run out of words to say. My only wish is that I was able to explain everything I did clearly enough that everyone can understand it. I also wish I didn’t make any typos along the way hahaha.
But there you have it! This is me signing out from CS130. Thank you for making it all the way to the end!