Reflecting

The prompt for this week was to

Revisit an earlier SLOG posting of yours. See whether you agree or disagree with your earlier self.

I would like to revisit my blog post on recursion (link) because I believe my explanation in that post was incorrect and it clearly only scratched the surface of the topic.

In that post I answered the question “How does recursion work?” in quite a confusing manner. So, I would like to earn some redemption points and provide another, hopefully clearer and more concise, explanation.

Simply put, recursion is solving a complex problem in terms of the solutions to smaller, less complex instances of the same problem. A key word to remember when thinking about recursion is repetition. With this in mind, I would say that a recursive function is one defined in terms of itself, reducing the complexity of the problem with each call (moving towards the base case).

Something I find particularly helpful when I am confused about a concept is to compare it to another concept I have a better grasp of. I think the best way to understand recursive functions is to compare them with iterative functions.

Iterative                            | Recursive
Go through the whole thing each step | Break the problem down into smaller problems
Repeats all the code as is           | Repeats a simpler version of the code
Stops when the loop ends             | Stops when the base case is reached

One thing to note is that anything that can be done recursively can also be done iteratively.
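To make the comparison concrete, here is a small made-up example (not from the course) showing the same computation done both ways: the loop repeats the same code each step, while the recursive version hands a simpler instance of the problem to itself until it hits the base case.

```python
def sum_to_iterative(n):
    """Return 1 + 2 + ... + n using a loop."""
    total = 0
    for i in range(1, n + 1):  # repeats the same code each step
        total += i
    return total


def sum_to_recursive(n):
    """Return 1 + 2 + ... + n using recursion."""
    if n == 0:                           # base case: stop here
        return 0
    return n + sum_to_recursive(n - 1)   # simpler version of the same problem


print(sum_to_iterative(5))  # 15
print(sum_to_recursive(5))  # 15
```

Both produce the same answer; the difference is only in how the repetition is expressed.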

In my old blog post I also talked about the benefits of recursion, and for the most part I agree with what I said. Recursive code is definitely neater and simpler, but only if the algorithm “naturally” lends itself to recursion. I also agree that it reduces redundancy, since we don’t need to duplicate any of our code.

I would like to add that recursion is very beneficial for two main reasons. Firstly, it is a great problem-solving exercise, since it requires a little more brain power than iterative thinking (at least from what I have encountered so far). Secondly, it develops logical thinking. Asking yourself how to break a problem down into tiny, less complicated pieces, and how those tiny pieces fit together to solve the larger problem, really forces you to think in different ways.

Testing and Assertions

We learned about choosing test cases and writing assertions this week — two very helpful debugging tools.

In Heap’s class we began by analyzing test cases for subtract square with minimax. The question posed was how confident we were in the current test cases: given the results of the tests, can we say that our code works as intended?

The cases we tested were quite similar to each other: they had the same distribution of outcomes and the same number of possible wins for each move. We don’t get much information about our code if the tests resemble one another, so even though all the cases passed, the test suite was not as strong as we hoped. To strengthen it, we should choose starting points that differ from each other in their distribution of outcomes (increasing size). Another improvement is to test dichotomies, so we can ask whether the code works properly whether we start with an odd or an even number. To be confident about our code, every single test case must pass; if even one fails, it is an indicator that our code is flawed. Although it is very frustrating, one advantage of a failed test is that it makes it easier to identify which part of our code has gone wrong.

Another way to identify logical errors and runtime errors is through assertions. I like to think of assertions as a way to debug in the middle of the program. An assertion checks a condition we want to assume is true before carrying on with some action, then raises an error if the condition is not met. This way, we get to see where the bugs in our code are right away. (Side note: I found that this video is extremely helpful in explaining assertions.) The syntax is as follows:

assert <some condition>, <error message>

It would have been very useful to know this during A2. Getting a bunch of runtime errors and not knowing exactly where or why they happened was a huge source of frustration for my group. I can see so many ways we could have used assertions to speed up the process of debugging.
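For instance, here is a made-up helper (not from A2) showing how an assertion at the top of a function can turn a mysterious crash later on into an immediate, readable error at the exact spot where the assumption broke:

```python
def average(grades):
    """Return the average of a non-empty list of grades (0-100)."""
    # Fail loudly right here if the caller violates our assumptions,
    # instead of producing a confusing ZeroDivisionError or a wrong answer.
    assert len(grades) > 0, "average() called with an empty list"
    assert all(0 <= g <= 100 for g in grades), "grade out of range 0-100"
    return sum(grades) / len(grades)


print(average([80, 90, 100]))  # 90.0
# average([]) would immediately raise:
# AssertionError: average() called with an empty list
```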

Side note: I saw a really cool slog post by another 148 student (link). The student uses recursion and turtles to create a really cool design. I find it interesting because it’s a great application of what we have learned in this class (also, it’s just cool, so take a look!).

Linked Lists

I have had experience with linked lists in the past, so I found this week’s material quite easy to grasp. I find that comparing linked lists to the regular list data structure is very helpful for improving my understanding.

A linked list is a data structure that consists of nodes (black boxes) linked together, where each node has a reference to another node (red arrows). Visually, it looks as follows:

[diagram: a linked list of nodes connected by reference arrows]

On the other hand, a list stores contiguous elements (each one right after the other) under one structure:

[diagram: a list of contiguous elements]

A key difference is the way we access each node/element. In a list, each element is accessed through indices, while in a linked list we access nodes via their references (in the code we use in csc148, the reference is stored in self.nxt of the LLNode class). I think of these references as pointers to the adjacent node, which is why I find drawing arrows very useful (as in the image above, where the red arrows are the pointers).

Linked lists allow us to insert or delete things much more easily than lists, precisely because of the way we access each element. In a list, if we want to add an element in the middle, we have to shift every element after it to make room for the new one. In a small list this is not an issue, but in a large list it takes a lot of time.

[diagram: inserting into the middle of a list shifts every later element]

With a linked list, we simply create a new node whose reference points to the next node in the existing linked list, then change the reference of at most one existing node (usually the one before the insertion point) to point to the new node, so that the new node is linked into the rest of the list.

[diagram: inserting node B between nodes A and C by rewiring two references]

Basically, what happened in the picture above is that the reference of A used to point to C; to insert B between A and C, we change the reference of A to point to B, then make sure B has a reference to C, successfully linking them together. As you can see, this is much faster than a list insertion because very little about the existing list changes.

I know that one of the first questions that came up in lecture was why we need a wrapper class that stores the front and back of the linked list. My answer is that without it, we wouldn’t know where to start or when to stop when traversing the linked list. The attribute front gives us a reference (a pointer) to the very first node, so that we can easily get to the subsequent nodes.
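The A-B-C insertion above can be sketched in code. This is a minimal sketch, not the actual csc148 classes: I’m keeping the nxt and front names mentioned above, but the constructor signatures and the to_list helper are my own illustration.

```python
class LLNode:
    """A node in a linked list (sketched after the csc148 class)."""

    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt  # reference (pointer) to the next node


class LinkedList:
    """Minimal wrapper that remembers where the linked list starts."""

    def __init__(self):
        self.front = None  # tells us where to begin traversing

    def to_list(self):
        """Return the values in order by following the nxt references."""
        result, node = [], self.front
        while node is not None:
            result.append(node.value)
            node = node.nxt
        return result


# Insert B between A and C by changing only two references.
ll = LinkedList()
ll.front = LLNode('A', LLNode('C'))        # A -> C
new_node = LLNode('B', ll.front.nxt)       # B now points at C
ll.front.nxt = new_node                    # A now points at B
print(ll.to_list())  # ['A', 'B', 'C']
```

Notice that no other node moved: inserting in the middle touched exactly two references, regardless of how long the list is.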

Assignment 2 and Test 2

This week my focus was on completing assignment 2. It is now Saturday evening, the assignment was submitted two days ago, and it is a huge relief. This assignment was significantly more difficult than the first, and unlike with assignment 1, I do not feel all that confident in my work. The program works and runs as specified, but I think some of our code could be simplified greatly if we had implemented it in a better way. I used to think that code was sufficient as long as it worked; now I want to make my code more efficient and readable, and I feel that I did not achieve this in assignment 2. However, I recognize that we have not yet learned efficiency strategies, so at this moment I do not have the tools to improve the code, which is why I am looking forward to future courses that focus on improving run time and efficiency. In fact, we have already learned at least one way to make an algorithm run faster: use binary search trees (which I wrote about in last week’s blog post).

Another focus of this week is preparing for test 2, so I wanted to dedicate this blog post to talking about trees (the primary content of the test). Last week’s blog post focused on binary trees, a type of tree in which each node has at most 2 children. In general, though, a tree can have any number of children per node and does not impose any particular order on the data contained in each node. As a consequence, in my opinion, it is not as easy to do an operation on a general tree as it is on a binary tree (i.e. finding the length, height, or branching factor). I find it difficult because we can’t just say “look at the left tree” and “look at the right tree”, do some operation, and combine the results. With a general tree, you have to take all of its children into consideration. I found that following the procedure proposed to us in labs and in lecture is very helpful. It goes as follows:

Step #1: Determine an example that is simple and needs no recursion. Then write an if expression that checks for this case.

Step #2: Consider example calls that are not as simple and where recursion is needed (for this step, I usually think of sub-trees).

Step #3: Write code to return what you want for the sample calls in step 2 (this will be your else or elif). Then combine this with step 1.

For example, here is the code for listing the interior nodes with comments on what step each code came from:

def list_interior(t):
    ''' (Tree) -> list

    Return list of values in interior nodes of t.

    >>> t = Tree(0)
    >>> list_interior(t)
    []
    >>> t = descendents_from_list(Tree(0), [1, 2, 3, 4, 5, 6, 7, 8], 3)
    >>> L = list_interior(t)
    >>> L.sort()
    >>> L
    [0, 1, 2]
    '''
    if len(t.children) == 0: #step 1
        return []
    #step 2 is not specifically coding something, it is thinking about how sub-trees are
    #accessed, in this case each sub-tree (or each child) is called recursively
    else:
        return [t.value] + gather_lists([list_interior(c) for c in t.children]) #step 3

As you can see, the code is fairly short, but if not written carefully it is easy to create a wrong implementation. If we follow the procedure described above, it is much easier to come up with the code and we are less likely to make mistakes.
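The code above relies on gather_lists, a course-provided helper that flattens a list of lists one level deep. In case anyone is puzzled by it, here is a plausible sketch of what it does (this is my own reconstruction, not necessarily the exact course implementation):

```python
def gather_lists(list_):
    """Concatenate a list of lists into one flat list (one level deep).

    >>> gather_lists([[0], [1, 2], []])
    [0, 1, 2]
    """
    new_list = []
    for sub in list_:
        new_list += sub  # extend with each sub-list's elements
    return new_list


print(gather_lists([[0], [1, 2], []]))  # [0, 1, 2]
```

So in list_interior, the recursive calls each return a list, and gather_lists merges them into one list before t.value is prepended.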

Binary Trees

We were asked to choose a topic to be graded, please refer to this post on OOP.

This week’s focus was on binary trees. I think I have had the most trouble understanding this topic out of everything we have learned so far. I think what makes it difficult is the fact that most of the code we have encountered so far has recursive solutions. I have identified that my biggest problem is that I do not understand recursion as well as I need to.

So, this week I have been focusing on reinforcing my understanding of recursion. I found a slog by another student (link) that explains this concept very well. I have also been watching a number of YouTube videos (link1, link2), which are very helpful, especially for someone like me who likes to learn through live coding and visual aids.

Now that I have a stronger grasp on the concept of recursion I am able to understand binary trees better.

A key characteristic of a binary tree is that each node can have at most two children. I found it very important to remember this, because in order to do some operation on the tree (i.e. find its height, or find whether it contains some data) we need to note that there are two sub-trees (left and right). This also matters when recursion comes into play. Most of the algorithms we have dealt with in class make a recursive call on the left and on the right, and this works because each of those sub-trees is a binary tree itself. Here is a diagram that helps solidify this statement (each coloured box indicates a tree):

[diagram: a binary tree with the left sub-tree boxed in red and the right sub-tree boxed in blue]

So when we do something to the left tree (boxed in red) and to the right tree (boxed in blue), the results should be combined to solve the whole tree.
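The "solve left, solve right, combine" pattern can be shown with a height function. The BinaryTree class here is a hypothetical minimal shape (value, left, right), not the exact csc148 class:

```python
class BinaryTree:
    """A minimal binary tree node: a value and up to two children."""

    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right


def height(t):
    """Return the number of nodes on the longest root-to-leaf path."""
    if t is None:  # base case: an empty tree has height 0
        return 0
    # Solve the left sub-tree and the right sub-tree recursively,
    # then combine: the whole tree is one node taller than its taller side.
    return 1 + max(height(t.left), height(t.right))


t = BinaryTree(1, BinaryTree(2, BinaryTree(4)), BinaryTree(3))
print(height(t))  # 3, via the path 1 -> 2 -> 4
```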

We also learned about binary search trees, which are binary trees where everything in the left sub-tree is less than the node’s value and everything in the right sub-tree is greater. The first question I had was: what is the point of structuring a tree this way? It turns out that operations such as searching are much more efficient, because the ordering “cuts down” how much of the tree we have to look through (visit). Starting at the root, we can easily determine whether what we are looking for could only be in the left or the right sub-tree; once we know where it could be, we go to that side of the tree and ignore the other side entirely.
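Here is a small sketch of that idea. The Node class is my own hypothetical shape (value, left, right); the point is that each comparison lets us discard an entire side of the tree:

```python
class Node:
    """Hypothetical BST node: everything left is smaller, right is larger."""

    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right


def bst_contains(node, value):
    """Return whether value is in the BST rooted at node."""
    if node is None:
        return False
    if value < node.value:
        return bst_contains(node.left, value)   # ignore the right side
    if value > node.value:
        return bst_contains(node.right, value)  # ignore the left side
    return True  # found it


root = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
print(bst_contains(root, 6))   # True
print(bst_contains(root, 7))   # False
```

Each step visits only one child, so on a balanced tree the search looks at roughly log(n) nodes instead of all n.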

I watched a few videos to help with these concepts as well (link1, link2). I would definitely recommend watching the videos if anyone else is having difficulties understanding binary trees.

OOP

This week we were asked to write [our] summary of Object-Oriented Programming concepts.

To begin, I think it is important to start with what object-oriented programming means.
Object-oriented programming is, in its simplest form, a style or structure of programming that focuses on objects and methods rather than functions.

The main concepts involved in OOP are:

  • objects: these are just “things” in Python that are stored in some memory location

str, dict, and int are all objects
a function is also an object

  • abstraction: this gives rise to abstract data types; essentially, it is the idea that we make things general enough to “hide” the details of implementation and focus instead on the data and its operations
  • classes: a class is like a blueprint for your program; it is a model of what you are trying to accomplish, a data structure. I think of it as a type: much in the same way that str is a type, a class is a type of something (it can be anything) that you want to create

    It has two important characteristics: attributes (nouns that represent the variables of the class) and behaviours (verbs that you turn into operations)

  • methods: they are actions performed by the class (basically, they are the behaviours of the class, or what you want the class to do whenever you create an object or instance of the class)

    A method can be an operation to compute something, to report something, or to change something. All of these actions have to be connected to an instance of the class.

  • inheritance: the creation of a new class that is connected to a parent class or super class; in other words, all methods and variables are incorporated into the new class so that it (the new class) represents a special case of an already existing class

    Another concept connected to inheritance is method overriding, which is when we change or tweak a method inherited from the parent class

  • dot notation: I’m not sure if this is necessarily a formal concept, but I find it useful to know when thinking about OOP.
    Dot notation tells us which methods to use on an instance of a class, or which attributes to find in an instance of a class.

For example, if we have a class called Pet and we create an instance of this class called dog, we can use dot notation to get information about that dog:

>>> dog = Pet('Coco')
>>> dog.name
'Coco'
>>> dog.fetch()
Here, Python looks at dog (the thing on the left of the dot), and once it has found the reference to that object, it looks inside the object for the thing on the right of the dot. In this case it looks for name and fetch(), then executes them as they apply to that specific instance of the class.
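A Pet class matching that session might look like the following sketch (the attribute and method names here are my own illustration, not from the course):

```python
class Pet:
    """A toy class showing one attribute and one method."""

    def __init__(self, name):
        self.name = name  # attribute: a noun describing this instance

    def fetch(self):
        # method: a behaviour this instance can perform
        return self.name + ' fetches the ball!'


dog = Pet('Coco')
print(dog.name)     # Coco
print(dog.fetch())  # Coco fetches the ball!
```

Each instance carries its own name, so a second Pet('Rex') would fetch under its own name without any change to the class.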

Tracing Recursion

This week’s prompt is to “Write your impressions of tracing recursion from week 4”.

I found tracing relatively simple, especially once you understand that recursive functions have base cases and general cases. I also found it very important to understand the smallest sample first; without this understanding, it is next to impossible to trace calls with multiple layers to them. The smallest sample is the key to “exiting” the function, and therefore the key to tracing, since it tells you what the output is at that moment of the trace. Tracing recursive calls progressively (starting with depth level one, then two, then three, and so on) also helps, since it removes repetitive tracing and possible errors. I think these important points are best explained through an example.

For instance, this is the code provided in lab03 handout:

def count_elements(L):
    '''(list or non-list) -> int
    Return 1 if L is a non-list, or the number of non-list elements in
    possibly-nested list L.
    Assume: L is any Python object.
    # examples omitted!
    '''
    if isinstance(L, list):
        return sum([count_elements(x) for x in L])
    else:  # if L is not a list, count it
        return 1

To trace something like this, I would first identify the base case, which is: return 1 when the input is not a list.

Then I would identify the general case, which is: return the sum of the results of calling count_elements on each element of the list. The key to understanding the general case is to think about it in terms of the base case and re-write it in your own words. Here, the general case passes each element of the list into count_elements again and again until eventually the element passed in is no longer a list. When this happens, we are back at the base case and return a 1. Basically, we add 1 each time we encounter a non-list element, so the result is the total number of non-list elements in the whole nested structure.

Now that I have a rough understanding of the code I can start tracing.

count_elements('snork') -> 1

count_elements([2, 1, 3]) -> sum([1, 1, 1]) -> 3

The reason I know the list is [1, 1, 1] is that I know from the trace above that passing in one non-list element returns a 1.

count_elements([7, [2, 1, 3], 9, [3, 2, 4]]) -> sum([1, 3, 1, 3]) -> 8

Note the two 3s in that sum: since I know from tracing the code above that a depth-one list like [2, 1, 3] returns 3, I can just write down a 3 instead of tracing any further.

Another thing to note about this week is that we had our first test of the semester. I personally found it fair: not too difficult, but not too easy either. The questions were similar to the labs, and I think the time I spent practicing with the additional exercises helped me feel confident about the test. I really hope my mark reflects this confidence.

Recursion

This week we focused on the concept of recursion. It is a very different concept that challenges the way I think and requires a little more planning than usual. Three questions came to mind when I first encountered this concept (hopefully my answers to these questions help other students):

What is needed in recursion?

There are two parts to a recursive function.

1st is the base case: the simplest condition, which when solved lets the function return and exit the call.

2nd is the recursive call: a line that calls the function again.
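These two parts can be labeled directly in a tiny made-up example:

```python
def count_down(n):
    """Return a list counting down from n to 1."""
    if n == 0:                          # 1st: base case -- simplest input,
        return []                       # lets us exit the call
    return [n] + count_down(n - 1)      # 2nd: recursive call on a smaller n


print(count_down(3))  # [3, 2, 1]
```

Every call either hits the base case or moves one step closer to it, which is what guarantees the recursion eventually stops.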

How does recursion work?

An easy way for me to understand recursion is to think of it as cloning. Recursion happens when a function calls itself within its own body. When that call is made, the program (temporarily) creates a fresh copy of everything about the function, its variables and its parameters, and this continues until some condition is met to stop the cloning (in other words, until it reaches the base case and exits the function). I also find it useful to think of recursion as solving a task in terms of similar subtasks, so a recursive call means performing a subtask within the larger task.

(Fun fact: “recursive” originates from the Latin verb “recurrere”, which means “to run back”, and this is exactly what a recursive function does: it “runs back” into itself.)

What are the benefits of recursion?

At first I was confused as to why we need recursion. Is it to make the program run faster? Is it to reduce the amount of memory we use? As it turns out, one of the benefits of recursion is simply to make a program neater and simpler; it reduces redundancy because it eliminates similar or duplicate code.

It took a while for me to grasp the concept, and I still have some difficulty with it, especially when trying to code a function recursively. To overcome this, before coding right away I stop to think about the subtasks that are needed and then go from there. If I find myself repeating code, I stop and think about how that code should be changed into a recursive call. I also think that tracing each call on paper, before and after coding, is very helpful in understanding how the code executes.

Why Geeks Need To Know How To Write

Back for a new semester! This time I will be keeping this course blog for CSC148 as opposed to CSC165, so all posts that appear above this one will be for the CSC148 course.

We have been asked in this first week to write about “why geeks need to know how to write”. The first reason that comes to my mind is communication. I have learned that writing involves a great deal of communication, and this is an essential skill. Why? Well, for one, it gets you good grades; for another, it earns you respect from your peers and those around you. An unclear string of words leading to no central idea is nothing to be desired. The greatest part of learning is being able to share and discuss, and one way to do that is to write about your knowledge. If we have poor communication skills, we lose the opportunity to share our knowledge of topics that others might consider difficult or unconventional. Not only that, but we lose the opportunity to articulate new ideas that could contribute to the wider community and improve our understanding.

Another reason writing is important is that it keeps our thoughts organized, especially in computer science, where our code needs to be readable to other programmers. We have to be able to explain our programming decisions if needed, and the only way to do that is to write about them in an organized manner, making sure our logic is fully explained. I also find that when approaching programming problems, writing them out in words is the best way to keep myself from getting frustrated and quitting. Being able to look back on thoughts written out on paper acts as a little debugger: I can see which part doesn’t match my original thought process and even pinpoint which part of my code doesn’t fit in. I find it easier to catch errors when they relate to sentences describing what the code should be doing.

Essentially, writing can improve our overall performance. I can’t deny that it is a skill of great importance for those of us who grab hold of any topic that is interesting and worth learning.

It’s Over

This week was the last official week of material. We did more computability at the beginning of the week and then moved on briefly to induction.

We looked at Cantor’s example, which I will wholly admit is very confusing but also very interesting. Essentially, it is a proof that some infinite sets, like the set of real numbers in (0, 1), cannot be put into one-to-one correspondence with the set of all natural numbers. In writing it makes little sense, but thanks to diagrams the concept is more understandable.

[diagram: Cantor’s diagonal argument, with the diagonal digits highlighted in red]

Cantor said that in a list of all the numbers in (0, 1), if we take the digits on the diagonal (the red numbers) and change each one to some other digit, we end up with a number that is not anywhere in the original list; therefore the set is uncountable.

Another topic for this week was proof by induction. I was introduced to this type of proof in MAT137; however, 165 challenged the way I learned the method. It is a little confusing, and I am still in the process of figuring it out. What 165 suggests is to determine the base cases later, whereas in 137 we always did the base case first and continued with the argument afterwards. With these two conflicting approaches, I need to find the method that works best for me, and the way to do that is to try proving things myself and see which one fits.

Anyway, since this will most likely be the last time I post on this blog about 165, I do have to say I will miss this course a little bit. It was challenging enough to keep me interested; it was difficult, but that meant I was able to learn so much more than I anticipated. Thank you to my TAs and the prof for making it a pretty good first semester.

Now it is time to study for exams!