I have long been of the opinion that I would never get the enjoyment other people get from rereading fiction. I thought my memory of a book would be too good to really let me enjoy it the same way twice, and that I should just read some other book I had never read before.

But I started to hold this opinion when my reading life only had, say, 5 good years behind it, so every book I had ever read was relatively fresh in my mind. Now I realize that of course my memory isn't nearly as good as I give it credit for. I have been rereading The Keys to the Kingdom books by Garth Nix.

The books are relatively short and I wanted to get their details straight in my memory again before reading the last two books in the series. But as I have been reading it has become abundantly clear that not only had I forgotten the details of the books, I had in essence forgotten the whole story.

While it is true that I generally remember what is going to happen a couple of pages or a couple of paragraphs before it happens, I don't think that really diminishes the enjoyment of reading the book very much. Especially since the relevance of details that passed me by on a first reading is now clear.

So I have changed my mind: rereading any type of book, be it fiction or non-fiction, is A-OK by me.

## Monday, December 28, 2009

## Sunday, December 27, 2009

### Violently Ill

I have been a little sick for the past week. But last night this little sickness showed it could be a big bad illness if it wanted to. I went to bed at midnight because I was feeling a little worse than usual and shouldn't have stayed up so late anyway. I woke up around 2:00 with the urge to vomit. I vomited quite a lot, thought I had cleared my stomach, and hoped that now I could get some sleep. But come 4:00 I hurried into the bathroom to vomit, and this time discovered that not only had I not cleared my stomach out completely last time, but in the interim I had developed a case of the squirts. Suffice it to say I am glad that I brought extra underwear with me to Idaho.

Since then I have been vomiting and experiencing diarrhea at roughly one-hour intervals. If you have never experienced rushing to the toilet uncertain whether it would be better to try to vomit first and not crap your pants, or to try and crap without vomiting all over, let me sate your curiosity: it isn't that great.

## Saturday, December 26, 2009

### Merry Christmas

I am up in Idaho visiting my parents for Christmas. I will likely be up here until the 30th, when I will catch a ride back down to Salt Lake with my parents.

This has been a pretty exciting holiday season for my family. My sister Bonnie just recently gave birth to her first child, Zoe, and my sister Jenelle is soon going to be giving birth to her third child. Finally, yesterday (Christmas Day) my sister Christie accepted a marriage proposal.

Merry Christmas

## Wednesday, December 23, 2009

### Grading Schemes

I think most people would agree that a grade is (or is supposed to be) some sort of measure of knowledge, skill, and work. Depending on the particular class it could be almost entirely a measure of knowledge, or of skill, or of work, but usually it is the result of a mixture of all three in varying amounts.

But like all measurements a grade is an imprecise thing, and a grade is more imprecise than most measurements. For one thing, exactly what it is measuring is not clear. But for the moment let us ignore the really difficult questions and simply accept that grades are (for the most part) assigned from the "percentage" that you have in the class. With these simplifications in mind, a grading scheme is simply a partition of the interval [0, 1].

Before we discuss how you might decide on a grading scheme we should talk about the distribution of grade percentages. A grade percentage for an individual is a weighted arithmetic average of their scores on the various exercises, homeworks, etc. throughout the course of the class. Whatever the distributions of individual performances on specific homeworks and tests look like, in the limit of a large (very probably an unrealistically large) number of such tasks and assignments the central limit theorem tells us to expect the distribution of the total grade to look roughly Gaussian (the familiar bell curve).
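The post has no code, but the averaging claim is easy to simulate. Here is a quick sketch of mine (all numbers invented: a student whose per-assignment scores are Gaussian noise around 75%) showing that averaging more assignments narrows the total-grade distribution:

```python
# Simulation (not from the original post) of the claim above: averaging many
# noisy per-assignment scores narrows each student's total-grade distribution.
import random
import statistics

random.seed(0)

def total_grade(n_assignments, mean=0.75, spread=0.15):
    """Average of n noisy per-assignment scores for one simulated student."""
    scores = [min(1.0, max(0.0, random.gauss(mean, spread)))
              for _ in range(n_assignments)]
    return sum(scores) / n_assignments

# The total grade for many simulated "copies" of the same student.
few = [total_grade(3) for _ in range(2000)]    # only 3 assignments averaged
many = [total_grade(30) for _ in range(2000)]  # 30 assignments averaged

# More assignments -> a narrower total-grade distribution.
print(statistics.stdev(few) > statistics.stdev(many))  # prints True
```

With 3 assignments the spread of the total is roughly the per-assignment spread divided by √3; with 30 it shrinks by √30, which is the convergence the paragraph describes.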

Each individual student's score should be thought of as merely an indicator of the nature of that student's performance distribution. Here by performance distribution I mean the distribution of percentage grades that a million copies of the student would get if they were all to take the class. Obviously each copy would do differently in the class. There are many sources of variance here. For instance, the teacher will sometimes pick material that a student has studied intensively to be the main subject of an exam, and sometimes the student will study an area that has almost no bearing on the exam questions. Since both tests are equally valid (or at least conceivably equally valid) we must account for effects like these in our framework for considering grades. Also, each student has an individual variance, meaning that a person will do differently on the same test given under the same conditions.

All of this doesn't really amount to much if we assume that the distribution of a student's performance is more or less the same across different tasks and exams. If that is the case, then no matter what the standard deviation of the underlying distribution for assignments and tests is, we can make the distribution for the total grade arbitrarily narrow: in the limit, when we average scores over a large number of assignments and exams, each student's total percentage grade converges on their average.

For our first foray into grading schemes, then, we will make the further simplifying assumption that each person's score represents a distribution which is a delta function at their score. (A delta function is an idealized spike: a bump with arbitrarily small width but non-vanishing area.) So each individual deterministically gets some characteristic score. This is actually the model that is put into use in reality. The assumption is made either that the spread of an individual student's performance distribution is small compared to the size of the gaps between students' scores, or that information about the spread of an individual's performance is unobtainable, unusable, unimportant... etc.

The most straightforward way to make a grading scheme is simply to pick an arbitrary minimum acceptable passing percentage and then divide the remaining space up evenly into the grade levels. Or, equally easily, just pick arbitrary levels to correspond to the different grades. The only problem (and of course some would say it is not a problem at all) with this sort of approach is that if you don't look at the scores at all when you set the levels for the different grades, then two people whose class percentages differ by 0.01% can receive different grades. If you really believe that the intrinsic variance in the individual total grades is less than 0.01%, then you can at least be confident that the grade difference represents an actual difference. But even when it does represent an actual difference, the difference is so small that it is arguably unfair to give the student 0.005% above the dividing line a B and the student 0.005% below the line a B-.
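As a concrete illustration (my sketch, not anything from the post; the 60% cutoff and the letter set are arbitrary choices), here is the even-division scheme and its knife-edge boundary problem:

```python
# The "pick a passing cutoff, divide the rest evenly" scheme described above.
# Cutoff and letters are arbitrary illustration values.
def even_scheme(passing=0.6, letters=("D", "C", "B", "A")):
    """Return (lower_bound, letter) pairs partitioning [passing, 1]."""
    width = (1.0 - passing) / len(letters)
    return [(passing + i * width, letter) for i, letter in enumerate(letters)]

def assign(percentage, scheme):
    grade = "E"  # anything below the passing cutoff fails
    for lower, letter in scheme:
        if percentage >= lower:
            grade = letter
    return grade

scheme = even_scheme()
# Two students separated by one percent straddle a dividing line:
print(assign(0.595, scheme), assign(0.605, scheme))  # prints: E D
```

The last line is exactly the unfairness the paragraph points at: a tiny difference in percentage becomes a whole letter-grade difference because the boundaries were fixed without looking at the scores.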

If you are the sort of teacher who decides on percentage levels for grades before the percentage distribution of the class is known, then the answers to this problem are simple. You monitor the distribution of the class as it goes along and then give extra credit and/or easier or more assignments as necessary to guide it to something sensible. I view this as a quick-fix tactic rather than an answer to the fundamental question. Less effective but more logical is the tactic of reviewing individual scores which are close to the borderline and looking for reasons to bump the student over the line and give them the higher grade.

If on the other hand you are grading "on the curve," then you are free to take into account all the details of the distribution of grades that came out of a class. For small class sizes (say below 100) it may or may not make sense to grade on a curve, but for large classes with high difficulty levels it becomes a necessity. It can be difficult to accurately gauge the difficulty of an exam or homework, so in a particular semester a professor may give much harder tests on average, or much easier tests overall. Under a preset grading scheme these differences of difficulty could not be taken into account, and therefore the same level of performance in the class could result in two very different grades. Moreover, one student taking the class when the harder exams were given might achieve a C while a less capable student taking the same class during an easier semester could achieve a higher grade. If the difficulty level of the class cannot be kept relatively constant (as is the case in most physics courses) then it is best to "grade on the curve."

At this point the question arises: what is the fairest grading scheme? For the moment let's ignore the variance and uncertainty in an individual's percentage. Furthermore, let us first consider only the relative performance of individuals and leave any absolute measure of performance out of the picture. We take it to be fair that a person's grade reflect their percentage. A higher percentage represents a better performance and therefore should correspond to a higher grade. From this perspective we are in a sense under-rewarding those who are at the high end of a grade level and over-rewarding those who are at the low end. In some way or other, then, we want to minimize the average difference between the low and high percentage grades within one letter-grade level. A very natural way to minimize this "unfairness" is to simply minimize the average difference between any two grades in a grade level. Going one step further, we might minimize the sum of the squares of the differences of the scores, because then we are dealing with the familiar object of standard deviation.
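A minimal sketch of that idea (my illustration with hypothetical scores, not the post's actual procedure): among all ways to cut a sorted score list into k contiguous grade groups, pick the cut that minimizes the total within-group sum of squared deviations. The exhaustive search below is only practical for small classes; for 300 students you would want dynamic programming or 1-D k-means instead.

```python
# Brute-force search for the grade partition minimizing within-group
# squared deviation (fine for small n; illustrative only).
from itertools import combinations

def within_ss(group):
    """Sum of squared deviations from the group mean."""
    m = sum(group) / len(group)
    return sum((x - m) ** 2 for x in group)

def best_partition(scores, k):
    """Try every set of k-1 cut points between sorted scores."""
    s = sorted(scores)
    best, best_cost = None, float("inf")
    for cuts in combinations(range(1, len(s)), k - 1):
        groups = [s[i:j] for i, j in zip((0,) + cuts, cuts + (len(s),))]
        cost = sum(within_ss(g) for g in groups)
        if cost < best_cost:
            best, best_cost = groups, cost
    return best

scores = [0.52, 0.55, 0.71, 0.73, 0.74, 0.90, 0.93]
for group in best_partition(scores, 3):
    print(group)  # the three natural clusters come out as the grade groups
```

This is essentially 1-D k-means (also known as Fisher's natural breaks), which is reassuring: the "fairest" scheme under this criterion is a well-studied clustering problem.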

In my most recent actual grading session we in fact set rough places where the grade divisions should go, and then went through the percentages looking for large gaps between scores where we could put the line. Then we looked at the individual performances of people on each side of the line and considered moving it up or down.
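The gap-hunting step just described might be sketched like this (my illustration with made-up scores; the window size is an arbitrary assumption):

```python
# Gap method sketch: put the dividing line in the middle of the widest
# break between adjacent sorted percentages near a rough target cutoff.
def gap_cutoff(scores, target, window=0.05):
    """Best dividing line within `window` of `target`; falls back to target."""
    s = sorted(scores)
    best_line, best_gap = target, -1.0
    for lo, hi in zip(s, s[1:]):
        mid = (lo + hi) / 2
        if abs(mid - target) <= window and hi - lo > best_gap:
            best_gap, best_line = hi - lo, mid
    return best_line

scores = [0.78, 0.79, 0.81, 0.86, 0.87, 0.88]
print(gap_cutoff(scores, target=0.83))  # line lands in the 0.81-0.86 break
```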

One problem with this gap method when you have 300 people in a class is that a "large" gap is on the order of half a percent. But if we instead look to minimize the standard deviation of each grade group, then a single grade point sitting in the middle of some others won't throw us off.

I am not sure how well the proposed grading scheme would actually work, but I am eager to find out. Perhaps I shall propose it to the next professor under whom I TA.

## Tuesday, December 22, 2009

### Message from the future

So I just received an e-mail message from myself...

ok, i know you won't believe me but I am you from the future ok??? I just have to tell you one thing: do not pet the llama. Leave it alone, don't even look at it!

from

future tim

ps. buy [redacted] some socks when you get some.

## Saturday, December 19, 2009

### LD 16 timelapse: Alien Diaspora

Here is a really low-quality version of the time lapse; I wanted to keep the video file size small. I might make a higher-quality version at some point, but I doubt there will be enough interest in the video to make it worth it.

### Scene Detection Fail

So I am trying to edit my time-lapse video of the Ludum Dare competition so that I can upload it to YouTube for all and sundry to see. I am using VideoSpin to do it, and it has this kinda nice feature where it can autodetect scenes in your movie... But unfortunately for me it, not surprisingly, does not detect scenes well when confronted with a bunch of mostly disconnected screenshots. Apparently my time lapse has about 200 distinct "scenes"...

## Thursday, December 17, 2009

### Assigning Grades

So recently I myself partook in assigning grades to students in a class. The process for Physics 2210 went something like this. First we looked at the rough distribution of scores and compared it to the distribution of total scores from previous years of the class. We (though basically just Professor Ailion) decided on good ranges for the breaks between grade levels, based both on the actual scores and, to some extent, on gaps in the continuum of percentages. With 297 people in the class, a "large" gap between scores was 7/10ths of a point.

After deciding on the rough placement of the dividing lines between grades, we looked at the people on either side of each line and talked about whether or not to bump them over or under it. For instance, in this particular 2210 class the first test was by far the easiest: it had an average score of 74 and quite a number of people got 100s on it. The way the class is set up you are allowed to drop your lowest tests. So if you hadn't taken the first test (because of adding the class late, etc.) but had done well on the other exams, we took that as a reason to bump up a person's grade.

Of course we spent much more time arguing about where to put the A to A- transition line than about where to put the D to D- line, even though, somewhat ironically, the lower end of the scores was more densely populated than the top: a difference of a point near the A region might affect 3 or 4 people, but a change of a point down in the B- to C+ range might encompass 20 people.

But I find the contrast between this realm of grading and the graduate realm of grading rather amusing. The grades for my graduate E+M course are posted for the entire class on the class web page (thinly disguised behind student ID numbers). There was quite a wide range of performances, going from over 300 points down to only half that, around 160 points. I have, however, omitted an anomaly. There is a student enrolled in the course who does not come to class and rarely turns in homework, though the student does come to take the tests. At any rate this student got only 60-some-odd points in the class... almost a third of the points of the second-lowest score in the class and less than 20% of the total possible points.

I admit that I was expecting this student to receive a failing grade in the course. But graduate classes are a very different world from the world of the 300-student undergraduate physics course. For someone in the Ph.D. program a B- is a failing grade, because only B's and above can be counted as progress towards a degree. I had internalized this knowledge already, but had somehow assumed that surely one could achieve a lower grade with a truly dismal performance in a class... But now that I have evidence of a truly dismal performance, I see it rightly paired with its failing grade... it is just that that failing grade is a B-.

## Tuesday, December 15, 2009

### Ligo Music

LIGO, the Laser Interferometer Gravitational-Wave Observatory, is basically a device which constantly measures its own length. But it measures this length with a precision smaller than the size of a proton. In fact the measurement is basically as good as it can get, since the precision is starting to bump up against the de Broglie wavelength of the mirrors whose relative position the device is measuring.

### Exam Proctoring

You know, it is a very different world being the one giving an exam instead of the one taking it. All semester I have been proctoring exams for the class I TA. Every time I do it I have to make myself look at other people, at their exams, and at what they are doing. This goes totally counter to my built-in test reflexes. As a test taker you are not supposed to look at anybody else, and you are definitely not supposed to look at their notes or their exams.

After having been on the side of the test takers for all these years, the furtive test-taking behavior has become automatic, so now that I am actually a TA and am actually supposed to be watching people, I have to fight it. It is actually pretty interesting to see what happens during a test. Since you get to sit and watch people, you see that this person is having a hard time with problem 3, or that person did problem 4a right but is doing 4b wrong, etc. It isn't two hours' worth of interesting, though, but then again you wouldn't expect it to be; I'm getting paid to do this.

### Ludum Dare is Over: Alien Diaspora is just beginning

I didn't take the time to make any posts in the last 24 hours of the game competition, since the rate of my coding greatly picked up. Even coding like mad I ended up finishing the game-over screen only an hour before the deadline, and in the last hour I tried py2exe to make a nice Windows executable. py2exe, however, did not work for me: it needed some DLL that it couldn't find. So I submitted it to the contest as just source code and content zipped up in a folder.

I decided to call the game Alien Diaspora. I am surprised by how much I actually got done during the competition. Even though I didn't get nearly as far on the AI as I would have liked, I still coded 3 levels of upgrades to the pathfinding algorithms the bots use. The highest level of pathfinding is still not very good, but it is sufficient to deal with the type and size of mazes my game generates. If I had had even another 2 hours or so after the competition deadline, I would have implemented the ability to scroll around a maze, so that I could have made the mazes bigger than the game screen, and added proper information displays to the game, etc.
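The post doesn't show the bots' actual pathfinding code, but as a hedged illustration, here is about the simplest thing that could fill that role: a breadth-first search over a grid maze. The maze layout, wall character, and coordinates below are my own invented example, not anything from the game.

```python
# Minimal BFS pathfinder for a grid maze; walls are '#', open cells '.'.
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # walk the prev links back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

maze = ["..#.",
        "..#.",
        "....",
        ".#.."]
print(bfs_path(maze, (0, 0), (0, 3)))  # routes around the wall via row 2
```

BFS always finds a shortest path on an unweighted grid, which is probably why something like it makes a natural top tier in a ladder of pathfinding upgrades (random walk, wall-following, full search).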

As the game stands now it is not very fun to play, and this is finals week so I certainly don't have time to put much more into it. I plan on actually making it fun to play at some point, and I want to make a Windows executable out of it, because if I don't it will never get played by much of anyone. If you actually want to download it you can see it and all the other LD 16 games at the Ludum Dare site; my game again is Alien Diaspora, and since they are sorted alphabetically it is near the top.

## Saturday, December 12, 2009

### Ludum Dare #16 end of day 1 progress

I am doing significantly better today than yesterday, which I suppose is to be expected. I ended up not being able to fall asleep yesterday until 6:00 AM. I will not go into the reasons except to say that it had something to do with the fact that I drank somewhere between 3 and 5 gallons of water yesterday.

At any rate, today I corrected the display issues and put an agent, gold, and a fog of war on the map! At the moment the gold cannot be collected and the agents are only capable of completely random movement. Here is how it looks:

Not terribly inspiring perhaps, but I am still pretty happy with it, considering I didn't remember Python nearly as well as I thought I did and I didn't learn pygame nearly as fast as I would have liked (though that part wasn't surprising).

While it isn't anything amazing for 24 hours of progress, it should still mean that I can get something playable done inside of 48 hours.

### 5:00 is early in the morning

Yup, I stayed up just so that I could hit 5:00, but I am now really truly going to go to sleep. Night.

### Throwing in the towel for tonight

I've got a maze displaying, but it doesn't display the way it should in order to make the game playable. To make certain that I get a game that is actually playable tomorrow, I am going to make the game into a simple maze navigation game: you are a red dot controlled by the keyboard; move to the exit. My plan of attack is to begin with keyboard navigation, then add a fog of war, and then add money collection. After that is done I shall consider making an appropriate random maze generator. Once I get all of that done I should have something vaguely resembling a playable game. But for tonight I think I have had quite enough. Though it is tempting to stay up another few hours just so that I will have been up for 24 hours straight... hmm...
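The plan above can be sketched in a few lines, leaving pygame out of it. This is not my actual LD code, just an illustration of the core "red dot in a grid maze" movement logic; the grid, `try_move`, and `at_exit` names are all made up for the example:

```python
# Minimal sketch of grid-maze movement logic (no pygame, not the real game).
# '#' is a wall, '.' is open floor, 'E' is the exit.
GRID = [
    "#####",
    "#..E#",
    "#.#.#",
    "#...#",
    "#####",
]

MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def try_move(pos, direction):
    """Return the new position, or the old one if a wall blocks the move."""
    dx, dy = MOVES[direction]
    x, y = pos[0] + dx, pos[1] + dy
    if GRID[y][x] == "#":
        return pos
    return (x, y)

def at_exit(pos):
    return GRID[pos[1]][pos[0]] == "E"

# Walk the red dot from (1, 3) up and over to the exit at (3, 1).
pos = (1, 3)
for step in ["up", "up", "right", "right"]:
    pos = try_move(pos, step)
print(pos, at_exit(pos))  # → (3, 1) True
```

In the real game the `step` values would come from keyboard events instead of a hard-coded list, but the wall-blocking check stays the same.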

### LD progress report

Well... considering that I first used Python about a year ago and haven't used it for about 8 months, I would say that I am doing ok. I have been relearning Python at the same time as deciding how to organize my code. On top of relearning Python I am trying to learn the use of pygame, which is proving to be rather unfortunately time intensive. Of course when you throw in a little time wasting (like writing blog posts, for instance), the fact that I have been averaging about 15 lines of code per hour is really not all that surprising, though perhaps a bit disappointing.

## Friday, December 11, 2009

### Ludum Dare #16 Theme: Exploration

So I have gotten all stocked up for the weekend with a bunch of bagels and a toaster in my room. I may not go out of my room all weekend except to go to the bathroom, which will consequently also be the source of my drinking water. Ludum Dare #16 has begun, and even as it starts I already know vaguely what I want to do for the game.

Before the competition started I had convinced myself that I wanted to do a game where some sort of AI is the center of the gameplay. Although it would depend on what the theme eventually turned out to be, I was pretty convinced that what I wanted to do was a game which involved buying upgrades for some sort of minion. If the theme had ended up being something like "unwanted powers" then I would probably have been forced to come up with a different idea. The theme ended up being exploration.

Adapting my idea of getting better and better minions to do your work to this theme is almost too easy. You begin in a maze and your goal is to find the way out. You start with a fog of war everywhere except the start point and the end of the maze. But unlike in a normal maze game, the player has no direct control over the exploration of the maze. Instead the player relies on robot minions: the player merely lets the robots roam the maze collecting money. Money can then be used to buy more robots or upgrades for existing robots.
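The core loop described above can be mocked up in a handful of lines. This is a toy sketch under my own assumptions (an open 8x8 grid, a few hand-placed gold cells, purely random robot movement), not anything from the actual entry:

```python
import random

# Toy sketch: robots random-walk a grid, revealing fog of war and picking
# up money as they stumble onto it. Grid size and gold positions are made up.
random.seed(0)

WIDTH, HEIGHT = 8, 8
gold = {(3, 3), (6, 2), (1, 5)}   # cells containing money
revealed = {(0, 0)}               # fog of war: only the start is visible
robots = [(0, 0), (0, 0)]         # two robots at the start point
money = 0

for _ in range(200):              # simulate 200 ticks
    for i, (x, y) in enumerate(robots):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = x + dx, y + dy
        if 0 <= nx < WIDTH and 0 <= ny < HEIGHT:   # stay on the map
            robots[i] = (nx, ny)
            revealed.add((nx, ny))                 # lift the fog here
            if (nx, ny) in gold:
                gold.discard((nx, ny))
                money += 1        # in the real game this buys robots/upgrades

print("money:", money, "cells revealed:", len(revealed))
```

Everything the player actually does in the design, buying robots and upgrades, would hang off the `money` counter; the random walk is only the starting point for smarter movement later.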

The details of the game might be hard to make work in such a way that the game is fun, especially when I only have 48 hours to code the game, which means essentially 0 hours to play test it beyond making sure that it doesn't just crash. There goes the first 30 minutes! That is fully 1% of the total time I have to program the entire game and make all of the content!!!

## Thursday, December 10, 2009

### Naming Functions: Why our love of the closed form is holding us back

When communicating a function it is necessary to have a name for it. So if we wish to communicate the function which takes as input a real number and returns the product of that number with itself, we say f = x^2 or f = x*x. These are all names for the same object, just as 1.25 is the same object as 5/4. But more profoundly, this "name" for the function gives us a handle by which we are able to hold it in our minds. Because of this, x^2 becomes more than just a name for the object that corresponds to that particular function; it IS that function. In fact, were I to name the function differently, say for instance f = x^2*(sin^2(2x) + cos^2(2x)), it is very likely that someone would tell me "but that is really just x squared!"

While the above example may seem rather contrived (I merely invented a "name" which obviously reduces to the representation x^2), it is generally true that the representation of a function is so important to the way that we think about the function that knowing the representation is considered the same as "knowing" the function itself. For someone to be considered to "know" a function, it is not sufficient for them to know a description of the function such as df/dx = 1 and f(2) = 3. Although this description uniquely determines a function, I wouldn't be considered to "know" the function until I had provided the particular representation f = x + 1.

This preference for representing functions in a particular way is overwhelmingly strong. That particular type of representation is what is known as a "closed form". A closed form of a function is a representation which describes the function in terms of a finite combination of special functions and operations. The operations are of course addition, subtraction, multiplication, division, and composition, composition being putting the output of one function into another, e.g. f(g(x)). The special functions change depending on who you ask about what a "closed form" is. The strictest definition would have the only special functions be the constant functions and the function f(x) = x, in which case the set of functions for which we would be able to create closed forms would be the polynomials. Usually, though, it is understood that certain very common functions such as ln(x), e^x, sin(x), and arcsin(x) should also be included. The functions which can be put into closed form with these few operations and special functions are the stuff with which we almost exclusively deal. When we allow other types of operations such as integration, often when we put nice simple closed forms in for the integrand, the function we get out cannot be expressed in closed form. For instance a very useful function known as the dilogarithm is the integral from 0 to x of -ln(1-t)/t dt.

Early on in one's mathematical education the mere existence of such functions seems nonsensical. How can there exist functions that you can't write down? But of course you can write down the dilogarithm: you write it down as the integral I just described. But it seems so strange to not be able to write that function in our more normal basis that, in some sense, it and the infinite number of functions like it become somehow not quite really functions. Our inability to name the function in our usual way means that our ability to think about that function is also impeded. We deal with this by effectively adding those functions which cannot be written in closed form to our set of special functions, simply assigning each a symbol; in the case of the dilogarithm, the symbol is Li2(x). Alternatively, and more powerfully, we can simply add the operations of integration and differentiation to our list of allowed operations. But integration is not as well understood an operation as addition or multiplication, which is why it is often difficult or impossible to evaluate a complicated integral.

Here we stumble upon the real criterion which an expression of a function needs to meet in order for it to be an effective means of communicating that function: any effective representation needs to be as easy to evaluate as possible. By this criterion a "closed form" of a function is not always the best means of communicating it. For instance, in the case of the dilogarithm the integral cannot be computed using the standard methods of integration (obvious, considering that the resulting function cannot be represented with the standard methods of representation), and so the numerical evaluation of the dilogarithm at a point using the integral representation is at best awkward. But the Taylor series for the dilogarithm is extremely simple, and the value of the dilogarithm can be quickly and easily calculated using it to within the required accuracy (within reason, of course).
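To make the "easy to evaluate" point concrete (this example is mine, not part of the original argument): the Taylor series of the dilogarithm about 0 is Li2(x) = sum over k >= 1 of x^k / k^2, and truncating it gives a perfectly serviceable evaluator for |x| < 1, far more pleasant than attacking the integral of -ln(1-t)/t directly:

```python
import math

def li2(x, terms=200):
    """Dilogarithm via its truncated Taylor series: sum of x**k / k**2."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

# Sanity check against a known closed-form value:
# Li2(1/2) = pi^2/12 - ln(2)^2/2.
exact = math.pi**2 / 12 - math.log(2)**2 / 2
print(li2(0.5), exact)
```

For x = 1/2 the terms shrink like (1/2)^k, so 200 terms is overkill; the truncated series agrees with the closed-form value to machine precision.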

But if someone were to specify a function by merely listing the values of the first few coefficients of its Taylor series, they would not be considered to really "know" the function. In order to "know" a function it is necessary to be able to express that function in closed form. Here I use closed form in a generalized sense where any well defined operations are allowed and the only special functions are the constant functions and the identity function. For a mathematician, in order to really know a function it really is necessary to be able to put it into this more general type of closed form. In general, however, we should be less picky about how we are allowed to describe functions in order to "know" them.

The situation is similar to the situation with our system of numbers. Early on, our systems of numeration were limited to small positive integers. A number like 1000000000000000000 was too large to be given a name, and a number like 2.3 too strange. Over time numbering systems became better, and stranger and stranger numbers were allowed into the club of numbers that could be named. One of the biggest leaps in systems of numeration comes from the ability to specify ratios. With the ability to specify a number like a/b, where a and b are integers, numbers like 3.5 become things that one can use and reason with. But of course there are still numbers that are left out by this scheme; in fact almost every number cannot be specified as a fraction. The existence of numbers like the square root of 2 (which cannot be written as a fraction) is very much like the existence of functions like the dilogarithm. When we first encountered numbers like sqrt(2) they were frightening and mysterious, but they are now routine. And while it is important for me to be able to specify a number like the square root of 2 in closed form as sqrt(2), it is at least equally important that I am not paralyzed by the fact that its decimal representation 1.41 etc. is not exact. If I can specify a sufficient number of digits of sqrt(2), it should be considered that I "know" the number. In fact this is very much the reality of the way that people think about numbers now: if I specify a number only up to some precision, it is understood that the number could vary around the value that I specified by a small amount. The fact that I do not EXACTLY specify the number is unimportant.

Similarly, it is important that we begin to think about functions in much the same way that we think about numbers. Functions should be something that we specify in the most convenient way, with no preference between closed forms and more general forms such as the first few terms of a Fourier expansion, just as we do not look down on specifying a number as 1.25 instead of 5/4. When functions are specified in these more general bases it should become second nature to us (as it has become for numbers) to interpret that as a valid representation of the function. The fact that the actual function could vary somewhat in an L2 ball around the specified function should not bother us.
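The sqrt(2) point has a tidy computational side (again my own illustration, not the post's): Newton's method roughly doubles the number of correct digits per step, so "knowing sqrt(2) to any required precision" is cheap even though its decimal expansion never terminates:

```python
# Newton's method for sqrt(2): iterate x -> (x + 2/x) / 2, the Newton step
# for f(x) = x^2 - 2. A handful of iterations from x = 1 reaches machine
# precision, even though the exact decimal expansion is infinite.
x = 1.0
for _ in range(6):
    x = (x + 2 / x) / 2
print(x)
```

In the post's terms: the iteration never hands you the closed form sqrt(2), only ever-better approximations, and that is entirely good enough to "know" the number.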

## Tuesday, December 8, 2009

### This is an amazing sandwich

So this might be in part due to the fact that I haven't really eaten much of anything all day today until now. But I just walked to Gandalfo's and ordered a Whitestone Bridge with marinated mushrooms and it is seriously on the list of my all time favorite sandwiches. The Whitestone Bridge is by itself a pretty good sandwich but the mushrooms really lift it a level or three.

In case you were wondering the answer is yes, I am writing random blog posts because I am avoiding doing something. (Studying for my E&M final tomorrow)

### Ludum Dare

Although I really should spend the weekend studying for the final in my Mathematical Methods for Physicists class on Thursday, I am instead going to work all weekend on coding up a game for a game competition. The competition is Ludum Dare, a Latin phrase meaning "to give play". I wonder if the sense of the word "dare" as a proposed challenge has anything to do with it being the Latin form of the verb "to give".

The Ludum Dare competition is a 48 hour competition wherein first a theme is voted on and selected, and then the entrants are given 48 hours during which they must code a game from scratch. Team entries are not allowed, so you have to do it yourself, and although preexisting code bases are allowed (so long as they are freely available to everyone before the competition), all game logic and all content (both graphical and musical) is supposed to be generated during the competition. I haven't the foggiest idea how much of a game I can really expect to code up in 48 hours, especially since I have never coded any means of taking user input that wasn't from the terminal (well... at least I haven't coded it well). On top of that, I don't want to screw up my sleep schedule for the competition too badly, so I will not be staying up the full 48 hours to code, though that would be pretty epic (albeit likely counterproductive).

Regardless of whether or not something playable comes out of the experiment, though, I am going to make a time lapse screen recording of the two days, which I'm pretty sure would be entertaining for me to watch even if the development of my code turns out not to be much to look at.

Wish me luck!

## Thursday, December 3, 2009

### 8-Bit Theater is AWESOME!!!

It has been an extremely long time since I read the 8-Bit Theater comic, but playing Final Fantasy 1 for the first time reminded me of it. If you haven't read 8-Bit Theater, you should.

Here is the very beginning:

http://www.nuklearpower.com/2001/03/02/episode-001-were-going-where/

8-Bit Theater succeeds in being continuously hilarious while also having a vague continuity of story, both of which are rare qualities. I would say more in praise of 8-Bit, but basically you just have to read it for a bit and let the awesomeness suffuse you. If you do not get hooked in the 5 minutes it will take you to read the first 10 pages or so, then see a doctor; something may be medically wrong with you.

Cannot write more, must reread 8-Bit.
