Monday, December 28, 2009

Rereading Books

I have long been of the opinion that I would never get the enjoyment other people get from rereading fiction. I thought my memory of a book would be too good to let me enjoy it the same way twice, and that I should just read some other book I had never read before.

But I formed this opinion when my reading life only had, say, five good years behind it, so every book I had ever read was relatively fresh in my mind. Now I realize that of course my memory isn't nearly as good as I give it credit for. I have been rereading The Keys to the Kingdom books by Garth Nix.

The books are relatively short and I wanted to get their details straight in my memory again before reading the last two books in the series. But as I have been reading it has become abundantly clear that not only had I forgotten the details, I had in essence forgotten the whole story.

While it is true that I generally remember what is going to happen a couple of pages or a couple of paragraphs before it happens, I don't think that really diminishes the enjoyment of reading the book very much. Especially since the relevance of details that passed me by on a first reading is now clear.

So I have changed my mind: rereading any type of book, be it fiction or non-fiction, is A-OK by me.

Sunday, December 27, 2009

Violently Ill

I have been a little sick for the past week. But last night this little sickness showed it could be a big bad illness if it wanted to. I went to bed at midnight because I was feeling a little worse than usual and shouldn't have stayed up so late anyway. I woke up around 2 with the urge to vomit. I vomited quite a lot and thought I had cleared my stomach and hoped that now I could get some sleep. But come 4:00 I hurried into the bathroom to vomit and this time discovered that not only had I not cleared my stomach out completely last time but in the interim I had developed a case of the squirts. Suffice it to say I am glad that I brought extra underwear with me to Idaho.

Since then I have been vomiting and experiencing diarrhea at roughly one-hour intervals. If you have never experienced rushing to the toilet uncertain whether it would be better to vomit first and try not to crap your pants, or to crap first and try not to vomit all over, let me sate your curiosity: it isn't that great.

Saturday, December 26, 2009

Merry Christmas

I am up in Idaho visiting my parents for Christmas. I will likely be up here until the 30th, when I will catch a ride back down to Salt Lake with my parents.

This has been a pretty exciting holiday season for my family. My sister Bonnie just recently gave birth to her first child Zoe and my sister Jenelle is soon going to be giving birth to her 3rd child. Finally, yesterday (Christmas day) my sister Christie accepted a marriage proposal.

Merry Christmas

Wednesday, December 23, 2009

Grading Schemes

I think most people would agree that a grade is (or is supposed to be) some sort of measure of knowledge, skill, and work. Depending on the particular class it could be almost entirely a measure of knowledge, or of skill, or of work, but usually it results from a mixture of all three in varying proportions.

But like all measurements a grade is imprecise, and it is more imprecise than most measurements. For one thing, what exactly it is measuring is not clear. But for the moment let us ignore the really difficult questions and simply accept that grades are (for the most part) assigned from the "percentage" that you have in the class. With these simplifications in mind, a grading scheme is simply a partition of the interval [0, 1].

Before we discuss how you might decide on a grading scheme we should talk about the distribution of grade percentages. A grade percentage for an individual is a weighted arithmetic average of their scores on the various exercises, homeworks, exams, etc. throughout the course of the class. Whatever the distributions of individual performances on specific homeworks and tests look like, in the limit of a large (very probably an unrealistically large) number of such tasks and assignments, the central limit theorem lets us expect the distribution of the total grade to look roughly Gaussian (the familiar bell curve).
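The central limit theorem claim is easy to see numerically. Here is a quick sketch (nothing from the post; the skewed per-assignment distribution and all the numbers are invented) showing that even a lumpy assignment-score distribution gives a roughly bell-shaped distribution of total grades once enough assignments are averaged:

import random

# Each assignment score is drawn from a deliberately non-Gaussian distribution:
# usually a decent score, occasionally a bombed assignment.
def assignment_score():
    if random.random() < 0.8:
        return random.uniform(0.7, 1.0)
    return random.uniform(0.0, 0.5)

# A student's total grade is the (unweighted) average over many assignments.
def total_grade(n_assignments=40):
    return sum(assignment_score() for _ in range(n_assignments)) / n_assignments

# Crude text histogram of 1000 simulated total grades.
grades = [total_grade() for _ in range(1000)]
for lo in [0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90]:
    count = sum(lo <= g < lo + 0.05 for g in grades)
    print("%.2f-%.2f %s" % (lo, lo + 0.05, "#" * (count // 10)))

The histogram piles up around the mean (roughly 0.73 with these made-up numbers) and thins out on both sides, which is the bell curve the paragraph above is talking about.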

Each individual student's score should be thought of as merely an indicator of the nature of that student's performance distribution. By performance distribution I mean the distribution of percentage grades that a million copies of the student would get if they all took the class. Obviously each copy would do differently in the class. There are many sources of variance here. For instance, the teacher will sometimes pick material that a student has studied intensively as the main subject of an exam, and sometimes the student will study an area that has almost no bearing on the exam questions. Since both exams are equally valid (or at least conceivably equally valid) we must account for effects like these in our framework for considering grades. Each student will also have an individual variance, meaning that a person will do differently on the same test given under the same conditions.

All of this doesn't really amount to much when we act under the assumption that a student's performance distribution is more or less the same across different tasks and exams. If that is the case, then no matter what the standard deviation of the underlying distribution for assignments and tests is, we can make the distribution for the total grade arbitrarily narrow: in the limit where we average scores over a large number of assignments and exams, each student's total percentage grade converges on their average.

For our first foray into grading schemes, then, we will make the further simplifying assumption that each person's score represents a distribution which is a delta function at their score (a delta function is an idealized spike, a bump with arbitrarily small width but non-vanishing area), so that each individual deterministically gets some characteristic score. This is actually the model that is put into use in reality. The assumption is made either that the spread of an individual student's performance distribution is small compared to the size of the gaps between students' scores, or that information about the spread of an individual's performance is unobtainable, unusable, unimportant, etc.

The most straightforward way to make a grading scheme is simply to pick an arbitrary minimum acceptable passing percentage and then divide the remaining space evenly into the grade levels, or just as easily to pick arbitrary levels to correspond to the different grades. The only problem (and of course some would say it is not a problem at all) with this sort of approach is that if you don't look at the scores at all when you set the levels for the different grades, then two people whose class percentages differ by 0.01% can receive different grades. If you really believe that the intrinsic variance in the individual total grades is less than 0.01% then you can at least be confident that the grade difference represents an actual difference. However, even when it does represent an actual difference, the difference is so small that it is arguably unfair to give the student who is 0.005% above the dividing line a B and the student 0.005% below the line a B-.
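To make the border-line complaint concrete, here is a minimal sketch of the "pick a passing cutoff and divide the rest evenly" scheme (the 60% floor, the letter set, and the failing-grade label are illustrative choices of mine, not anything from the post):

# Divide [passing, 1] evenly among the given letter grades.
def even_partition(passing=0.60, letters=("D", "C", "B", "A")):
    width = (1.0 - passing) / len(letters)
    return [(passing + i * width, letter) for i, letter in enumerate(letters)]

def assign(percentage, scheme, failing="E"):
    grade = failing
    for lower_bound, letter in scheme:
        if percentage >= lower_bound:
            grade = letter
    return grade

scheme = even_partition()
print(assign(0.7001, scheme))   # 'C'
print(assign(0.6999, scheme))   # 'D' -- a hair apart, a full letter apart

The last two lines are exactly the problem described above: two students separated by a sliver of a percent land in different letter grades.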

If you are the sort of teacher who decides on percentage levels for grades before the percentage distribution of the class is known, then the answers to this problem are simple. You monitor the distribution of the class as it goes along and then give extra credit and/or easier or more assignments as necessary to guide it to something sensible. I view this as a quick-fix tactic rather than an answer to the fundamental question. Less effective but more logical is the tactic of reviewing individual scores which are close to the border line and looking for reasons to bump the student over the line and give them the higher grade.

If on the other hand you are grading "on the curve" then you are free to take into account all the details of the distribution of grades that came out of a class. For small class sizes (say below 100) it may or may not make sense to grade on a curve, but for large classes with high difficulty levels it becomes a necessity. It can be difficult to accurately gauge the difficulty of an exam or homework, so in a particular semester a professor may give much harder tests on average, or much easier tests overall. Under a preset grading scheme these differences of difficulty could not be taken into account, and therefore the same level of performance in the class could result in two very different grades. Moreover, one student taking the class when the harder exams were given might receive a C while a less capable student taking the same class during an easier semester could receive a higher grade. If the difficulty level of the class cannot be kept relatively constant (as is the case in most physics courses) then it is best to grade on the curve.

At this point the question arises: what is the fairest grading scheme? For the moment let's ignore the variance/uncertainty in an individual's percentage. Furthermore, let us first consider only the relative performance of individuals and leave any absolute measure of performance out of the picture. We take it to be fair that a person's grade reflect their percentage: a higher percentage represents a better performance and therefore should correspond to a higher grade. From this perspective we are in a sense under-rewarding those who are at the high end of a grade level and over-rewarding those who are at the low end. In some way or other, then, we want to minimize the average difference between low and high percentage grades within one letter grade level. A very natural way to minimize this "unfairness" is simply to minimize the average difference between any two grades within a grade level. Going one step further, we might minimize the sum of the squares of the differences of the scores, because then we are dealing with the familiar object of standard deviation.
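The "minimize the spread within each letter grade" idea can be phrased as a small optimization problem: sort the percentages and choose the cut points so that the total within-group sum of squared deviations is as small as possible. Below is a rough dynamic-programming sketch of that (for sorted one-dimensional data the optimal groups are contiguous, so this is just 1-D clustering); the objective is the one named in the paragraph above, but everything else, including the example scores, is mine:

# Sum of squared deviations from the mean within one group of scores.
def within_group_cost(scores):
    mean = sum(scores) / len(scores)
    return sum((s - mean) ** 2 for s in scores)

# Split the sorted scores into k contiguous groups minimizing total cost.
def best_partition(scores, k):
    scores = sorted(scores)
    n = len(scores)
    INF = float("inf")
    cost = [[INF] * (k + 1) for _ in range(n + 1)]   # cost[i][j]: first i scores, j groups
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(j, n + 1):
            for m in range(j - 1, i):
                c = cost[m][j - 1] + within_group_cost(scores[m:i])
                if c < cost[i][j]:
                    cost[i][j], cut[i][j] = c, m
    groups, i = [], n
    for j in range(k, 0, -1):        # walk the cut points back out
        m = cut[i][j]
        groups.append(scores[m:i])
        i = m
    return list(reversed(groups))

percentages = [0.52, 0.58, 0.61, 0.66, 0.71, 0.72, 0.74, 0.83, 0.86, 0.95]
for letter, group in zip("DCBA", best_partition(percentages, 4)):
    print(letter, group)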

In my most recent actual grading session we in fact set rough places where the grade divisions should be and then went through the percentages looking for large gaps between scores where we could roughly put the line. Then we looked at the individual performances of people on each side of the line and considered moving the line up or down.

One problem with this gap method when you have 300 people in a class is that a "large" gap is on the order of half a percent. But if we were to apply the idea of minimizing the standard deviation of each grade group, then a single grade point sitting in the middle of some others wouldn't throw the scheme off.

I am not sure how well the proposed grading scheme would actually work, but I am eager to find out. Perhaps I shall propose it to the next professor under whom I TA.

Tuesday, December 22, 2009

Message from the future

So I just received an e-mail message from myself...

ok, i know you won't believe me but I am you from the future ok??? I just have to tell you one thing: do not pet the llama. Leave it alone, don't even look at it!

from
future tim

ps. buy [redacted] some socks when you get some.

Saturday, December 19, 2009

LD 16 timelapse: Alien Diaspora

Here is a really low quality version of the time lapse, but I wanted to keep the video file size small. I might make a higher quality version at some point, but I doubt there will be enough interest in the video to make it worth it.

Scene Detection Fail

So I am trying to edit my time lapse video of the Ludum Dare competition so that I can upload it to YouTube for all and sundry to see. I am using VideoSpin to do it, and it has a kind of nice feature where it can autodetect scenes in your movie... But unfortunately for me it, not surprisingly, does not detect scenes well when confronted with a bunch of mostly disconnected screenshots. Apparently my time lapse has about 200 distinct "scenes"...

Thursday, December 17, 2009

Assigning Grades

So recently I myself partook in assigning grades to students in a class. The process for Physics 2210 went something like this. First we looked at the rough distribution of scores and compared it to the distribution of total scores for previous years of the class. We (though basically just Professor Ailion) decided on good ranges for the breaks between grade levels, based both on the actual scores and, to some extent, on gaps in the continuum of percentages. With 297 people in the class, a "large" gap between scores was 7/10ths of a point.

After deciding on the rough placement of the dividing lines between grades, we looked at the people on either side of each line and talked about whether or not to bump them over it. For instance, in this particular 2210 class the first test was by far the easiest: it had an average score of 74 and quite a number of people got 100s on it. The way the class is set up you are allowed to drop your lowest test scores. So if you hadn't taken the first test (because of adding the class late, etc.) but had done well on the other exams, we took that as a reason to bump up a person's grade.

Of course we spent much more time arguing about where to put the A to A- transition line than about where to put the D to D- line, even though, somewhat ironically, the lower end of the scores was more densely populated than the top, so that moving the line a point near the A region might affect 3 or 4 people while moving it a point down in the B- to C+ range might encompass 20 people.

But I find the contrast between this realm of grading and the graduate realm of grading rather amusing. The grades for my graduate E&M course for the entire class are posted on the class web page (thinly disguised behind student ID numbers). In the class there was quite a wide range of performances, going from over 300 points down to only half that, around 160 points. I have, however, omitted an anomaly. There is a student enrolled in the course who does not come to class and rarely turns in homework, though the student does come to take the tests. At any rate this student got only 60-some-odd points in the class... almost a third of the points of the second lowest score in the class and less than 20% of the total possible points.

I admit that I was expecting this student to receive a failing grade in the course. But graduate classes are a very different world from the world of the 300-student undergraduate physics course. For someone in the Ph.D. program a B- is a failing grade, because only B's and above can be counted as progress towards a degree. I had internalized this knowledge already, but had somehow assumed that surely one could achieve a lower grade with a truly dismal performance in a class... Now I have evidence of a truly dismal performance and I see it rightly paired with its failing grade... it is just that that failing grade is a B-.

Tuesday, December 15, 2009

Ligo Music



The LIGO, or Laser Interferometer Gravitational-Wave Observatory, is basically a device which constantly measures its own length. But it measures this length with a precision better than the size of a proton. In fact the measurement is basically as good as it can get, since the precision is starting to bump up against the de Broglie wavelength of the mirrors whose relative positions the device is measuring.
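A quick back-of-the-envelope check on the "smaller than a proton" claim (the arm length, strain sensitivity, and proton radius below are round numbers I am supplying, not figures from the post):

arm_length = 4.0e3        # meters, roughly one interferometer arm
strain = 1.0e-21          # order-of-magnitude strain sensitivity
proton_radius = 0.8e-15   # meters

displacement = strain * arm_length
print("measurable length change: %.1e m" % displacement)                      # ~4e-18 m
print("fraction of a proton radius: %.1e" % (displacement / proton_radius))   # ~5e-3

So the measurable change in arm length works out to a few thousandths of a proton radius, which is the sense in which the precision is "less than the size of a proton".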

Exam Proctoring

You know, it is a very different world being the one giving an exam instead of the one taking it. All semester I have been proctoring exams for the class that I am a TA for. Every time I do it I have to make myself look at other people, at their exams and what they are doing. This goes totally counter to my built-in test reflexes: as a test taker you are not supposed to look at anybody else, and you are definitely not supposed to look at their notes or their exams.

After having been on the side of the test takers for all these years, the furtive test-taking behavior has become automatic, so now that I am actually a TA and am actually supposed to be watching people, I have to fight it. It is actually pretty interesting to see what happens during a test. Since you get to sit and watch people, you get to see that this person is having a hard time with problem 3, or that person did problem 4a right but is doing 4b wrong, etc. It isn't two hours' worth of interesting, but then again you wouldn't expect it to be; I'm getting paid to do this.

Ludum Dare is Over: Alien Diaspora is just beginning


I didn't take the time to make any posts in the last 24 hours of the game competition since the rate of my coding greatly picked up. Even coding like mad I ended up finishing the game over screen only an hour before the deadline, and I tried py2exe to make a nice Windows executable in the last hour. py2exe, however, did not work for me; it needed some DLL that it couldn't find. So I submitted it to the contest as just source code and content zipped up in a folder.

I decided to call the game Alien Diaspora. I am surprised by how much I actually got done during the competition. Even though I didn't get nearly as far on the AI as I would have liked, I still coded 3 levels of upgrades to the pathfinding algorithms the bots use. The highest level of pathfinding is still not very good, but it is sufficient to deal with the type and size of mazes that my game generates. If I had had even another 2 hours or so after the competition deadline I would have implemented the ability to scroll around a maze, so that I could have made the mazes bigger than the game screen, and added proper information displays to the game, etc.
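The post doesn't spell out what the three levels of pathfinding upgrades actually were, but for a grid maze the natural top-end choice is an ordinary breadth-first search. Here is a minimal sketch of that idea; the maze representation (a set of open (x, y) cells) and all the names are my own guesses, not the actual Alien Diaspora code:

from collections import deque

# Breadth-first search over open cells; returns the list of cells from
# start to goal, or None if the goal is unreachable.
def bfs_path(open_cells, start, goal):
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:     # walk the parent links back to the start
                path.append(cell)
                cell = came_from[cell]
            return list(reversed(path))
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in open_cells and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

corridor = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)}
print(bfs_path(corridor, (0, 0), (0, 2)))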

As the game stands now it is not very fun to play, and this is finals week, so I certainly don't have time to put much more into it now. I plan on actually making it fun to play at some point, and I want to make a Windows executable out of it, because if I don't it will never get played by much of anyone. If you actually want to download it you can see it and all the other LD 16 games at the Ludum Dare site; my game again is Alien Diaspora, and since they are sorted alphabetically it is near the top.

Saturday, December 12, 2009

Ludum Dare #16 end of day 1 progress

I am doing significantly better today than yesterday, which I suppose is to be expected. I ended up not being able to fall asleep yesterday until 6:00 AM. I will not go into the reasons except to say that it had something to do with the fact that I drank somewhere between 3 and 5 gallons of water yesterday.

At any rate, today I corrected the display issues and put an agent, gold, and a fog of war on the map! At the moment the gold cannot be collected and the agents are only capable of completely random movement. Here is how it looks:



Not terribly inspiring perhaps, but I am still pretty happy with it considering I didn't remember Python nearly as well as I thought I did and I didn't learn the use of pygame nearly as fast as I would have liked (though that part wasn't surprising).
While it isn't anything amazing for 24 hours of progress, it still means that I can get something playable done inside of 48 hours.

5:00 is early in the morning

Yup, I stayed up just so that I could hit 5:00 but I am now really truly going to go to sleep. Night

Throwing in the towel for tonight

I've got a maze displaying, but it doesn't display the way it should in order to make the game playable. To make certain that I get a game that is actually playable tomorrow, I am going to make the game into a simple maze navigation game: à la you are a red dot controlled by the keyboard, move to the exit. My plan of attack is to begin with keyboard navigation, then add a fog of war, then add money collection, and after that is done I shall consider making an appropriate random maze generator. Once I get all of that done I should have something vaguely resembling a playable game. But for tonight I think I have had quite enough, though it is tempting to stay up another few hours just so that I will have been up for 24 hours straight... hmm...
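For what it's worth, here is one way the "appropriate random maze generator" could go: the classic depth-first (recursive backtracker) carve on a grid with odd dimensions. This is just a sketch of mine, not the generator the game ended up with:

import random

# Carve a maze into a grid; returns the set of open (x, y) cells,
# everything else being wall. Width and height should be odd.
def generate_maze(width, height):
    open_cells = {(1, 1)}
    stack = [(1, 1)]
    while stack:
        x, y = stack[-1]
        # Jump two cells at a time so walls stay one cell thick.
        choices = [(dx, dy) for dx, dy in ((2, 0), (-2, 0), (0, 2), (0, -2))
                   if 0 < x + dx < width and 0 < y + dy < height
                   and (x + dx, y + dy) not in open_cells]
        if choices:
            dx, dy = random.choice(choices)
            open_cells.add((x + dx // 2, y + dy // 2))   # knock out the wall between
            open_cells.add((x + dx, y + dy))
            stack.append((x + dx, y + dy))
        else:
            stack.pop()
    return open_cells

maze = generate_maze(21, 11)
for y in range(11):
    print("".join("." if (x, y) in maze else "#" for x in range(21)))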

LD progress report

Well... considering that I first used Python about a year ago and haven't used it for about 8 months, I would say that I am doing OK. I have been relearning Python at the same time as deciding how to organize my code. On top of that I am trying to learn the use of pygame, which is proving to be rather unfortunately time intensive. Of course, when you throw in a little time wasting (like writing blog posts, for instance), the fact that I have been averaging about 15 lines of code per hour is really not all that surprising, though perhaps a bit disappointing.

Friday, December 11, 2009

Ludum Dare #16 Theme: Exploration

So I have gotten all stocked up for the weekend with a bunch of bagels and a toaster in my room. I may not go out of my room all weekend except to go to the bathroom which will also consequently be the source of my drinking water. Ludum Dare #16 has begun and even as it starts I already know vaguely what I want to do for the game.

Before the competition started I had convinced myself that I wanted to do a game where some sort of AI is the center of the gameplay. Although it would depend on what the theme eventually turned out to be, I was pretty convinced that what I wanted to do was a game which involved buying upgrades for some sort of minion. If the theme had ended up being something like "unwanted powers" then I would probably have been forced to come up with a different idea. The theme ended up being exploration.

My idea of getting better and better minions to do your work is almost too easy to adapt to this theme. You begin in a maze and your goal is to find the way out. You start with a fog of war everywhere except the start point and the end of the maze. But unlike in a normal maze game, the player has no direct control over the exploration of the maze. Instead the player relies on robot minions: the player merely lets the robots roam the maze collecting money, and money can then be used to buy more robots or upgrades for existing robots.
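The core loop of that design is pretty small. A toy sketch (all names and numbers are placeholders of mine, not actual game code): each tick, every robot takes a random step through the open cells, the fog of war is lifted wherever a robot stands, and any gold it lands on turns into money:

import random

# Move a robot to a random open neighboring cell (or stay put if stuck).
def step_robot(pos, open_cells):
    x, y = pos
    moves = [m for m in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
             if m in open_cells]
    return random.choice(moves) if moves else pos

# One tick of the game: robots wander, reveal fog, and collect gold.
def tick(robots, open_cells, revealed, gold, player):
    for i, pos in enumerate(robots):
        robots[i] = step_robot(pos, open_cells)
        revealed.add(robots[i])
        if robots[i] in gold:
            gold.discard(robots[i])
            player["money"] += 10   # arbitrary gold value

open_cells = {(x, 0) for x in range(5)}     # a 5-cell corridor
robots, revealed, gold = [(0, 0)], {(0, 0)}, {(3, 0)}
player = {"money": 0}
for _ in range(50):
    tick(robots, open_cells, revealed, gold, player)
print(player["money"], sorted(revealed))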

The details of the game might be hard to make work in such a way that the game is fun, especially when I only have 48 hours to code it, which means essentially 0 hours to playtest it beyond making sure that it doesn't just crash. There goes the first 30 minutes! That is fully 1% of the total time I have to program the entire game and make all of the content!!!

Thursday, December 10, 2009

Naming Functions, Why our love of the closed form is holding us back.

When communicating a function it is necessary to have a name for it. So if we wish to communicate the function which takes as input a real number and returns the product of that number with itself, we say f = x^2 or f = x² or f = x*x. These are all names for the same object, just as 1.25 is the same object as 5/4. But more profoundly, this "name" for the function gives us a handle by which we are able to hold it in our minds. Because of this, x^2 becomes more than just a name for the object that corresponds to that particular function; it IS that function. In fact, were I to name the function differently, say for instance f = x^2*(sin^2(2x) + cos^2(2x)), it is very likely that someone would tell me "but that is really just x squared!"

While the above example may seem rather contrived (I merely invented a function, or rather a "name for the same function", which obviously reduces to the representation x^2), it is generally true that the representation of a function is so important to the way that we think about the function that knowing a representation of a function is considered the same as "knowing" the function itself. Of course, for someone to be considered to "know" a function it is not sufficient for them to know a description of the function such as df/dx = 1 and f(2) = 3. Although this description uniquely picks out a function, I wouldn't be considered to "know" the function until I had provided the particular representation f = x + 1.

This preference for representing functions in a particular way is overwhelmingly strong, that particular type of representation being what is known as a "closed form". A closed form of a function is a representation which describes the function in terms of a finite combination of special functions and operations. The operations are of course addition, subtraction, multiplication, division, and composition; composition being putting the output of one function into another, e.g. f(g(x)). The special functions change depending on who you ask about what a "closed form" is. The strictest definition would have the only special functions be the constant functions and the function f(x) = x, in which case the set of functions for which we would be able to create closed forms would be the polynomials (or, counting division, the rational functions). Usually, though, it is understood that certain very common functions should also be included, so that ln(x), e^x, sin(x), arcsin(x), and so on are allowed as well. The functions which can be put into closed form with these few operations and special functions are the stuff with which we almost exclusively deal. When we allow other types of operations, such as integration, we often find that putting nice simple closed forms in for the integrand gives us a function that cannot be expressed in closed form. For instance a very useful function known as the dilogarithm is the integral from 0 to x of -ln(1-t)/t dt.

Early on in one's mathematical education the mere existence of such functions seems nonsensical. How can there exist functions that you can't write down? But of course you can write down the dilogarithm: you write it down as the integral I just described. But it seems so strange not to be able to write that function in our more usual basis that in some sense it, and the infinite number of functions like it, become somehow not quite really functions. Our inability to name the function in our usual way means that our ability to think about that function is also impeded. We deal with this by effectively adding those functions which cannot be written in closed form to our set of special functions, simply by assigning each one a symbol. In the case of the dilogarithm, for instance, the symbol is Li2(x). Alternatively, and more powerfully, we can simply add the operations of integration and differentiation to our list of allowed operations. But integration is not as well understood an operation as addition or multiplication, which is why it is often difficult or impossible to evaluate a complicated integral.

Here we stumble upon the real criterion which an expression of a function needs to meet in order to be an effective means of communicating that function: any effective representation needs to be as easy to evaluate as possible. By this criterion a "closed form" of a function is not always the best means of communicating it. For instance, in the case of the dilogarithm the integral cannot be computed using the standard methods of integration (obvious, considering that the resulting function cannot be represented with the standard methods of representation), and so the numerical evaluation of the dilogarithm at a point using the integral representation is at best awkward. But the Taylor series for the dilogarithm is extremely simple, and the value of the dilogarithm can be quickly and easily calculated from it to within the required accuracy (within reason, of course).
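To make that concrete: the dilogarithm's Taylor series is Li2(x) = sum over k ≥ 1 of x^k / k^2, and summing it term by term is about as easy as numerical evaluation gets. A small sketch (the tolerance and test values are arbitrary choices of mine):

# Li2(x) for |x| <= 1, summed term by term until the terms drop below tol.
# Convergence gets slow as |x| approaches 1, but for moderate x a handful
# of terms is enough.
def dilog(x, tol=1e-10):
    total, k, term = 0.0, 1, x
    while abs(term) > tol:
        total += term
        k += 1
        term = x ** k / k ** 2
    return total

print(dilog(0.5))    # about 0.582240, i.e. pi^2/12 - ln(2)^2/2
print(dilog(-1.0))   # about -0.822467, i.e. -pi^2/12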

But if someone were to specify a function by merely listing the values of the first few coefficients of its Taylor series, they would not be considered to really "know" the function. In order to "know" a function it is necessary to be able to express that function in closed form. Here I use closed form in a generalized sense where any well defined operations are allowed and the only special functions are the constant functions and the identity function. For a mathematician, in order to really know a function it really is necessary to be able to put it into this more general type of closed form. In general, however, we should be less picky about how we are allowed to describe functions in order to "know" them.

The situation is similar to the situation with our system of numbers. Early on, our systems of numeration were limited to small positive integers. A number like 1000000000000000000 was too large to be given a name and a number like 2.3 too strange. Over time numbering systems became better, and stranger and stranger numbers were allowed into the club of numbers that could be named. One of the biggest leaps in systems of numeration came from the ability to specify ratios. With the ability to specify a number like a/b, where a and b are integers, numbers like 3.5 become things that one can use and reason with. But of course there are still numbers that are left out by this scheme; in fact almost every number cannot be specified as a fraction. The existence of numbers like the square root of 2 (which cannot be written as a fraction) is very much like the existence of functions like the dilogarithm. When we first encountered numbers like sqrt(2) they were frightening and mysterious, but they are now routine.

And while it is important for me to be able to specify a number like the square root of 2 in closed form as sqrt(2), it is at least equally important that I am not paralyzed by the fact that its decimal representation 1.41 etc. is not exact. If I can specify a sufficient number of digits of sqrt(2), it should be considered that I "know" the number. In fact this is very much the way that people think about numbers now: if I specify a number only up to some precision, it is understood that the number could vary around the value that I specified by a small amount. The fact that I do not EXACTLY specify the number is unimportant.

Similarly, it is important that we begin to think about functions in much the same way that we think about numbers. Functions should be something that we specify in the most convenient way, with no preference between closed forms and more general forms such as the first few terms of a Fourier expansion, just as we do not look down on specifying a number as 1.25 instead of 5/4. When functions are specified in these more general bases it should become second nature to us (as it has become for numbers) to interpret that as a valid representation of the function. The fact that the actual function could vary somewhat in an L2 ball around the specified function should not bother us.

Tuesday, December 8, 2009

This is an amazing sandwich

So this might be in part due to the fact that I haven't really eaten much of anything all day today until now. But I just walked to Gandalfo's and ordered a Whitestone Bridge with marinated mushrooms and it is seriously on the list of my all time favorite sandwiches. The Whitestone Bridge is by itself a pretty good sandwich but the mushrooms really lift it a level or three.

In case you were wondering the answer is yes, I am writing random blog posts because I am avoiding doing something. (Studying for my E&M final tomorrow)

Ludum Dare

Although I really should spend the weekend studying for the final in my Mathematical Methods for Physicists class on Thursday, I am instead going to work all weekend on coding up a game for a game competition. The competition is Ludum Dare, which is a Latin phrase meaning "to give play". I wonder if the sense of the word "dare" as a proposed challenge has anything to do with it being the Latin form of the verb "to give".

The Ludum Dare competition is a 48-hour competition wherein first a theme is voted on and selected, and then the entrants are given 48 hours during which they must code a game from scratch. Team entries are not allowed, so you have to do it yourself, and although preexisting code bases are allowed (so long as they are freely available to everyone before the competition), all game logic and all content (both graphical and musical) is supposed to be generated during the competition. I haven't the foggiest idea how much of a game I can really expect to code up in 48 hours, especially since I have never coded any means of taking user input that wasn't from the terminal (well... at least I haven't coded it well). On top of that, I don't want to screw up my sleep schedule too badly, so I will not be staying up the full 48 hours to code, though that would be pretty epic (albeit likely counterproductive).

Regardless of whether or not something playable comes out of the experiment, though, I am going to make a time lapse screen recording of the two days, which I'm pretty sure will be entertaining for me to watch even without being able to see the development of my coding.

Wish me luck!

Thursday, December 3, 2009

8 bit theater is AWESOME!!!

It has been an extremely long time since I read the 8 bit theater comic, but playing Final Fantasy 1 for the first time reminded me of it. If you haven't read 8 bit theater, you should.

here is the very beginning
http://www.nuklearpower.com/2001/03/02/episode-001-were-going-where/

8 bit theater succeeds in being continuously hilarious while also having a vague continuity of story, both of which are rare qualities. I would say more in praise of 8 bit, but basically you just have to read it for a bit and let the awesomeness suffuse you. If you do not get hooked in the 5 minutes it will take you to read the first 10 pages or so, then see a doctor; something may be medically wrong with you.

Cannot write more, must reread 8 bit.

Tuesday, November 24, 2009

The Rijke Tube or The Gondolo

So apparently the first year Professor Gondolo was teaching here at the U, he had a demonstration set up in which a wire mesh is heated inside a cardboard tube by a propane burner. The bottom of the tube is obstructed while the mesh is heating up, because if an air current were allowed to flow it would blow out the propane. When the mesh is hot enough you turn off the propane and remove the obstruction on the bottom. The mesh heats the air by convection and an air current starts to flow through the tube. Standing wave patterns form and the tube becomes a resonator. The demonstration is actually rather an old one and is called the Rijke tube. If you don't know what I am talking about or have never seen it, here is a YouTube video of a small one being built and put into action. The video is pretty long; you can just skip to around 3:30 to see the tube work.



As I was saying, our young Professor Gondolo (well... younger anyway) had a Rijke tube demonstration all set up, but the tube was not like the one in the video: it was more than a foot in diameter and 10 feet or so tall, held by ropes hanging from the ceiling of the extremely large physics classroom. Professor Gondolo made the mistake of starting the propane burner and then teaching the physics of it. He taught the physics of the device for too long, though, and the wire mesh got rather hotter than it should have. When he was done explaining the physics and unblocked the bottom of the tube, the mesh was so hot that the new airflow did a lot more than just allow resonance: it allowed the cardboard tube to catch on fire. After that particular disaster, when the demonstration was rebuilt it was built out of metal ducting, and the name "THE GONDOLO" was painted on its side in red along with some nice decorative flames. Since Professor Gondolo is hopefully going to become my research advisor I don't think I shall mention it until after he decides to accept me as his student, but I can't help but wonder how he feels about the giant metal demonstration tube that now bears his name.

Either fortunately or unfortunately, I never saw the original demonstration with the cardboard tube. The cardboard tube was supposed to have been even larger and louder than the current metal one, but the newer one is still damn impressive. The natural frequency of the tube is lower than you can hear (or at least hear well), so mostly what you hear when the tube goes off is the second and third harmonics. But the vibration is so loud and so low that you can definitely feel it.
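For an open tube the fundamental is roughly f = v_sound / (2 L), so a quick estimate (the ten-foot length comes from the post; the speed of sound is a standard round number, and the real tube will shift upward a bit because the air inside is hot):

speed_of_sound = 343.0       # m/s in room-temperature air
tube_length = 10 * 0.3048    # 10 feet in meters

fundamental = speed_of_sound / (2 * tube_length)
print("fundamental:  %.0f Hz" % fundamental)          # ~56 Hz
print("2nd harmonic: %.0f Hz" % (2 * fundamental))    # ~113 Hz
print("3rd harmonic: %.0f Hz" % (3 * fundamental))    # ~169 Hz

Mid-50s hertz is right at the bottom edge of what most people hear well, which squares with mostly noticing the second and third harmonics.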

Thursday, November 19, 2009

Picking A Research Advisor

Some time ago I talked with Professor Wu about becoming a graduate student under him, and he told me to attend his group's meetings and seminars. It turns out Wu's group is currently running 3 seminars: one on quantum computation, one on advanced solid state physics, and one on tensor category theory. I have been attending the solid state and quantum computation ones, and somewhat humorously I understand more from the quantum computation seminar than from the solid state seminar. This past Saturday, though, there was a big meeting where all of the faculty (and in some cases soon-to-be faculty) who are looking for new graduate students gave short presentations. The event went from 9:00 to 1:30, though it was only supposed to go to 1:00. There were 17 presentations overall.

A number of the presentations weren't really of interest to me; to be clear, that is to say that they were about research that I would have no interest in doing. Of the seventeen I would give serious consideration to 10. Basically there were two types of research position represented: solid state physics and astronomy. I find it somewhat of a surprise that I find myself drawn so much more to the astronomers than to the solid state people, but there we are. I still find myself drawn to being a theorist instead of an experimentalist, and although there were a number of solid state theorists looking for students, there was only one cosmological theorist looking for students, namely Paolo Gondolo.

Paolo spends his time working on theories explaining dark matter dynamics. By looking at gravitational interactions such as gravitational lensing, we have been able to get a very good picture of the density of dark matter in the universe and also of its distribution. However, the only effects that we know are coming from dark matter are these gravitational effects. We might be detecting other effects of dark matter but simply not know it. At the moment there are things we are observing which don't fit the predictions of standard models: for instance, we can predict the expected flux of cosmic ray positrons, but the standard prediction doesn't fit the observations. Paolo and a number of other people are trying to think up theories of dark matter interactions which could account for observations like this.

Paolo and company have created a Fortran package called DarkSUSY which is used to make calculations for the parameters of SUperSYmmetric dark matter theories; thus dark SU-SY. While I feel more attracted to working on the dark energy problem than the dark matter problem, I thought working on DarkSUSY might be just the right thing for me to do. My physics knowledge is nowhere near the level it would need to be in order to really begin working on DarkSUSY; for one thing I don't even have a good knowledge of the standard model of particle physics, much less its supersymmetric counterparts. But of course I would run into the same problem in any field in which I decided to start research.

This morning I took the opportunity to go and talk to Paolo about becoming a grad student of his. He started off the discussion by trying to scare me off. Rather, he said he was trying to scare me off, but really he was just trying to make sure I understood that there are major disadvantages to being a theorist. Being a theorist takes more work and longer hours and requires you to know more. As a theorist it is harder to get away from your work, since anywhere there is paper and/or you have your laptop, you can work. On top of that, there is very little money in theory; theory is cheap, but that means theorists are underpaid. As a theory grad student the chances of getting an RAship are almost nil, so not only are the research hours generally longer as a theorist, but you have to keep a TAship and teach in order to support yourself. But I knew all of that already, so it wasn't really much of an eye opener.

Friday, November 13, 2009

Black Hole Basics Part 2

First a quick review of part 1.

A black hole is an object with a density sufficient to produce an escape velocity greater than the speed of light.

The point to which all mass is drawn at the center of the black hole is called the singularity.

The surface beyond which light cannot escape the black hole is called the event horizon.

The event horizon is a sphere whose radius is called the Schwarzschild radius, which for non-rotating black holes is determined by the equation R = 2GM/c^2. Here G is the gravitational constant, 6.67 x 10^-11 m^3/(kg*s^2), M is the mass of the black hole, and c = 299792458 m/s is the speed of light.

For part two we will begin with a more thorough analysis of the Schwarzschild radius. If you ever need to remember the equation for the Schwarzschild radius, just remember that you combine the speed of light, the gravitational constant, and the mass of the black hole in such a way as to give you units of meters, and you have the equation modulo a factor of 2.

The derivation of the Schwarzschild radius is actually somewhat complicated, since it involves general relativity. But as often happens, a simple calculation using just Newtonian gravity gives the right answer. The Newtonian gravitational well of a spherical object has a potential of -G*M/r, where r is the distance from the center of the sphere. This means that it would require at least m*G*M/r joules of energy to completely remove an object of mass m from the sphere of mass M if that object was originally a distance r away. This and the kinetic energy formula (1/2)*m*v^2 give us all we need to calculate the Schwarzschild radius, or rather the Newtonian estimate of it.

We find that at a radius r an object requires a certain minimum escape velocity in order not to be trapped by the gravitational potential. Specifically we have

m*G*M/R_Schwarzschild = (1/2)*m*v_escape^2

therefore R_Schwarzschild = 2*G*M/v_escape^2

But the condition we are interested in is that the escape velocity equals the velocity of light, whereupon we recover our previous formula for the Schwarzschild radius. This calculation is just a classical approximation, but it conveniently gives the correct answer.
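Plugging numbers into R = 2GM/c^2 (the solar and Earth masses below are standard values, not figures from the post):

G = 6.67e-11        # gravitational constant, m^3 / (kg * s^2)
c = 299792458.0     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c ** 2

print(schwarzschild_radius(1.989e30))   # the Sun: about 2950 m, roughly 3 km
print(schwarzschild_radius(5.97e24))    # the Earth: about 9 mm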


Black holes really are perfectly black. That is to say, the event horizon of a black hole is a perfect absorber of light. This of course is not surprising, since there is nothing at the event horizon for the light to reflect off of. In physics, a body with this property of being a perfect absorber of light is also expected to be something called a blackbody emitter. A blackbody emits light according to a characteristic spectrum which was discovered by Max Planck. Originally it was assumed that a black hole would not have a temperature and therefore would not emit radiation (meaning light). But careful thought about what might happen at the event horizon gave rise to the idea that a black hole could allow virtual particles to become real, meaning that black holes really do emit radiation and therefore have a non-zero temperature. This line of reasoning was followed by Stephen Hawking, who calculated the temperature a black hole would have to have to correspond to this emission. This leads us to the equation for the temperature of a black hole

T = K/M

where K = 1.227 x 10^23 kilogram kelvin. (Note this is not the Boltzmann constant; it is just an accumulation of a bunch of terms I didn't feel like writing out.)

K may seem to be an extremely large constant, but when you consider the masses involved it actually predicts ridiculously small temperatures. A one solar mass black hole would have a temperature of about 0.00000006 kelvin. Any natural black hole would have a larger mass than this and therefore an even smaller temperature, so one can safely ignore the temperature of large black holes; such small temperatures are virtually undetectable. Even for a much smaller black hole, say one with the mass of Jupiter, the temperature is only about 64 microkelvin.
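Those numbers are easy to reproduce from T = K/M (the masses below are standard values I am supplying; K is the constant quoted above):

K = 1.227e23            # kg * kelvin, the constant from the formula above

print(K / 1.989e30)     # one solar mass: ~6e-8 K, the 0.00000006 kelvin above
print(K / 1.898e27)     # Jupiter's mass: ~6.5e-5 K, about 64 microkelvin
print(K / 1.0)          # a 1 kg black hole: ~1e23 K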

But very, very small black holes are rapidly evaporated by Hawking radiation, though "explode" might be a more apt term. A black hole with a mass on the order of a kilogram or less would have a temperature of around 10^23 kelvin and would essentially evaporate instantly. I bring up such a tiny mass because people frequently worry about CERN or some other powerful particle accelerator generating a black hole which eats the earth. While it would be great if it were possible for CERN to generate black holes because of some as yet unknown phenomenon, if it did, those black holes would have energies of at most say 10^10 J, which is being rather generous. Such an energy corresponds to a mass of around a ten-thousandth of a gram. So there could be no danger from such a black hole, as it would evaporate as soon as it formed.

Tuesday, November 3, 2009

Exercising at 5:30

I have decided to start getting up at 5:30 AM every morning so that I can exercise before I go to teach my discussion sections at 7:30 AM. This morning I did it for the first time and it was fine, though ironically I think the field house is a bit crowded when it opens at 6:00 AM. Of course I have no trouble sharing the running track with other people; in fact I really like sharing it with a bunch of other people. It gives me a sense of pace, and I am excited to see if I will recognize regulars who show up at the same time as me to run.

The weight machines, on the other hand, are much less sharable, and since I have rather pathetic upper body strength, also much more embarrassing. Tomorrow morning I will try to mix up my routine by doing exercises on the weight machines first and then going up to run. Hopefully this regimen lasts for the rest of the semester and beyond, but I have committed myself to it only for the rest of the week, and then I will reevaluate whether or not I want to continue.

Monday, November 2, 2009

Nano Begins Again

November is National Novel Writing Month: NAtional NOvel WRIting MOnth, or NaNoWriMo for short. If you are not familiar with it you should visit www.nanowrimo.org. The idea is basically to encourage as many people as possible to write something vaguely novel length over the period of a month. I have been threatening to actually win (winning meaning achieving the goal of 50,000 words) for a number of years now, but each year it seems I make it to about 30,000 words. Rather interestingly, 50,000 words is actually not that much in terms of a book; a 50,000 word book is on the edge between a novella and an actual novel. In previous years I have tried writing in the far future, in a fantasy type setting, and in a familiar present day world, but no setting seems to work much better than the others. This year I think I am going to mix things up a bit and try something where I mix those types of realities together.

At the moment the rough plot works something like this: Charles and Agatha are strangers living in the same city. Agatha is a scientist and engineer with a passion for numbers, artificial intelligence, and the workings of the human brain. Charles is a fantasy geek who constantly daydreams himself into fantasy scenarios. Agatha builds some fancy device which monitors her brain activity. There is an accident, and Agatha and Charles find themselves trapped in a strange fantasy world generated from both of their minds. Or something at least vaguely like that.

Thursday, October 29, 2009

Like a virus building a campfire

So again, this is something from my E&M class. My professor is Alexei Efros, an elderly Russian man with quite the accent. A number of weeks ago he was talking to the class about fusion. This is a rough paraphrase of what he said; you will have to add the Russian accent yourself.


I have many friends who work on this problem their whole lives. One friend, every time I see him I would say 'how long till we have fusion?' and every time he would tell me 'about 5 years' he would say. My friend he is dead now but the last time I see him I ask him 'how long till we have fusion?' and he respond me 'about 5 years' Now it seems that the consensus in the field is that we are making great progress and it really seems we will have fusion in about 5 years. But I would say to my friends I would say 'why we need it?' Perhaps we could use fusion to boil ocean water and irrigate the Sahara, or some project like this, some great project Like making enough water for the Sahara. It turns out we could not do this since this would heat the atmosphere of the earth. But this I tell them I say 'why we need it?' Every reaction has a size. Take campfire, if you build campfire the size of candle it will blow out. The surface area of the flame is too large and it cools rapidly a puff of wind will blow it out. But if you build campfire big enough, build campfire campfire size then it keeps itself warm and the reaction can be sustained. So every reaction has a characteristic size and the size of fusion reaction is size of sun and we want to make it on scale of campfire. This is like if a virus were to try and build campfire, and the relative ratio is about right. We are to viruses like sun is to us. So we try to make fusion on our scale it is like virus or single cell organism trying to build campfire it is ridiculous and we absolutely don't need it just like the virus does not need it, we absolutely do not need it.



The sentiment at the heart of this is that we do not need fusion. I have sympathy for that perspective but do not agree with it. Eventually we will need fusion power, though for now and for the next few decades at least I think we should instead rely on fission power. But I thought the entire episode was very interesting, partly because of the perspective from which it is given. Also, I think the picture evoked by the analogy of a virus trying to build a campfire is really impressive and striking. Where Efros wanted to use it to show that we do not need fusion, I think it is just as effective a way of demonstrating why it is so very hard to make fusion work on our scale.

Wednesday, October 28, 2009

The Law of Grading

Some while back I was sitting in E&M class and we were reviewing relativity. It was all stuff I had seen many many many times and so I was letting my mind wander. I started thinking about the tests that I had recently graded and thanks to the relativity background I came up with the following theorem.

The Law of Grading Efficiency:

Every impartial observer observes a grader to have a grading efficiency less than or equal to that observed by the grader.

Proof: Consider a grader with a stack of papers of height l_0 who grades those papers in an amount of time t_0 in the grader's reference frame. Due to length contraction, any observer who moves relative to the grader (for instance a passing professor) will observe the stack to be contracted along their direction of motion, observing a height l ≤ l_0, and due to time dilation they will observe that the grading takes an amount of time t ≥ t_0. So an observer in any frame will observe that the grader has less than or equal to the amount of work the grader thinks they have, and that it takes the grader more than or equal to the amount of time the grader thinks it takes. Thus the frame of reference which yields the maximum grading efficiency is the grader's own reference frame.
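Just for fun, here is the arithmetic of the proof in a few lines (the stack height, grading time, and professor speed are made up, and the professor is assumed to move parallel to the stack so the contraction applies to it):

import math

def gamma(v, c=299792458.0):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

stack_height = 0.3             # meters of exams, in the grader's frame
grading_time = 7200.0          # seconds of grading, in the grader's frame
v = 0.6 * 299792458.0          # a briskly passing professor

g = gamma(v)
own_efficiency = stack_height / grading_time                   # papers per second, so to speak
observed_efficiency = (stack_height / g) / (grading_time * g)  # contracted stack, dilated time
print(own_efficiency, observed_efficiency)   # the observed value is smaller by 1/gamma^2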

Of course there was nothing in the analysis specific to the grading of papers so we obtain an immediate generalization.

The General Work Law:

The person doing the work always perceives the quantity of work they have to do to be greater than or equal to what any other observer measures it to be, and perceives the time they have to do it in to be less than or equal to what any other observer measures it to be.

Tuesday, October 27, 2009

Not Quite Hamlet

So Geocities is finally shutting down. I once had a Geocities site, and its closure made me go collect all the stuff I had written on it. As it happens, what I was thinking of as my Geocities site was really an Angelfire site, and Angelfire is considerably better than Geocities and is not shutting down, or at least not to my knowledge. Even so, I will be posting some material from my old Geocities site and also material from my Angelfire site here. I find the material extremely entertaining, mostly because it opens a rather interesting window onto my past of 8+ years ago. This is a joke off of my Angelfire site which I thought I would reproduce here.

Hamlet's famous speech

to be or not to be that is the question whether tis nobler in the mind to suffer the slings and arrows of outrageous fortune or to take arms against a sea of troubles and by opposing end them. to die to sleep no more and by a sleep to say we end the heartaches and the thousand natural shocks that flesh is heir to tis a consumation devoutly to be wished to die to sleep to sleep perchance to dream ay theres the rub for in that sleep of death what dreams may come when we have shuffled off this mortal coil must give us pause. theres the respect that makes calamity of so long life for who would bear the whips and scorns of time the oppressors wrong the proud mans contumely the pangs of despised love the laws delay the insolence of office and the spurns that patient merit of the unworthy takes. when he himself might his quietus make with a bare bodkin? who would fardels bear to grunt and sweat under a weary life but that the dread of something after death the undiscovered country from whose bourne no traveler returns puzzles the will and makes us rather to bear those ills we have than fly to those we know not of. thus concience doth make cowards of us all and enterprises of great pitch and moment are with this regard their currents turned awry and lose the name of action. soft you now the fair ophelia nymph in thy orisons be all my sins remembered.

not quite hamlet

To be or not to be those are the null and alternative hypotheses. whether tis nobler in the mind to suffer the slings and arrows of outrageous instructors, or to take up calculators against a sea of math teachers and by playing games ignore them. To try to sleep no more and by a sleep to say we end the brainaches and the thousand broken pencils that students are heir to tis a consumation devoutly to be wished. to try to sleep to sleep perchance to dream. ay theres the rub for in that sleep of math what grades may come when we have shuffled out this testing center, must give us pause. Theres the respect that makes calamity of so long study. for who would bear the whips and scorns of class clowns. The oppressors wrong, the proud nerds contumely, the financial pangs of recieved love, the principles delay, the insolence of hall monitors, and the spurns that patient merit of the unruly takes. when he himself might his quietus make with a bare bodkin? who would fardels bear to grunt and sweat under weary homework but that the dread of something after highschool the undiscovered country, from whose universities no traveler returns puzzles the will and makes us rather to bear those grades we have than fly to jobs we know not of. thus parents doth make cowards of us all, and enterprises of great pitch and moment are with this regard their applicants turned awry and lose the name of dropout.


There are a number of points in this that I am unhappy with and would like to change. Also, I think an update for the undergraduate/graduate education process is appropriate, so I will be posting a new version when I get around to writing it.

Monday, October 26, 2009

Grad School Update

So it has been almost 2 months since I have posted anything. Not surprisingly, the sharp decline in my blogging frequency corresponds almost exactly with the time that I began my graduate education in physics. I now have an office in the physics building. Ostensibly I share that office with 3 other people: Mark, Rhett, and Tek. Of those three, though, I have only ever encountered Mark in the office, and even Mark I encounter infrequently, so that even though I must share my office it often serves the purpose of an entirely private office.

Having started my graduate education, I am strongly advised to attach myself to a research group as quickly as possible. In grad school my classes are only going to begin the story; the thing that will primarily characterize my experience will be my research, so this is a very important decision. While the situation is actually more complicated, I will simplify by saying that what I really want to do is particle physics. But there are basically only two professors here who do particle physics, DeTarr and Wu. Both are good, but both are also far enough toward the end of their careers that they may not be taking on any more students.

As is usual for my way of handling things, I have yet to talk to anyone about joining their research group. I keep putting it on my list of things to do, but I don't tend to do the things on my list unless there is a ready punishment for not doing them. Since I could perfectly allowably not join a research group until sometime during my second year, all I am doing by not joining now is costing myself time with the group and time when I could have access to a ready source of support, guidance, and recommendations. That is not to mention access to all sorts of awesome physics goodies like huge telescope arrays and teraquads of data and supercomputers to crunch it with. There are a couple of professors who are going to be coming to the U in the next year or so and I could become a student of theirs, but in the meantime, even if I plan on switching professors later on, I should attach myself to some research group now.

Other than that, graduate school has been rather like my undergraduate education, with the one very, very important modification that my finances are significantly better. My classes right now are not terribly hard, as they are for the most part review and deepening of the more advanced undergraduate courses that I took. Rather ironically, I am doing better in the electrodynamics course (a weak point in my undergraduate education) than I am in the math methods class (considering that I have an undergraduate degree in math, I would call that a "strong point"). I rather hope the lack of high performance in the math methods class is an aberration, but time will tell.

Wednesday, August 26, 2009

DAMN THIS INFERNAL YELLOW LINE!!!!

So I bought this monitor at campus surplus. When I bought it I hooked it up to a borrowed laptop and made sure that the screen was OK. Immediately after I bought it I hooked it up and it had three little dead patches. Actually they weren't entirely dead; just the blue and green pixels were, but the red ones for some reason were fine. It has been somewhere along the lines of 5 or 6 months since I bought the monitor, and over the course of this time the dead patches have progressed to the point where they are now completely dead except for a very scarce smattering of heroic red pixels. I had thought that the blotches may have been there when I first saw the monitor at campus surplus and that I just didn't see them (even though they are fairly obvious). I reasoned that surely the monitor couldn't have deteriorated so rapidly as to form these dead spots just from the walk back to my apartment! But the rapidity with which the splotches further decayed into completely dead patches makes me think that perhaps this monitor is older LCD technology than I am giving it credit for, and the walk out in the hot sun was indeed a little too much excitement for its diodes.

Because I blame myself for these splotches (either I made them myself by carrying it home in the sun over 12 blocks or I failed to notice them at the store), I will live with them until such time as either the monitor becomes unusable or I have something called "discretionary income," which is a theoretical object whose existence is inferred from observing it in other people's budgets. I mean actual discretionary income, not the kind that comes with credit card debt.

However, this newest malfunction of my monitor I feel in no way responsible for. I have kept my monitor inside and out of the sun. Of course I have no air conditioning, so I have also kept it a warm and cozy 80-90 degrees during the day and a nice "cool enough to sleep in" 60-85 during the night. Since I do not have a great deal of money to spend on rent (the thing that inspired me to get my monitor at campus surplus in the first place), I do not consider this lack of cooling to be my fault.
So of course I was aghast when, browsing one day (OK, fine, playing games on facebook), there appeared a solid vertical yellow line from hell. For some reason all the pixels about 4 inches from the left of my screen got together and decided they wanted to be yellow... permanently.

I suppose I can understand their distress. I mean, after all, I wouldn't find turning on and off all day a particularly riveting job, but I somehow think it is more interesting than just deciding to get locked to some bloody shade of yellow that probably isn't even legal outside of the 70's. The worst thing about this is that because I see yellow it is not just a single line of pixels doing their own thing. Oh no, it takes multiple subpixels to make any color but red, green, or blue, so the pixels along this line actually have to be working together against me, just to make it clear that they are in fact a unified and organized resistance waging war against my evil screen tyranny. Well, this isn't a democracy, and if the pixels are reading this I just want them to know: I'm watching you!

Of course there is always the off chance that this isn't actually a rebellion but an attempt at self-order against the chaos of reality. A beacon of reason and culture amidst a world that cannot be understood by my pixels' two-dimensional minds and senses. Perhaps my habit of never turning off my monitor has allowed the pixels inside time to evolve and become more than their makers had intended. Then again, probably not.

Further Common Exam Details

Now that I have had time to talk with my advisor I have a few more details. As it happens, 50% and up was a pass and 33% and up was a conditional pass. Also, the number of students receiving a pass was about the same as the number of students who received a conditional. I overheard the chair of the common exam committee and another professor talking about the common exam in German. Now, I don't speak German, so I don't know exactly what they were saying, but it involved something about the 90th percentile and conditional passes. When I talked to my advisor he said there were a "few" who did not pass. Also, there were a number of faculty who were unhappy with where the lines for pass and conditional pass were placed. I take all this to mean that all but about 10% of the students taking it passed (conditionally or not), which would mean only 2 or 3 people got just a plain fail. I'm sure the scores were basically clustered around 35-37% and they just had to put a fail line somewhere it would still do something, so they put it at 33%. I bet the people who failed got something like 29% or 30%, when just grubbing for a few extra points would have gotten them a conditional. Or maybe moving the line would have just made a whole bunch more people fail, because then there would be no tail left to the distribution.

Also, as a correction to what I said before, it turns out that in general you are required to take both 5010 and 5020 if you got a conditional pass. But my quantum mechanics and classical mechanics problems on the common exam were my strongest problems, whereas my statistical physics and electrodynamics problems were a little lacking. As it happens, 5010 covers mechanics, both quantum and classical, and 5020 covers statistical physics and electrodynamics. So because I got strong scores on the problems covering the topics from the first class, I can skip straight to the spring semester class.

I Passed The Common Exam... Sort Of...


The common exam is the Ph.D. physics program entrance exam. You have to pass it in order to be formally admitted to the program. The common exam is what you might call a rather tough test. If you don't pass the test the first time, no sweat, you can take it again next year; but if you fail it a second time you will not be admitted to the program and therefore cannot receive your Ph.D. Game over. Because the test is so hard, and because of the harsh penalty for failing twice, if you don't quite pass you may get what is called a "conditional pass," which means that the department will pretend you passed just so long as you get an A- or better in some particular class. Usually that means physics 5010 (a night class taught by one of the most mad-scientist-looking professors I know); in my case it means I have to get an A- or better in 5020.

I'm not really sure how I feel about having conditionally passed a test. I won't actually know whether I passed or failed the test I took this past Saturday until 9 months from now, even though it got graded 2 days after I took it. But one thing is for sure: I have never been so happy to get 40% on a test.

Monday, August 24, 2009

First Day of Classes

So the common exam went pretty badly. I don't think I was alone in doing rather abysmally, so perhaps I will get lucky and the pass line will get placed somewhere around 40%. Of course I am not really sure I even got that high; it is perfectly possible that I got somewhere around 30%. It's funny, but apparently my strong point in physics is quantum mechanics. The only problems that I can consistently do are quantum mechanics problems. I suppose it is because it is all about function spaces and transforms, and I have been doing rather a lot of that sort of stuff lately. I spent all day Sunday just lazing about, and now today is the first day of classes.

Usually I find the first day of classes exciting and even a little relaxing because I know there isn't anything that needs to be done for the first day; I am not going to go into the class and find out that I forgot about a homework assignment. But now that I am teaching a discussion section of physics 2210, everything is different. The teacher won't even be able to cover a full 50 minutes of material because he has to deal with the syllabus, introduce the T.A.s and the S.I. instructor, and of course pontificate a bit about how wonderful and important physics is. I, on the other hand, have a full 50 minutes all to myself with no syllabus or introductions to fill it up. Worst of all, there won't be any homework assigned this early in the class, so I can't even fall back on that. Right now the plan is to just talk about what vectors are and give a bit of a review of the math.

I guess at the root of my anxiety is that failure as a teacher is fundamentally different from failure as a student. If I fail as a teacher I harm both my students and myself, but if I fail as a student my failure harms me only.

Thursday, August 20, 2009

I am a Ph.D. student!!!! and I have a box!!!

Like so many wonderful things that have happened to me in my life, this happened at the absolute last possible moment. A couple of weeks ago, when I applied to graduate with my undergraduate degree, the undergraduate advisor told me that I had a strong application for the graduate physics program. I had been planning on applying to the physics program as well as the math program at the U, but according to the application requirements of the physics program they need a physics subject GRE score before they will review the application. Apparently this is not actually the case. I finished up the application that I had been intending to put in, and a little over 2 weeks later I found out I was accepted. Today I read the letter and went through all the necessary paperwork to become a real grad student.



August 18, 2009

Timothy Anderton
1031 E. 200 S.
Salt Lake City, UT 84102
Dear Timothy Anderton,

We are pleased to inform you that the Admissions Committee of the Physics Department has recommended to the Graduate Admissions Office of the University of Utah that you be admitted to our graduate school. The Graduate Admissions Office/Student Admissions should soon formally notify you of your admission to the University of Utah.
We are also pleased to inform you that we have recommended that you be awarded a teaching assistantship for the 2009-10 academic year. The stipend for the 2008-09 academic year was $12,107 for a Level I position and $14,131 for a Level II position for a nine-month period. You will also receive tuition benefits to cover the cost of tuition. The tuition rate for an in-state graduate student is approximately $4,600 per year. Therefore, this offer is worth approximately $19,000 per year. In addition to these nine month stipends, there are a few additional summer stipends available and you will be able to apply for one of these; they run at ~ $2,000 each. We have also contracted through the Graduate school for a subsidized graduate student insurance program that you will have the option of purchasing (the Graduate School pays 80%). We have recommended you for a Level II position based on your previous experience.

We would like to point out that new graduate students in the Physics Department need to pass a general, comprehensive Physics Common Examination within their first or second year of graduate study before they are formally admitted into the Ph.D. program. The Common Exam will be on Saturday, August 22, 2009 and all incoming students must take it at that time. Those passing the exam are automatically classified as Ph.D. candidates. Those not passing have the option to take the exam the following year. Copies of previous years' exams are available on our website at http://www.physics.utah.edu/academics.html
Please let us know as soon as possible whether or not you accept this offer. Please send notification of your decision to Jackie Hadley by regular mail at the address listed below, by e-mail at jackie@physics.utah.edu, or by fax at (801) 581-4801. If you have any questions or concerns, please feel free to contact Jackie Hadley at any time.

We sincerely hope you will accept our offer, and we are looking forward to having you join our graduate program.
Sincerely,

Wayne Springer
Associate Professor
Chair, Admissions Committee


What they don't tell you in the letter is that I get a box in the department that people can slip stuff into and I can take stuff out of. I know this doesn't seem like that big of a deal, but it is a tremendously big deal to me. I HAVE A BOX!!! It has my name on it and it sits near the front office of the physics department. I will also be getting an office somewhere (which I bet I will have to share with someone); somehow, even though that is even more amazing, it doesn't have the same resonance that having a box does. An office is just somewhere to sit; the box represents to me that I am somehow open for business academically. Leave your homework here and I will get to it.

The T.A. is:
-in-


P.S. The common exam is scaring the crap out of me. I have only been studying for it since Wednesday, and I go into the exam with nothing but a couple of pens. In the past couple of days and in the next few days I need to relearn about 30 credit hours' worth of stuff... If I weren't so extraordinarily giddy about being admitted and getting paid to be a grad student I would still be studying instead of writing this post. Speaking of which, I only have 40 hours, 9 minutes, and 10 seconds left to study!!! g2g

Saturday, August 8, 2009

Wavelet Transforms and Neural Networks

A single-layer fully connected feedforward neural network with m input nodes and n output nodes can express any linear transformation with an m by n matrix representation. Clearly, if the activation function of the neurons is linear in the inputs, then no matter how many layers our feedforward network had, the result would always simply be a linear transformation of the inputs. With a nonlinear activation function, though, we can approximate essentially any function of the inputs, not just linear transformations. The most common activation functions are the hard threshold function, which is 0 below some threshold and 1 above it, and the logistic function, which is close to 0 for low values, close to 1 for high values, and rises quickly from one to the other in the region of some threshold value.
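For concreteness, here is a minimal sketch of those two activation functions (my own throwaway code, assuming numpy; the threshold and steepness parameters are arbitrary illustrative choices, not anything standard):

```python
import numpy as np

# Illustrative versions of the two activation functions described above.
def hard_threshold(x, theta=0.0):
    return (x >= theta).astype(float)            # 0 below the threshold, 1 at or above it

def logistic(x, theta=0.0, steepness=1.0):
    return 1.0 / (1.0 + np.exp(-steepness * (x - theta)))

print(hard_threshold(np.array([-1.0, 0.5])))     # [0. 1.]
print(logistic(np.array([-1.0, 0.5])).round(3))  # [0.269 0.622]
```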

Now consider for a moment that the wavelet transform is a linear transform and so can be represented as a neural network. Also remember that performing hard or soft thresholding on the output coefficients is simultaneously a means of compression and of noise reduction. In fact, I believe there is some evidence that something very much like this actually happens during our brain's processing of visual data.
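As a rough sketch of the idea, here is a single-level Haar wavelet transform written as a matrix multiplication, i.e. a linear "layer," followed by soft thresholding playing the role of a nonlinear activation. The choice of the Haar wavelet, the example signal, and the threshold are all just assumptions for illustration:

```python
import numpy as np

def haar_matrix(n):
    """Single-level Haar analysis matrix for signals of even length n.
    Rows 0..n/2-1 compute scaled pairwise averages, rows n/2..n-1 scaled differences."""
    H = np.zeros((n, n))
    for k in range(n // 2):
        H[k, 2 * k] = H[k, 2 * k + 1] = 1 / np.sqrt(2)      # average (coarse) coefficients
        H[n // 2 + k, 2 * k] = 1 / np.sqrt(2)                # difference (detail) coefficients
        H[n // 2 + k, 2 * k + 1] = -1 / np.sqrt(2)
    return H

def soft_threshold(x, t):
    """Shrink coefficients toward zero; acts like a nonlinear 'activation'."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

signal = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0])   # made-up noisy step signal
coeffs = haar_matrix(8) @ signal                   # the linear "layer"
denoised = haar_matrix(8).T @ soft_threshold(coeffs, 0.2)   # orthogonal, so inverse = transpose
print(denoised)    # small pairwise wiggles are smoothed away, the big step survives
```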

The question I want to ask is: does it make sense to think about multi-layer neural networks in terms of a redundant dictionary of functions, and can we use that to design network update methods? Or does the reverse hold, and could we use neural network update techniques to tell us something about the best m-term approximation problem for redundant dictionaries in wavelet theory?

Wednesday, August 5, 2009

Brainwave Music



I have been thinking it would be a lot of fun to make something that works like the theremin, except instead of using your hands to control it you could use, say, your heartbeat to control volume and brainwaves to control pitch. I doubt you could really get very good music out of it, rather like the experiment above, but I would certainly love to try.

Tetris and AI

I have been something of an avid Tetris fan ever since I bought an old NES cartridge of Tetris at some point during junior high. I have spent whole days doing nothing but playing the classic NES version. A couple of years ago I bought a PS2 version of Tetris (Tetris Worlds), and it has quite a number of different modes of play, but it still doesn't hold a candle to the NES version due to certain horrible choices on the part of the game designers.

At any rate, the Pac-Man competition has given me a certain amount of perspective on how hard game playing can be as a computational problem.

At any rate, although a straightforward search algorithm might be the easiest way to get at least middling results, I'm sure it collapses if you want to get at truly great long-term behavior. Just like in most games, the Tetris search tree is definitely exponential, and if you really want to be a good Tetris player you have to place your pieces in ways that are conducive to future building. Even for mediocre play it would be desirable to look ahead at least 3 moves, since you need at least 3 pieces' worth of blocks to fill a line and clear it.

For the moment let's consider the branching factor of Tetris. For the long piece there are 2 distinct rotational positions, with 10 possible horizontal positions in one rotational configuration and 7 in the other, giving 17 possible placements. For each of the 2 L-shaped pieces all 4 rotational positions are distinct, and there are 9 horizontal positions for 2 of the 4 rotational configurations and 8 horizontal positions for the other 2, so each of the L shapes has 34 possible placements. The T shape has a similar analysis and also has 34 placements. The 2 S-shaped pieces, though, have only 2 distinct rotational configurations each (rotating one of them 180 degrees gives back the same shape), so like the long piece they each have 8 + 9 = 17 possible placements. Finally, the square has only 1 distinct rotational configuration and 9 possible horizontal positions, and so has 9 possible placements.

So over one run of each of the seven pieces we get 17 × 34 × 34 × 34 × 17 × 17 × 9 = 1,737,904,968 possible placement sequences.
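A quick way to sanity-check these counts is to enumerate them from the column widths of each piece's distinct orientations (hard drops only, no tucks or spins, standard 10-wide board; the piece names and width lists here are my own bookkeeping, not anything from an official spec):

```python
BOARD_WIDTH = 10

# For each piece: the column widths of its distinct rotational configurations.
piece_widths = {
    "I": [4, 1],          # 2 distinct rotations
    "O": [2],             # 1 distinct rotation
    "L": [3, 2, 3, 2],    # 4 distinct rotations
    "J": [3, 2, 3, 2],
    "T": [3, 2, 3, 2],
    "S": [3, 2],          # only 2 distinct rotations
    "Z": [3, 2],
}

placements = {p: sum(BOARD_WIDTH - w + 1 for w in widths)
              for p, widths in piece_widths.items()}
print(placements)   # {'I': 17, 'O': 9, 'L': 34, 'J': 34, 'T': 34, 'S': 17, 'Z': 17}

total = 1
for n in placements.values():
    total *= n
print(total)        # 1,737,904,968 placement sequences over one run of all seven pieces
```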

So viewed naively as a search problem Tetris blows up very quickly too; per piece, the branching factor of 9 to 34 placements is in the same ballpark as chess, which has a branching factor of somewhere around 35 (or really about 35^2 per full move, since each player has about 35 options on average). So why is it that chess is hard to play and Tetris easy? The answer is that in chess most of the time only 3 or 4 of those 35 moves are any good, whereas in Tetris, no matter how bad a move is, the game can usually be salvaged. So in chess there are a few good paths to winning which are very difficult to find, while in Tetris there are a vast number of good solutions. Relatedly, the reward schedule is much shorter in Tetris than in chess: a good move is rewarded and a bad move punished within a much smaller number of steps, making it easier to discern which moves were good and which were bad.

This reminds me of the n-queens problem. The n-queens problem is where you try to position n queens on an n by n chess board so that none of the queens has any other queen on its lines of attack. Solving the n-queens problem can be fairly complex, but in practice greedy search with a good heuristic and a random starting point is pretty likely to give you a solution. The reason is that the solutions of the n-queens problem are "dense" in the space of all possible n-queens board positions: if you start at any random board position, it is pretty likely that there is a solution fairly close by in the state space.
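As a sketch of what that greedy local search looks like, here is the standard min-conflicts idea (the step limit and lack of random restarts are my own simplifying assumptions):

```python
import random

def min_conflicts_queens(n, max_steps=10_000):
    """Greedy local search for n-queens: one queen per column; repeatedly move
    a conflicted queen to the row in its column with the fewest conflicts."""
    rows = [random.randrange(n) for _ in range(n)]   # random starting board

    def conflicts(col, row):
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows                              # found a solution
        col = random.choice(conflicted)
        rows[col] = min(range(n), key=lambda r: conflicts(col, r))
    return None                                      # gave up (rare for modest n)

print(min_conflicts_queens(20))
```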

I think Tetris is kind of like a simpler version of Go which isn't adversarial (another reason chess is harder than Tetris). Perhaps a good intermediate challenge for the AI community, before we get a world-class computerized Go player, would be to build a world-class Tetris player.

Monday, August 3, 2009

I'm Graduating

So there is a requirement for the mathematics degree that says you have to take the mathematics subject GRE and score better than the 9th percentile. Now, while this doesn't seem like much of a barrier, there is one little problem. I thought the subject GREs, like the general GRE, were given all the time and that any time I wanted to I could go and fulfill this teeny little requirement. The problem is that the subject GREs are administered only about 3 times a year... and you have to sign up for them waaaay in advance. So I thought that my bad planning had made it impossible for me to graduate this past spring. However... I was wrong. I went and talked to the appropriate people (finally) this summer, and it turns out that the GRE requirement has been changed so that I can replace it with one extra elective course... which I had in fact already taken for fun... So... that made me feel kind of stupid. The whole not-even-checking-if-I-could-graduate thing.

So now, after jumping through the appropriate hoops, it looks like they are going to let me graduate summer semester, even though I am applying only 4 days before the end of the semester and even though, after 5 years, I still haven't had my AP results sent to them. Speaking of which, that is the biggest headache of all. I still have to wait a week or more before they will even send the scores, then I have to go back to the school to sign a little piece of paper that says I want them to release my scores, and I have to make sure I talk to some person named Jason in order to get them on my transcript for the semester that I want them. /headache


But as I was saying, the bright side is that I am getting my degrees. I will shortly be the proud new owner of a Physics B.S., a Mathematics B.S., and a CS minor.

Wednesday, July 29, 2009

My mother wrote it

I borrowed my girlfriend's tiny little laptop to go with me on my trip to Idaho and back for my family reunion. I stayed a few days longer than everybody else and helped my parents with some things they needed done. Just before I was about to get on the bus back to Salt Lake, my mother asked if she could see whether she could type on this itsy-bitsy 8.9 inch laptop. Since I already had a blog post window up so that I could type an entry on my way back, I just let her use it. This is what she typed.


Dear Tim,

Ilove you very much! I hope you have a wonderful time on your birthday!!!!!!!!!!!!!!!!



Love from all

Monday, July 27, 2009

Topics

I have been trying to think of some good blog posts over the past few days and have begun writing a whole bunch of different ones. I keep thinking that either I am not doing a good enough job of writing up a good topic or I don't really have anything worth saying. But I have finally decided on a blog post topic: namely, all the weird topics I have been trying to write up.

Artificial Intelligence and Tetris
Neural computer interfaces
plausibility of visiting distant black holes
the evil sheriff arpaio
gravity and the multiverse (dark energy)
the importance of science fiction
my family reunion
genetic sequencing programs
false discovery rate statistics
problems and benefits of short term economic models
finite symbol combination systems
random wavelet coordinates

Some of these topics will definitely get put up, but others will no doubt fall by the wayside. I just thought it might be enlightening to know that I often start 4 or 5 blog posts for every 1 I publish.

Wednesday, July 22, 2009

Musical Tesla Coils

The Tesla coils are actually making the sounds. This isn't just a light show synchronized to music; the Tesla coils are producing the music!!!!



It is done by modulating how rapidly the coils are switched on and off, so that the arcs have the right frequency content to play the music.

Tuesday, July 21, 2009

Girl Genius

The webcomic Girl Genius is by far the best webcomic that I have ever seen. It was linked to me some time ago by my girlfriend.

THE MOST BRILLIANT WEBCOMIC EVER!!!

I went through the series within a day or two and became current with the story very quickly. I just wanted to share the brilliance. Read the series and enjoy.

Monday, July 20, 2009

Metrics on the Reals

It has been something of a prime activity of my recent life to try to make a coordinate system for non-integer dimensional spaces work. Although I have tried rather a lot of different approaches, I have never really come up with anything satisfactory. For instance, at one point I considered a sort of pseudo-Euclidean quotient space which had the appropriate scaling law for the "volumes" of spheres by radius. However, the space was less well behaved for concave sets. In fact, the "volume" of a subset of a concave set might actually be greater than the "volume" of the whole, which at the very least means that the volume scaling law doesn't hold the same for all shapes in that space. That was pretty much the kiss of death for the idea.

Other ideas have had a much longer and less clear history; for instance, I have been mucking about thinking about using random coordinate systems and fuzzy logic. How random coordinate systems or fuzzy logic might be used in a concrete way to create such a coordinate system I don't know, which of course is why I still give it so much thought. One idea using random coordinate systems is to assume an infinite set of random vectors which have a particular probability distribution for the values of their dot products. Assuming a uniform distribution of the vectors over the surface of the appropriate dimensional sphere gives a very specific expected dot product distribution for each dimension. If we assume a distribution somewhere in between, say, the 2- and 3-dimensional distributions, then perhaps we would have a consistent coordinate system, albeit one with a necessarily infinite number of coordinates. This idea, while pretty, is something that I have not really gotten very far with. I really should put some sweat into it and see if I can make it work.
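As a small numerical illustration of how that dot product distribution depends on dimension (the sample sizes and seed are just assumptions for the sketch; uniform directions are obtained by normalizing Gaussian samples):

```python
import numpy as np

rng = np.random.default_rng(0)

def dot_product_samples(dim, n_pairs=100_000):
    """Dot products of pairs of independent, uniformly random unit vectors in `dim` dimensions."""
    u = rng.normal(size=(n_pairs, dim))
    v = rng.normal(size=(n_pairs, dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # normalizing Gaussians gives uniform directions
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return np.sum(u * v, axis=1)

for d in (2, 3, 4):
    dots = dot_product_samples(d)
    print(d, round(dots.mean(), 3), round(dots.std(), 3))
# The mean is ~0 in every dimension, but the spread shrinks like 1/sqrt(d);
# an "in-between" spread would be the signature of a fractional dimension in this scheme.
```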

All of this is not really the point I was trying to make, though (perhaps I should just rename this post and skip what I was trying to say). What I was thinking about recently is the fact that, because the cardinality of the real numbers is the same as the cardinality of any Euclidean space of any dimension (at least integer dimensional ones, and presumably non-integer dimensional ones too), you can find a bijective mapping from a space of any dimension onto the interval (0,1).

In other words, as long as the cardinality of the point set of a space of non-integer dimension is the same as the cardinality of the integer dimensional ones, then lurking somewhere in the set of functions on the interval between 0 and 1 is the metric for any dimensional space you care to think of.

For this reason I have been thinking that perhaps the best way to think about non-integer dimensional spaces is to think about real number theory: the kind of stuff where you talk about recursive function mappings of the real numbers, for instance the mapping which we use as the basis of the decimal system. The decimal system can be thought of as the output of an algorithm which maps the interval from zero to one onto itself. Say you want a decimal representation of a number: you take its integer part, subtract it out, multiply by 10, take the integer part of that, and rinse and repeat.
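Here is a tiny sketch of that digit-extraction map (the base and number of digits are arbitrary choices, and floating point roundoff will eventually pollute the digits, so this is only illustrative):

```python
def digits(x, n=10, base=10):
    """First n base-`base` digits of the fractional part of x,
    via the repeated multiply-and-take-integer-part map described above."""
    x = x - int(x)              # keep only the fractional part
    out = []
    for _ in range(n):
        x *= base
        d = int(x)
        out.append(d)
        x -= d                  # map the interval [0, 1) back onto itself
    return out

print(digits(0.625))        # [6, 2, 5, 0, 0, 0, 0, 0, 0, 0]
print(digits(1 / 3, 6))     # [3, 3, 3, 3, 3, 3]
```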

Perhaps by considering mappings of the unit square to itself we might come up with a suitable metric.

Wednesday, July 15, 2009

How much my blog is earning me

Interestingly, I don't know how to get a good count of how many people access my blog by any means other than the ads thing. My ads account will tell me how many people have visited my blog, but my blog won't... odd. At any rate, the ads account estimates how much money the publisher of an ad will make per thousand impressions. My site has an impressive $0.05 estimated eCPM. Since my blog has had an impressive 42 page impressions so far this month (how that happened I shall never know), I am proud to announce that my page is making an estimated 5.8 microbucks per hour... yup... Somehow I think the amount it costs to host my page is rather higher than the payout from the advertisers. There must be some sort of power law relating how much money a blog makes Google to the number of blogs making that much: millions of blogs that actually cost Google money, and then like 0.01% of them that make money. And since hosting blogs is so much less bandwidth and storage intensive than hosting video, it suddenly makes sense that YouTube is losing hundreds of millions of dollars a year despite its vast popularity.
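For what it's worth, here is roughly how that number comes out (my own back-of-the-envelope arithmetic, assuming about two weeks of July so far; eCPM is dollars per thousand impressions):

```python
ecpm = 0.05           # estimated dollars per 1000 page impressions
impressions = 42      # page impressions so far this month
hours = 14.5 * 24     # roughly July 1 through the afternoon of July 15

earnings = ecpm * impressions / 1000
print(earnings)                      # 0.0021 dollars so far
print(earnings / hours * 1e6)        # ~6 microbucks per hour, close to the 5.8 above
```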

Peak Posting

As is true for most things, there is a point where quality and quantity of blog posts combine to make for an optimum flow of readers. In general you need volume of posts in order to draw readers, but you need quality of posts in order to make them come back. But since my posts tend to have neither quality nor quantity, I suppose this question is (like most everything else on this blog) purely academic.

The simplest model I can think up is this: you have a quality index q between 0 and 1, which represents the likelihood that someone who stumbles upon the blog returns in the future, and a quantity index p between 0 and infinity, which represents how many posts you make per unit time and roughly determines the number of readers you pull in with those posts. I would say the number of readers you pull in probably varies something like log(p+1). Let us furthermore assume that a person has a fixed total amount of time to devote to the blog, so that the total quality of all the posts is constant: Q_total = p*q. The last part of the model is how to use these factors to describe reader flow. Since we attract C*log(p+1) new readers per unit time, and a fraction q of readers come back, we have a recurrence relation for the expected number of readers: R_{t+1} = q*R_t + C*log(p+1). Letting L = C*log(p+1), this is R_{t+1} = q*R_t + L. Now suppose there is a limiting number of readers; it must satisfy R_l = q*R_l + L. Solving, we get R_l = L/(1-q), which makes sense because the total is a geometric series in q (I thought it was; I should have just trusted myself). So if we take the long-time limit, we see that to maximize the number of readers we have over the long term we should make our quality as high as possible at the cost of quantity.
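A quick numerical check of that limit, with made-up values of C, p, and q (nothing here is calibrated to anything real):

```python
import math

C, p, q = 10.0, 2.0, 0.8           # arbitrary illustrative parameters
L = C * math.log(p + 1)            # new readers pulled in per unit time

R = 0.0
for _ in range(200):
    R = q * R + L                  # returning readers plus newly attracted ones
print(R, L / (1 - q))              # both come out to about 54.93
```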

Of course this was based on the assumption that the quality of a post is directly proportional to the amount of time spent on it, when in reality I suppose quality is more like the logarithm of the time spent. Sure, you can always make a post better, but perhaps only if you are willing to spend some fraction again of all the time you have spent on it up to that point.

Tesla coils are cool

I have had a love of the Tesla coil for a very long time. I'm not sure when it began, but it was either in junior high or early on in high school.

http://en.wikipedia.org/wiki/Tesla_coil

Basically, in a Tesla coil you take some low voltage power source and step it up to a few kV, which isn't hard. The really nifty part is where you then take that few kV and run it through a spark gap and into the primary coil of a second transformer. The reason this is all nifty-like is that the spark takes an extremely small amount of time and has a waveform with lots of very high frequency components. Because of this, the output of the secondary coil gets a big kick both in voltage and in frequency. So you can drive a Tesla coil with good old 60 Hz and get 10 kHz out of the secondary coil. I haven't really given much thought to the design of Tesla coils for the last couple of years, and it just sort of makes me happy that I understand the dynamics so much better now.

Tuesday, July 14, 2009

I met a guy named Stan

I was walking home with some edibles when a man walked up to me with the refrain "hey man I am just tryin to scrounge up a buck. I just want a buck for a budweiser I been walkin around so long my feet hurt just." The guy seemed earnest enough and I gave him $2. I didn't really care what it was he wanted the money for. All that really mattered was that, very clearly, one dollar would make a significant difference to this person, whereas to me (at least at the moment, though hopefully it will stay this way) $1 doesn't really make much of a difference. After I had given him the money he introduced himself as Stan and told me that next time we met he hoped he could pay me back.

At any rate, this started me thinking about utility. Utilitarianism is the idea that one should strive for the greatest good for the greatest number, but perhaps what one should really be trying to do is maximize the total utility of a group of people. If you have 2 people with no money and no place to live and you give one of them $10,000 and 2 houses to live in, the overall utility goes up. But if instead you give each of them $5,000 and one house to live in, the gain in utility will be greater. This is because for the very poor the utility of a single dollar is very high, whereas the utility of a single dollar to the average American is very low.
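A toy version of that comparison, using a logarithmic utility function purely as a stand-in for diminishing marginal utility (both the numbers and the choice of log are assumptions, not anything principled):

```python
import math

def utility(dollars):
    """A concave utility function: each extra dollar is worth a little less."""
    return math.log(1 + dollars)

# Give $10,000 to one of two broke people, or $5,000 to each:
print(utility(10_000) + utility(0))        # about 9.21
print(utility(5_000) + utility(5_000))     # about 17.03, a higher total utility
```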

Music state space exploration

Obviously the state space of music is extremely large and humans will never really thoroughly explore it. At least no one particular human will, since listening to all possible music sounds like one of those endeavors that would take a tad longer than the age of the universe. Of course, in order to make the set of possible musical selections finite, you need to simultaneously discretize the signal and limit its duration.

Even if we confine ourselves to sampling at, say, the rate of CD audio, which is 44.1 kHz or 44,100 samples a second, and consider only a single minute of music, that makes us consider a vector space of 2,646,000 dimensions. So even if we allow only, say, 100 different intensities at each time step, that allows for 100^2,646,000 different sound clips.
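Just to spell out that arithmetic (the 100-level discretization is of course an arbitrary assumption):

```python
sample_rate = 44_100          # CD-quality samples per second
seconds = 60
levels = 100                  # assumed number of allowed intensity values per sample

dimensions = sample_rate * seconds
print(dimensions)             # 2,646,000 dimensions for a one-minute clip
# levels**dimensions is far too large to print out in full, but since
# 100 = 10^2 it equals 10^(2 * 2,646,000) = 10^5,292,000 possible clips.
print(f"10^{2 * dimensions}")
```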

Every 1-minute sound clip (with the discretized intensity restriction kept in mind) is some point in this vector space. Now, although it would be crazy to think that any human being might really thoroughly explore this space (as in listen to most of, or even a tiny fraction of, all possible sound clips in the space), we can still ask how well we have explored it.

Obviously if you look at, say, only Gregorian chant you are exploring a smaller region of the music state space than if you also include soft rock, heavy metal, classical music, etc.

So I propose a project that I will probably never do but that intrigues me nonetheless: why not use the convex hull of pieces of music as a measure of the level of exploration of the music state space? Say we pick a representative sample of rock music, take the convex hull of these points in our music space, and take the ratio of the volume of the convex hull to the total volume of the space as a measure of the level of exploration.

But say we took as "music" the basis vectors of the space. Then the convex hull of these "music" points would be the simplex for that dimension, and while that might actually have a relatively low volume for the space as a whole, it is still a volume which we can't realistically expect our actual music to exceed by much, and obviously a basis vector (namely a vector with a single 1 and zeros everywhere else) is not something that really explores the music state space. So what we probably really want is to do something like take a spectrogram of the music and do our state space analysis with that.

Posting Times

I find it decidedly odd that this post was first posted about 2 minutes after the previous one, but its time stamp is many hours away from the other one. I don't really like this time stamping system. It means that I can take months to finish a post, but when I finally post it, it will show up as though I posted it when I began it, not when I finished it.

Monday, July 13, 2009

Function Spaces

One rather big surprise for me in my last little bit of undergraduate education is that both physicists and mathematicians often think of functions as being points in a vector space. This can be an incredibly powerful idea; for instance, the Fourier transform is a projection onto a set of orthonormal basis vectors which are the appropriate family of complex exponentials. In the case of quantum physics these changes of basis take on actual physical meaning. For instance, one can formulate wavefunctions as functions of position or of momentum, and these two wavefunctions are related to each other by a Fourier transform, in other words by a change of basis. In fact the connection is even deeper than that. The Heisenberg uncertainty principle is a side effect of the fact that a function compactly supported in one of these bases must have infinite support in the other. This is obviously not true of all bases we might choose (wavelet bases, for instance), but the special bases that do have this sort of dual relationship seem to hold a lot of interest for us. In fact, an important part of the apparatus of quantum mechanics is using information about the commutativity of different operators. The fact that the position operator and the momentum operator do not commute implies that their bases are necessarily "inconsistent," in the sense that you can never have a wavefunction that is sharply concentrated in both at once.

But when we talk about these function spaces, generally what we are talking about is L2(R), which is to say the space of square integrable functions on the reals. In general we could expand our horizons to include, say, L27(R), which is to say all functions for which the integral of the 27th power of the absolute value of the function is finite. But no matter which such space you choose, no integrable function space is going to include, say, the function x^2. It might seem odd at first that by far the most worked-with function space, L2(R), doesn't even include the polynomials (not any of them). But the reason is that we like dealing with functions which have a finite amount of area under their (squared) curves. This fact doesn't tend to ever be much of a problem, because if you want a function which isn't in L2(R) then you just truncate it at some finite limit M and then let M go to infinity.

In fact it seems (at least for nice functions) that the Fourier basis works not only for functions in L2(R) but for most any reasonable function on the reals. But there is of course a problem. I glossed over a little issue earlier: the Fourier transform of a function will not always perfectly reconstruct the function. If the function is continuous then all is well and the function is in fact exactly reconstructed. However, at points of jump discontinuity the Fourier reconstruction fails to reproduce the function and instead takes on the value of the midpoint of the discontinuity. The reason this isn't a problem in L2(R) is that we view functions as being "the same" if the "distance" between them is 0, meaning basically that they differ only on a set of measure 0. Now, when you move to functions which do not have a finite norm, suddenly things become a whole lot more complicated: because the functions involved are no longer of finite size, it is no longer clear that the usual notion of "distance" between a function and its reconstruction from a Fourier (or some other) basis even makes sense.
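A small numerical illustration of that midpoint behavior, using the partial Fourier series of a square wave (my own throwaway example, assuming numpy; the evaluation points are arbitrary):

```python
import numpy as np

# Partial Fourier series of an odd square wave of period 2*pi:
# f(x) = (4/pi) * sum over odd k of sin(k*x)/k. At the jump x = 0 every
# partial sum is exactly 0, the midpoint between -1 and +1.
def partial_sum(x, n_terms):
    ks = np.arange(1, 2 * n_terms, 2)
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, ks)) / ks, axis=1)

x = np.array([0.0, 0.1, np.pi / 2])
for n in (10, 100, 1000):
    print(n, partial_sum(x, n).round(4))
# The value at x = 0 stays pinned at 0 (the midpoint of the jump), while points
# away from the jump approach +1 (apart from the Gibbs overshoot near the jump).
```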

Even more disturbing is that when we think about the "distance" between x and x^2 we come up with infinity. From the physical perspective it is actually a good thing that functions like x^2 are not part of the function space used to describe the real world; otherwise we would allow infinite energy solutions to the wave equation. But I can't help being deeply uneasy about the fact that there is no good way to incorporate even simple divergent functions into a nice function vector space like L2(R).