Tuesday, June 30, 2009

The NIF is online!

Sad to say, it took me this long to find out that the National Ignition Facility came online in March of this year!!!

http://www.wired.com/science/discoveries/news/2009/05/gallery_nif?currentPage=1

The pictures are amazingly cool looking, and with this machine they are expected to finally be able to get more energy out than they put in!!! In case you are wondering, I checked on ITER too, and they are not expecting to switch on until 2018. Makes you wonder if inertial confinement fusion (ICF) isn't the future of fusion after all.

blowing up Jupiter again

Because it has been so very long since the last time I discussed the blowing up of Jupiter on this blog, I figure it is time again. Furthermore, I really think it would be very healthy for me to try to make a blog post every day. Up to this point I have always been serious about the methods one might employ to blow up poor Jupiter, but now I think it is time to set the constraint of seriousness aside. I am going to start posting methods of blowing up Jupiter that just don't make any practical sense. For instance, we could try to turn Jupiter into a black hole by shooting a single extremely energetic proton into its atmosphere. If the proton had an energy of, say, 10^45 J, then the first particle it hit would collapse into a positively minuscule black hole hurtling toward the planet, giving off insanely high-energy Hawking radiation in every direction. This black hole would probably evaporate, but the high-energy radiation coming out of it would cause a huge pressure wave, which would itself spawn multiple black holes, and the whole planet would become one huge explosion. Unfortunately, the formation of all these tiny black holes would probably ensure that instead of collapsing into a black hole afterwards, the planet would just explode, completely destroying the solar system and very probably any other system within, say, a few hundred light years. Jupiter is a loss then, but the shockwave could very possibly cause the collapse of hundreds or thousands of stars into black holes. Admittedly this doesn't seem very appealing, but it is just such a pretty picture: sending one particle into a planet to make an apocalyptic explosion.
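Just to put the scale of that number in perspective, here is a rough back-of-the-envelope comparison with Jupiter's gravitational binding energy (a sketch assuming a uniform-density Jupiter, which only gets the order of magnitude right):

```python
# Rough sanity check: compare the hypothetical 1e45 J proton with the
# gravitational binding energy of Jupiter (uniform-density approximation,
# which only gives the right order of magnitude).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_jup = 1.898e27     # mass of Jupiter, kg
R_jup = 6.99e7       # mean radius of Jupiter, m

binding_energy = 3 * G * M_jup**2 / (5 * R_jup)   # U = 3GM^2/(5R), ~2e36 J
proton_energy = 1e45                               # the absurd single-proton energy, J

print(f"Binding energy of Jupiter: {binding_energy:.1e} J")
print(f"Proton energy / binding energy: {proton_energy / binding_energy:.1e}")
# The proton would carry roughly half a billion times the energy needed
# to completely unbind the planet, so "explosion" is putting it mildly.
```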

The weight of light continued

The previous weight-of-light post was inspired by thinking about how big the mass of the cosmic microwave background radiation is. As per my previous post, an individual photon does not have mass, but a collection of photons generally will. Since the CMB is isotropically distributed, it has zero net linear momentum, and so we can find its rest mass by a simple application of E = mc^2. The CMB is essentially a perfect blackbody, so we can use the energy density relation u = 8*pi^5*t^4 / (15*h^3*c^3), where t = kT is the temperature expressed in joules. The CMB has a temperature of about 2.7 kelvin, which translates to 3.7 x 10^-23 joules, and that yields an energy density of about 4 x 10^-14 J/m^3. A quick google search provides a radius for the universe of roughly 78 billion light years. A light year is 9.4605 x 10^15 meters, so the volume is 4/3 * pi * r^3 = 4/3 * pi * (78,000,000,000 * 9.4605 x 10^15 m)^3 = 1.6 x 10^81 m^3. Multiplying energy density by volume gives the total energy content of the CMB: about 6.4 x 10^67 J. Translating this into mass via m = E/c^2 gives m = 7.1 x 10^50 kg.

The mass density of visible matter in the universe is estimated at about 3 x 10^-28 kg/m^3, so using the volume of the visible universe from above, the total mass of visible matter comes out to about 4.8 x 10^53 kg. So the cosmic background radiation weighs in at roughly a thousandth of the mass of the visible matter in the universe, which is still a not insignificant contribution to the overall mass. Anyway, I thought it would be fun to post t3h calculation.
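For anyone who wants to check the arithmetic, here is a short script that redoes the calculation. It is just a sketch using the same rough numbers quoted above (in particular the 78-billion-light-year radius from that quick google search), so it inherits the same approximations:

```python
import math

# Physical constants (SI units)
k_B = 1.381e-23   # Boltzmann constant, J/K
h   = 6.626e-34   # Planck constant, J*s
c   = 2.998e8     # speed of light, m/s

# Blackbody energy density u = 8*pi^5*(kT)^4 / (15*h^3*c^3)
T_cmb = 2.7                                       # CMB temperature, K
t = k_B * T_cmb                                   # temperature in joules, ~3.7e-23 J
u = 8 * math.pi**5 * t**4 / (15 * h**3 * c**3)    # ~4e-14 J/m^3

# Volume of the visible universe, using the radius quoted in the post
ly = 9.4605e15                                    # meters per light year
r = 78e9 * ly                                     # radius in meters
V = 4/3 * math.pi * r**3                          # ~1.7e81 m^3

E_cmb = u * V                                     # total CMB energy content, J
m_cmb = E_cmb / c**2                              # equivalent mass, ~7e50 kg

# Compare with the visible matter
rho_visible = 3e-28                               # estimated density, kg/m^3
m_visible = rho_visible * V                       # ~5e53 kg

print(f"CMB energy density : {u:.2e} J/m^3")
print(f"CMB total mass     : {m_cmb:.2e} kg")
print(f"Visible matter mass: {m_visible:.2e} kg")
print(f"Ratio CMB/visible  : {m_cmb / m_visible:.2e}")
```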

We Won!!!!

I can't believe I forgot to put up a post about this. We won the Pacman competition: our team came out on top over Berkeley, and I didn't have to take the final in AI. I'd write a more detailed story, but it's late and I am going to bed.

Monday, June 29, 2009

The weight of light

It is a fairly commonly known fact that the photon has no mass. While this is true for a single photon, it is not necessarily true for collections of photons. Let's take a step back for a moment and say what we mean by "rest mass". In both Newtonian dynamics and relativity, the energy of a free object is a function only of its velocity. In Newtonian dynamics the zero of the energy scale is chosen arbitrarily and only changes in energy have meaning. In other words, it doesn't matter whether you say a baseball going 2 m/s has an energy of 4 joules or -10 joules; all that matters is that when the ball goes from 2 m/s to 4 m/s its energy goes from 4 to 16, or from -10 to 2. Once a zero was picked for the energy scale you could talk about changes in energy, and that was all that mattered. Relativity changed all that, because in relativity energy has a more absolute meaning. I don't remember the derivation, but the end of the story is that the equation E^2 = (pc)^2 + (mc^2)^2 holds, where p is momentum, m is mass, E is energy, and c, it goes without saying, is the speed of light. Solving this equation for mass you get m = c^-2 * sqrt(E^2 - (pc)^2). Light obeys the relation p = h/L, where L is its wavelength. Also E = hf and f = 1/T = c/L, so E = hc/L. Plugging these into the first equation we get m = c^-2 * sqrt((hc/L)^2 - (h/L * c)^2) = 0, and all is well. So you might think that the mass of a system of two photons would also be zero: 0 + 0 = 0, right? Well... maybe and maybe not. A single photon has no mass because it just so happens that for a photon E = pc, so E^2 - (pc)^2 = 0. But when you consider a system of two photons, while its energy is just the sum of the energies of the two particles, the momentum of the system is the vector sum of the two momenta. So if you have two photons of the same wavelength moving exactly opposite to each other, the system has zero total momentum. Now you find that the system of two photons does indeed have mass: m = 2h/(cL).
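Here is a small sketch of that bookkeeping in code: it computes the invariant mass of an arbitrary collection of photons from the total energy and the vector sum of the momenta. The wavelengths and directions below are just made-up illustration values:

```python
import math

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def invariant_mass(photons):
    """Invariant (rest) mass of a collection of photons.

    Each photon is given as (wavelength_in_meters, direction_unit_vector).
    Uses m = sqrt(E_total^2 - |p_total|^2 c^2) / c^2.
    """
    E_total = 0.0
    p_total = [0.0, 0.0, 0.0]
    for wavelength, direction in photons:
        E = h * c / wavelength          # photon energy E = hc/L
        p = h / wavelength              # photon momentum magnitude p = h/L
        E_total += E
        for i in range(3):
            p_total[i] += p * direction[i]
    p_mag = math.sqrt(sum(comp**2 for comp in p_total))
    return math.sqrt(max(E_total**2 - (p_mag * c)**2, 0.0)) / c**2

L = 500e-9  # 500 nm, an arbitrary visible wavelength

# One photon alone: massless.
print(invariant_mass([(L, (1, 0, 0))]))                     # ~0

# Two photons traveling in the same direction: still massless.
print(invariant_mass([(L, (1, 0, 0)), (L, (1, 0, 0))]))     # ~0

# Two photons traveling in opposite directions: mass 2h/(cL).
print(invariant_mass([(L, (1, 0, 0)), (L, (-1, 0, 0))]))    # ~8.8e-36 kg
print(2 * h / (c * L))                                      # same value
```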

So here is the big idea. I want to know what happens when two photons traveling perfectly parallel to one another, one in front of the other by a little distance, hit a mirror. At first they are traveling together, so their momenta add and we have a system with zero mass. But after the first photon hits the mirror and gets reflected, we have a system with mass, because now their momenta cancel. This is where an awesome experimental setup comes into play. LIGO is built to be an incredibly sensitive instrument for the direct detection of gravity waves. This is done by making a gigantic, perfectly tuned Michelson interferometer. You watch the fringe on the interferometer and look for changes. Theoretically, if there are no gravity waves there shouldn't be any changes to the fringe, but of course because the thing is so damn sensitive it actually picks up vibrations from people walking around a mile away and stuff like that. Fortunately, the changes in the fringe from local vibration look different from what you would see from a gravity wave. Now, the thing is that we still haven't detected a gravity wave with LIGO. So here is the deal: why not try to make a gravity wave right smack on top of the damn thing? The idea is to shove an extra little laser pulse on top of the continuous beam already circulating in the interferometer, adding a little bit to the pulse every time it passes. The pulse would have to be extremely brief and extremely powerful for it to make a measurable gravity wave. I don't really have the necessary knowledge to say exactly how powerful it would have to be to be detectable, but as the pulse hit a mirror its mass would increase to a peak when half the pulse had been reflected, and then go back to zero as the rest of it got reflected. The mass generated by the pulse, even at its peak, would be absolutely minute, but it would go from zero to full mass in the tiniest fraction of a second. I don't know if that violence would be enough to make a measurable gravity wave either. The point is, it is a totally freaking awesome idea: use the device built to detect gravity waves to make them!!!!

To put things in perspective, the gravity wave would be extremely, extremely weak. Even assuming we could get the total energy in the burst up to somewhere around a petajoule, the mass generated by the pulse at its peak would only be about ten grams (m = E/c^2 = 10^15 J / 9 x 10^16 m^2/s^2 ≈ 0.01 kg). So a bounce event at one of the mirrors would be something like a small pebble's worth of mass appearing and vanishing in the tiniest fraction of a second. On top of that tiny mass we are faced with an additional eleven orders of magnitude or so of suppression coming from the gravitational constant. So even a petajoule burst would probably not be enough.

Actually, I just realized that I've been ignoring the fact that the momentum from the light being reflected goes into the mirror, so when you consider the mirror and photons as a single system the total rest mass is unchanged. So the whole idea is dead in the water. Still, it was an interesting thought.

Wednesday, February 4, 2009

Rock Paper Scissors Lizard Spock

I am quite the fan of rock paper scissors. Usually the attraction of playing rock paper scissors is that the outcome is viewed as random, where random here means that all outcomes have equal probability. Because there is no obvious advantage to choosing, say, rock over scissors, we assume the choice between these elements is made at random, in which case the game really does provide a random decision process. But suppose the entities playing the game are not capable of truly random behavior (e.g. they are human). In that case there is now an incentive to try to predict what your opponent is going to do, especially if you are playing a game of RPS where the victor is decided by the best two out of three or the best three out of five, because then you have the data of what happened on the previous plays to help decide what to do now. I once heard of a computer program that was written to analyze the probability distribution of the behavior of human opponents in a game of RPS, and after a little while of observation it was able to beat the human player somewhere around 60-70% of the time.

I have been thinking rather a lot lately about different methods by which it might be profitable to do behavior prediction of agents playing games. The rock paper scissors type of game seems to be a rather simple and interesting testing ground on which I could try out my methods of strategy analysis. Today I stumbled upon a game called Rock Paper Scissors Lizard Spock, which is a simple generalization of the idea of an RPS-type game:

http://www.samkass.com/theories/RPSSL.html

In RPS we are using a directed version of the complete graph on 3 nodes, K3, which is just a triangle. The win/loss relationships in RPSSL are more complicated because we are using K5 instead, which is more than just a single cycle. We can make a further generalization of this type of game by using the complete graph on n nodes, Kn, and then randomly choosing a direction for each edge, denoting which node wins in a contest between the two. While admittedly this doesn't make for very interesting humanly playable games, it does provide a convenient way to generate a large field for testing ideas for strategy prediction.
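Here is a small sketch of what generating such a game might look like. This is just one way to set it up; the names and the win/lose encoding are my own:

```python
import random

def random_rps_game(n, seed=None):
    """Generate a random generalized RPS game on n moves.

    Returns a dict beats[(a, b)] -> +1 if move a beats move b,
    -1 if a loses to b, and 0 on a tie (a == b). Every pair of
    distinct moves gets a randomly chosen winner, i.e. a random
    orientation of the complete graph K_n (a random tournament).
    """
    rng = random.Random(seed)
    beats = {}
    for a in range(n):
        beats[(a, a)] = 0
        for b in range(a + 1, n):
            if rng.random() < 0.5:
                beats[(a, b)], beats[(b, a)] = 1, -1
            else:
                beats[(a, b)], beats[(b, a)] = -1, 1
    return beats

# Ordinary RPS is the special case n = 3 with the cyclic orientation:
# rock beats scissors, scissors beats paper, paper beats rock.
game = random_rps_game(7, seed=42)
print(game[(0, 1)], game[(1, 0)])   # one random matchup and its reverse
```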

The method used in the aforementioned RPS-playing computer program was Markov chaining, which basically means the program looks at the last couple of plays and takes them as an indicator of what the human will do next. The program estimates the human player's conditional probabilities and then plays whatever the probabilities suggest it should. For a human playing RPS this works well. Part of the reason is that the state space for RPS is small, meaning the previous play combinations have only a relatively small number of possibilities. There are 9 possible joint plays at each step in the game, so if we want decent probability estimates based on a depth-1 Markov chain we only need to observe behavior for perhaps a couple dozen turns. Of course, a branching factor of 9 is still fairly large, and no human player is going to play the computer long enough to make depth-5 probabilities accurate, with 9^5 = 59049 possible histories. But I think the depth-1 probabilities would give decent performance and depth-2 probabilities better. Even depth-3 probabilities, with 9^3 = 729 histories, are certainly within reach for long games. Since I am interested primarily in being able to beat other computers and not other humans, even depth-8 or depth-9 Markov chains are not out of reach computationally. I would be willing to hypothesize that computers playing against other computers with the same sort of Markov models would generally tend to end up tying each other. I think it would be an interesting exercise to code up such generalized RPS games and pit different prediction strategies against one another. I'll post if it ever gets done.
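A minimal sketch of what such a depth-k Markov predictor might look like, built on the random game generator above. The class name and structure are my own; this is just one straightforward way to do it, not how the program I heard about was written:

```python
from collections import defaultdict, Counter
import random

class MarkovPredictor:
    """Predicts an opponent's next move from the last `depth` joint plays."""

    def __init__(self, n_moves, depth=1, seed=None):
        self.n = n_moves
        self.depth = depth
        self.history = []                      # list of (my_move, their_move)
        self.counts = defaultdict(Counter)     # context -> Counter of their next move
        self.rng = random.Random(seed)

    def observe(self, my_move, their_move):
        """Record one round, updating the conditional counts for the current context."""
        context = tuple(self.history[-self.depth:])
        if len(context) == self.depth:
            self.counts[context][their_move] += 1
        self.history.append((my_move, their_move))

    def predict_opponent(self):
        """Most frequent opponent move seen after the current context."""
        context = tuple(self.history[-self.depth:])
        counter = self.counts.get(context)
        if not counter:
            return self.rng.randrange(self.n)   # no data yet: guess uniformly
        return counter.most_common(1)[0][0]

    def choose_move(self, beats):
        """Pick a move that beats the predicted opponent move, if one exists."""
        predicted = self.predict_opponent()
        winners = [m for m in range(self.n) if beats[(m, predicted)] == 1]
        return self.rng.choice(winners) if winners else self.rng.randrange(self.n)
```

In an actual match you would wire two of these predictors (possibly with different depths) against each other on the same random game: each turn call choose_move for both, score the result with the beats dict, and feed both moves back in with observe.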

Monday, February 2, 2009

Pacman competition

This semester I am taking an upper division computer science course on artificial intelligence, more or less for fun. The CS department tries to only let full CS majors into the upper division classes, so I had to go through a few people in order to actually get into the class. So far the class has been fairly interesting and quite a bit of fun. The first two programming projects in the class are centered around code for a Pacman game. This code (or more likely a modification of it) will form the basis for a competition at the end of the semester. The most exciting part about this competition is that it will have two parts: an intra-university competition here at the U, and an inter-university competition between the U and an AI class being run at Berkeley. I was planning on taking this course before I knew about the competition, but the competition has elevated the class to very near top priority among my classes.

Although the complete details of the competition have not yet been revealed, the first two programming projects have involved working with the Pacman code, and I have been thinking about some Pacman-related problems as a result. Therefore I think I am going to begin posting about Pacman code and Pacman-related problems. For now just know that my Pac-agents are destined for victory!!!!!

P.S. Given a set of integer coordinates, and allowing only movements on a grid, what is the most efficient algorithm for calculating the minimum path distance needed to visit all the points in the given set?
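I don't know the most efficient algorithm, but here is a baseline sketch of one standard exact approach: treat it as a shortest Hamiltonian path problem and solve it with the Held-Karp dynamic program over subsets. I'm assuming the start point is free to choose and the grid has no walls, so the distance between two points is just the Manhattan distance; with walls (as in Pacman) you would substitute real maze distances:

```python
from itertools import combinations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def min_visit_all(points):
    """Length of the shortest grid path visiting every point in `points`.

    Held-Karp dynamic programming over subsets: dp[(S, i)] is the cheapest
    way to visit the set of points S ending at point i. Runs in
    O(2^n * n^2) time, so it is only practical for n up to roughly 20.
    """
    n = len(points)
    if n <= 1:
        return 0
    dist = [[manhattan(points[i], points[j]) for j in range(n)] for i in range(n)]
    dp = {}
    for i in range(n):
        dp[(1 << i, i)] = 0                      # we may start at any point
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            mask = sum(1 << i for i in subset)
            for last in subset:
                prev_mask = mask ^ (1 << last)
                dp[(mask, last)] = min(
                    dp[(prev_mask, j)] + dist[j][last]
                    for j in subset if j != last
                )
    full = (1 << n) - 1
    return min(dp[(full, i)] for i in range(n))

print(min_visit_all([(0, 0), (2, 3), (5, 1), (4, 4)]))   # -> 12
```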

Wednesday, January 21, 2009

This is the Nadir

I am at something of a crossroads, being about (hopefully) to embark on a graduate education in mathematics or physics. Many things are uncertain for me; most fundamental among these is my financial instability. Since essentially 100% of my financial support has been dependent on my participation in the realm of undergraduate academia, I find this moment unsettling. I can see far enough ahead to know that the sources of money I have relied upon thus far cannot carry me for more than a few extra months, but I do not yet have any other means of support. President Bush is now former President Bush, and we finally have a leader who has the potential to be more than an embarrassment. I can feel my mathematical and physical knowledge approaching a sort of critical mass which will allow me, for the first time, to really function as a mathematician and physicist in the professional sense.

I feel that this is a time filled with potential both for me individually and from a wider perspective. Usually the declaration that one is at a low point of life is a negative statement. But implicit in the declaration of the lowest point is the idea that this is the point where things took the turn upwards.

This is probably not even a local minimum for my life, but that doesn't matter; the declaration of a nadir is intended as the declaration of an upward arc in the future.


P.S. As an interesting side note, I will mention that as I wrote this I got an e-mail confirmation of a tutoring position. Looks like I was within about 3 minutes or so of hitting the low point of my financial situation. Now let's hope that my graduate admissions go well.

Wednesday, January 7, 2009

Normal Sea

When I think of the word normalcy I can't help but think of it as split into "normal sea". While it is often good to talk about things as though there were no real "normal", that is not actually the case. Suppose you ask me to define "normal" and I tell you that normal, in the sense I am using it, means conforming to the standard or the common type (thank you dictionary.com). You might say that nobody really fits this description, since everybody is quite different from everyone else, and so there really is no "normal". But consider a shuffled deck of cards: there are 52! different configurations, and you can't point to a particular configuration and say "that's the normal one!"; it wouldn't make any sense. Intuitively, though, what we mean is that a "normal" configuration is one that people would believe occurred by chance. If the deck is perfectly ordered 2 through ace and by suit, we would probably conclude that the deck had been intentionally ordered that way.

It is true that any such judgement of which card configurations are "random" (and therefore normal) and which are not requires putting some sort of structure on the cards which is not fundamental to them as objects. Likewise, I agree that there is no fundamental structure intrinsic to human beings which allows us to say what is "normal". However, that does not rob the word of all meaning or use. To be normal means a great many things, and to be abnormal has an even greater myriad of meanings. Rather ironically, once we consider about 20 different independent criteria of "normal", the majority of people will end up being highly abnormal with respect to at least one of these criteria (assuming the standard bell curve). For example, if being "abnormal" on a single criterion means falling in roughly the outer 5% of the distribution, then the chance of being normal on all 20 independent criteria is about 0.95^20 ≈ 36%, so nearly two thirds of people are abnormal somewhere.

Oftentimes, however, I observe the realm of "normal" wherein a vast sea of people live. It seems that as a group the majority of human beings strive for this concept of normalcy. To be normal is to have privilege and power; society bends to the will of the normal, or at least to the will of what is perceived as normal. In many ways what is normal is actually determined not by the general consistency of a populace but by the distribution of power within a culture.

Tuesday, January 6, 2009

as time goes by

Perception of time is a funny thing. We can perceive time to be going very quickly or slowly depending on our mental state. I have always found it fascinating that people are capable of assimilating vast amounts of information in the form of written stories that they would not be able to consume if it were in the form of a movie. That is not to say that for every person or every book, the book would take less time to convey the information of a story than a movie. Rather, I'm saying that the upper limit of how quickly a story of a particular length can be absorbed is, in the case of a movie, limited by the regular flow of time. Just as in a book, a movie can give the viewer the impression of months or weeks passing when only a few seconds of viewer time have passed. In a book all it takes is "weeks passed", while in a movie a quickened sequence of images (say, of the sun rising and setting) would also do the trick.

Interestingly, any movie represents vastly more information than any book. That is, if we wanted to write down a sequence of symbols that would allow someone to reproduce a book, our job is already done: the book is composed of a sequence of symbols. For a movie the task is a little harder, but it is also fairly easy to do. In fact, because digital storage is more robust than analog storage, most movies are now recorded digitally, which is to say they are recorded as a sequence of symbols. Most of the information we capture with a movie is nothing that anyone will ever give the slightest attention. The fact that there are exactly 137,021 leaves distinctly visible in scene 2 at second 12.6 is not something a viewer is capable of taking away from the film. The precise nature of the tapestries behind the dueling figures is not important. In a book the author could not give us such details even if they wanted to, because the line "he tied his shoe" could then take hundreds of pages to fully describe.

Because of the incredibly streamlined way that the information of a story is contained in a stream of language, it is possible to experience a book much, much faster than you can experience a movie. Jenna reads twice as fast as I do, but that does not in any way diminish her capacity to believe the action. Of course, if you have ever watched a movie at 1.5x or 2x the normal speed (and I have), then you know that while you can still understand what is going on most of the time, the degree of approximation to reality is quite lost.

The question I have then is this: how fast is a human being capable of experiencing a compelling story? If someone reads The Lord of the Rings trilogy in a little over a week, reading every day for a number of hours, we would not begrudge them the claim that they "experienced" the story in the books. At least I would not, though I would be more than happy to raise the question more seriously. However, if someone else were to read all three books in the span of an hour or two, I might not be so willing to accept that what they had done constituted "experiencing" the story. Even if I could then quiz them and found that they had precisely the knowledge of the tale that the other, slower reader had, that would not demonstrate to me that the two people had an equivalent experience of the book.

But where do I draw the line? At what point is someone experiencing the book too fast? Very probably we should make our judgment based not on how quickly someone assimilates information but on the mechanism of assimilation. Did the person who read the books in an hour visualize all of the scenes? Did they let themselves feel fear or sadness? Did they hear the distinct voices of the characters in their minds? If they didn't but the slower reader did then it is easy to give the slower reader the label of "experiencing" the story whereas the quick reader perhaps simply absorbed it.

Different types of processing take different amounts of time within the brain. I don't know precisely what the figures are, but it takes more time to hear someone say "it was an apple" than it takes to envision an apple. Thanks to our wonderful visual processing capacity, when we read we recognize the different shapes and can process the information from written words much faster than we could if someone said it out loud. But sensory processing is not really necessary to understand a story. You can understand the statement "then the spaceship exploded in a brilliant silent flash" without visualizing it. In the end, all the generated imaginary sensory information will fade away. You probably won't remember what you imagined the black hole to look like, but you will remember that while they were trying to fix the engines to escape the black hole, the crew accidentally blew themselves up.

That is to say, in the end your mind will probably only see fit to store the semantic information, not the generated sensory information. More to the point, your brain saves a summary of the information, and then you can generate at will any sensory information you like. Of course this may not be true of all people, and sometimes the only thing you might remember from a book is a powerful image that took your fancy. But the essence of the issue is that even if you remember the book in a sensory way, the information is conveyed in a semantic manner. When I say the word "pink", while I am giving you a sensory input, that input is in no way pink. The word "pink" is just a series of sounds, and in your mind the word is associated with the visual input you get from seeing the color pink. So even in the case of a stunning description of a visual display in a book, it is perfectly possible (and even probable) that you store the image away not as an image per se but rather as enough semantic information to be capable of regenerating that image.

All of this rambling is intended to help in the analysis of a question about virtual realities. Movies and books are both examples of virtual realities, and both are capable of giving the impression of stories that take years in only a few moments. Because of the purely semantic nature of books, they are capable of transmitting stories much more rapidly than movies. The question is this: in the future, how quickly will it be possible to transmit experiences?

Let's say a person is capable of experiencing completely immersive VR. How fast can we push the brain into experiencing a story? If we are stuck simply giving direct sensory input, then we don't really have control over how fast someone experiences something; this mode of immersive VR is something akin to the holodeck, where the experiences happen in real time. But what about the immersive VR version of a movie, where all of your emotions and reactions and sights and so on are already predetermined for you and shaped to give you the experience the creator wished to give? I would be willing to bet that we could up the clock rate of the brain in a situation like this. Since responses aren't required, you could push the experience faster and faster. At this point I am only guessing, but I would be willing to bet that you could make someone experience a complete 24 hours' worth of experiences within the span of 4 hours. That is, I'm betting that you could trick the brain into experiencing things about six times faster than normal. This is because for the most part our brains operate on a tick rate of something like an observation per second, or maybe a little slower, but in cases where our brains have to react to something very quickly we can act and think on a time scale of as little as a tenth of a second or so.

Finally, though, we come to the area of memory implantation. I see no reason that it wouldn't be possible, in the space of a few hours, to implant decades' worth of memories into someone's brain. Perhaps the capacity to do this will never actually exist, but let's think about the implications for a moment. This would very likely be a much more effective way of giving sculpted experiences to people than making them live through the experiences in fast forward. In the space of a few hours you could get the memories of having been part of a story that lasted weeks or months or years. If it were possible to make the memories more or less perfect, instead of the usual patchy memories people actually have, how would this differ from being able to sit back and watch the story at will? It would be a sort of perfect rewind and fast forward: we would already know what happened, what would happen, and how we felt at each moment. This is fundamentally different from any kind of story it is possible to transmit now, even though we all have experience with these kinds of stories. Is such a thing even desirable?