I admit I was reading Prime Numbers by Pomerance again, and I have taken a fancy to one of the factorization methods described in the book. The book only gives a single paragraph on the method, and I thought it would be fun to talk about it here. The idea is essentially this: to factor a number n, chop the numbers less than n up into equal-sized blocks, multiply all the numbers in each block together mod n, and take the greatest common divisor of each block's product with n. Once we hit a block whose product has a gcd > 1, then if the gcd < n we have a proper factor of n; if not, we can just search through the numbers in that block one at a time, checking for the first number whose gcd with n is greater than 1, and voilà!
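Here is a minimal sketch of that idea in Python (the function name and the choice of test number are mine, not from the book):

```python
from math import gcd, isqrt

def block_factor(n):
    """Sketch of the block-product idea: multiply blocks of consecutive
    integers together mod n and gcd each block's product with n."""
    B = max(2, round(n ** 0.25))          # block size ~ n^(1/4)
    k = 1
    while k * B <= isqrt(n) + B:          # blocks only need to reach sqrt(n)
        lo, hi = B * (k - 1) + 1, B * k
        prod = 1
        for m in range(lo, hi + 1):
            prod = prod * m % n
        g = gcd(prod, n)
        if g > 1:
            if g < n:
                return g                  # a proper factor straight away
            # gcd == n: scan the block one number at a time
            for m in range(lo, hi + 1):
                if gcd(m, n) > 1:
                    return gcd(m, n)
        k += 1
    return n                              # no block hit: n is prime

print(block_factor(8051))                 # 8051 = 83 * 97, prints 83
```

Computing each block's product by brute-force multiplication like this is no faster than trial division; the speedup described next comes from evaluating the blocks via a polynomial.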

Since we are guaranteed at least one prime factor among the numbers up to n^(1/2), we only need to worry about breaking the numbers up to that point into blocks. Rather intuitively, the best division of work is to make the blocks all equally sized, and so you naturally come to the decision to make the blocks of size B = n^(1/4). To make the calculation of the products of these numbers as quick as possible, we take the time to construct the polynomial p(x) = x * (x-1) * (x-2) * (x-3) * ... * (x - B + 1) and reduce its coefficients mod n. Once we have that polynomial we just have to check gcd(p(B*i), n), which tells us whether there is a factor of n in the block of numbers [B*(i-1) + 1, B*i], for i = 1, 2, 3, ... until we find an i for which it works.
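A sketch of the polynomial version (again, names are mine; note that the real Pollard–Strassen method evaluates p at all the points B, 2B, 3B, ... simultaneously with fast multipoint evaluation, while this sketch just uses Horner's rule one point at a time):

```python
from math import gcd, isqrt

def poly_block_factor(n):
    """Build p(x) = x(x-1)...(x-B+1) with coefficients reduced mod n;
    then p(B*i) mod n is the product of the block [B*(i-1)+1, B*i] mod n."""
    B = max(2, round(n ** 0.25))

    # Multiply out p(x), coefficients stored lowest-degree first, mod n.
    coeffs = [1]
    for j in range(B):                            # multiply current p by (x - j)
        new = [0] * (len(coeffs) + 1)
        for deg, c in enumerate(coeffs):
            new[deg + 1] = (new[deg + 1] + c) % n   # the  x * c x^deg  term
            new[deg] = (new[deg] - j * c) % n       # the -j * c x^deg  term
        coeffs = new

    def p(x):                                     # Horner evaluation mod n
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % n
        return acc

    i = 1
    while B * (i - 1) <= isqrt(n):                # blocks only reach sqrt(n)
        g = gcd(p(B * i), n)
        if g > 1:
            if g < n:
                return g
            # gcd hit n itself: scan this block one number at a time
            for m in range(B * (i - 1) + 1, B * i + 1):
                if gcd(m, n) > 1:
                    return gcd(m, n)
        i += 1
    return n                                      # no factor found: n is prime
```

For example, `poly_block_factor(10403)` finds 101 (here both factors, 101 and 103, land in the same block, so the gcd comes out as n and the scan fallback kicks in).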

I rather like the algorithm. It isn't quite as cool as the rho method of factorization, but it is surprisingly elegant and effective, and it has the additional advantage that it is clearly guaranteed to work. Just for fun I want to propose an idea that is almost certainly worthless as an effective means of factorization but nonetheless appeals to my sense of aesthetics.

These polynomials, which we are using to quickly evaluate the product of chunks of numbers, exploit the fact that factorials are nice smooth numbers with a lot of factors. In Pomerance's book he motivates the polynomial evaluation method by noting that if one could evaluate factorials easily, then factorization could be achieved by a simple binary search, since gcd(k!, n) > 1 if k is greater than or equal to the smallest prime factor of n, and equal to 1 otherwise.
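The binary search itself is easy to sketch (computing k! mod n term by term, as this toy version does, defeats the whole point — it is as slow as trial division — but it shows the monotone structure the search exploits; the function names are mine):

```python
from math import gcd

def fact_mod(k, n):
    """k! reduced mod n; note gcd(k! mod n, n) == gcd(k!, n)."""
    acc = 1
    for m in range(2, k + 1):
        acc = acc * m % n
    return acc

def smallest_factor_by_binary_search(n):
    """Binary search for the smallest k with gcd(k!, n) > 1, which is
    exactly the smallest prime factor of n (assumes n >= 2)."""
    lo, hi = 1, n          # invariant: gcd(lo!, n) == 1 and gcd(hi!, n) > 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if gcd(fact_mod(mid, n), n) > 1:
            hi = mid
        else:
            lo = mid
    return hi

print(smallest_factor_by_binary_search(8051))   # prints 83
```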

Being a physicist, I rarely have to care about the exact integer value of a factorial, and am perfectly alright with using Stirling's approximation as though it were exact, since for any number for which it would be arduous to calculate the factorial exactly, the error in the approximation is the most minuscule fraction, far below the error one could hope to measure in a physical experiment.

But of course if we are interested in the factorization of a factorial, then being off by even 1 changes that factorization completely. Clearly, trying to directly apply the binary search idea using the Stirling series is useless. The fractional error may drop dramatically with size, but the absolute error still grows exponentially, so trying to keep enough terms in the approximation to get within 1 of the actual factorial is a losing game. We could simply try all the numbers "close" to the estimate, but besides the fact that we might have to try an uncomfortably large number of candidates in order to be certain of having tried the actual factorial, there is a more serious problem. If we were trying to factor a 10 digit number, the factorials involved would be millions of digits long, which is just a tad bit of overkill for factoring a 10 digit number. So if we want to keep the operations manageable we should really evaluate the factorial mod n. But because the factorials are so much larger than n, even a tiny percentage error in the value of the factorial corresponds to many times n, and so gives answers which are well mixed mod n.
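The shrinking relative error and exploding absolute error are easy to see numerically with the leading-order Stirling formula sqrt(2*pi*k) * (k/e)^k (a quick illustration, nothing more):

```python
from math import factorial, pi, e, sqrt

def stirling(k):
    """Leading-order Stirling approximation to k!."""
    return sqrt(2 * pi * k) * (k / e) ** k

for k in (10, 50, 100):
    exact = factorial(k)
    rel_err = abs(stirling(k) - exact) / exact   # shrinks roughly like 1/(12k)
    abs_err = abs(stirling(k) - exact)           # grows along with k! itself
    print(k, rel_err, abs_err)
```

At k = 10 the relative error is already under one percent, and at k = 100 it is under a tenth of a percent, yet the absolute error at k = 100 is an integer over 150 digits long.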

From all of these considerations it would appear that we would be much, much better off simply doing trial division, or for that matter actually using the Pollard–Strassen polynomial evaluation method to factor our number. But still, one might potentially be able to use Stirling's approximation or something like it (perhaps an appropriate adaptation that keeps coherence mod n) and have it work out in the end... perhaps even make for an actually useful factorization algorithm... I doubt it though. Still, it is fun to think about, and I would consider it something of a triumph to design a factorization algorithm using Stirling's approximation that just wasn't worse than the sieve of Eratosthenes.

## Monday, October 18, 2010

## Saturday, October 2, 2010

### Listening to bubbles to find dark matter

The departmental colloquium this last week was given by Peter Cooper, who is part of the COUPP search for dark matter. Just like all the other dark matter searches, the idea is to put as much material in your detector as possible and shield it as much as possible from any known interactions: cosmic rays, background radiation, etc. Whatever you can't shield away, you want to be able to discriminate from anything we know about, and then what you are left with must be dark matter.

In the case of the COUPP experiment they are using bubble chambers deep underground with several tons of scintillator oil on top. A bubble chamber works by keeping the fluid inside it at a temperature just above its boiling point. But because a bubble of gas takes more space than the liquid, it takes a little bit of extra energy to make a bubble form. When a cosmic ray or other particle comes through the liquid it deposits energy, and this extra little kick can make bubbles form in the liquid.

Usually in a bubble chamber you would be interested in the track that a particle leaves in the detector. But anything that leaves a track of bubbles interacts way too strongly to be dark matter. So the question is: how do you discriminate between dark matter single bubble events and more boring single bubble events? The answer is you listen to the noise the bubbles make when they form! Alpha events in the chamber, caused by trace amounts of radioactive materials in the device, show up as louder bubbles!

I can't help but think of a picture of a physicist carefully listening to the vibrations of tiny bubbles floating deep in the dark, hoping to hear snatches of unknown harmonies of nature.

