In previous posts (such as this old one), I’ve mentioned a few computer programs that I use fairly frequently, but one particularly interesting one was missing: SAGE, which packages together many different pieces of scientific software and libraries behind a convenient interface built around the Python language. The reason was that SAGE may be quite a bit more difficult to install than, for instance, GAP or Pari/GP (which it actually includes). Or at least, it was for me: my current laptop is quite nice but also quite recent, and so is Fedora 9, which is my Linux distribution. Some SAGE binary packages exist for Fedora 8, but not for Fedora 9 at the moment, and unfortunately there is a subtle packaging issue of some kind with some cryptographic routines that prevents the Fedora 8 package from working. So the current solution is to download and compile SAGE from source. But here another problem arises: the Core 2 CPU is misdetected by one of the packages involved in today’s version of SAGE (the ATLAS linear algebra package, version 3.8.1), which sees it as some kind of freaky Pentium III, and then takes forever to build.
As is often the case where open source software is involved, the solution is not far away after a bit of searching. In this case, the issue is known, as well as the workaround: commenting out two lines in one of the ATLAS source files, and modifying a third one. Once this was done, the rest of the build process worked perfectly. However, one must notice (as I at first did not) that the build process of ATLAS used by SAGE starts by installing and working from a pristine copy of the source code, contained in a compressed file, which therefore erases the patched version of the delinquent code if one is not careful…
What must really be done is to create a new archive with the modified file. And in case anyone else has the same problem and finds this post, here is a copy of such an archive: it is the file $SAGE_DIR/spkg/standard/atlas-3.8.1.p3.spkg, valid for release 3.0.6 of SAGE (where $SAGE_DIR is the base directory where the SAGE files are found), and it should be copied (with that name) to that location; this is a 2.8MB file.
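For anyone who prefers to rebuild such an archive themselves, here is a minimal sketch of the repacking step, written in Python with the standard tarfile module; the directory name is only a placeholder, and I am assuming that an spkg file is nothing more than a bzip2-compressed tar archive, which seems to be the case for this version of SAGE.

import tarfile

# Placeholder names: the unpacked spkg directory (containing the patched
# ATLAS source) and the new archive that SAGE expects to find.
patched_dir = "atlas-3.8.1.p3"
new_spkg = "atlas-3.8.1.p3.spkg"

# Assuming the spkg format is simply a bzip2-compressed tar archive,
# repack the whole patched directory under its own name.
tar = tarfile.open(new_spkg, "w:bz2")
tar.add(patched_dir, arcname=patched_dir)
tar.close()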
(All this will soon be obsolete, of course; within a month at most, the necessary patch will have found a permanent place in all the software involved.)
Now, this being done, I have started reading the documentation in order to actually use SAGE. Learning it is made easier by having a concrete project: adapting the GAP script I have been using to compute Chebotarev invariants (at least, I find that this type of manageable project is a good way to adapt to a new programming environment).
For illustration, here is the resulting SAGE program:
def chebotarev(G):
    G1=gap(G)
    g=G1.Order()._sage_()
    M=G1.ConjugacyClassesMaximalSubgroups()
    M1=[i.Representative() for i in M] # Python list
    nbM=len(M1)
    C=G1.ConjugacyClasses() # GAP list
    O=[c.Size()._sage_() for c in C] # Python list
    print O
    print "Done precomputing"
    cheb=0.0
    scheb=0.0
    ar=matrix(len(C),len(M1))
    # C is a GAP list hence starts at 1.
    for i in range(1,len(C)+1):
        for j in range(0,len(M1)):
            #print i,j
            if C[i].Intersection(M1[j]) != gap([]):
                ar[i-1,j]=1
    for i in subsets(range(0,len(M1))):
        if len(i) != 0:
            density=0.0
            for j in range(1,len(C)+1):
                isin=True
                for x in i:
                    if ar[j-1,x]==0:
                        isin=False
                if isin==True:
                    density=density+O[j-1]
            #if len(i)==1: print "---", i, C[j], density
            #if len(i)==1: print i, density
            cheb=cheb+(-1)^(len(i)+1)/(1-density/g)
            scheb=scheb+(-1)^(len(i))/(1-density/g)*(1-2/(1-density/g))
    return cheb, scheb
(Since this is actually Python code, remember that the indentation is significant when copying the script.)
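A call along the following lines (with the symmetric group on 4 letters chosen purely as a small illustration) should then return the two invariants, assuming gap() accepts the group one passes in:

# Hypothetical example call: any group that gap() can convert should
# work the same way; S_4 is used here only as a small illustration.
c, s = chebotarev(SymmetricGroup(4))
print c, s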
Compared with the earlier script, notice that while the “mathematical heart” of the computations is still done in GAP (the maximal subgroups and the conjugacy classes are computed in GAP from SAGE), the logic is written in the Python language. This has a number of advantages in this case: (1) it is easy to loop over all subsets of the maximal subgroups (GAP doesn’t have a command to get all subsets of a finite set; I had to implement it by interpreting subsets as bit patterns of integers, and the annoyance of this was compounded by the absence of a bitwise “and” operator in GAP…); (2) it is easy to get a real approximation instead of a rational number with an enormous denominator (the Chebotarev invariants are rational numbers of this type as soon as the group is not ridiculously small); GAP doesn’t implement any floating point operations, and earlier I had to transfer the rationals to Pari/GP to get such real approximations… (Note that Magma does not have the two defects mentioned here, and as I said before, it is currently more efficient for some of these computations.)
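To make point (1) more concrete, here is roughly what the bit-pattern trick amounts to, written here in plain Python purely for illustration (in SAGE itself one simply calls subsets(), as in the program above):

def subsets_by_bits(elements):
    # Each integer mask between 0 and 2^n - 1 encodes one subset: the
    # i-th element belongs to the subset exactly when bit i of mask is set.
    n = len(elements)
    for mask in range(2**n):
        yield [elements[i] for i in range(n) if (mask >> i) & 1]

# For instance, list(subsets_by_bits([1, 2])) gives [[], [1], [2], [1, 2]].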