Python, Windows and PyMol

So I was chatting with an ex-colleague who'd been having some fun getting PyMol to install on Windows, and I thought I'd do a quick write-up.

To start with, you need four things:

  1. The latest (C)Python installation package (AMD64 for 64-bit Windows 10)
  2. The NumPy+MKL wheel from Christoph Gohlke’s collection for AMD64
  3. The PyMol wheel from the same for AMD64
  4. The PyMol Launcher wheel from the same (optional) for AMD64

Install Python:

Make sure to tick "Add Python to PATH". Also disable the path length limit, since the installer offers it as an option:

Open a command line (Shift+right-click in the folder, then "Open command window here"):

Use pip to install wheel:

Then install NumPy+MKL:

Then PyMol: (this will also pull in Pmw)

Then PyMol Launcher (if you want)
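Condensed, those pip steps look something like this. Note that the wheel filenames below are examples only (they change with every release on Gohlke's page), so substitute whatever you actually downloaded:

```shell
# Make sure pip itself and wheel support are current
python -m pip install --upgrade pip
pip install wheel

# Install the downloaded wheels in dependency order
# (example filenames only - use the ones you downloaded)
pip install numpy-1.13.1+mkl-cp36-cp36m-win_amd64.whl
pip install pymol-2.0.0-cp36-cp36m-win_amd64.whl
pip install pymol_launcher-2.0-cp36-cp36m-win_amd64.whl
```

Pip pulls Pmw in automatically as a dependency of the PyMol wheel.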

Then find where PyMol is installed: (a ‘cheat’ here is %appdata% in the address bar!)

And create a shortcut on the Desktop/Start Menu/Task Bar (to your taste)…

Protein loaded is 5XA7, a recent submission and nothing whatsoever to do with me. I just picked it for demonstration purposes.

And now it should work! In all honesty, I find the pip install system far more trouble than just finding and installing the relevant libraries used to be…

Music for Monday

And so, another Music for Monday.

Since I decided at the weekend to buy myself a new toy (read: musical instrument), so that I've actually got something to do when it's either a) too hot, b) too rainy or c) too late in the evening, I got a little distracted.

Also, I suck at instruments with frets. I need to recalibrate my brain for a fretted instrument, after having played the violin, then the viola, for a total of… yikes… 25 years!

But a guitar was the cheapest thing I could buy that I always wanted to play.

But it's taken me back a bit… so, to Led Zeppelin! And possibly their most clichéd track…


NWChem: Headaches

Been struggling to get NWChem to compile today. I think I’ve nearly got it, but I’m calling it a day for now because otherwise I’ll still be working on it at 0500…

For the record, it compiles fine if I don't want CUDA or OpenBLAS (the internal BLAS libraries are horribly slow, by the developers' own admission), but it's getting CUDA and a faster BLAS in place that is causing me grief.
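For context, NWChem's build is driven by environment variables rather than configure flags. This is a sketch of the sort of settings involved (the variable names are from the NWChem build documentation; the exact values, particularly the CUDA ones, are precisely what I'm still wrestling with, so treat this as a starting point rather than a recipe):

```shell
# Core build settings (variable names from the NWChem build docs)
export NWCHEM_TOP=$HOME/nwchem       # wherever the source tree lives - adjust
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES=all
export USE_MPI=y

# External BLAS instead of the slow internal libraries
# (BLAS_SIZE=4 because stock OpenBLAS uses 32-bit integers)
export BLASOPT="-lopenblas"
export BLAS_SIZE=4

# CUDA support for the TCE module
export TCE_CUDA=Y
export CUDA_INCLUDE="-I/usr/local/cuda/include"
export CUDA_LIBS="-L/usr/local/cuda/lib64 -lcudart"
```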

Basically, I'm hoping that it can provide me some speedup over a CPU-only computational chemistry package. I've had the chance to test TeraChem, which I actually think is awesome… it's just that it doesn't like Pascal-generation GPUs, so it's fine on my laptop with a 980M (well, if you call the GPU sitting at frighteningly high temperatures "fine"), but it doesn't want to know about the twin GTX 1080s I've got in a sort-of-but-not-quite-workstation.

So, anyway, as soon as I’ve figured out what compile flags I need to get it working properly, I’ll update this post.

How To: Intel GPU for monitor, nVidia GPUs for CUDA

There are lots of methods for getting this working spread over the internet. Some work, but are a mess (and involve faking monitor connections to the nVidia GPUs using some Xorg trickery) or just flat out don’t work.

This method works great, provided you don’t want fan control over the GPUs. So just get some decent case fans and you’re golden.

I used the nVidia Ubuntu PPA for drivers; you can use the binary blob .run file instead, and frankly that'll probably save you headaches in the long run (like, for example, when the PPA drivers update and freakin' break everything! /rant)

First, I got a more up-to-date kernel on Ubuntu 16.04…

apt-get install linux-headers-4.10 linux-headers-4.10-generic linux-image-4.10-generic linux-image-extra-4.10-generic

You should have dkms installed, but if not, pull it in with:

apt-get install dkms


Add nVidia drivers PPA:

add-apt-repository ppa:graphics-drivers/ppa

apt-get update

apt-get install nvidia-375

apt-get install nvidia-settings nvidia-prime


At this point, the graphics may work, or they may not. If they do, great. If they don't, well, it's easy to fix by reinstalling nvidia-settings and nvidia-prime.

Set the active GPU to Intel via the system tray nvidia-settings app (the PRIME profile). Log out, log back in again. 3D acceleration should still work.

Download the CUDA repo .deb (I used the network one, but local works too) and install it using dpkg. There should not be any unmet or conflicting dependencies.

dpkg -i [cudarepofile.deb]

apt-get update

apt-get install cuda

This will pull in CUDA 8.0. Again, like I said, you can always use the local .run file, which will save you updating headaches.

Now, at this point, CUDA wants to reinstall the GPU driver. I let it. It'll break 3D acceleration, but reinstalling nvidia-settings and nvidia-prime fixes it again.

At this point, nvidia-smi (the nvidia system management interface) will stop working. This is because it can’t cope with the idea of you not using the nVidia GPUs for Xorg. Dumb, given the drive nVidia has been putting into GPGPU, but nevertheless true.

To get it working again, do the following. You can either remove or rename the link in /usr/bin; I rename, others might remove:

mv /usr/bin/nvidia-smi /usr/bin/nvidia-smi.backup

then place the following script in a file called "nvidia-smi", and make it executable with chmod +x /usr/bin/nvidia-smi:

vi /usr/bin/nvidia-smi

#!/bin/bash
LD_LIBRARY_PATH=/usr/lib/nvidia-375 /etc/alternatives/x86_64-linux-gnu_nvidia_smi "$@"

OK. Now, you need to change the nvidia-375 part to whatever driver version you are using. If it's 375, awesome. If it's 378, you'll get an error until you change it.

The "$@" bit is necessary so that nvidia-smi passes its command-line arguments through correctly; without it, it'll throw a nasty error.

So, nvidia-smi works, but some things will still throw up wobblers. To fix this, add the following to your .bashrc:

# CUDA stuff
export PATH=/usr/local/cuda/bin:$PATH

# Fixing Intel/nVidia conflict (nVidia for compute, Intel for display)
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/lib/nvidia-375:$LD_LIBRARY_PATH

To get persistence working (that is, power state management, which will allow the card to clock down when it’s not being used) edit the following:

vi /etc/rc.local

and add

/usr/bin/nvidia-smi -i 0,1 -pm ENABLED

above the “exit 0”

This tells nvidia-smi to select both GPUs and enable persistence mode so that GPUs actually freakin’ work properly. This has the side effect of making the nvidia-smi command instant again (it wasn’t when switching to Intel iGPU). Obviously, if you’ve only got a single GPU, you just tell it -i 0… if you’ve got three or four, you can add accordingly. The GPU count is always from 0.

Whenever you have a driver update occur, it’ll replace the nvidia-smi script with the actual program again. Let it. CUDA will break when you reboot (so long as drivers don’t update, it’ll survive any kernel updates without breaking) and it’s easy enough to remove the symlink in /usr/bin/nvidia-smi and replace with that little script again.
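A quick way to confirm that CUDA itself survived all this is to build and run the deviceQuery sample that ships with the toolkit (the paths assume the default CUDA install location symlinked at /usr/local/cuda):

```shell
# Copy the bundled samples somewhere writable, then build deviceQuery
cp -r /usr/local/cuda/samples ~/cuda-samples
cd ~/cuda-samples/1_Utilities/deviceQuery
make

# Should list every CUDA-capable GPU and end with "Result = PASS"
./deviceQuery
```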

I'm using this to drive a 4K@60Hz monitor via the iGPU while using a pair of GTX 1080s for compute.

The only thing I can't get working is fan control. I'd rather have my GPUs loud than have them cooking, so I compensate with some fairly chunky intake fans pointing straight across the GPUs.

I’ve seen people (some fairly high-ranking Professors, as well) suggest watercooling. Well, yes, that is a possible solution. It’s just a shame that I’m a little shy about watercooling 24/7 systems that are largely unattended, because… I’ve had two watercooling pumps, of different types, die on me recently. I was fortunate that one system was idle when it happened, but it still reached “idle” at 66 degrees C. That was with full-copper (read: expensive) waterblocks, and CPU only. I don’t doubt that GPUs would be dead if that had happened to them.

So, yeah, I’m a little leery of watercooling systems for high compute loads unless backed by manufacturer warranty. Haha.

Music for Monday

OK, so the last week has gone by a heck of a lot quicker than I expected.

As for the weekend… well, I know I didn’t sleep through it, because I went shopping. But where did it go? Somewhere, or rather, somewhen. In between fighting with a Raspberry Pi 3 that didn’t want to do what I told it. Perhaps more on that in the future…

But another Music for Monday…

Piano Opera: Final Fantasy IV-V-VI.  Specifically, Kefka’s Theme.

Originally from Final Fantasy VI (Final Fantasy III in the US, back in the SNES days; the numbering is now thankfully synchronised with the rest of the world), it has seen various takes over the years. But the piano solo version has some special charm I can't quite put my finger on.

Dodgy Internet Connections: Ping Script

I'm having some fun at the minute with a dodgy internet connection. It really is driving me to distraction, I must admit.

It manifests as a connection that drops in and out almost at random. In fact, if I heavily load the connection, rather than doing what I expect and dropping out, it seems to stabilise. But I can't really download hundreds of GB all the time, can I? And it does still drop out occasionally at high loads – an intermittent fault being the worst kind anyone has to troubleshoot.

Regardless, I’m trying to collect evidence for the flakiness (although tonight it doesn’t seem to be playing – I’ve had a largely painless experience online this evening) so need some way of proving that something is up.

Note that I’m in Windows 10 most evenings, due to Skype, Word and Visual Studio. Yup, spend all day at work on Linux, move to Windows of an evening. Insert moaning about no good Office suites on Linux here.

So, anyway, a quick Windows (non-Powershell) script to ping a site at regular intervals, and log if not successful. Right now I’m not interested in when it is working – only when it’s not.

@echo off
setlocal enabledelayedexpansion
set hostIP=[put an IP address here]
:loop
set pingline=1
for /f "delims=" %%A in ('ping -n 1 -w 250 -l 255 %hostIP%') do (
    if !pingline! equ 2 (
        set logline=!date! !time! "%%A"
        echo !logline! | find "TTL=">nul || echo !logline! >> pinglog.txt
    )
    set /a pingline+=1
)
timeout 10
goto loop

All credit where due: I found this here and modified it slightly so I didn't have to download something from Windows Server 2003. OK, so timeout is slightly less accurate than sleep, but for my purposes it is sufficient. It will run until you kill it. Closing the command line is sufficient.

Torque PBS & Ubuntu 16.04/Mint 18

There are some programs that like MPI. There are others that are… kind of single-threaded, but work pretty well with a PBS (Portable Batch System) to actually queue up tasks and generally speed up execution.

The I-TASSER suite, for protein structure prediction, is one of the latter.

If you’re in academia, I-TASSER is free, so it’s a useful tool to have even if it’s not used very often.

But getting Ubuntu to play nice with a PBS can be something of a trick… partly because the version included with Ubuntu is now old. Very old.

And the newer versions are still free – it only costs money if you want to use the more powerful schedulers like Moab. Which I don't, because I'm usually the only person actually logging in to the boxes I administer. This may change in the future, but for now, I don't need a complex PBS.

Anyway, to get Torque working without using the version included in the repos (because it’s ancient) requires relatively little work in the grand scheme of things…

The first job is to get the basic requirements for Torque installed:

sudo apt-get install libboost-all-dev libssl-dev libxml2-dev

Boost pulls in a ton of things, so it may or may not be worth adding --no-install-recommends to the end of that apt-get command. I didn’t, but I’m not short on space.

If you’ve not got a C compiler installed, now is the time for that as well. Fortunately, Torque doesn’t need anything fancy like cmake to build, just good ol’ ./configure, make, make install.

Now they're installed, you can go and download the Torque source code from Adaptive Computing. Now, annoyingly, the most recent release (as of writing) screws up for me for reasons I can't figure out. I know from prior experience that 6.0.2 works 100%, so I'll stick to that. It's still newer than what is in the Ubuntu repos…

Extract the source somewhere sensible, like ~/bin using tar xzvf [torque.tgz] and run ./configure, then watch for any errors – there shouldn’t be any. When it’s all done, type make. You can use make -j [number of CPU cores] to speed things up a bit. Once that is done, switch to root with either sudo bash or su -, and type make install.
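Condensed, the build steps above look like this (assuming the tarball is named torque-6.0.2.tar.gz and you have eight cores to build with – adjust the -j number to taste):

```shell
# Unpack and build Torque 6.0.2
cd ~/bin
tar xzvf torque-6.0.2.tar.gz
cd torque-6.0.2
./configure
make -j 8

# Install system-wide as root
sudo make install
```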

Now comes some fun bits.

There is a nice script in the folder you just built Torque in called torque.setup, but that's not everything you need.

The first thing to check is that you have your hostname listed appropriately in /etc/hosts. Now, here is where static IP addresses really make your life easier: if you are using DHCP and your router decides to change your IP, Torque will stop working. Very frustrating.

Anyway, while lots of things need to point to localhost, Torque also needs it to point to the server name. I name mine after elements of the periodic table, but you can do whatever you want.

Here's what my /etc/hosts file looks like (substitute your own static addresses for the bracketed entries):

127.0.0.1 localhost
127.0.0.1 hydrogen
[static IP] hydrogen
[static IP] helium
[static IP] lithium
[static IP] beryllium

Without this extra entry, Torque doesn’t work. It also works putting localhost and the hostname on the same line.

Now you can run ./torque.setup [username] and answer y at the prompt.

Now run echo '/usr/local/lib' > /etc/ followed by ldconfig. This tells the system where the Torque libraries are.

And echo "hydrogen np=32" > /var/spool/torque/server_priv/nodes and echo '$pbsserver hydrogen' > /var/spool/torque/mom_priv/config. These tell Torque about the nodes (and how many CPUs each node has), and tell the pbs_mom which server it should talk to. (With Torque 6, trqauthd also needs to be running – it handles client authorisation.)

Get the server running again with pbs_server, pbs_sched and trqauthd (as root) at the commandline.

Then check that it’s working with qmgr -c 'p s' (the space is important).

Finally, check that it works by starting an interactive PBS session with qsub -I as a normal user (you can’t run this as root).
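Beyond the interactive test, a minimal batch job is a good sanity check. Something like this (the job name and resource request are just examples), saved as test.sh and submitted with qsub test.sh:

```shell
#!/bin/bash
#PBS -N test-job
#PBS -l nodes=1:ppn=4
#PBS -l walltime=00:05:00

# Torque starts the job in $HOME; move to wherever qsub was run from
cd $PBS_O_WORKDIR

echo "Running on $(hostname) with $(nproc) cores visible"
```

qstat shows it queued/running, and the stdout/stderr files land back in the submission directory once it finishes.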

Should all work OK now!