How to fix LockObtainFailedException on Solr start

If Solr crashes during a commit (at the worst possible time, of course; Murphy's law and all that), you might get a LockObtainFailedException when trying to start Solr again, making it impossible to start Solr at all.

You can solve this by configuring Solr to delete any lock files on startup. Alternatively, you can configure Solr to use a simple lock file instead, which you can delete manually if you prefer.

Simply set the following values in your solrconfig.xml; by default they are commented out.

<lockType>simple</lockType>
<unlockOnStartup>true</unlockOnStartup>
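
With the simple lock type, a stale lock shows up as a write.lock file in the index directory, which you can remove by hand before starting Solr. A minimal sketch; the data directory path is an assumption and depends on your setup:

# remove a stale lock left behind by a crash (adjust the path to your core's data dir)
rm /var/solr/data/index/write.lock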

How to measure the temperature of your Raspberry Pi

When running Raspbian on your Pi (and other Pi distros), a Raspberry Pi-specific utility is installed that lets you monitor everything from voltage status and HDMI status to the Pi's core temperature.

To check the core temperature, simply run the command “/opt/vc/bin/vcgencmd measure_temp”.

PS: Run the command “/opt/vc/bin/vcgencmd commands” to see all the data you can extract with that utility.

Unfortunately, it does not appear to be possible to measure the temperature elsewhere on the (physical) Pi without adding a temperature sensor.
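
If you want to keep an eye on the temperature over time, you can wrap the command in a small loop. A minimal sketch; the five-second interval is arbitrary:

# print the core temperature every five seconds
while true; do /opt/vc/bin/vcgencmd measure_temp; sleep 5; done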

How to install Raspbian for Raspberry Pi from Windows

I am currently in the process of setting up a SETI@home cluster running on Raspberry Pi boxes. Currently I am working with three nodes, and I will expand it if it is a success.

For the OS, I use Raspbian (a Debian derivative), combined with some cheap 8GB SD cards from a random vendor.

To write the Raspbian image to an SD card, I use the application Win32 Disk Imager, which simply takes an image file and writes it to a designated SD card.

It is worth noting that the partition(s) on the SD card will be no larger than the image itself. You can fix this on your first Raspberry Pi boot, as the application “raspi-config” is started automatically. (If not, start it yourself.)
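
In raspi-config, the option you want is the one that grows the root partition to fill the card. A hedged sketch; the exact menu entry name varies between raspi-config versions:

# start the configuration tool and pick the option that expands the root filesystem
# (called "expand_rootfs" or "Expand Filesystem" depending on the version)
sudo raspi-config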

Workaround for inotify on NFS-clients

I run multiple virtual machines that share all their data via an NFS server. One application fetches files, and another server watches for new files in a folder using inotify. I quickly noticed that inotify does not work over NFS. I “fixed” this with a simple workaround that finds all files in the watched folder and simply touches them, which causes inotify to detect those files.

Example (with the watched folder path as a placeholder):
find /path/to/watched/folder -type f -exec touch {} \;

This works for me since I can simply run the command via cron or another scheduled job, but that does not mean it works for your case. Perhaps it gives you some ideas, though.
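
If you go the cron route, an entry like the following runs the touch every minute. A minimal sketch; the folder path is a placeholder:

# crontab entry: touch everything in the watched folder once a minute
* * * * * find /path/to/watched/folder -type f -exec touch {} \;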

Adding SSH support to PHP on Ubuntu

I have previously written a post on how to add SSH support to PHP, but that post is old, and it has now become even easier to get it up and running. That, in turn, makes it much easier to auto-upgrade WordPress via SSH. (Automatic, yay!)

As root, do the following:

1: apt-get install libssh2-1-dev libssh2-php

2: Check that it is installed: php -m | grep ssh2

3: Restart apache: service apache2 restart

And you should now have SSH support in PHP.
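
A quick way to double-check from the command line that the extension actually loads, using PHP's built-in extension_loaded():

# prints bool(true) if the ssh2 extension is loaded
php -r 'var_dump(extension_loaded("ssh2"));'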

Outdated entry in the DNS cache error when using Remote Desktop

You may sometimes get error messages about outdated entries in the DNS cache when you are using Remote Desktop: “The connection cannot be completed because the remote computer that was reached is not the one you specified. This could be caused by an outdated entry in the DNS cache. Try using the IP address of the computer instead of the name.”

The first thing you should do is make sure that the clock is correct on these nodes:
1: The client machine
2: The remote machine you are connecting to
3: The domain controller(s), if used

And of course, make sure that you don't have any stale DNS cache entries. Clear the DNS cache by running ipconfig /flushdns in cmd.
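
From a command prompt you can do both checks in one go. A hedged sketch; w32tm assumes the Windows Time service is running:

ipconfig /flushdns
w32tm /query /status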

If it is still an issue, another workaround can be found here: http://extremeengineers.net/?p=126 (although you should really fix the problem, not the symptom).


Enabling SSH installation of Data Protector clients

Just a quick how-to to make sure that you can install new Data Protector clients using SSH.

All of this is done FROM the Linux installation server, as the root user.

First of all, you have to copy the .omnirc file if you have not done so before:
cp /opt/omni/.omnirc.TMPL /opt/omni/.omnirc

Then edit the .omnirc file in /opt/omni/ and make sure that OB2_SSH_ENABLED is set to “1”.

Then you need to make sure that the installation server can SSH to the client(s) without being asked for a password. If you do not have an SSH key, you can generate one by running ssh-keygen.

Now copy your public key to the client(s) that you are going to install Data Protector on by running ssh-copy-id root@<client>. Test it afterwards by SSHing to the client(s); if you are not asked for a password, you are good to go!

PS: You have to SSH to the client at least once beforehand, as you will be asked to confirm the host key the first time.
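
Put together, the whole procedure looks roughly like this. A minimal sketch; the client name is a placeholder, and the .omnirc edit is shown as a comment since the exact line format depends on the template:

cp /opt/omni/.omnirc.TMPL /opt/omni/.omnirc
# edit /opt/omni/.omnirc and set OB2_SSH_ENABLED=1
ssh-keygen                               # only if you have no key yet
ssh-copy-id root@client01.example.com
ssh root@client01.example.com hostname   # should not prompt for a password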

KVM: Optimizing performance on virtual machines (VMs)

After having set up quite a few VMs in my career, I have picked up a couple of tips on how to get the most power out of them:

Get new/correct drivers for your VMs
Remember to make sure that you have all the correct drivers; this is especially important for I/O (disk) and network devices. Windows has many devices that will work with generic Microsoft drivers, but that does not mean the performance magically gets awesome. After making sure that the correct drivers were in place, I managed to go from ~900Mbit to 9.9Gbit on a 10Gbit network between a Linux (Red Hat) and a Windows 2k8 server. (Tested with iperf, which exists for both Windows and Linux.)
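
If you want to reproduce that kind of measurement, iperf runs as a server on one end and a client on the other. A quick sketch; the hostname is a placeholder:

# on the machine acting as server
iperf -s
# on the client, measure throughput towards the server
iperf -c vmhost01.example.com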

Turn off power saving options in BIOS / hardware
More or less all servers, whether they are rack servers, home servers or blade servers, have BIOS settings that enable or disable power saving mode. I know from experience that at least all HP blades come with power saving enabled by default. Turn this off to make sure that your VMs get the performance they expect. (I have had VMs be downright sluggish with this feature turned on; turning it off brought CPU performance back to normal.)

Turn off CPU throttling on the VM host machine
I have also had issues with slow VMs even when the power options were fixed in the BIOS. I then realized that some Linux distributions (Ubuntu, for one) ship with a default CPU frequency governor that throttles down the CPU when it is not needed. After making sure that the host did NOT do this, the VMs finally started behaving as they should. Check your Linux distribution's documentation on how to change this.
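
On most Linux hosts, the governor can be inspected and changed through sysfs. A hedged sketch; the exact paths and available governors depend on the kernel and driver, and tools like cpupower or cpufrequtils are alternatives:

# check the current governor on the first core
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# switch all cores to the performance governor (as root)
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > "$g"; done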


Any other tips I should add to the list? Feel free to add a comment below! 🙂

How to install Munin-node on RHEL6

Munin-node is not available by default on RHEL6 servers. Luckily, the EPEL repository contains many nice packages, including the munin packages we want.

rpm -Uvh http://ftp.uninett.no/linux/epel/6/x86_64/epel-release-6-8.noarch.rpm

yum install perl-XML-SAX

yum install munin-node

And voila! 🙂

perl-XML-SAX has to be installed first due to dependency issues.
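
After the install you will probably want to start the node and have it come up at boot. A quick sketch using RHEL6's init-style tooling; remember to allow your Munin master in /etc/munin/munin-node.conf first:

service munin-node start
chkconfig munin-node on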

Edited 7 March 2013: updated URLs.


RHEV: Cannot export VM. VM with the same identifier already exists

When trying to export a virtual machine from RHEV (Red Hat Enterprise Virtualization), either via the API or via the RHEVM admin console, you might encounter the error message “Cannot export VM. VM with the same identifier already exists”.

This is thrown when you are trying to export a VM to a location that already contains that VM, perhaps an older copy of it.

If you are doing this via the RHEVM admin console, simply select “force override” to overwrite any existing VM there.

Now, if you are using the API, you must instead specify this in the action body, via an exclusive element:

<action>
  <storage_domain>
    <name>*STORAGE DOMAIN TO EXPORT TO*</name>
  </storage_domain>
  <exclusive>true</exclusive>
  <discard_snapshots>false</discard_snapshots>
</action>
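
For reference, a hedged sketch of sending that action with curl; the manager URL, credentials and VM ID are placeholders, and the export sub-resource path follows the RHEV 3.x REST API layout as I understand it:

curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -d '<action><storage_domain><name>*STORAGE DOMAIN TO EXPORT TO*</name></storage_domain><exclusive>true</exclusive><discard_snapshots>false</discard_snapshots></action>' \
  'https://rhevm.example.com/api/vms/<VM-ID>/export'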

The documentation says that you should use overwrite, but that is wrong (a bug, confirmed by Red Hat).

There is also another situation that can trigger this, and which is, in my eyes, a bug. If the VM already exists on the export domain, but under a different name, you will get this error no matter what. The only workaround I know of so far is to make sure that the names match in both places. I have contacted Red Hat to get this looked into.