dd unix CLI command taking a while? Just ask how it is doing …

June 17th, 2015

It turns out that you can ask a running dd command how it is doing.

In the terminal that runs it, simply press Ctrl-T.

dd reacts to the SIGINFO that this sends by printing statistics of the running process.

Quite handy, since dd can take a while if you start it with a small block size.
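Ctrl-T and SIGINFO exist on BSD and OS X; Linux has no SIGINFO, but GNU dd reacts the same way to SIGUSR1, so there you can poke it from another terminal. A harmless sketch (copying from /dev/zero to /dev/null):

```shell
# Start a long-running dd in the background, capturing its stderr,
# which is where the statistics will show up.
dd if=/dev/zero of=/dev/null bs=512 2>/tmp/dd_stats.txt &
DD_PID=$!
sleep 1

# GNU dd prints its current statistics when it receives SIGUSR1
# (the Linux stand-in for BSD's Ctrl-T / SIGINFO).
kill -USR1 "$DD_PID"
sleep 1

# Done asking; stop the copy.
kill "$DD_PID"
cat /tmp/dd_stats.txt
```

The statistics come out as the usual "records in / records out / bytes copied" lines, without interrupting the copy.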

assemble images randomly

September 5th, 2013

Want to assemble images randomly into one? That’s easy in the command line if you have imagemagick installed:

Assuming you have all pictures in the current directory:

for i in *jpg ; do mv "$i" "$RANDOM.jpg" ; done

This renames all files ending in jpg into files named with a random number. Files might get overwritten should the same random number come up again.
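The $RANDOM collisions can be avoided by shuffling the file list once and handing out sequential names instead. A sketch, assuming GNU `shuf` and filenames without embedded newlines, demonstrated in a scratch directory:

```shell
# Scratch directory with a few empty stand-in jpgs
mkdir -p /tmp/shuffle_demo && cd /tmp/shuffle_demo
for f in a b c d; do : > "$f.jpg"; done

# Shuffle the list once, then rename to sequential numbers:
# random order, but no two files can end up with the same name.
n=0
ls *.jpg | shuf | while read -r i; do
    n=$((n + 1))
    mv -n "$i" "img-$(printf '%03d' "$n").jpg"
done
ls
```

montage would then pick the img-NNN.jpg files up in their (randomized) numeric order.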

Assembling is easy as well:

montage *jpg -mode concatenate outputfilename.jpg

If you ran this again, outputfilename.jpg would be used as a source image as well.

May 23rd, 2013

When installing opendkim on a CentOS 5.7 or 5.9 system following the wonderful howto by Steven Jenkins, mail stops going out and the maillog shows:

May 23 12:55:53 her9 postfix/cleanup[4836]: warning: cannot receive milters via service cleanup socket socket
May 23 12:55:53 her9 postfix/smtpd[4832]: warning: premature end-of-input on public/cleanup socket while reading input attribute name
May 23 12:55:53 her9 postfix/smtpd[4832]: warning: cannot send milters to service public/cleanup socket
May 23 12:55:53 her9 postfix/smtpd[4832]: 8DBDB4D48004: client=localhost.localdomain[]
May 23 12:55:53 her9 postfix/master[4824]: warning: process /usr/libexec/postfix/cleanup pid 4836 killed by signal 11
May 23 12:55:53 her9 postfix/master[4824]: warning: /usr/libexec/postfix/cleanup: bad command startup -- throttling

The syslog is even scarier:

May 23 12:55:53 her9 kernel: cleanup[4836]: segfault at 0000000000000008 rip 00002b152350db10 rsp 00007fff855746e8 error 6

Yes, a segfault. Things work better when SELinux gets disabled.
Without going deeply into the reason for this incompatibility, the following commands make opendkim work while SELinux is still active.

This command shows what caused trouble today, and already converts it into the syntax for an 'allow' rule:

ausearch -m avc -ts today | audit2allow

If what you see is indeed only about opendkim, you can go ahead and install it:

ausearch -m avc -ts today | audit2allow -M yourdesiredmodulename
semodule -i yourdesiredmodulename.pp

Things work much better then.

The Centos SELinux How To is a helpful resource for this kind of thing.

infinite sar display - neat option

May 21st, 2013

Wanting to watch sar run indefinitely in a Linux terminal, one can start it with

sar 1 0

The first number indicates the sampling time in seconds. The second number is usually the number of samples you would like to see.

If this number is 0, sar will not stop. As another bonus it looks at how large the terminal is and displays a fresh header once the previous one has scrolled out of view.

The command line can be user friendly. I really like those little gems that show up in all software: people spending their time to make something better. It is like a little gift to the world. With software the value of even a little detail can potentially be significant. Which is an awesome thing.

For all we know it might very well be that the feature described here will please people in a hundred years from now.

I don’t think that mankind will manage to drop unix at this point. Neither can it give up on the use of steel. Yes there might be new systems, much like there have been new materials.

The new gets all the attention. But in many cases the new will not replace the old entirely. Only journalists tend to think that way. In reality the findings of Mr Newton help Boeing and Airbus today to build tubes with wings that shuttle people around the globe close to the speed of sound.

hfs dmg files in Centos

February 25th, 2012

In Centos 5.7 mounting dmg files created under OS X 10.6 with hdiutil no longer worked:

mount  -t hfsplus -o loop dmgFileFromOSX10.6.dmg  /mountpoint

results in

mount: wrong fs type, bad option, bad superblock on /dev/loop0,

Without having researched it, I doubt that it is the actual CentOS version that matters here.

The dmg has been created under OS X 10.6 in the terminal via:

hdiutil create -size 1024k dmgFileFromOSX10.6.dmg -fs HFS+ -volname 'test dmg'

DMG creation on the command line is a workaround for the (arbitrary?) minimum size requirements of 5.2MB and 10.1MB in the Disk Utility program.

It turns out that a dmg file created with the same command line in OS X 10.4 ( on a G4 machine ) works fine. On which side 10.5 falls we did not test.

centos source install wget 1.13 and GNUTLS

February 4th, 2012

Installing wget 1.13 from source on CentOS 5.7, the configure command was not happy:

checking for main in -lgnutls... no
configure: error: --with-ssl was given, but GNUTLS is not available.

It turns out the solution is simple:

./configure --with-ssl=openssl

did the trick.

At first I thought that a “yum install gnutls-devel” might help. The ./configure part indeed finished after the install of the developer package, but the actual make still failed:

gnutls.o: In function `ssl_connect_wget':
gnutls.c:(.text+0x3f1): undefined reference to `gnutls_priority_set_direct'
gnutls.c:(.text+0x481): undefined reference to `gnutls_priority_set_direct'
collect2: ld returned 1 exit status

Configuring with the openssl option made everything work very smoothly …

mail me later

January 31st, 2011

I really like email. It works well for me. One thing that I grew accustomed to was the ability to postpone email. To set up quick reminders easily. I used 'replylater.com' for this. Unfortunately last month they stopped working for me.

I decided to just implement the same features myself: Mail Me Later works pretty much like replylater.com.

The ‘problem’ is, that once a tool works for me I completely start to rely on it. Having a topic delegated to a service like ‘mail me later’ means that I will entirely forget about it. Good since it saves hassle, really bad if that service fails.

Having this part of my workflow now in an environment where I can quickly verify its operation makes me very happy.

libavcodec.so: undefined reference to `x264_encoder_open_104′

September 6th, 2010

A recent svn checkout of ffmpeg on a CentOS 5.5 x86_64 refused to link. It complained:

libavcodec.so: undefined reference to `x264_encoder_open_104'

The fix was as simple as


before ./configure. In a weird way it even makes sense.

ldap_sasl_bind(SIMPLE): Can’t contact LDAP server (-1)

May 10th, 2010

On a centos machine ldapsearch was not giving me much love when accessing a Microsoft Global directory server via ldaps and a given port. The error message I got was:

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

When turning up debug level via -d 1 as in

ldapsearch -d 1 -v -H ldaps://servername:portnumber

I got the bit more revealing error message:

TLS certificate verification: Error, unable to get local issuer certificate
TLS trace: SSL3 alert write:fatal:unknown CA
TLS trace: SSL_connect:error in SSLv3 read server certificate B
TLS trace: SSL_connect:error in SSLv3 read server certificate B
TLS: can't connect: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (unable to get local issuer certificate).

It turns out that a simple line like


in ldap.conf makes things better. In my particular install a simple ‘locate ldap.conf’ was a bit misleading. The true location of your config file can be revealed via:

strace ldapsearch -v -H ldaps://servername:portnumber 2>&1 | grep ldap.conf

looking up memory chips

March 28th, 2010

If a machine sports edac then

find /sys/devices/system/edac/ \( -name mem_type -o -name size_mb -o -name mc_name \) -exec cat {} \;

will quickly display what kind of memory modules are visible to the kernel, and what state they are in.
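Not every box has edac loaded, so here is the same find pattern run against a mocked-up tree (all paths and values below are invented for the demo; on a real machine you would point it at /sys/devices/system/edac/ as above):

```shell
# Fake a tiny edac-style layout under /tmp
mkdir -p /tmp/edac_demo/mc0/csrow0
echo "Unbuffered-DDR2" > /tmp/edac_demo/mc0/csrow0/mem_type
echo "2048"            > /tmp/edac_demo/mc0/csrow0/size_mb

# Same -name/-o/-exec combination as against the real /sys tree
find /tmp/edac_demo \( -name mem_type -o -name size_mb \) -exec cat {} \;
```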

getting shells in the same path

January 31st, 2010

Often I work with a couple of shells simultaneously in the same directory. One may hold the editor with a program in it, and the other one runs it.
When I add the following lines to .bashrc

alias sd='pwd > /tmp/ddd'
alias d='cd `cat /tmp/ddd`; pwd'

I just need to type ’sd’ (for Set Directory) in a shell that is already in the right directory. When I then log in with the other shell a simple ‘d’ gets me where the other shell already is. Extra benefit: When I want to continue where I was last I just type ‘d’ again. Just a little thing. But the world is made out of little things. Lots and lots of them.
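The same idea works as shell functions instead of aliases (so it also behaves in scripts), with a per-user marker file so two people on one box don't clobber each other's saved path. A sketch; the file name is just a suggestion:

```shell
# Remember the current directory in a per-user marker file ...
MARK="/tmp/ddd.$(id -un)"
sd() { pwd > "$MARK"; }

# ... and jump to it from any other shell
d()  { cd "$(cat "$MARK")" && pwd; }
```

Usage is the same: type sd where you are, d where you want to be.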

osx wrt54g connection reset ssh

January 16th, 2010

When going through the wrt54g, ssh connections timed out after a while.
Which was mildly annoying. The problem with mildly annoying things is that they are only mildly annoying.
So one does not go and fix them soon enough. In this case it was terribly easy to cure errors like:

Read from remote host Connection reset by peer
Connection to closed.

All that it needed was to create a file called .ssh/config in the home directory and add something like these lines:

ServerAliveInterval 60
ServerAliveCountMax 5000

Nice that it didn’t require any changes on the other end.

gmail backup

December 23rd, 2009

Over the last years I accumulated quite a bit of mail in Gmail. It works, and I find it very inspiring to see its features grow while I keep all my data. But I also grew worried: What would happen if my mail should go away? I have paid Google exactly zero for keeping all my email. There would be nothing I could do.

Turns out that it is possible to make a copy. Google's own Matt Cutts described it well.

I found that these getmail parameters worked well for me:

[retriever]
type = SimpleIMAPSSLRetriever
server = imap.gmail.com
username = EMAIL@gmail.com
password = PASSWORD
mailboxes = ("[Gmail]/All Mail",)

[destination]
user = getmail
type = Maildir
path = /root/.getmail/

[options]
read_all = false
verbose = 2
received = true
delivered_to = true
message_log = /root/.getmail/gmail.log

It took a while. Actually days. It seems that you only get mail out at a slow data rate. Then there is a bandwidth limit. getmail failed after a while with:

getmailOperationError error (IMAP error ([ALERT] Account exceeded bandwidth limits. (Failure)))

Just waiting a couple of hours took care of this. Having had the mail not backed up for 5 years, it was quite alright to wait 5 hours.
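Babysitting those bandwidth-limit failures can be automated with a retry loop around getmail. A sketch with a stand-in fetch function that fails twice before succeeding; in real use the function body would be the actual getmail invocation, and the sleep would be hours rather than a second:

```shell
# Stand-in for getmail: fails on the first two attempts, like the
# bandwidth lockouts did, then succeeds.
attempts=0
fetch() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

# Keep retrying until it exits cleanly
until fetch; do
    echo "fetch failed (attempt $attempts), sleeping before retry"
    sleep 1
done
echo "done after $attempts attempts"
```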

Another error occurred with 5 mails. Getmail for instance would end with:

getmailOperationError error (IMAP error (command FETCH ('3049', '(RFC822)') returned NO))

And it would do so repeatedly with the same number. I assumed that something had gone awry with those mails. After pretending that the mail already had been retrieved via the oldmail-imap file getmail soldiered on.

Tragically at some point my connection went away. I had downloaded around 120,000 mails during that session.
Getmail updates the oldmail-imap file only when done (or cancelled via Ctrl-C). So the next time it started, it went to download the same mails again.

Even with that glitch things worked out. And I feel pretty good about having a copy of my mail now.

Having a secure copy of your data is never a bad idea.


November 16th, 2009

just found


and it is a really nice and handy tool to see what is going on on the network ports.

Very helpful.

ssh prime agent

September 7th, 2009

(sorry if this should not make any sense to you. This is a note for me to go back to. Even though I bring new machines online regularly, I forget the exact steps for this)

on X
ssh-keygen -t dsa

Add the content of .ssh/id_dsa.pub to the end of .ssh/authorized_keys2 on Y. That is the only thing we need to do on Y.

after boot of X run

ssh-agent >$ssh_info_file
chmod 600 $ssh_info_file
source $ssh_info_file
ssh-add ~/.ssh/id_dsa

before login from X to Y
source /root/.ssh-agent-info-hostname

machine memory afterburner

August 22nd, 2009

sar from the sysstat package is nice. I think it keeps about a week's worth of history around. I'd like to have more than that. There might even be a command-line switch to do that. But often it is just faster to write what you need when you can type with reasonable speed. This script will copy all sa files into a directory called /var/log/allsa in the form saYEARMONTHDAY. So today's sa file I can access forever via

sar -f /var/log/allsa/sa20090822

The script only cares about files that are older than a day. So it will take between 24 and 48 hours before the files appear in their final destination.


#!/usr/bin/perl
# This will keep all daily sa files readable via sar.
# It seems to be a shame to throw them away.
# A year's worth of sa files is about 113 MB for my machines.
# This script is meant to run daily. It probably needs root permissions.
# Use as much as you like. No warranties or promises. Your problem if it eats your machine.
# Andreas Wacker, 090822

use strict;

my $sourcedir = "/var/log/sa";
my $targetdir = "/var/log/allsa";

if (! -d $sourcedir) {
    die "can not find directory $sourcedir for sa files";
}

if (! -d $targetdir) {
    system("mkdir -p $targetdir");
    if (! -d $targetdir) {
        die "was unable to create $targetdir. $0 would need it to proceed";
    }
}

opendir(INDIR, $sourcedir) or die "unable to read directory $sourcedir";
my @allfiles = readdir(INDIR);
closedir(INDIR);

foreach my $file (@allfiles) {
    # only the binary daily saNN files
    next unless $file =~ /^sa[0-9]+$/;
    my $completefilepath = "$sourcedir/$file";
    my $mtime  = (stat $completefilepath)[9];
    my $dayage = (time() - $mtime) / (3600 * 24);
    next unless $dayage > 1;
    my ($sec, $min, $hour, $mday, $mon, $year) = localtime($mtime);
    my $datestring = sprintf("%d%02d%02d", $year + 1900, $mon + 1, $mday);
    my $targetfilepath = "$targetdir/sa$datestring";
    next if -f $targetfilepath;
    #print "$file $dayage $datestring\n";
    system("cp -p $completefilepath $targetfilepath");
    if (! -f $targetfilepath) {
        die "tried to copy from $completefilepath to $targetfilepath and it did not work. This is a very bad sign!";
    }
    if ((stat $completefilepath)[7] != (stat $targetfilepath)[7]) {
        die "file sizes for $completefilepath and what should have been a copy $targetfilepath did not match. Not good!";
    }
}

learning from history

March 9th, 2009

Shells keep a history. The default seems to be to keep 1000 lines. I have not found a reason not to make this huge. And while at it, time stamp it as well:


in your .bashrc will keep 100,000 lines (2MB “omg” ) of history around and will also time stamp it nicely. That and the grep command make for some nice shortcuts on memory lane.
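The settings themselves got lost here; what follows is a reconstruction of what they presumably looked like, based on the description (100,000 lines plus timestamps). An assumption, not the original lines:

```shell
# Assumed .bashrc snippet: large history plus a timestamp per entry
export HISTSIZE=100000             # lines kept in memory
export HISTFILESIZE=100000         # lines kept in ~/.bash_history
export HISTTIMEFORMAT='%F %T  '    # shown by the history builtin
```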

configure: error: Kerberos libraries not found.

February 28th, 2009

When I wanted to build php with imap it did complain about:

configure: error: Kerberos libraries not found.

Turns out that this is some fallout from lib64 vs. lib differences. I found
this blog explaining exactly what was going on. Very helpful. Especially the

sh -x ./configure ...options go here...

trick can be very helpful in the future.

vsftpd 500 OOPS error

January 31st, 2009

If your ftp client reports

500 OOPS: bad bool value in config file for: config_name_here

then that might mean that you have a “YES ” or “NO ” with a trailing space in your config file. Easy enough to fix. I got lucky and found it quickly. Just in case somebody needs to google for this.
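A quick grep can flag such lines before vsftpd trips over them. A sketch against a demo config; the real file would be /etc/vsftpd/vsftpd.conf or wherever your install keeps it:

```shell
# Demo config: the second line has a trailing space after YES
printf '%s\n' 'anonymous_enable=NO' 'write_enable=YES ' 'local_enable=YES' > /tmp/vsftpd_demo.conf

# Flag any YES/NO value followed by trailing whitespace
grep -nE '=(YES|NO)[[:blank:]]+$' /tmp/vsftpd_demo.conf
```

grep prints the offending line with its line number, which is exactly what you need to fix the file.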

sysdeputil.c:162: error: expected declaration specifiers or ‘…’ before ‘capset’

January 23rd, 2009

I have no freaking clue what I am doing. So be careful just following this blindly.
When I initially googled for this issue I did not find this specific solution:

When trying to install vsftpd 2.0.7 from source on a CentOS 5.2 64-bit machine I got the error:

sysdeputil.c:162: error: expected declaration specifiers or '...' before 'capset'

Followed by a lot of similar errors. I was able to address this by

yum install libcap-devel.x86_64

At which point the linker complained:

/lib/libcap.so.1: could not read symbols: File in wrong format

I actually had to comment out the line

locate_library /lib/libcap.so.1 && echo "/lib/libcap.so.1";

in vsf_findlibs.sh. After that it compiled and seemed to work.

how easy was that!

January 12th, 2009

While tailing the log files this message showed up:

Jan 12 16:49:13 andreaswacker vsftpd(pam_unix)[20094]: authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=

turns out some bot/script etc from was trying to find an ftp user with a stupid name. Would have had no luck, but I don’t like my log files to be cluttered. So it turned out that a simple

iptables -I INPUT -s -j DROP

blocks that IP address from now on. Nice. I think I will use that often now. There are lots of misconfigured systems out there. Like that Windows 98 computer in the Philippines downloading the same file 5000 times yesterday. Thank you, iptables.

kernel: 3w-9xxx: scsi0: WARNING: Character ioctl (0x108) timed out, resetting card.

July 23rd, 2008

While burning in a new machine with a 3ware 9560 adapter, I twice got the following error messages:

kernel: 3w-9xxx: scsi0: WARNING Character ioctl (0x108) timed out, resetting card.
kernel: 3w-9xxx: scsi0: AEN: INFO Cache synchronization completed:unit=0.

Looking at the source it seems that 0x108 is TW_IOCTL_FIRMWARE_PASS_THROUGH

I had these errors only when smartd was configured to probe the drives on the 3ware card. This was under max load
during the burn-in of the array. Not sure if there were any other errors. It seemed harmless enough, but then again
no errors are better than spurious warnings. Once I have a couple of days of a clean running machine I probably will
turn smartd on again to see if the connection indeed exists.

old computers that still work

February 23rd, 2008

A new problem that is looking for solutions: computers are worth replacing while they still work fine.

Some solutions

Interesting is the theme that the OS is often the reason to kick a machine to the curb: it is outdated (OS9), just got slower and slower (Windows used to do that, does it still? Luckily I have no idea), or it just filled up with malware (that WOULD be a Windows feature). The hardware might still do some good. I am always surprised how small hard drives used to be, "back in the day".

postfix: can’t create user output file

January 30th, 2008

People alerted me that they got an email bounce saying:

Final-Recipient: rfc822; andreas@interdubs.com
Original-Recipient: rfc822; andreas@andreaswacker.com
Action: failed
Status: 5.0.0
Diagnostic-Code: X-Postfix; can't create user output file

It turned out that my local mail file that I keep as a backup was bigger than 1000 megabytes. It seems that postfix (or whichever program delivers the mail locally to /var/spool/mail) does not like to write to files bigger than that. Scary that the file grew to that size within one year.
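find can warn before a spool file hits such a limit. A scaled-down sketch against a scratch directory, with a 1 MB stand-in file and the threshold shrunk to match:

```shell
# Scratch 'spool' with a 1 MB stand-in mailbox file
mkdir -p /tmp/spool_demo
head -c 1048576 /dev/zero > /tmp/spool_demo/andreas

# Flag anything over 500 KB -- scaled down from the real 1000 MB limit
find /tmp/spool_demo -type f -size +500k -exec ls -lh {} \;
```

Against the real spool the check would be something like `find /var/spool/mail -type f -size +1000M`.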

install apple developer tools in the command line

January 9th, 2008

For years I have worked on a couple of computers via the command line. Since they are real unix computers it all works remarkably well. For a specific solution I need to run osacompile: AppleScript needs to get compiled, and I did not find a way to distribute it as text. So finally I got hold of an OS X machine on the internet. More on that part later. osacompile really wants to run the application that it will talk to later. Also rather odd. But, hey, we are talking Apple here. A sect in the disguise of a technology company. So everything is possible. Or rather impossible. Like adding a development environment. The box happened to have no Dev Tools installed. Usually that's maybe a bit time-consuming but overall straightforward: installing development tools on a unix computer.
With Apple OS X 10.4.11 it turns out that doing so via ssh is not so trivial. You can download the source code. But first you need to create a developer account with ADC. It's free. It's annoying. They keep forgetting my password. Once you are logged in,
you could download the dmg file to your local machine. I could have done that and waited only a couple of weeks for my DSL to upload the 900+ MB file to the final server I need it on. Downloading the dmg directly did not work. I had to fake a login. Which is easier than it seems. In the browser that is logged in (firefox I assume) you look for a cookie called ADCDownloadAuth. This you copy and paste into the following command line:

curl -b "ADCDownloadAuth=SomeVeryLongCookieString" -O \

At least that’s the valid file of today.
Once you have the file you attach (aka mount) it via:

hdiutil attach xcode25_8m2558_developerdvd.dmg

and navigate into

/Volumes/Xcode Tools/Packages

to then run:

sudo installer -verbose -pkg XcodeTools.mpkg -target /

Don’t run this against XcodeTools.mpkg in /Volumes/Xcode Tools directly. This results in the error message:

2008-01-09 03:47:43.889 installer[2843] IFPkg::_parseOldStyleForLanguage - can't find .info file (XcodeTools)

which does not google very successfully.

The install seems to work, from what I can tell so far. I have gcc and make. And that’s all I cared for.

umask and uid for discreet flame

January 4th, 2008

Autodesk aka Discreet Flame Flint Inferno applications run under irix or linux. Which is great. Unfortunately it is a long-standing practice of those people in Montreal to separate different versions of their application by giving them a different user. Of course that's just plain wrong and stupid. But if you pay north of 40,000 US$ for a single software seat you stop making reasonable demands. Discreet / AutoDesk has been doing this for more than 14 years, why should they stop?

A couple of simple commands can fix the biggest issues with this. The first one is that each install creates a new user id. The fix is to edit /etc/passwd and give the new user a common id (100 in this example). We assume it was 101 for the new install. Running the following command as root:

find /usr/discreet/NEWLYINSTALLEDVERSION -user 101 -exec chown 100 {} \;

will fix the ownership.

Another annoyance is that they set the umask in the .cshrc of each login. If you run a couple of versions side by side it’s pretty tedious to fix these flags manually. The following does so for all installed versions.

Under Linux you can use sed for this:

cd /usr/discreet
sed -i.bak.umask "s/umask 002/umask 000/" */.cshrc

For Irix you would need to turn to perl:

cd /usr/discreet
perl -i.bak.umask -p -e "s/umask 002/umask 000/" */.cshrc

This will make the umask wide open for the user running flame or one of the other Discreet products. Some people might like that everybody can now delete
and overwrite files. Others don’t.

mail fseek: Invalid argument panic: temporary file seek

March 21st, 2007

With a full mailbox mail might die with:

fseek: Invalid argument
panic: temporary file seek

Of course it's all spam. That aside, it seems that


can deal with these big mail boxes.

php5 without mysql for fc4 php version and pcre library 6.6

March 5th, 2007

For historical reasons I run a Fedora Core 4. It turns out that php5 is configured NOT to use mysql in this default install. Which is stupid, but what can you do?
Well, for starters you could follow: helpful howto.
The php source version that it uses is 5.1.2. And it turns out that this matters: trying the current 5.2.1 resulted in

configure: error: The PCRE extension requires PCRE library version >= 6.6

In other words, php 5.2.1 insists on having perl compatible regular expressions with a version number of at least 6.6. We ignore the fact that Perl Regular Expressions probably have not changed in the last ten years. So I am not sure why, oh why, php 5.2 should now insist on a new version.
Over at Centos there is a howto to get pcre 6.6 installed. Only problem is that it does not fly on Fedora Core 4. Too new for it, I suppose.

So in the end I got php 5.1.2 from here and the aforementioned howto worked like a charm. Here is my mirror of that specific php version.

distro history

March 4th, 2007

I am not a sysadmin. Ok, I administrate machines, but mostly so that I can go back and write some more horrible code. Coding for unix systems is the most fun. I still remember when the Sun SPARC pizza box showed up in the adjacent office and it came with a huge box of documentation: "They want you to know all this? Awesome!" On DOS PCs I was used to an environment where equipment makers would not share any knowledge.
After working on Intergraph workstations I then spent years with Irix. Which was nice at the time, since SGI had lots of money and even used some of it wisely. Of course the writing was on the wall. And running a web server on basically free hardware (except for electricity) was intriguing enough to try to deal with Linux. I am still trying to do that. Redhat was what came my way first. It worked ok, but at some point I got sick of rpm dependency stacks. Debian looked good with apt-get. So I built two boxes running that, and they drive me crazy. No chkconfig, 'just' use 'sysv-rc-conf'. Once in a while I have to deal with Suse, but new machines I build with Fedora. Yum is pretty much making me happy these days. I simply don't understand why somebody thought it would be a great idea to rename httpd to apache (or vice versa). And there are lots and lots of these differences. You don't notice them when you stay with one system. But switching back and forth makes this annoying. Comes with the concept of free and open software, I guess. But somehow I would like to have the cake and eat it too.

The quality of software is quite interesting: the core of things seems to work really well in linux. Not so much 'core' as in 'kernel' but rather the functionality. The fringes, the configurations, the interfaces to administrate these things are pretty horrible. The Babylonian /etc/init.d/ confusion is only one example. Another one would be that sar is off by default after you install it on debian. You have to go into /etc/default/sysstat and enable it. Trickier to find than it should be.

uncle festers bluf: called

February 25th, 2007

simple, yet powerful