At the start of this month I upgraded my last Debian Wheezy box to Jessie. I'd held out for ages, as it was my home server and had more on it than my BigV public server or my desktop systems.
There were two pain points in the end:
That's all my boxes, both real and virtual, upgraded. I've also rebuilt my better half's desktop with a new motherboard & graphics card, so it's now got a significantly faster CPU and GPU, plus more RAM and a much faster main bus to play with. I even managed to freecycle the old motherboard, so that's a win all round.
Recently a friend asked me about SPF and DKIM. I was vaguely aware of them and their generally perceived uselessness. I've never bothered with them but thought I'd look into them for him. SPF was trivial to set up using my Bytemark DNS, and easy to test using one of the many on-line email tools.
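For anyone curious, an SPF policy is just a TXT record on the domain; a minimal sketch for a hypothetical domain (the right policy for you will obviously vary):

```
; SPF lives in a TXT record on the bare domain.
; "a mx" authorises the domain's own A and MX hosts to send;
; "-all" tells receivers to reject mail from anywhere else.
example.com.  3600  IN  TXT  "v=spf1 a mx -all"
```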
The challenge was DKIM. I found several sites that explained what to do - they were all variations around a theme and seemed easy enough. Generate a key pair (check), put the public half into your DNS (check), tweak Exim4 (ARGH!!!).
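The key pair step really is trivial; a sketch using openssl (the file names are my choice, there's nothing standard about them):

```shell
# Generate a 2048-bit RSA key pair for DKIM signing
openssl genrsa -out dkim.private.pem 2048
openssl rsa -in dkim.private.pem -pubout -out dkim.public.pem

# The DNS record wants the base64 key material without the PEM armour
grep -v -- '-----' dkim.public.pem | tr -d '\n'; echo
```

The public half then goes into DNS as a TXT record named selector._domainkey.example.com with the value "v=DKIM1; k=rsa; p=<that base64 blob>".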
As far as I can tell I followed what the web site said and reloaded the Exim config, and it looks like it's all in place: the key is readable by Exim and the path is correct, but no matter what I do Exim won't add the DKIM signature to my outgoing emails...!
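For what it's worth, the generic (non-Debian-specific) way to turn on signing is a handful of options on the smtp transport - a sketch only, with an illustrative domain, selector and key path, and it needs Exim 4.70 or later:

```
remote_smtp:
  driver = smtp
  dkim_domain = example.com
  dkim_selector = mail
  dkim_private_key = /etc/exim4/dkim.private.pem
  dkim_canon = relaxed
```

The key file has to be readable by the user Exim actually runs as (Debian-exim on Debian), which is one of the easier things to get wrong.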
I recently had an incident where Dovecot didn't start properly and then the following day it did. Yesterday I had the same scenario and it was really annoying. The core process starts but none of the actual worker children do.
The problem turns out to be an NFS mount to a box that went away, which meant that when Dovecot started and went looking for email mbox files in the home directories of the users (not NFS mounted) it got stuck. I eventually spotted this by starting Dovecot under strace, where it was instantly obvious what the problem was, and a quick forced umount of the now-absent filesystem allowed Dovecot to carry on.
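strace is the heavyweight way to find this; a quicker check I could have run first is to stat every mount point with a timeout, since a dead network mount is about the only thing that makes stat block (a rough sketch, nothing clever):

```shell
#!/bin/sh
# Walk the mount table and stat each mount point with a 5 second timeout;
# anything that doesn't answer is probably a dead network mount.
for mp in $(awk '{print $2}' /proc/mounts); do
    if ! timeout 5 stat "$mp" >/dev/null 2>&1; then
        echo "possibly stale: $mp"
    fi
done
```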
Yesterday I spotted a link to Mosh. I think I've seen it before but for some reason I bothered to read the whole article this time. Mosh is a whole new remote shell tool specially designed to work over mobile and intermittent networks. It's in Debian stable so I installed it and gave it a go. At the moment it's not a replacement for SSH, so you will still need SSH, but only to bootstrap the tool.
You start a Mosh session by typing:
$ mosh user@server
Just like you would with SSH - in fact that's how it starts: you log in to the remote server and it starts a mosh-server under your name (no root needed). Back on the client you then connect to that mosh-server using the mosh-client. The two ends exchange data using UDP not TCP, and the connection is encrypted with AES-128 in OCB mode.
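You can do the bootstrap by hand, which makes the moving parts obvious (the port and key below are made up - mosh-server prints real ones):

```
# 1. Over SSH, start a server instance; it prints a UDP port and a session key:
$ ssh user@server -- mosh-server new
MOSH CONNECT 60001 4NeCCgvZFe2RnPgrcU1PQw
# 2. Back on the client, connect directly over UDP with that key:
$ MOSH_KEY=4NeCCgvZFe2RnPgrcU1PQw mosh-client server-ip 60001
```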
Each end maintains what it thinks the "screen" should look like, so the client mostly does local echo, reducing lag - though there is smart stuff in there to decide when not to. As long as the client and server are still running, being UDP they will re-connect after an outage or a client IP change as if nothing had happened. Should you become disconnected, when you reconnect the two ends resynchronise to the current state; the state of the server during the outage is irrelevant and thrown away. This works even after the client is suspended and wakes up on a different network.
The developers claim that they offer better UTF-8 terminal support than most other tools, and that the modular design of the whole tool makes it easier to extend than SSH and should make security auditing easier.
Anyhow, it's interesting - have a look if you have time: mosh.mit.edu. It's not yet a complete SSH replacement but for lots of things it's still very useful: faster than SSH and more robust in real-world use. I can't comment on how secure it is; the authors say they are confident, but they are open that it's not had the same review that OpenSSH has.
I upgraded my Debian 6.x KVM guests to 7.0 a few days before 7.0 was formally released. All went well and they were happy to run on a Debian 6.x KVM host. A few days after 7.0 had been released and I'd upgraded my desktop boxes, I upgraded my home server. Then the KVM problems began...
If I start a KVM guest it starts okay and is fine for a while, but it is prone to consuming 100% of the host's CPU while doing apparently nothing in the guest. Today I ran the guest and tools like rkhunter caused the host's CPU to hit 100%, but when rkhunter finished the host's CPU still ran flat out. Within the guest the CPU is apparently idle...
In theory I could use cgroups to control this, but Debian's stock kernel doesn't support hard limits, only soft ones. I would need to recompile the kernel with the right bits turned on to use hard limits. Apparently it's turned on in Fedora/Red Hat.
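For the record, the knobs in question under cgroup v1 look like this (paths and numbers are illustrative; the quota/period pair is the hard limit that needs CONFIG_CFS_BANDWIDTH compiled into the kernel, while shares works everywhere):

```
# Needs root; assumes the cpu controller is mounted at /sys/fs/cgroup/cpu
mkdir /sys/fs/cgroup/cpu/kvmguest
echo 1024   > /sys/fs/cgroup/cpu/kvmguest/cpu.shares         # soft: relative weight only
echo 100000 > /sys/fs/cgroup/cpu/kvmguest/cpu.cfs_period_us  # 100 ms accounting window
echo 50000  > /sys/fs/cgroup/cpu/kvmguest/cpu.cfs_quota_us   # hard cap: 50% of one CPU
echo "$GUEST_PID" > /sys/fs/cgroup/cpu/kvmguest/tasks
```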
Even though I knew that there were issues with upgrading Dovecot from Debian 6.x to 7.x, I upgraded my server this week. It installed a zillion Dovecot packages and refused to complete the install until I'd actually removed them and then installed them manually.
When I tried to connect from KMail it worked fine (new certificate, obviously). Today when I tried from Mutt it wouldn't let me change to my inbox. I had a look on the Dovecot web site and tried a few things. Now I can't actually make Dovecot start: the init script runs but only one binary is loaded into memory and it doesn't work. I've purged it twice and manually removed every file I could find, but even after a full normal install it just won't start. No errors, it just doesn't do anything...
I think it's the first time a standard package has broken on a Debian upgrade since I started with Woody. I won't be upgrading my main email server until I figure out what on earth is going on!
UPDATE: It has started to work all on its own. I've not changed anything, so I now have no idea why it didn't work and no idea why it's now working...
I read mainstream media as well as IT specific media. In the IT media there are plenty of Mac, Windows and Linux users and while there are always a few fruit-cakes on the extreme most people seem to be able to comment in a civilised way.
Recently I've read a few articles in the mainstream press regarding IT issues that I care about. While I expected there to be some Mac/Windows banter I was surprised by the level of ignorance and vitriol directed at Linux and open source software by the Windows Pros on these sites.
It seems that every Windows user once installed Linux, hated the configuration files, couldn't get some piece of hardware installed and is now a fully qualified expert who can categorically declare that Windows is far superior and Linux will never be good for anything because it's too complicated. Despite the fact that it may be more than a decade since they tried Linux, and that they are clearly not comparing like with like, they viciously attack any Linux user who suggests that a desktop-friendly distro like Mint may be a better solution for Grandma than Windows...
That's even before they start to trot out the old FUD that only Windows has viruses because it's popular, or the new FUD that only Windows XP has viruses as it's all fixed in 7/8...
I said it a long time ago and I'll say it again, you can't fairly compare something you know a lot about and something you know almost nothing about. Pointing this out to people doesn't go down very well either. I also maintain that anyone can use a computer, but surprisingly few can actually set them up properly...
I've now upgraded a few systems. Mostly painless so far, I think. My desktop was running testing all along so its upgrade was automatic. The first systems I upgraded were a couple of VM clients I use for SSH; there's not much on them and no X or GUI stuff, so that went pretty painlessly.
I next tackled a laptop, and that was mostly okay, except that I couldn't connect it to the WiFi after the upgrade or mount an SD card or USB stick. That turned out to be a problem with how the ck_connector was started from PAM. All things considered it's not a very visual upgrade: KDE 4 is mostly evolutionary rather than revolutionary, and the same can be said of most desktop applications too.
Once the laptop was completed it was time to do my better half's desktop system. This is the second most critical system there is, so it has to be right or I get complained at. As I'd already done one GUI system I was relatively happy to do this one. I couldn't get Plymouth to work, but other than that it's all happy. I do still have a rather long list of orphaned and old packages to clean out.
I've now only got two systems to go, both servers: my home server and my hosted server. Neither has a GUI, so the upgrade won't be as traumatic, but they do have Dovecot on them, which I gather will take some effort to migrate as the old and new configuration formats are quite different. However I've plenty of time to plan for that.
Sometime this spring (when it arrives) I will buy a new desktop system. It will probably have two block devices: a traditional SATA large capacity hard drive and a much smaller and faster flash drive.
The theory says that cheap flash drives are much faster and will probably even outlive mechanical spinning disks. Flash cells do slowly wear out, so the controller uses wear-levelling to maximise life. The other problem with flash drives is that they are relatively small, so a larger drive either in the box or on the network is required given how much space life takes up...
I've no plan to join the two drives together with LVM, it seems pointless; instead they will be kept separate and one mounted onto the other. At the moment most of my systems use ext3, except one box which uses ext3 and XFS.
If I install a new box from the Wheezy ISO I'm guessing I'll get ext4 as the default. I gather this is the logical upgrade from ext3 until something fancier is really ready, and it is not an all-singing, all-dancing next-generation filesystem. Does anyone know how it compares with XFS on large disks or flash disks?
I'll probably use ext4 on the flash disk (root & boot file systems) and XFS on the spinning disk (/srv) as it's where I'll dump my media files which aren't small and XFS is supposed to be good for that, unless it's not worth the effort.
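So the plan, as an illustrative /etc/fstab (the device names will obviously differ on the real box):

```
# SSD: root and boot on ext4; noatime cuts pointless write traffic on flash
/dev/sda1  /     ext4  noatime,errors=remount-ro  0  1
# Spinning disk: /srv on XFS for the big media files
/dev/sdb1  /srv  xfs   noatime                    0  2
```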
Someone asked me to test the speed of some Devolo 500AV ethernet-over-mains units I have, compared with some older 200AV units. In preparation I ran a simple test from my desktop box (that I plan to replace) to my server, which showed a throughput of about 333 Mbit/s over a GigE switch. My newer laptop, with a more modern (but still cheap) NIC, gets 727 Mbit/s to the same server over the same switch.
The desktop is using the common (at the time) Marvell Technology Group Ltd. 88E8001 Gigabit Ethernet Controller (rev 13). ethtool reports all is well and that it's running at 1000 Mb/s as expected, but it clearly can't manage that on a simple test. Now I am planning on replacing the box anyway, but I just wondered if anyone knows any good tuning tips for Gigabit Ethernet?
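In case it helps anyone else poking at the same thing, this is roughly how I measure and what I'd check first (iperf needs installing on both ends; the interface name is illustrative):

```
iperf -s                  # on the server
iperf -c server -t 30     # on the client: sustained TCP throughput
ethtool eth0              # negotiated speed and duplex
ethtool -k eth0           # offload settings (tso/gso/gro and friends)
ethtool -g eth0           # ring buffers; raise with e.g. ethtool -G eth0 rx 512
```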