
Posted by philcore on Wed 20 Jul 2011 at 03:02
Tags: none.
Sweet. IPv6 access time is better than IPv4.

[pdyer@lightfoot ~]% ping www.debian-administration.org
PING www.debian-administration.org (89.16.161.98): 56 data bytes
64 bytes from 89.16.161.98: icmp_seq=0 ttl=50 time=140.329 ms
64 bytes from 89.16.161.98: icmp_seq=1 ttl=50 time=141.378 ms
64 bytes from 89.16.161.98: icmp_seq=2 ttl=50 time=138.095 ms
^C
--- www.debian-administration.org ping statistics ---
4 packets transmitted, 3 packets received, 25.0% packet loss
round-trip min/avg/max/stddev = 138.095/139.934/141.378/1.369 ms
[pdyer@lightfoot ~]% ping6 www.debian-administration.org
PING6(56=40+8+8 bytes) 2001:470:e49b:2::1fff --> 2001:41c8:10:62::1
16 bytes from 2001:41c8:10:62::1, icmp_seq=0 hlim=54 time=99.905 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=1 hlim=54 time=98.745 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=2 hlim=54 time=98.458 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=3 hlim=54 time=96.574 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=4 hlim=54 time=98.868 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=5 hlim=54 time=98.655 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=6 hlim=54 time=98.942 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=7 hlim=54 time=97.709 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=8 hlim=54 time=97.295 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=9 hlim=54 time=97.758 ms
16 bytes from 2001:41c8:10:62::1, icmp_seq=10 hlim=54 time=100.314 ms
^C
--- www.debian-administration.org ping6 statistics ---
11 packets transmitted, 11 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 96.574/98.475/100.314/1.045 ms

[pdyer@lightfoot ~]%

 

Posted by philcore on Mon 22 Oct 2007 at 02:18
Tags:
So I just installed Ubuntu 7.10 (Gutsy Gibbon) on a laptop. I have to say, it's nice. Things really do "just work": sound, Compiz, all without a hitch. Way to go, Ubuntu!

 

Posted by philcore on Mon 6 Aug 2007 at 19:56
Tags: none.
I've had the need to implement bandwidth throttling on a Debian router at a colo. We are connected to a 100Mbit pipe, and we are charged for anything over 1Mbit/sec in or out (95th percentile). I found all kinds of helpful hints for throttling outbound bandwidth, but throttling inbound traffic turned out to be a bit touchier. I also needed to throttle traffic from behind the router as well as traffic initiated by the router itself. Here's what I came up with. (Works very well, btw.)

Anybody have any suggestions or better solutions?

#!/bin/bash
# flush any old ingress qdisc, then attach a fresh one
tc qdisc del dev eth1 ingress
tc qdisc add dev eth1 ingress
# police all inbound IP traffic to 1024kbit; anything over the limit is dropped
tc filter add dev eth1 parent ffff: protocol ip prio 10 u32 match ip dst 0/0 \
   police rate 1024kbit burst 10kb drop flowid :1
# replace the root (egress) qdisc with a token bucket filter to shape outbound
tc qdisc del dev eth1 root
tc qdisc add dev eth1 root tbf rate 1024kbit burst 10kb latency 25ms
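
To check that both halves are in place, and to watch the drop counters once the limit kicks in, the usual tc status commands should do:
# egress: the tbf qdisc and its sent/dropped counters
tc -s qdisc show dev eth1
# ingress: the u32 filter and its policer stats
tc -s filter show dev eth1 parent ffff: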


The colo is connected to our main office via an IPsec tunnel. The only issue I have with this setup is some ugly error logs, which I assume are complaining about the packet policer dropping packets to bring the speed down to an acceptable rate.


Aug  6 12:11:48 fw-rich kernel: klips_error:ipsec_xmit_send: ip_send() failed, err=-1
Aug  6 12:11:48 fw-rich kernel: klips_error:ipsec_xmit_send: ip_send() failed, err=-1
Aug  6 12:11:49 fw-rich kernel: klips_error:ipsec_xmit_send: ip_send() failed, err=-1
Aug  6 12:11:49 fw-rich kernel: klips_error:ipsec_xmit_send: ip_send() failed, err=-1
Aug  6 12:11:49 fw-rich kernel: klips_error:ipsec_xmit_send: ip_send() failed, err=-1



 

Posted by philcore on Sat 26 Nov 2005 at 15:11
Tags: none.
Just made the switch to etch. A couple of things broke due to new configs. In sarge there are separate postfix and postfix-tls packages; these have been merged in etch.

So anyway, my last problem is phpldapadmin. The config file format has completely changed, and I can't for the life of me get it to work. Anybody got any pointers?

 

Posted by philcore on Fri 25 Nov 2005 at 15:27
Tags:
Until sarge was released with the new installer, it wasn't really easy, or maybe even possible, to install Debian onto a RAID1 mirror. If you installed woody and don't want to reinstall the OS and wipe your existing partitions, you can migrate to RAID by adding another disk, without losing your existing data. This guide is aimed at getting RAID1 working on an existing sarge system.

I suggest reading the following documents:
http://www.doorbot.com/guides/linux/x86/grubraid/
http://www.linuxsa.org.au/mailing-list/2003-07/1270.html

First, of course, you need to get a second hard drive installed and recognized on your system. Installing a second drive is beyond the scope of this document.

Ideally the second drive should be identical to the original, although it doesn't need to be; it just has to be the same size or bigger.

For my setup, I have two identical Seagate 73GB Ultra320 SCSI drives. They appear on my system as:

/dev/sda == original drive with data
/dev/sdb == new 2nd drive.

If you have IDE drives, your devices will be /dev/hda and /dev/hdb or /dev/hdc, depending on how you install the second drive. You should definitely put the second drive on a second controller if possible.

Before you start, back up your system! I assume no responsibility for any data loss! Also, if you are modifying a system with multiple users, you will probably want to reboot into single user mode.

OK, so let's get started. First install the raid tools.

apt-get install mdadm
Now change the partition type to fd (Linux raid autodetect) on every partition of the old drive that you want to mirror, using your favorite disk tool. Make a note of which partition is your swap partition; we'll need that later.
fdisk /dev/sda

Command (m for help): t
Partition number (1-8): 1
Hex code (type L to list codes): fd
Command (m for help): t
Partition number (1-8): 2
Hex code (type L to list codes): fd
Repeat for all your partitions, including swap. You can view the disk info with the "p" command from within fdisk. When you are done, your disk should look something like this:
Command (m for help): p

Disk /dev/sda: 73.5 GB, 73557090304 bytes
255 heads, 63 sectors/track, 8942 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
   /dev/sda1   *           1         243     1951866   fd  Linux raid autodetect
   /dev/sda2             244         486     1951897+  fd  Linux raid autodetect
   /dev/sda3             487         608      979965   fd  Linux raid autodetect
   /dev/sda4             609        8924    66798270    5  Extended
   /dev/sda5             609        1824     9767488+  fd  Linux raid autodetect
   /dev/sda6            1825        4256    19535008+  fd  Linux raid autodetect
   /dev/sda7            4257        4378      979933+  fd  Linux raid autodetect
   /dev/sda8            4379        8924    36515713+  fd  Linux raid autodetect

When everything looks good, write the changes with the "w" command:
Command (m for help): w
fdisk will complain that the disk is in use and that the partition table will be re-read at the next boot; that's OK. Now we need to copy the partition information to the new disk, and sfdisk makes it really easy. Remember to substitute your own drive names in the command below if you are not using /dev/sda and /dev/sdb!
sfdisk -d /dev/sda | sfdisk /dev/sdb
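It may also be worth keeping a dump of the partition table in a file: the dump format is plain text, and sfdisk can read it back in later if the layout ever has to be recreated.
# save the layout; restore later with: sfdisk /dev/sda < /root/sda.table
sfdisk -d /dev/sda > /root/sda.table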
Now view the partition layouts to see if they match.
sfdisk -l /dev/sda

Disk /dev/sda: 8942 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
   /dev/sda1   *      0+    242     243-   1951866   fd  Linux raid autodetect
   /dev/sda2        243     485     243    1951897+  fd  Linux raid autodetect
   /dev/sda3        486     607     122     979965   fd  Linux raid autodetect
   /dev/sda4        608    8923    8316   66798270    5  Extended
   /dev/sda5        608+   1823    1216-   9767488+  fd  Linux raid autodetect
   /dev/sda6       1824+   4255    2432-  19535008+  fd  Linux raid autodetect
   /dev/sda7       4256+   4377     122-    979933+  fd  Linux raid autodetect
   /dev/sda8       4378+   8923    4546-  36515713+  fd  Linux raid autodetect

sfdisk -l /dev/sdb

Disk /dev/sdb: 8924 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
   /dev/sdb1   *      0+    242     243-   1951866   fd  Linux raid autodetect
   /dev/sdb2        243     485     243    1951897+  fd  Linux raid autodetect
   /dev/sdb3        486     607     122     979965   fd  Linux raid autodetect
   /dev/sdb4        608    8923    8316   66798270    5  Extended
   /dev/sdb5        608+   1823    1216-   9767488+  fd  Linux raid autodetect
   /dev/sdb6       1824+   4255    2432-  19535008+  fd  Linux raid autodetect
   /dev/sdb7       4256+   4377     122-    979933+  fd  Linux raid autodetect
   /dev/sdb8       4378+   8923    4546-  36515713+  fd  Linux raid autodetect
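
Rather than eyeballing the two listings, the dumps can be compared directly. A one-liner like this (bash process substitution: rewrite sda's dump with sdb names and diff it against sdb's actual table) should print nothing if the layouts match:
sfdisk -d /dev/sda | sed 's/sda/sdb/g' | diff - <(sfdisk -d /dev/sdb)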

OK. If things look good, we're ready to create the raid arrays. Things to watch out for here: Be sure to match up your physical drives correctly, and change the physical drive letters to match your setup. What I want to do here is create raid1 device /dev/md0 with the two partitions that will make up that array: /dev/sda1 and /dev/sdb1. To avoid destroying the good data on /dev/sda1, I tell mdadm to create the array initially using only the new empty drive.
mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sdb1
Repeat for the remaining raid volumes md1, md2, and so on:
mdadm --create /dev/md1 --level 1 --raid-devices=2 missing /dev/sdb2
mdadm --create /dev/md2 --level 1 --raid-devices=2 missing /dev/sdb5
mdadm --create /dev/md3 --level 1 --raid-devices=2 missing /dev/sdb6
mdadm --create /dev/md4 --level 1 --raid-devices=2 missing /dev/sdb7
mdadm --create /dev/md5 --level 1 --raid-devices=2 missing /dev/sdb8
mdadm --create /dev/md6 --level 1 --raid-devices=2 missing /dev/sdb3
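Before putting filesystems on the arrays, it's worth confirming they came up. Each one is running on just its sdb half for now, so /proc/mdstat should show every device active but degraded, along the lines of [2/1] [_U]:
# arrays should be active, with only the sdb partitions present
cat /proc/mdstat
# more detail on a single array
mdadm --detail /dev/md0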
Now create filesystems on the raid devices. My example uses ext3, but pick the filesystem of your choice, assuming you have kernel support for it. Also make a swap partition. On my system, swap lives on /dev/md6, which is currently made up of /dev/sdb3.

You did make a note above about where your swap partition was, right?

mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
mkfs.ext3 /dev/md2
mkfs.ext3 /dev/md3
mkfs.ext3 /dev/md4
mkfs.ext3 /dev/md5
mkswap    /dev/md6
Now we're ready to mount the raid devices and copy data over to them. Before you do this, you need to make sure you understand which filesystems are currently mounted on which partitions. Use the df command to get a picture of your layout.

df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1              1921036    341452   1482000  19% /
/dev/sda2              1921100    293104   1530408  17% /var
/dev/sda5              9614052   3900804   5224880  43% /usr
/dev/sda6             19524672    729020  18795652   4% /home
/dev/sda7               964408     16444    898972   2% /tmp
/dev/sda8             36497820  12779348  23718472  36% /data

So, for example, I have my / filesystem mounted on /dev/sda1, and sda1 is part of md0. (Well, only sdb1 is currently in the mirror, but sda1 is the other half and will be added after we get our data off of it.) So we mount /dev/md0 somewhere and copy everything from the / filesystem over to it:
mount /dev/md0 /mnt
cp -dpRx / /mnt
Now copy the remaining partitions. Be careful to match your md devices with your filesystem layout! This example is for my particular setup.
mount /dev/md1 /mnt/var
cp -dpRx /var /mnt
mount /dev/md2 /mnt/usr
cp -dpRx /usr /mnt/
mount /dev/md3 /mnt/home
cp -dpRx /home /mnt
mount /dev/md4 /mnt/tmp
cp -dpRx /tmp /mnt
mount /dev/md5 /mnt/data
cp -dpRx /data /mnt
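Before moving on, a quick sanity check doesn't hurt: used space on each copy should be close to the original filesystem. It won't match byte-for-byte, but a large difference means something was missed.
# compare used space on the originals and the copies
df / /var /usr /home /tmp /data
df /mnt /mnt/var /mnt/usr /mnt/home /mnt/tmp /mnt/data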
Now edit /mnt/etc/fstab to use the md devices instead of the raw drive partitions.
proc         /proc           proc    defaults                   0  0
/dev/md0     /               ext3    defaults,errors=remount-ro 0  1
/dev/md1     /var            ext3    defaults                   0  2
/dev/md2     /usr            ext3    defaults                   0  2
/dev/md3     /home           ext3    defaults                   0  2
/dev/md4     /tmp            ext3    defaults,noexec            0  2
/dev/md5     /data           ext3    defaults                   0  2
/dev/md6     none            swap    defaults                   0  0
/dev/hda     /media/cdrom0   iso9660 ro,user,noauto             0  0
/dev/fd0     /media/floppy0  auto    rw,user,noauto             0  0
Now edit /mnt/boot/grub/menu.lst and add an entry to boot using RAID, plus a recovery entry in case the first drive fails:
title       Custom Kernel 2.6.11.7
root        (hd0,0)
kernel      /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro
boot

title       Custom Kernel 2.6.11.7 (RAID Recovery)
root        (hd1,0)
kernel      /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sdb1 ro
boot
Install GRUB so we can still boot if the first drive fails: reinstall it on the first drive, then use the grub shell to write it to the second. The device line temporarily maps /dev/sdb as hd0, so the boot sector written there will still work when the BIOS boots from sdb alone.
grub-install /dev/sda

grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Copy the GRUB configuration and fstab files to the old drive:
cp -dp /mnt/etc/fstab /etc/fstab
cp -dp /mnt/boot/grub/menu.lst /boot/grub
At this point, cross your fingers and reboot. Once the system comes up, you should see the mounted md devices:
df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0               1921036    304552   1518900  17% /
tmpfs                   193064         4    193060   1% /dev/shm
/dev/md1               1921100    206768   1616744  12% /var
/dev/md2               9614052   2948620   6177064  33% /usr
/dev/md3              19524672    741140  18783532   4% /home
/dev/md4                964408     16448    898968   2% /tmp
/dev/md5              36497820   6683308  29814512  19% /data
So now your system is running RAID1 off of the new drive. Verify that you have data in all your partitions. If so, you can safely add the original drive's partitions to the arrays. Again: pay attention to what you are doing. You need to add the correct partition to the correct array!
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
... repeat for the remaining partitions, as spelled out below.
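Matching the pairings from the --create step above, the remaining commands for this layout are:
mdadm --add /dev/md2 /dev/sda5
mdadm --add /dev/md3 /dev/sda6
mdadm --add /dev/md4 /dev/sda7
mdadm --add /dev/md5 /dev/sda8
mdadm --add /dev/md6 /dev/sda3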
Check /proc/mdstat for the skinny on what's done and what's not. When everything is finished, all the devices should show [UU]. Don't reboot until it's done syncing the drives.
cat /proc/mdstat

Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1951744 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      1951808 blocks [2/2] [UU]

md2 : active raid1 sdb5[1] sda5[0]
      9767424 blocks [2/2] [UU]

md3 : active raid1 sdb6[1] sda6[0]
      19534912 blocks [2/2] [UU]

md4 : active raid1 sdb7[1] sda7[0]
      979840 blocks [2/2] [UU]

md5 : active raid1 sdb8[1] sda8[0]
      36515648 blocks [2/2] [UU]

And that's it.
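
One optional extra: with the fd (raid autodetect) partition type the kernel assembles the arrays on its own at boot, but recording them in mdadm's config file keeps the userspace tools informed. A minimal sketch, assuming the Debian default config location:
# append the running array definitions to mdadm's config
mdadm --detail --scan >> /etc/mdadm/mdadm.conf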

If you are running a stock Debian kernel with an initrd, I've heard that you may have problems after reboots when adding the first drive to the array; some issue with mkinitrd. If that is the case, read this article:

http://piirakka.com/misc_help/Linux/raid_starts_degraded.txt

 

Posted by philcore on Tue 25 Oct 2005 at 01:01
Tags:
OK, I've just migrated my home box to use LDAP for POSIX accounts/groups and Samba. I already had email contacts in an LDAP directory. It was one of those things I kept putting off. Glad I did it. Prolly write up a ~HOW[NOT]TO shortly, once I've figured out what I shouldn't have done.

 

Posted by philcore on Sun 18 Sep 2005 at 19:34
Tags: none.
Has anybody got a good doc on using openswan with KLIPS on a 2.6 kernel? I've got it working with NET_KEY, but I really do miss the ipsecX interfaces. I've tried the latest openswan (2.4.0) with a 2.6.13 kernel, but no luck.

 

Posted by philcore on Fri 9 Sep 2005 at 23:51
Tags: none.
Just found this site after a discussion on debian-isp@ about PPP and RADIUS authentication. I think this is great. I'm sure it will become my place to go for those "man, how in the heck am I supposed to..." days.

Good job, Steve.