Fedora 16 missing H.264 decoder plugin (fixed)

If you installed Fedora 16 and added the RPM Fusion repositories, you may find that you still can't watch .mkv videos because of a missing H.264 decoder plugin.

Totem Movie Player complains that an H.264 decoder plugin is required to play the file.

Here is how to fix it.

You are still missing a required package.  Install the following package and you should be good to go.

sudo yum install gstreamer-ffmpeg

However, if you also need full instructions for enabling the RPM Fusion repositories, here they are.  The first two lines add the necessary repositories.  The last line installs the plugins needed for a variety of video formats as well as for DVD playback.

sudo yum localinstall --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm

sudo yum localinstall --nogpgcheck http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm

sudo yum install gstreamer gstreamer-plugins-good gstreamer-plugins-bad gstreamer-plugins-ugly gstreamer-ffmpeg libdvdread libdvdnav lsdvd

Enjoy.

Alternatively, you can install and use VLC or SMPlayer to view .mkv videos; both can decode H.264.

Fedora 15: Enable your laptop fingerprint reader

If your laptop has a built-in fingerprint reader, here is how to enable it for logins under GNOME 3.  In this example, I'm using a Lenovo T60 laptop.

Start the Authentication application.
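If you prefer the terminal, the same dialog can usually be launched as system-config-authentication (this assumes the authconfig-gtk package is installed):

system-config-authentication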

On the advanced options tab,  enable the fingerprint reader.

Next open a terminal window.

Run fprintd-enroll with your username.  To record your fingerprint, it will ask you to successfully swipe your index finger three times.
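For example, assuming your username is jdoe (substitute your own):

fprintd-enroll jdoe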

That is all.  You can test it by logging out and logging back in.  On the GDM login screen, select your username and then the fingerprint icon.  Scan your right index finger and it should log you in.

Installing VirtualBox Additions on Fedora 14

To install the VirtualBox Additions on Fedora 14, you need to install a few additional packages.  In this example, I use a fresh install of Fedora 14.

Here are the packages that you will need.


sudo yum install perl dkms gcc kernel-devel kernel-headers

Now is not a bad time to update your kernel (though this is not required).


sudo yum update kernel

I reboot to the updated kernel.  Then I mount the VirtualBox Additions volume.
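Before running the installer, a quick optional sanity check is to confirm that the running kernel matches the kernel-devel package you installed, since the Guest Additions modules are built against the running kernel's headers:

uname -r
rpm -q kernel-devel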


cd /media/VBOXADDITIONS_3.2.10_66523

sudo ./VBoxLinuxAdditions-x86.run

It should install without issue.  You will need to reboot to enjoy the new additions.
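After the reboot, you can verify that the guest kernel modules are loaded (an optional check; module names such as vboxguest and vboxsf are what the Guest Additions typically install):

lsmod | grep -i vbox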

Growing your RAID array

This is Part 3 of a series of posts about setting up a Linux RAID 5 disk array.  Let's say you started your array with three drives, and now that you have more funds you have purchased an additional drive to increase its size.  Adding a fourth drive to a three-drive RAID 5 array increases your storage space by 50%, so you get a lot of bang for your buck.

Let's continue with my example.  I started with three 1 TB SATA drives and am now adding a fourth 1 TB drive.  Before, I had approximately 2 TB of usable storage; after adding this fourth drive, I will end up with approximately 3 TB.

Here are the basic steps that I will cover:

  • Partition the bare drive
  • Add the drive to your array as a spare
  • Grow your array to include this spare
  • Extend your file-system to recognize this additional space
  • Save your new configuration

First, let's create an "auto-detect" partition on the bare drive.  You have seen this step before; I'll include it again to refresh your memory.  This step is essential.

$ sudo fdisk /dev/sde
Command (m for help): n
Command action
    e   extended
    p   primary partition  (1-4)
p
Partition number (1-4): 1
First cylinder (1-133674, default 1):
Using default value 1
Last cylinder, +cylinder or +size{K,M,G} (1-133674, default 133674):
Using default value 133674

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

$ sudo fdisk -l
Disk /dev/sde: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbe84c178

Device Boot       Start         End      Blocks   Id  System
/dev/sde1               1      133674  1073736373+   fd  Linux raid autodetect

Now we will add this disk to our array as a spare drive.  If you merely want to add a hot spare to your array, you can stop after this next step.  A hot spare is a drive that is automatically used whenever one of your active drives fails.


$ sudo mdadm --add /dev/md0 /dev/sde1
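Before growing anything, it is worth confirming that the new partition shows up as a spare.  An optional check (the exact output varies with your mdadm version):

$ sudo mdadm --detail /dev/md0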

In this next step we will initiate the process of growing the existing array to include the new drive.  I say initiate because this is a very lengthy process: the data on your existing array is rewritten so that it is spread across four drives instead of three, which will take several hours.  The good news is that your array remains online during this process, so you don't have to endure any downtime.


$ sudo mdadm --grow /dev/md0 --raid-devices=4

You can monitor the progress with the file /proc/mdstat.  Periodically "cat" its contents to see the percentage of progress.  When the reshape has finished, you will have a four-disk array, but your mounted file-system will still show its old size; it will not automatically recognize the additional space.  You need to resize your file-system so that it recognizes the additional free space.  Fortunately this step is very fast and takes only a few minutes.


$ sudo resize2fs /dev/md0

When this is done, you can run df -h and see that your file-system now has the additional free space.  What's cool is that you added lots of free space without downing your box all afternoon.
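For example, assuming the array is mounted at /srv/d1 as it was in Part 2:

$ df -h /srv/d1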

Please don’t forget to update your array configuration file.  Since you now have a four disk array instead of three, you need to let the system know to expect to see four drives instead of three. 

If you forget this step, your array will not mount after you reboot.


$ sudo mdadm --detail --scan | sudo tee  /etc/mdadm.conf
ARRAY  /dev/md0 metadata=0.90 UUID=622146c2:61b0872d:6bbacb7a:b6d31587

That is all.  Enjoy.

Creating a software RAID5 array

This is Part 2 of a series of posts where I describe building a DIY external RAID5 array.  In Part 1 I talked about the hardware components that I used.   In this post  I will show you how to configure the software RAID5.

To implement RAID5 you need a minimum of three drives.  I begin with three 1 TB drives.  When we are finished we will have a device that is approximately 2 TB in size and fault tolerant: if one drive fails, we can replace it and rebuild the array without losing data.

  • /dev/sdb
  • /dev/sdc
  • /dev/sdd

The first step is to partition your drives.  Linux software RAID uses a special partition type called "Linux raid autodetect" (type 0xfd).  More about the "autodetect" partition later, but for now let's begin partitioning the drives.  I'll show you how to partition the first drive, /dev/sdb.  Repeat the same steps for drives /dev/sdc and /dev/sdd.

$ sudo fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-133674, default 1):
Using default value 1
Last cylinder, +cylinder or +size{K,M,G} (1-133674, default 133674):
Using default value 133674

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
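If you would rather not repeat the interactive fdisk session for each drive, one common shortcut is to copy the partition table with sfdisk.  This is optional, and you should double-check the source and target device names before running it:

$ sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sdc
$ sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sdd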

Repeat for drives /dev/sdc and /dev/sdd (interactively, or with the sfdisk shortcut above).  When you are done you should have three drives that look like the following.

$ sudo fdisk -l
Disk /dev/sdb: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbe84c178

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      133674  1073736373+  fd  Linux raid autodetect
Disk /dev/sdc: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf70d7ba7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      133674  1073736373+  fd  Linux raid autodetect
Disk /dev/sdd: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x681afb7b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      133674  1073736373+  fd  Linux raid autodetect

I can now combine these three partitions to create the RAID5 device /dev/md0.


$ sudo mdadm --create --verbose /dev/md0  --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: size set to 1073736256K
mdadm: array /dev/md0 started.

The previous command merely started the process of combining the three drives into a single RAID5 device.  This process will take a while to complete.  You can check its progress by displaying the special file /proc/mdstat.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      2147472512 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[========>.............]  recovery = 35.0% (376037544/1073736256) finish=128.6min speed=90402K/sec

unused devices: <none>

From the information above, you can see that the process can take a long time depending on the size and speed of your drives.  It is 35% complete, and the ETA is 128.6 minutes.  You can continue to “cat” the /proc/mdstat file to check the progress.  When it is finished, you should see something like the following.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
      2147472512 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
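As an aside, rather than re-running cat by hand while you wait, you can leave a terminal refreshing the status automatically (watch is part of the standard procps tools):

$ watch -n 30 cat /proc/mdstat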

The RAID device /dev/md0 is now ready for use.  I can now format it with a filesystem; I choose ext4.  For this example I will mount it at /srv/d1, but you can mount it wherever you wish (e.g. /usr/local or /opt).

$ sudo mkfs.ext4 /dev/md0
$ sudo mkdir /srv/d1
$ sudo mount /dev/md0 /srv/d1

I’ll edit my /etc/fstab file and add the following line.

/dev/md0     /srv/d1   ext4      defaults      0 0
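Before relying on the new entry across a reboot, you can sanity-check it; mount -a will complain about obvious typos in /etc/fstab:

$ sudo mount -a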

I am almost done.  However, we must now save the configuration information of our array so that it can be discovered at boot time.  The following command examines your RAID array and saves its configuration to a file that is read at boot time.

$ sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf
ARRAY /dev/md0 metadata=0.90 UUID=622146c2:61b0872d:6bbacb7a:b6d31587

We are now done.  The raid device has been created and mounted.  We saved its configuration so that it can be discovered and remounted after each boot.

$ df -h
Filesystem               Size  Used Avail Use% Mounted On
/dev/mapper/vg0-lv_root  9.7G  2.2G  7.5G  23% /
tmpfs                    501M  348K  501M   1% /dev/shm
/dev/sda1                194M   23M  162M  13% /boot
/dev/md0                 2.0T  199M  1.9T   1% /srv/d1

In Part 3, I will deal with growing the array by adding a disk to the array to make it a four disk array.