Bug 15041 - [FR] raid5 partition border alignment
Summary: [FR] raid5 partition border alignment
Status: CLOSED FIXED
Alias: None
Product: Sisyphus
Classification: Development
Component: evms
Version: unstable
Hardware: all Linux
Importance: P1 normal
Assignee: Олег Соловьев
QA Contact: qa-sisyphus
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2008-03-23 23:25 MSK by Michael Shigorin
Modified: 2016-10-27 21:00 MSK
CC List: 7 users

See Also:


Description Michael Shigorin 2008-03-23 23:25:38 MSK
It would be really great to be able to automatically align partitions for RAID
(critical for 5/6, though it would not hurt the other levels either) so that
they start on a multiple of the chunk size.

Quoting from http://wiki.centos.org/HowTos/Disk_Optimization (thx vvk@):

Raid Math

The biggest performance gain you can achieve on a raid array is to make sure you
format the volume aligned to your raid stripe size. This is referred to as the
stride. By setting up the file system in such a way that the writes match the
raid layout, you avoid overlap calculations and adjustments on the file system,
and make it easier for the system to write out to the disk. The net result is
that your system is able to write things faster, and you get better performance.
To understand how the stride math actually works, you need to know a couple of
things about the RAID setup you're using.

    * The type of RAID you're going to use (RAID 0, 1, 5, 10, etc.)
    * The number of disks in the array
    * The chunk size of the RAID array
    * And lastly, you need to know the filesystem block size (4K blocks for ext3,
for example). 

The stride calculation works like this: you take the number of disks and multiply
it by the chunk size of the RAID array. This gives you your stripe size. Then
you take the stripe size and divide it by the filesystem block size. This gives
you the stride value to use when formatting the volume. This can be a little
complex, so some examples are listed below.

For example, if you have a 4-drive RAID5 and it is using 64K chunks, your stripe
size will be 256K. Given a 4K filesystem block size, you would then have a stride
of 64 (256/4). If it were a 4-disk RAID0 array, then it would also be 64
(4 × 64K / 4K = 64). If it were a 4-disk RAID10 array, then it would be 32
((4/2) × 64K / 4K = 32).
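
To make the arithmetic above easier to reuse, here is a minimal shell sketch of
the same calculation (the values are just the 4-disk, 64K-chunk example from the
text; substitute your own array parameters):

# RAID geometry -- example values from the text above
DISKS=4        # total number of disks in the array
CHUNK_KB=64    # mdadm chunk size, in KB
BLOCK_KB=4     # ext3 filesystem block size, in KB

STRIPE_KB=$((DISKS * CHUNK_KB))    # 4 * 64K = 256K stripe
STRIDE=$((STRIPE_KB / BLOCK_KB))   # 256K / 4K = stride of 64
echo "stripe=${STRIPE_KB}K stride=${STRIDE}"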

When you create an ext3 partition in this manner, you would format it like this:

mkfs.ext3 -E stride=64 -O dir_index /dev/XXXX

The dir_index listed above is the last tweak mentioned here. The dir_index
option allows ext3 to use hashed b-trees to speed up lookups in large
directories. It's not a big gain, but it will help.
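
If the filesystem already exists, dir_index can also be enabled after the fact;
a small sketch, assuming /dev/XXXX is an existing (and unmounted) ext3 volume:

tune2fs -O dir_index /dev/XXXX   # turn on hashed b-tree directory indexing
e2fsck -fD /dev/XXXX             # reindex directories that already exist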

See also:
http://www.freesource.info/wiki/HCL/XranenieDannyx/SoftwareRAID#h4072-3
http://www.pythian.com/blogs/411/aligning-asm-disks-on-linux

Combined with solving the issue of simultaneous resync of arrays that sit on the
same disks, the result may simply turn out to be the best currently available way
of setting up arrays from the installer ;-)
(A separate FR could also be filed about mdadm -b internal, thanks to mrkooll@
for the hint.)
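
As a minimal illustration of the mdadm -b internal hint (assuming an already
running array at /dev/md0):

mdadm --grow --bitmap=internal /dev/md0   # add a write-intent bitmap to an existing array
# the same can be requested at creation time: mdadm --create ... --bitmap=internal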
Comment 1 Michael Shigorin 2010-11-06 12:56:20 MSK
It looks like we will keep dragging our feet with the stock partitioning tool not
only on RAID5, but also on SSDs and on 4K-sector HDDs.

Serge, are you not planning to touch alterator-vm any more, so that solving this
problem means writing something else, e.g. with (lib)parted?
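
For reference, a rough sketch of what chunk-aligned partitioning could look like
with plain parted (the device name and sizes here are only an illustration):

parted --align optimal /dev/sdX -- mklabel gpt
parted --align optimal /dev/sdX -- mkpart primary 1MiB 100%   # a 1MiB start is a multiple of common chunk sizes
parted /dev/sdX align-check optimal 1                         # verify the first partition is aligned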
Comment 2 Anton Farygin 2011-07-13 21:51:49 MSK
Misha, SSDs were fixed long ago. Please re-check with RAID.
Comment 3 Michael Shigorin 2011-07-14 22:38:59 MSK
In theory, what stanv@ did in evms-2.5.5-alt17 should be enough.
I will try to check it on a test bench when a convenient occasion comes up.
Comment 4 Michael Shigorin 2016-10-27 21:00:30 MSK
The current bug on this topic:
https://bugzilla.altlinux.org/show_bug.cgi?id=26925
(status: primary partitions were fixed, but instead of logical partitions we now
align the extended one)