So What Cluster Size Is Optimal for Win2K?


This topic was started by:



137 Posts
Joined 2001-07-26
I have heard that FAT32 is faster than NTFS, so my question is:

When formatting FAT32 (I like using Partition Magic :p), what cluster size makes the OS run fastest?

My thinking is that larger cluster sizes will decrease fragmentation because files have more slack around them, known as "cluster overhang".

Since NT supports 64K clusters, would that be the best bet?

Is there any disadvantage to large cluster sizes, in this day and age when free disk space is no longer a problem with 20+GB HDs?

I'm thinking 5GB for Win2K, 2GB for all apps, and a 500MB primary partition in front of the 5GB partition (at the start of the HD) for the pagefile.

Any suggestions?


Responses to this topic



3857 Posts
Joined 2000-03-29
I am not sure what you are talking about with respect to "cluster overhang" and whether or not it is desirable. From what I have read, you want as little free space as possible between files anyway (isn't that fragmentation?). And I have found NTFS to be faster AND way more reliable than FAT/FAT32.
 
Now, as for cluster size, I tend to use 4K on partitions smaller than 4GB, and 8K on partitions that are larger. If I plan on creating a partition to store large files (like ISOs or ZIPs of installable program images), I will go to 16K. This may be the reason I have found NTFS to be faster, since I tend to change cluster sizes depending on my needs.
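 
Put as a quick Python sketch (my own illustration only; the function name is made up, and nothing like this is built into Windows or Partition Magic):

    def suggested_cluster_size_kb(partition_gb, mostly_large_files=False):
        # Rule of thumb from above: 16K for partitions dedicated to
        # large files, otherwise 4K under 4GB and 8K above.
        if mostly_large_files:
            return 16
        return 4 if partition_gb < 4 else 8

    print(suggested_cluster_size_kb(3))         # 4
    print(suggested_cluster_size_kb(20))        # 8
    print(suggested_cluster_size_kb(20, True))  # 16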


137 Posts
Joined 2001-07-26
OP
Cluster overhang:
Say you have 8K clusters. If you have 100 files in that partition using 1K each, then you will use 800K of hard disk space, regardless of the fact that in total the files only contain 100K of information.
Cluster overhang is when you have a file that is, say, 9K in size: it uses up one cluster and extends into the next, and if the next is full it goes to any other spare cluster on the hard disk.
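 
You can check that arithmetic with a minimal Python sketch (mine; allocated_bytes is a made-up helper, not any Windows API):

    import math

    def allocated_bytes(file_size, cluster_size):
        # A file always occupies a whole number of clusters, so even
        # a 1K file consumes a full cluster; the unused tail is slack.
        return math.ceil(file_size / cluster_size) * cluster_size

    # 100 files of 1K each on 8K clusters: 100K of data, 800K on disk.
    print(sum(allocated_bytes(1 * 1024, 8 * 1024) for _ in range(100)))  # 819200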
 
The smaller the cluster size, the more clusters any given file is allocated across the hard disk, whether those clusters are contiguous or spread all over the HD (fragmentation).
 
Try this in Win2K: select a couple of files (lots of small files would be a good example), then press Alt+Enter. You will see the properties 'Size' and 'Size on Disk'; these will be very different if you have large cluster sizes and small files.
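 
If you would rather script that check than use Explorer, here is a rough Windows-only Python sketch (my own; it estimates 'Size on Disk' by rounding the file size up to the volume's cluster size from GetDiskFreeSpaceW, which only matches Explorer for files that are not compressed or sparse):

    import ctypes
    import math
    import os

    def cluster_size(root):
        # GetDiskFreeSpaceW reports sectors per cluster and bytes per
        # sector for a volume root such as "C:\\".
        spc, bps = ctypes.c_ulong(), ctypes.c_ulong()
        free, total = ctypes.c_ulong(), ctypes.c_ulong()
        ctypes.windll.kernel32.GetDiskFreeSpaceW(
            root, ctypes.byref(spc), ctypes.byref(bps),
            ctypes.byref(free), ctypes.byref(total))
        return spc.value * bps.value

    def report(path):
        size = os.path.getsize(path)
        root = os.path.splitdrive(os.path.abspath(path))[0] + "\\"
        cs = cluster_size(root)
        on_disk = math.ceil(size / cs) * cs  # round up to whole clusters
        print(path, "Size:", size, "Size on Disk:", on_disk)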
 
The only disadvantage I can see with using large clusters is that they use more space (cluster overhang is wasting the space, especially with lots of Temporary Internet Files).
 
The reason there is less fragmentation with large clusters, for some files, is that frequently updated files, such as log files, INI and system files, can increase and decrease in size slightly without needing to find a spare cluster.
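 
A quick worked example of that point (the file sizes are made up):

    import math

    # A 12K log file grows to 15K; how many clusters before and after?
    for cluster in (4 * 1024, 16 * 1024):
        before = math.ceil(12 * 1024 / cluster)
        after = math.ceil(15 * 1024 / cluster)
        print(f"{cluster // 1024}K clusters: {before} -> {after}")
    # 4K clusters:  3 -> 4  (a new cluster is needed, possibly elsewhere)
    # 16K clusters: 1 -> 1  (the file grows inside its existing slack)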
 
Anyway, that's my understanding.
 
Anyone else?


3857 Posts
Joined 2000-03-29
OK, while I am familiar with the situation you speak of, I have never heard it termed "cluster overhang". Now, as for performance, I wouldn't use large clusters on a disk that will contain many small files (like text files, for instance), because the MFT on an NTFS drive will actually store the data directly if the file is less than one cluster in length (hence my decision to use larger clusters only where I store large files). I have found 8K to be a great median for most systems, as I don't get the wasted overlap (or "overhang") of large clusters never being filled by my files, leaving barely filled clusters all over the place.
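 
To put rough numbers on that trade-off (my own sketch; the file sizes are invented small files):

    import math

    files = [300, 700, 1500, 2500, 9000]  # bytes; hypothetical small files
    data = sum(files)
    for cluster in (4 * 1024, 8 * 1024, 64 * 1024):
        used = sum(math.ceil(s / cluster) * cluster for s in files)
        slack = 100 * (used - data) // used
        print(f"{cluster // 1024}K clusters: {used} bytes allocated "
              f"for {data} bytes of data ({slack}% slack)")
    # 4K:  28672 bytes allocated (51% slack)
    # 8K:  49152 bytes allocated (71% slack)
    # 64K: 327680 bytes allocated (95% slack)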