Clearing page file at shutdown option





149 Posts
Location -
Joined 2001-09-02
Messing around in the registry, I noticed the option to "ClearPageFileAtShutdown". It is set by default to "0" (no). What is the purpose of enabling this feature? Are there any performance advantages/disadvantages in doing so?
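
For reference, the value lives at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management and is a REG_DWORD (0 = off, 1 = wipe the pagefile with zeros at shutdown, which is a security measure and makes shutdown noticeably slower). A minimal Python sketch to read or flip it (the helper names are mine; writing requires an elevated prompt):

```python
# Minimal sketch: inspect or set ClearPageFileAtShutdown.
# Assumes Windows and the standard location of the value; writing it
# requires Administrator rights, and the change takes effect on reboot.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
VALUE_NAME = "ClearPageFileAtShutdown"

def get_clear_pagefile() -> int:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
        assert value_type == winreg.REG_DWORD
        return value

def set_clear_pagefile(enabled: bool) -> None:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, int(enabled))

if __name__ == "__main__":
    print("ClearPageFileAtShutdown =", get_clear_pagefile())
```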

This subject has been archived. New comments and votes cannot be submitted.
Created Sep 24 · Last response Oct 12 · 0 likes · 1 hour read time

Responses to this topic



45 Posts
Location -
Joined 2001-03-01
Quote:Originally posted by AlecStaar
Why?

They add up!
No. They do not.

If an I/O is not split on a fragmented file then defragmenting that file will not make that I/O any faster.

Quote:Tweaking's cumulative... the more 'tiny' savings you make, the faster you will run overall & at many times!

(That's how I look at it at least... seems to work for me!)
There is no saving to be made. If the I/O isn't split, then it can't be made any faster.

Quote:An example:

DosFreak & my systems for instance (basically the same) beat the snot out of Dual Athlons of 1.4 GHz & Dual Palominos of 1.2 GHz not too long ago... and, also many kinds of 'High-End' Athlons of 1.4 GHz & up to 1.7 GHz overclocked in fact!

AND? We only run Dual Pentium III's, overclocked to 1121 MHz & 1127 MHz!
There is no /way/ that a PIII at 1100-odd MHz can match an Athlon at 1.4 GHz, with a few exceptions (such as using SSE on the PIII but using a pre-SSE AMD processor).

Quote:In a forum FULL of VERY VERY FAST single cpu rigs, also, we basically dusted them across the boards!

This was tested on 3 separate benchmark tests: WinTune97, Dr. Hardware 2001, & SiSoft Sandra 2001. It made NO sense we should win nearly across the boards on most all categories, but we did!
Tell me how defragging makes a non-split I/O on a fragmented file any faster.

Quote:I think taking 'little bits' of speed in tweaks like defragging well, system memory tuning & registry hacks for all kinds of things (as well as using the most current drivers etc.), adds up!
Not if there's nothing to add.


45 Posts
Location -
Joined 2001-03-01
Quote:Originally posted by AlecStaar
Your whole premise rests on that grounds... now, if I do not defrag my disk for a year, and have say... my Quake III Arena data files all fragmented all over the disk? You are telling me that this does not slow the system down??

That the disk head has to make alot of swings/passes to assemble that file does not take place???
If it's only reading 64 kbytes at a time, then it doesn't matter -- because it would make those multiple head movements anyway.

If you attempt to do a large contiguous read, it gets split up anyway, regardless of whether the file is fragmented. That's the nature of disk controllers. They can't read an arbitrarily large amount of information at once; instead, they have to split large requests into multiple small ones. This is even the case for large reads of a contiguous file. It's unavoidable.

Yes, the extra seeks have to happen -- but if they happen between reads anyway, they make no difference.
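
To make that concrete, here's a rough Python illustration: even an application-level "read the whole file" is issued as a sequence of bounded transfers, with the heads free to move in between. The 64 KB figure is just the one used in this thread; real controllers and drivers pick their own limits.

```python
# Rough illustration of the point above: a "read the whole file" request
# still becomes a series of bounded transfers, one after the other.
CHUNK = 64 * 1024   # the transfer size used as an example in this thread

def read_whole_file(path: str) -> bytes:
    parts = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)    # one bounded transfer at a time
            if not chunk:
                break
            parts.append(chunk)      # the heads may seek between chunks anyway
    return b"".join(parts)
```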

Quote:It depends on the degree of fragmentation! Your "IF" cuts both ways...
The "degree" of fragmentation isn't important. The size of the fragments is somewhat important. The average transfer size is even more important. And the number of Split I/Os is the true statistic that demonstrates if fragmentation is having any effect at all.

Quote:There's your entire big "IF" again... what if it is? What if a large database is in pieces all over a disk from constant inserts to it via Insert queries? Same idea as a Quake III game data file from above! It will take ALOT longer to load! No questions there!
If the database is only being read 64 kbytes at a time, it won't make a blind bit of difference. Similarly, if it's only being written 64 kbytes at a time. And most I/Os are less than this size.

Quote:Again, this depends on the degree of fragmentation, pretty common sense! But, I see your point also... but, try to see mine! I've seen databases SO torn apart by deletes & inserts, that defrags & internal compacts/reorgs to them inside of them? Made them speedup, bigtime!
I see the point you're trying to make, but I know from experience that split I/Os are rare, even on highly fragmented files on highly fragmented disks. And if I'm not getting split I/Os then the fragmentation does not matter.

Quote:Don't ask me then! Ask DosFreak! He saw & participated in tests I ran & that others ran as well! 3 different testing softwares, 3 different testers conducted the tests. I was amazed when my Dual CPU Pentium III 1121 MHz beat a Dual Athlon @ 1.4 GHz, & also a Dual Palomino @ 1.2 GHz! No reason to lie here believe me... ask DosFreak! His machine is a HAIR faster than mine!
It's quite simple, actually. A 1.4 GHz Athlon is faster than a 1100 MHz PIII, ceteris paribus. Give the Athlon an old RLL drive and then run a disk benchmark and obviously, it'll suck. But the processor and its memory subsystem are both faster on the Athlon (no question about this).

Quote:Tell me how a fragmented large database reads slower (touche)... your argument depends on that single premise. It falls apart in the light of heavily fragmented disks!
No, it doesn't. Heavily fragmented disks don't suddenly start needing to do larger I/Os. They still only do small (<64 kbyte) I/Os, and those still don't get split.

Quote:I never said anything of the kind that "defragging a non-split I/O on a fragmented file" would be faster... don't try put words in my mouth! A whole file is just healthier for the system,
I can guarantee that my OS cares not whether files are contiguous or fragmented.

Quote:and the drive itself. Less head motion used to read it, and only 1 pass used.
Except that it doesn't work like that, except for very small files (and they would be read contiguously even with fragmentation). Large files can't be read in a single I/O transfer. It's always "read a bit, wait a bit for the OS to move the buffered data somewhere else, read a bit more, etc..". The nature of disk controllers.

Quote:On your points above:

1.) Speedisk from Norton/Symantec, is probably NEVER going to break the filesystem as you state!
Yes, it could, and it would be quite easy. It relies on the FS working in a particular way, but the FS is not guaranteed to work in a particular way.

Quote:They are in tight with Microsoft & always really have been! I am sure they are apprised of ANY changes coming from MS regarding this WELL beforehand!
Actually, they *aren't*, and this is one of the problems I have.

Quote:Unless Microsoft wants to get rid of them etc. as a business ally! Fat chance!
Of course they do. MS are in bed with Executive Software.

Quote:2.) Not everyone can afford a cluster of boxes like MS can do thru the old Wolfpack clustering 2 at a time or newer ones... so uptime? IS a plus!
If you require high availability, then you make sure you can afford a cluster. If you do not require high availability, then you can manage with a single machine, and downtime doesn't matter.

Quote:3.) On defraggers? The original subject?? I keep Norton Speedisk & Execsoft Diskeeper around! They both have merits. I would like to try PerfectDisk by Raxco one day, just for the sake of trying it though!
It has a horrendous interface, and, like Diskeeper and O&O (Speed Disk isn't going anywhere near my computer, so I can't comment on it) is a bit buggy.

Quote:PeterB/DrPizza, you hate Speedisk? Then, you might not like the new Diskeeper 7 then...
If it doesn't use the defrag FSCTLs then it has no place on any production machine. If it can only defrag partitions with >4kbyte clusters on XP/Win2K2, then that's fine, because XP extends the FSCTLs so that they can defrag the MFT (and other bits of metadata, I think), and so that they work on clusters larger than 4 kbytes.

Quote:It must be patching the OS again (as old Diskeeper 1.0-1.09 I believe, had to for NT 3.5x to use them).
I don't think that it is, not least because WFP won't let it.

Quote:Why do I say that? Well, I cannot defrag a volume here that is using 64k clusters using Diskeeper 6.0 Se... but I can with Speedisk!
That doesn't require patching the kernel, it merely means not using the FSCTLs provided by the FS drivers.

Quote:Diskeeper 7? It can now do over 4k NTFS clusters! It MUST be patching the OS again like old ones did! Unless, they completely blew off using that functionality in the API they sold to MS in current models of Diskeeper! I am guessing, because PerfectDisk is also one that uses native API calls for defragmentations? It too, is limited on NTFS defrags on volumes with more than 4k NTFS cluster sizes!
My guess is that this ability is restricted to XP/2K2, because those expand the capabilities of the defrag FSCTLs to work with >4 kbyte clusters. To abandon the built-in FSCTLs would be a strange move indeed.

I can't find any mention of this new ability on Executive Software's web page, and I'm not running a WinXP machine to test. Where can I find more information about it?

One thing to note is that MS might update the NTFS driver in Win2K to match the one in WinXP (as they did in NT 4; I think that SP 4 shipped what was in effect the Win2K driver, so that it could cope with the updated NTFS version that Win2K uses). This might serve to retrofit the ability to defrag partitions with large clusters.
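
For the curious, the extent list the defrag FSCTLs deal in can be read directly with FSCTL_GET_RETRIEVAL_POINTERS. The sketch below is only an illustration of that documented ioctl (buffer sizing and helper names are mine), not anybody's shipping defragger; a file that comes back as a single extent is contiguous as far as NTFS is concerned.

```python
# Hedged sketch: count a file's extents (fragments) via the documented
# FSCTL_GET_RETRIEVAL_POINTERS ioctl. Illustration only.
import ctypes
import struct
from ctypes import wintypes

FSCTL_GET_RETRIEVAL_POINTERS = 0x00090073
GENERIC_READ = 0x80000000
FILE_SHARE_READ_WRITE = 0x00000003
OPEN_EXISTING = 3
ERROR_MORE_DATA = 234

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE

def count_extents(path: str) -> int:
    handle = kernel32.CreateFileW(path, wintypes.DWORD(GENERIC_READ),
                                  FILE_SHARE_READ_WRITE, None,
                                  OPEN_EXISTING, 0, None)
    if handle in (None, wintypes.HANDLE(-1).value):
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        starting_vcn = ctypes.c_longlong(0)             # STARTING_VCN_INPUT_BUFFER
        out = ctypes.create_string_buffer(1024 * 1024)  # room for ~65k extents
        returned = wintypes.DWORD(0)
        ok = kernel32.DeviceIoControl(
            wintypes.HANDLE(handle), FSCTL_GET_RETRIEVAL_POINTERS,
            ctypes.byref(starting_vcn), ctypes.sizeof(starting_vcn),
            out, ctypes.sizeof(out), ctypes.byref(returned), None)
        if not ok and ctypes.get_last_error() != ERROR_MORE_DATA:
            raise ctypes.WinError(ctypes.get_last_error())
        # First DWORD of RETRIEVAL_POINTERS_BUFFER is ExtentCount (only what
        # fit in the buffer if ERROR_MORE_DATA was returned).
        return struct.unpack_from("<I", out, 0)[0]
    finally:
        kernel32.CloseHandle(wintypes.HANDLE(handle))

if __name__ == "__main__":
    import sys
    print(count_extents(sys.argv[1]), "extent(s)")
```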


45 Posts
Location -
Joined 2001-03-01
Quote:Originally posted by AlecStaar
Consider the head movements are NOT all over the drive, but in the same general area, correct, on a contiguous file, right?
This assumes, normally incorrectly, that the heads don't move in-between.

Quote:Less time than swinging say, from the middle of the disk (where the original file is striped out) to the nearer the end for the next part until it is all read!

Thus, a contiguous file is read faster, correct?
Generally, no, it isn't. Rapid sequential reads/writes of large chunks of file are rare. The only things I can think of off the top of my head where this happens are playing DVD movies, and hibernating/unhibernating. It's a remarkably rare action (PerfMon does not lie, though software companies often do).

Quote:That happens on heavy fragmented files man, ones that are scattered allover the drive, usually on nearly full disks or ones over 70% full using NTFS!
Except that it really doesn't.

Quote:You see they do occur, & you concede this! Those tiny things add up! In terms of detrimental effects & positive ones! They make a difference, I beg to differ here! Especially on near full or over 70% full disks that are fragged already.
The fullness is not a major concern (the number of files is more important). And if the seeks happen between reads/writes, then no, they do not matter.

Quote:Not true! On a heavily fragged nearly full disk? The degree of that fragmentation can cause MORE of it & why I used the examples of slowdowns I have seen on HUGE databases because of fragmentations! You bust that file up all over a disk? ESPECIALLY ON A DISK THAT IS PAST 70% or so full already? You see it get slower.
In artificial benchmarks (which tend to do lots of rapid sequential reading/writing), sure. In real life (which doesn't), no.

Quote:If the file IS scattered allover a drive? Fragmentation INTERNALLY w/ alot of slack in it & EXTERNALLY on disk from record deletions & inserts. Inserts are what cause the fragmentation externally if you ask me! They cause it to grow & fragment on disk, slowing it down & busting it into non-contiguous segments all over the drive! Not in the same contiguous area as the filesystem works to place that data down & mark it as part of the original file w/ a pointer.
I don't know about the database that you use, but the ones I use (mostly SQL Server and DB2) aren't so simplistic as to work like that. They won't enlarge the file a row at a time (as it were). They'll enlarge it by a sizable chunk at a time. Even if they have to enlarge the file often, they do so in a way that does not greatly increase the number of split I/Os.
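
As a rough sketch of that growth policy (the 64 MB chunk size here is an arbitrary figure of mine, not any particular engine's default): extend the data file in big steps and write rows into the preallocated space, rather than growing it a few bytes per insert.

```python
# Sketch of "grow in big chunks" file management. The filesystem gets asked
# for one large extension at a time instead of many tiny ones.
import os

GROWTH_CHUNK = 64 * 1024 * 1024   # arbitrary example figure

def append_record(path: str, record: bytes, used_bytes: int) -> int:
    """Write `record` at offset `used_bytes`, growing the file in big steps."""
    mode = "r+b" if os.path.exists(path) else "w+b"
    with open(path, mode) as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        if used_bytes + len(record) > size:
            f.truncate(size + GROWTH_CHUNK)   # one large extension
        f.seek(used_bytes)
        f.write(record)
    return used_bytes + len(record)
```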

Quote:On a nearing full disk? This REALLY gets bad! The system has to struggle to place files down & does fragment them!
It doesn't have to "struggle" to place files down. It doesn't actually *care*.

Quote:&, Microsoft said NTFS was frag-proof initially. Well, so much for that I guess! The proof's in the pudding now!
Actually, they said that NTFS didn't have its performance damaged by fragmentation, except in the most extreme cases (where average fragment size is around the same as average I/O transfer size).

Quote:You're saying fragmentation does not hurt system performance? I remember Microsoft saying NTFS would be 'frag proof', this is not the case!
No, they said its performance wouldn't be damaged. This isn't the same statement as saying it doesn't get fragmented -- it does.

Quote:I want to know something: Did you get your information about this from an old Microsoft Tech-Ed article? As good as MS is, they are not perfect. Nobody is.
Which information?

Quote:Again: When a disk is over 70% or so full & you have for instance, a growing database due to insertions? & alot of the data is fragmented from other things already? You WILL fragment your file & then the disk will be slowed down reading it!
Only if the fragment size is around the same size as the average I/O transfer size, and that is an extremely rare situation.

Quote:Heavy frags on near full drives with fragmentation slows a disk down & it is pretty much, common sensical anyhow, ESPECIALLY ON DISKS NEARING OVER 70% full capacity w/ fragmented files on them!
No, it doesn't. You can't speed up an unsplit I/O by making the rest of the file contiguous.

Quote:Ever play the card game "52 pickup"? Think of it in those terms. A nicely stacked deck is a lot simpler to manage than a 52 pickup game!
The only part of the system that even knows the file is fragmented is the NTFS driver. Nothing else has a clue. And the NTFS driver doesn't care if a file is in one extent or a hundred.

Quote:I have not seen that to date yet! It won't happen. Not a wise business move to lose an ally! Microsoft helps Symantec make money & vice-a-versa via license of Symantec technology & royalties no doubt paid for it! MS still uses WinFax technology in Office 2000 for Outlook if you need it! A revenue source for both parties, & a featureset boost!
Office is developed by a different group to the OSes (and the 9x group was a different group to the NT group). Some collusion in one area does not suggest collusion in another area. Hence the Office people making software for platforms that compete with the OS people (Office for Mac, etc.).

Quote:Yes, & Symantec (note my winfax lite technology licensed example in Office 2000 above). Business: the arena of usurious relationships & fair-weather friends!
Except that helping Symantec with their defragger yields no benefits, because they're already relying on Executive Software to do the work there.

Quote:Everyone requires it, but not ALL can afford it!
No, not everyone requires it. If the fileservers in our office were turned off at 1700 on a Friday and not turned on until 0800 on a Monday, it would make not a bit of difference to us. We don't need 100% availability, or even close.

Quote:Pretty simple matter of economics really! Not everyone can afford to build a cluster of Dell PowerEdge rigs you know! Downtime DOES matter, to me at least!
If downtime matters to you/your business, you get a cluster. It's that simple.

Quote:That's relative & a matter of opinion! Some guys I know are married to some REAL DOGS, but to them? They're beautiful... beauty is in the eye of the beholder, don't you think?? I found no bugs in it to date & lucky I guess! I have the latest patch for it via LiveUpdate!
No, I didn't say ugly -- I said horrendous. It uses hard-coded colours that render portions of the UI unusable with non-standard colour schemes (certain icons are rendered near-invisible, for instance). That is to say, the UI is *broken*.

Quote:Hmmm, why omit the feature then? Bad move... it limits their own defragger as well apparently, Diskeeper up to 6se for sure I know of, & most likely? PerfectDisk by Raxco since it uses those system calls that Diskeeper does (& that Execsoft created & was licensed by MS).
What feature is being "omitted"? And I'm not sure the Win2K defrag APIs are quite that simple. Not least because the FSCTLs are also available for FAT32, which NT4 didn't support.

Quote:It patched a critical file PeterB/DrPizza. I am almost CERTAIN it was ntoskrnl.exe in fact, it was many years ago! You can research that if you like. I'd have to dig up old Cd's of Diskeeper 1.01-1.09 for NT 3.5x around here still to tell you which file exactly! Why they did not use it at MS or Diskeeper of all folks that API's inventor, is a STRANGE move!
I'm not sure what you're talking about. Why who didn't use what, when?

Quote:Agreed, a strange move! BUT, one w/ benefits like not having to take down the OS to defrag directories, pagefiles, MFT$, etc. Uptime is assured, & not every company can afford failover clustering setups man.
Then that company does not, ipso facto, require high availability. Without specialized hardware, you can't get high availability from a single machine. If you want it, you *need* redundancy. This is unavoidable.

Quote:Just a financial fact unfortunately & how life is! Heck, maybe they can, but every try getting a raise? Or asking your IT/IS mgr. for money for things you don't REALLY need?
If you REALLY need high availability, then you REALLY need more than one machine. Having a single machine means that it can't ever have hardware swapped out and it can't ever have software installed/updated. Neither of these constraints is workable.

Quote:Was wondering that myself & I looked as well! I heard it from Dosfreak in this post above I believe! Take a look, he is generally pretty spot-on on things, & I take his word alot on stuff!
See, if it's XP-only, then it's no surprise. The FSCTLs in XP are more fully-featured. They work on certain metadata files, they work with large clusters.

Quote:That's a VERY good "could be", depends on the mechanics of defraggers & how dependent this is on the calls that do the defrags! I don't know about that part you just stated I am not that heavy into keeping up on "will be's" here only more into the "now" stuff until the new stuff appears & is tested thoroughly usually!
Well, allowing the FSCTLs to work with larger clusters won't break anything (older defraggers might not be able to defrag partitions with large clusters all the same, but they won't break). This, you see, is why the FSCTL approach is the best (and why I won't touch Speed Disk). It ensures that you won't be broken by changes in the FS, and by sticking to a published API, you aren't relying on the OS's internals working in a particular way. And you're ensuring that your software will continue to work in new OS versions.


3857 Posts
Location -
Joined 2000-03-29
Well, it's nice to see such a wealth of info from you guys on this topic. As for me, I only have a few things to say about it.
 
1. A scattered file on a hard drive takes longer to load than a contiguous one, period. MS states it here, and it's been pretty widely accepted as a by-product of the destructive and scattered behavior of hard disk systems. The only effect this has on the OS is that it will slow down on reads (and on writes, if there's only a small amount of free space to write files to), and that's about it. However, I have heard of NT boxes that would grind to a halt due to extreme fragmentation and wound up being formatted and reinstalled. Yeah, it seems odd, and I would think it had more to do with corruption than fragmentation, but my friend and his crew determined this on a few workstations at his office.
 
This behavior is the same thing that is exhibited in any database that has had numerous reads and writes to it, and then had many rows/columns (or entire objects) removed. Once you defragment it, performance will increase and return to normal. Many database systems have their own defragmentation schemes (such as Exchange 5.5/2K) that run auto-magically, while others can perform "offline" and/or manual defragmentation/compression runs (MS Access). The hard disk works in much the same way as a database, with the main R/W portion being the tables/rows/columns and the MFT$ (for NTFS) as the index. The index will keep track of the location of data regardless of where it sits on the disk, but moving all the empty space to one area, and all your data to another, helps a great deal.
 
Funny thing about databases though, is that they always tell you not to run defragmenters and virus scanners on the same partitions as the DBs themselves (unless they are specifically designed for them, like McAfee Groupshield and such). I can only imagine that the R/W behavior of each product tends to clash with most DB software.
 
2. Symantec is evil. They just HAVE to write their "software" (read: crap) SO specifically for an OS version (I would imagine by bypassing the MS APIs) that they break easily with OS patches/service packs/new versions. This happened with WinFax Pro 8 (the damn thing wouldn't let my PC reboot when I performed my brand-new fresh install of Win98 when it came out; of course, it worked great in Win95 and needed patches to work again in Win98). Then, PCAnywhere v9.0/9.1 had this WONDERFUL ability to keep Win2K boxes from rebooting again upon installation, due to a NETLOGON (awgina.dll in this case) filter to allow the software to interact with NT/Domain accounts. I saw this on CDs that indicated they were "Windows 2000 compliant" as well. After a nice visit to the Symantec site, I found out about this file, and they had a fix for it provided you either had a FAT16/32 disk, or an OS/utility that could provide access to the system partition if it was NTFS. Besides that, Speedisk killed my dad in 'Nam. Bastards...
 



45 Posts
Location -
Joined 2001-03-01
Quote:Originally posted by AlecStaar
I've seen that before man... <snip> I must be missing your point!
I suspect the main reason is that certain companies saw this as a way of making money, and so spread sufficient FUD that people believed defragging to be important. They constructed some benchmarks using atypical disk access patterns to demonstrate their point (even though real-life usage showed no such problems) and started raking in money.

They were aided somewhat by the free cluster algorithm used by MS DOS and Windows 95.

Those OSes wrote new data to the first free cluster, even if the subsequent cluster was occupied. Under those OSes, even with a relatively empty hard disk, it was easy to split a file into blocks smaller than the average transfer size.

NT doesn't do this, and Windows 98 doesn't do this (unless it happens that the only free space is a single cluster, natch).

Fragmentation isn't the problem. It's split I/Os that are the problem.
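
A toy simulation makes the difference between the two allocation policies easy to see. This is purely illustrative and far simpler than any real allocator: `first_free` mimics the DOS/Win95 behaviour (grab the first free cluster, even if the next one is occupied), while `first_fit_run` looks for a contiguous run first.

```python
# Toy model of the allocation policies described above; names are mine.
def first_free(bitmap, clusters_needed):
    """Allocate the first N free clusters, wherever they are."""
    return [i for i, used in enumerate(bitmap) if not used][:clusters_needed]

def first_fit_run(bitmap, clusters_needed):
    """Prefer a contiguous run of free clusters; fall back if none exists."""
    run = []
    for i, used in enumerate(bitmap):
        run = run + [i] if not used else []
        if len(run) == clusters_needed:
            return run
    return first_free(bitmap, clusters_needed)

def fragments(clusters):
    # Each break in the cluster list is one more extent.
    return 1 + sum(1 for a, b in zip(clusters, clusters[1:]) if b != a + 1)

if __name__ == "__main__":
    # A mostly empty "disk" with a few clusters already in use near the start.
    bitmap = [i in (1, 3, 5, 7) for i in range(100)]
    print("first-free policy:    ", fragments(first_free(bitmap, 10)), "fragments")
    print("contiguous-run policy:", fragments(first_fit_run(bitmap, 10)), "fragment(s)")
```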

Quote:What about burst writes when the system is pressuring the drives to do those?
The OS lazy writes anyway, so it doesn't matter. It's even less of a problem with properly designed applications (if your application uses overlapped [non-blocking] I/O then it doesn't get slowed down (at all) because it doesn't have to wait for disk writes to finish, and can do something else whilst waiting for disk reads).
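
Loosely illustrated (with a worker thread standing in for the Win32 overlapped-I/O machinery, which this is not): the application hands its writes off and keeps computing instead of waiting on the disk.

```python
# Illustration of overlapping computation with disk writes.
# Not the OVERLAPPED API itself; just the idea of not blocking on the write.
from concurrent.futures import ThreadPoolExecutor

def write_block(path: str, data: bytes) -> None:
    with open(path, "ab") as f:
        f.write(data)

def process(blocks, path="output.dat"):
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = None
        for block in blocks:
            result = block.upper()    # stand-in for "useful work"
            if pending is not None:
                pending.result()      # only now wait for the previous write
            pending = pool.submit(write_block, path, result)
        if pending is not None:
            pending.result()
```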

Quote:Test it yourself, tell me you don't see an increase in speed...
I have tested it myself. Quite extensively. Real world tests, not synthetic.

I could only get performance to be noticeably damaged by inflicting truly horrendous (and completely unrealistic) fragmentation on the drive.

That is, I filled the drive with 4 kbyte (single cluster) files. I deleted alternate 4 kbyte files (so the largest free space was a single cluster). Then I stuck a database onto the disk. Then I deleted the rest of the 4 kbyte files, and stuck data into the database so that it used the remaining disk space (this gave the interesting situation of occupying the entire disk but being as discontiguous as possible).

And, yeah, performance went down the toilet. Split I/Os went through the roof, because virtually every I/O on the disk was split.

A disk will never get that bad in real life, even a really full disk.

(I did similar tests with larger files (8, 16, 32, 256, 1024, 4096, 8192 kbyte), with similar deletion patterns, and also with mixed sizes. Above 256 kbytes, performance was mostly normal; above 4096 kbytes, almost completely normal.)

I also tested other things (not just the database); for instance, installing the OS to the drive (with half the files removed). Again, similar story. As long as each fragment was more than 256 kbytes in size, performance was not noticeably different (timing with a stopwatch).
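
For anyone who wants to reproduce that worst case on a scratch partition they don't mind trashing, a sketch along these lines will do it (the path and the 4 KB cluster size are assumptions; run it only against a throwaway volume):

```python
# Fill a scratch volume with single-cluster files, then delete every other
# one so the largest free run is a single cluster. Destructive by design.
import os

SCRATCH = r"T:\fragtest"   # hypothetical scratch partition
CLUSTER = 4 * 1024         # assumed cluster size

def shred_free_space():
    os.makedirs(SCRATCH, exist_ok=True)
    names = []
    i = 0
    try:
        while True:                          # fill the volume with 4 KB files
            name = os.path.join(SCRATCH, f"fill_{i:08d}.bin")
            with open(name, "wb") as f:
                f.write(b"\0" * CLUSTER)
            names.append(name)
            i += 1
    except OSError:                          # disk full
        pass
    for name in names[::2]:                  # free every other cluster
        os.remove(name)

if __name__ == "__main__":
    shred_free_space()
    # Now copy the test database onto the volume and compare Split IO/Sec
    # and timings against a defragmented copy.
```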

Quote:if you can fragment up a drive, <snip> slower man!
Not IME. And I've tested it a *lot*.

Quote:I guess what <snip> use your box.
I know where the problems lie -- split I/Os -- and I also know that the vast majority of transfers on my disk are tiny in comparison to the size of disk fragments -- of the order of 64 kbytes or so.

Quote:Ok, let's go with what you said... What if seeks don't happen between those reads/writes? On a dedicated box that performs one task with only 1 database on it?
I haven't ever used a database that's big enough to have a dedicated server but that also only services one query at a time. There's a lot of head movement because there's a number of things going on at once.

Quote:What about burst writes, that capability & capacity is built into most modern disks!
Bursting normally occurs to and from the cache anyway.

Quote:Really? What <snip> more time!
A "massive commit" involves simply telling the transaction log, "Yep, that's done". Not I/O intensive.

Quote:I used Access <snip> to them as well.
I haven't ever seen an Access database where the bottleneck was caused by something other than it being an Access database. I can't *wait* until MS ditches Jet and uses the SQL Server engine across the board.

Quote:Consider the Access .mdb example (you don't need SQL Server for alot of smaller clients & applications you know, lol) & not everyone can even afford the licensing SQL Server or Oracle full models they need that anyhow!
For a lot of smaller applications, we use MSDE. It's based on SQL Server (with restrictions on database size and concurrent users), and can be distributed royalty-free with applications developed in Visual Studio and/or MS Office Developer Edition. Even the low-end doesn't need Access.

Quote:You admit fragmentation DOES affect performance then, as does MS! It does!
Not practically, no.

Quote:Just plain physics of the heads having to move all over the drive.
Which they would do anyway.

Quote:If the heads of the disk have to move all over more than one pass to pickup a file (BECAUSE OF THE FILE BEING FRAGMENTED & ALLOVER THE DRIVE, not because of other programs I/O requests), you are telling me it does not affect it?
Yes, because that happens so rarely.

Quote:If I have a deck of cards to read in my hand, nicely organized, & this is physical world, like the disk deals in!

For me to look at them is a matter of picking up the deck & looking thru it. It's in ONE CHUNK!

If they are scattered ALL OVER MY ROOM? I have to pick them up first, then read them. More time, simple! Can't you see this?
Yes, I can.

What you've ignored is that the disk controller physically isn't capable of picking up the entire deck at once, no matter how it's organized. It can pick up a few cards, then it has to wait, then a few more, then it has to wait, and so on. And it only rarely has to pick up the entire deck anyway. Most of the time, it only wants a couple of cards from the middle, and if that's the case, the only issue is, "are they in the MFT, or are they somewhere else"? As long as they're "somewhere else" it doesn't matter if they're contiguous or not.

Quote:PeterB, it was pretty much acknowledged they stated it would be fragment resistant/immune by MS. It was big news when NT first released & considered a selling point... you said it above, not me, so using our OWN words here:

"Software Companies Lie"

They erred. Not the first time marketers did that.
Except that in this case, in real-world usage, they didn't lie.

Quote:I never said you could!

Please, don't put words in my mouth! Where did I EVER SAY THAT?

(Quote it verbatim if you would, thanks)
You said it implicitly, by suggesting that fragmentation mattered a damn.

Quote:You look at this TOO software centric, & not as an entire system w/ BOTH hardware & software!

The performance hit of fragmentation is disk head movements, or extra ones that in unfragmented files, is not present!
No, this is just it. I've tested it considerably (speccing up servers for a client; we needed to know if defrag tools were worth including (and if so, which ones)). After considerable tests (of the sort mentioned earlier) the conclusion was fairly clear -- fragmentation was a non-issue (if the database was such that it was really being harmed by fragmentation, it had probably outgrown the disks it lived on).

Quote:Microsoft is not divided into SEPARATE companies & will not be. They are STILL MS, Office 2k still uses WinFax SYMANTEC/NORTON technology.
They are not a cohesive company. There are divisions within MS, and the aims of different parts of the company are quite separate.

Quote:Uptime to many companies IS the crucial factor... especially when EVERY SECOND OF IT IS MILLIONS!
No. Not uptime. Availability. Uptime is a dick size contest. Availability is what makes you money.

Quote:You SHOULD if you can afford it! Like getting a raise, most IS dept's world wide don't get the monies say, Marketing does for example!
If you *NEED* something, you make sure you can afford it.

Quote:There you go again, YOUR opinion. Pete, it's not the only one man!
This isn't "opinion". A UI that puts black text on a black background is objectively a bad UI.

Quote:The ability or presence of the ability in Diskeeper to defrag NTFS disks with over 4k clusters in versions 1-6se.
It's not important; the only version that made such partitions with any regularity was NT 3.1.

Quote:Really? I kept an NT 4.0 box running 1.5 years, without ANY maintenance, running a WildCat 5.0 NT based GUI BBS on it in Atlanta Ga. when I lived there... it can be done, hence 'Microsoft's 'insistence' on running servers with dedicated BackOffice apps one at a time per each server on them.
Yeah, it can be, but you can't guarantee it. That machine [obviously] needed service-packing, for instance.

Quote:And, PeterB? Even IBM System 390's are not perfect! They can do the 4-5 9's ratings & go without fail for as long as 20 years, but they DO crash!
*Exactly*. This is precisely why a company that *NEEDS* uptime *cannot* afford to have only a single computer.

Quote:If it fits more conditions for you? Then, naturally, for your use patterns?? It's better FOR YOU! Pete, there you go again man: Your ways & tastes are NOT the only ones! I value system uptime here, I hate reboots!
So do I. But ya know what? If I had to reboot the machine each time I had lunch, it wouldn't be the end of the world.

Quote:(Want to know why? I have a pwd that is over 25 characters long & strong complexity characteristic of pwd is engaged here mixed case & alphanumeric! Try that sometime tell me you don't dislike uptime then!)
Type faster.


3857 Posts
Location -
Joined 2000-03-29
Welp, DrPizza seems like a pretty sharp guy, and this part:
 
"Yes, I can.
 
What you've ignored is that the disk controller physically isn't capable of picking up the entire deck at once, no matter how it's organized. It can pick up a few cards, then it has to wait, then a few more, then it has to wait, and so on. And it only rarely has to pick up the entire deck anyway. Most of the time, it only wants a couple of cards from the middle, and if that's the case, the only issue is, "are they in the MFT, or are they somewhere else"? As long as they're "somewhere else" it doesn't matter if they're contiguous or not."
 
seems like a brilliant point that he is making. I can understand his perspective, and it is indeed a good one. I have just seen fragmentation take performance into the toilet on many systems, and I know the importance of staying on top of these issues. I rarely have to defragment my primary workstation, as I don't add and remove large files and/or large numbers of files, so there are not as many "holes" scattered about being annoying. Using his logic, there would not be any perceivable decrease in performance for the average "power user" who might upgrade or simply reformat his/her system once every 1 or 2 years. I have had workstations that I defragged after 18 months of operation that didn't get a huge performance boost, since there wasn't a large amount of fragmentation to begin with. On partitions with large amounts of file activity (like testing/development workstations and servers), steady control of fragmentation does help out performance greatly on my systems. But it is nice to see a well-written argument to this, and I hope to see more of this activity from DP in the future.


45 Posts
Location -
Joined 2001-03-01
Quote:The difference? I don't see it at work in something I can use. He does not do GUI development as he stated above, showing me he is limiting himself based on his principles alone. GUI is where the money is, GUI is what users want!
ha ha ha.

It's funny that I make more money than the guys doing the work on the front-ends, then, isn't it?

Quote:We've seen it, evidently, he has not. He is as you say, probably working with workstations, exclusively, & not large transaction processing based systems is why & ones that create ALOT of temp tables etc. & scratch areas. Creating & Deleting files is a killer, & causes this in conjunction with append or insert queries in the database realm at least from my experience.
No, 'fraid not. I gave rough details of the testing I've done; doing real queries on real databases (amongst other things).

Large sequential transfers are incredibly rare, pure and simple.


3857 Posts
Location -
Joined 2000-03-29
Welp, this has been interesting (and long). The only thing I was going to reply to is from APK:
 
"I know you are also a member at Ars Clutch."
 
I am not. I haven't been there in a couple of years or so. The last time I was at Ars Technica was to read a hardware review.


57 Posts
Location -
Joined 2001-07-25
Sorry folks. Missed putting my two cents worth into this thread - I was out of town on business...
 
Couple of confirmations and clarifications:
 
PerfectDisk - because it uses the Microsoft defrag APIs - is limited on NT4 and Win2k to defragmenting NTFS partitions with a cluster size of 4k or less. This is enforced by the defrag APIs. With Windows XP, this restriction goes away. Any defragmenter that can online defragment partitions with a cluster size >4k is NOT using the MS defrag APIs. If Diskeeper 7.0 is able to defragment partitions on non-Windows XP systems where the cluster size is >4k, then Diskeeper is no longer using the MS defrag APIs - something that they (and, to be honest, we as well) have been vocal about in positioning the product.
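
Checking which side of that 4k limit a partition falls on is a one-call job via GetDiskFreeSpaceW; a minimal sketch (error handling kept skimpy):

```python
# Report a volume's cluster size (sectors per cluster * bytes per sector).
import ctypes
from ctypes import wintypes

def cluster_size(root: str = "C:\\") -> int:
    sectors_per_cluster = wintypes.DWORD(0)
    bytes_per_sector = wintypes.DWORD(0)
    free_clusters = wintypes.DWORD(0)
    total_clusters = wintypes.DWORD(0)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        root,
        ctypes.byref(sectors_per_cluster), ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters), ctypes.byref(total_clusters))
    if not ok:
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

if __name__ == "__main__":
    size = cluster_size("C:\\")
    note = "(within the 4k API limit)" if size <= 4096 else "(beyond the 4k API limit)"
    print(f"Cluster size: {size} bytes {note}")
```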
 
In all of the "discussions" about fragmentation, what most people have lost sight of is that defragmenters work at the LOGICAL cluster level - NOT at the PHYSICAL cluster level! The file system works at the LOGICAL cluster level. The hard drive controller works at the PHYSICAL cluster level and does all of the LOGICAL to PHYSICAL mapping. All any defragmenter can do is to ensure that a file is LOGICALLY contiguous - which means that only 1 LOGICAL request has to be made to access the file. That is where the performance benefit comes into play. Even though a file is LOGICALLY contiguous is no guarantee that it is PHYSICALLY continguous on the actual hard drive.
 
Fragmentation is only an issue when you go to access a fragmented file. If a file is fragmented and is never accessed, then who cares!! However, it is easy to prove that fragmentation causes slower file access times.
 
For those that are interested, defragmentation has been identified as a KEY issue in terms of performance for Windows XP. Microsoft is recommending frequent defragmentation in order to keep WinXP running at peak speed.
 
- Greg/Raxco Software - maker of PerfectDisk. I work as a systems engineer for Raxco Software