Clearing page file at shutdown option

This is a discussion about the "ClearPageFileAtShutdown" option in the Customization Tweaking category.



149 Posts
Location -
Joined 2001-09-02
Messing around in the registry, I noticed the option to "ClearPageFileAtShutdown". It is set by default to "0" (no). What is the purpose of enabling this feature? Are there any performance advantages/disadvantages in doing so?


Responses to this topic



3867 Posts
Location -
Joined 2000-02-04
Security.
 
The pagefile may contain the data that you were working on. Clearing it at shutdown makes it harder to find the data. I would not enable it. The pagefile will be remade on bootup, re-fragmenting your files.


183 Posts
Location -
Joined 2000-09-15
That registry entry is controlled by the Local Security Policy named "Shutdown: Clear virtual memory pagefile". It doesn't remove the pagefile, but merely wipes it, so the pagefile doesn't have to be re-created at boot if the option is turned on. It might be a worthwhile option to enable if you keep encrypted data on the system and don't want anyone to be able to snoop the pagefile. If you don't use encrypted data, I'm not sure why anyone would bother to use it. If the pagefile is big, this option can make shutdown take quite a while.
 
Regards,
Jim
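
For reference, the value the original post mentions lives under the Memory Management key. Here is a minimal sketch of reading (and, commented out, enabling) it with Python's standard winreg module - the key path and value name are the documented ones, the rest is just illustration:

import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

# Read the current setting (0 = leave pagefile alone, 1 = wipe it at shutdown).
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
    value, _ = winreg.QueryValueEx(k, "ClearPageFileAtShutdown")
    print("ClearPageFileAtShutdown =", value)

# To enable it (needs administrative rights; takes effect from the next shutdown):
# with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
#     winreg.SetValueEx(k, "ClearPageFileAtShutdown", 0, winreg.REG_DWORD, 1)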


183 Posts
Location -
Joined 2000-09-15
I'm not sure I see the point you're making. The pagefile is NOT deleted by this security setting. The contents are wiped. Fragmentation will not result from the use of this setting. That's all I was saying.
 
The pagefile and registry hives can be defragged by the Sysinternals utility PageDefrag. However, it doesn't touch the MFT or metadata. O&O makes a decent defragger that defragments all of this at boot time, and in very little more time than it takes PageDefrag to run. However, the versions that do boot-time defragging are not freeware.
 
Regards,
Jim


183 Posts
Location -
Joined 2000-09-15
Quote:Did I say it was deleted by clearing it above?

It can be deleted, by filesystem corruptions!

That wasn't the topic under discussion. I was merely trying to be certain that it was understood that the security setting being discussed would NOT delete the pagefile itself, and therefore would not result in a file system fragmentation issue, in and of itself.

Quote:(I gain speed by housing the pagefile/swapfile onto another disk... on EIDE a second one on another EIDE I/O channel, & on SCSI on another drive device on the chain. So, when one drive is seeking/reading/writing for me? The swapfile & temp. operations take place on another.. simultaneously! Makes for good performance sense!)

* Understand now?

APK

P.S.=> You are bringing in the possibility of MFT$ defrags now? Diskeeper from Executive Software also does the same as well... not a freeware one, & not in their LITE versions either! I told folks about a FREEBIE they can use for PageFile & Reg file defrags above! apk

I pointed out the differences in cost in my own post. For the information of anyone who's interested in the differences, the Executive Software product has to be set each time to perform the boot time defrag, whereas the O&O product can be set to perform it automatically at each boot.

As for you, APK, you might want to have that ego checked. Your voluminous posts speak volumes about you but more, I think, about a presumptuous nature than about knowledge.


183 Posts
Location -
Joined 2000-09-15
I don't really need confirmation of my opinions from others, but I've seen the comments about your "contributions", and they are far from unanimously slanted in your favor. Take a hint. You are presumptuous, and no one needs a doctorate in psychology to see that.
 
I guess you're at least relatively safe with your puffery online. Hard to get away with it in real life, isn't it?


149 Posts
Location -
Joined 2001-09-02
OP
O.K., you two, let it go. Besides, you're both talking over my head now anyway. Thanks for the initial replies - they're most appreciated. If you're gonna continue the mudslinging, I'm gonna delete this thread.


183 Posts
Location -
Joined 2000-09-15
Ron_Jeremy,
 
Sorry for the unpleasantness.
 
Regards,
Jim


57 Posts
Location -
Joined 2001-07-25
"The pagefile and registry hives are defragged by the Sysinternals utility, Pagedefrag. However it doesn't touch the MFT or metadata. O&O makes a decent defragger that performs a defragging operation of all of this stuff at boot time, and in very little more time than it takes for Pagedefrag to run. However, the versions that do boot time defragging are not freeware."
 
Part of this statement is correct and part is incorrect. The correct part is that Sysinternals doesn't provide a mechanism to defragment the Master File Table ($MFT) or related metadata.
 
The incorrect part is the claim that O&O's defragger will defragment the MFT and metadata. O&O defragments the $MFT only - it doesn't defragment the $LogFile, $Bitmap, $Upcase, etc. There is only one defragger available that will defragment these metadata files - PerfectDisk - and it is also the only defragger that tells you how badly fragmented these metadata files are. Defraggers like O&O Defrag only tell you how badly fragmented the $MFT is.
 
- Greg/Raxco Software
 
Disclaimer: I work for Raxco Software, the maker of PerfectDisk - a competitor to O&O Defrag, as a systems engineer in the support department.


3857 Posts
Location -
Joined 2000-03-29
Well damn ghayes, took ya long enough to get here!
 



183 Posts
Location -
Joined 2000-09-15
Quote:The incorrect part is that O&O's defragger will defragment the MFT and metadata. O&O defragments the $MFT only - it doesn't defragment the $Logfil, $Bitmap, $Upcase, etc... There is only 1 defragger available that will defragment these metadata files - PerfectDisk - it is also the only defragger that tells you how badly fragmented these metadata files are. Defraggers like O&O Defrag only tell you how badly fragmented the $MFT is.

- Greg/Raxco Software

Disclaimer: I work for Raxco Software, the maker of PerfectDisk - a competitor to O&O Defrag, as a systems engineer in the support department.

Sorry, I should have been more careful / precise. Have you examined the "Select Additional Files" feature on the Boot Time Defragmentation dialog in O&O? Once you have performed one full defragmentation of a drive, you have the option to add the files that couldn't be defragged online with the GUI by using the Add Exclusive feature. I won't pretend to know whether or not that comprises all the metadata, but that is some or most of it, isn't it? I mentioned it because it's a feature that I've seen many users / evaluators of O&O overlook. Anyway, once you add the exclusively locked files, they also get defragged at boot time.

In addition to the manual Action | Boot-Time Defragmentation settings, the Executive Software product does have FragGuard, which can be set to run when fragmentation exceeds certain levels on the MFT or registry hives (but without mention of any other items), but I didn't see evidence that it could defrag the "unmovable" files on an NTFS partition.

BTW, I tried out Perfect Disk about a year-and-a-half ago when I was evaluating defraggers for use with Win2K. (I've been using Windows only since a couple of months before the advent of Win2K.) I thought it was generally a good product, but I had some problems with the user interface on a notebook with an ATI graphics subsystem that I couldn't resolve with tech support and had to resort to O&O.

Regards,
Jim

Edit: I asked you if the "additional files" comprised any significant portion of the metadata but didn't tell you what they were. DOH! I'd be glad to PM or e-mail the list to you.


57 Posts
Location -
Joined 2001-07-25
Jim,
 
"Sorry, I should have been more careful / precise. Have you examined the "Select Additional Files" feature on the Boot Time Defragmentation dialog in O&O? Once you have performed one full defragmentation of a drive, you have the option to add the files that couldn't be defragged with the GUI online by using the Add Exclusive feature. I won't pretend to know whether or not that comprises all the metadata, but that is some or most of it, isn't it? I mentioned it because it's a feature that I've seen many users / evaluaters of O&O overlook. Anyway, once you add the exclusively locked files, they also get defragged at boot time."
 
I can state with utmost certainty that O&O Defrag does NOT do any of the metadata besides the $MFT. Even if you go into the Boot Time defrag options and select Additional Files, you are not presented with a way to select any of these other metadata files from their interface (do you see a file called $MFTMirr or $Logfile or $Upcase?).
 
 
AlecStaar:
 
Diskeeper also doesn't defragment these other metadata files. The interesting thing about Diskeeper is that even if the $MFT is actually in 1 piece, Diskeeper will always show it as being in 2 pieces. Why? Because they count the $MFTMirr - one of the metadata files - as a fragment of the $MFT - even though it is a separate file.
 
This is easier to see on an NT4 NTFS partition.
 
- Go to an MS-DOS prompt and go to the top level of an NTFS partition.
- Issue the following commands:
Attrib $MFT
Attrib $MFTMirr
Attrib $Logfile
 
These are just 3 of the NTFS metadata files.
 
If you try to find out how fragmented the non-$MFT metadata is in any other defrag product, that information simply can't be found.
 
The reason SpeedDisk can sometimes only get the $MFT down to 2 pieces is that SpeedDisk can't move the 1st records of the $MFT. This means that if the beginning of the $MFT is not at the top of the logical partition, then SpeedDisk has to leave it where it is - but may put the remainder of the $MFT at the top of the logical partition.
 
Even though I work for a competitor, I do know quite a lot about other defrag products and what they can and cannot do :)
 
- Greg/Raxco Software


57 Posts
Location -
Joined 2001-07-25
A little bit of info about NTFS metadata...
 
NTFS is a self-describing file system. This means that all of the information needed to "describe" the file system is contained within the file system itself - in the form of metadata.
 
The $MFT is where all of the information about files is stored - in the form of file IDs. A file ID is a 64-bit number, of which the low 48 bits are the actual file record number and the remaining 16 bits are a sequence number. When files are deleted from an NTFS partition, the file ID isn't immediately re-used. Only after hundreds of thousands of files are created is the sequence number incremented and the "empty" file ID re-used. That is why the $MFT continues to grow and grow and grow. It is also why the $MFT Reserved Zone exists - to allow the $MFT to grow "into" it - hopefully in a contiguous fashion. Very small files can be stored "resident" in the $MFT. As much of the $MFT as can fit into memory is loaded when the partition is mounted.
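
To illustrate that layout, a toy sketch (not an NTFS API - the 48-bit record number / 16-bit sequence split is the on-disk format, but the helper names are made up):

# An NTFS 64-bit file reference: low 48 bits = MFT record number,
# high 16 bits = sequence number (bumped when the record is re-used).
def make_file_reference(record_number, sequence):
    return (sequence << 48) | (record_number & 0xFFFFFFFFFFFF)

def split_file_reference(ref):
    return ref & 0xFFFFFFFFFFFF, ref >> 48

ref = make_file_reference(record_number=5, sequence=1)   # record 5 is the root directory
print(split_file_reference(ref))                          # -> (5, 1)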
 
The $MFTMirr is an exact copy of the first 16 records of the $MFT. The first 16 records of the $MFT contain files 0 - 15. File 0 is the $MFT. Files 1 - 15 are the remainder of the metadata (not all used, btw...). The $MFTMirr is NTFS's "fallback" mechanism in case it can't read the first 16 records of the $MFT.
 
The $Bitmap is exactly that - a file containing a bit for each logical cluster on the partition, with the bit either set or clear depending on whether that logical cluster is free or used.
 
The $Logfile is NTFS's transaction log - all updates to disk first go through the transaction log. This transaction log is what provides NTFS's recovery (rolling transactions back or forward) when the operating system crashes or is shut down abnormally, and it is what provides for enhanced file system integrity.
 
$Upcase is used for Unicode information (foreign language support, etc...).
 
These are just a few of the NTFS metadata files and what they are used for. Windows 2000 introduced new metadata files (e.g. $UsnJrnl and $Reparse).
 
Regarding SpeedDisk:
 
SpeedDisk is the only commercial defragger that does NOT use the defrag APIs provided by Microsoft as part of the NT/2000/XP operating system. These APIs are tightly integrated with the Windows Memory Manager, caching system and file system, and they take care of all of the low-level I/O synchronization that has to occur to allow safe moving of files online - even if the files are in use by other users/processes. The APIs impose some restrictions, however. Pagefiles can't be defragmented online (nor can the hibernate file under Win2k), directories can't be defragmented online under NT4 (FAT and NTFS) and Win2k (FATx), and the $MFT and related metadata can't be defragmented online either. In order to get around these restrictions, SpeedDisk "wrote their own" stuff to move files - it has a filter driver that gets installed/run. This is why SpeedDisk can be service pack/hotfix dependent. Depending on the changes that MS makes to the Memory Manager and file system, SpeedDisk may have to be updated to run safely. That is why (for example), if you have Windows 2000/SP2 installed and run SpeedDisk, it displays a warning message about not being compatible with that version of the operating system - proceed at your own risk...
 
I know HOW SpeedDisk is doing what they are doing. However, knowing what can happen if they calculate things incorrectly makes me a bit wary. That said, SpeedDisk is a lot better - in terms of actually being able to defragment normal data files - than some of the other defrag products out there.
 
- Greg/Raxco Software


3857 Posts
Location -
Joined 2000-03-29
Nice post Greg.
 



1778 Posts
Location -
Joined 2000-01-18
True, a nice read, but long as hell.


3867 Posts
Location -
Joined 2000-02-04
Diskeeper 7 can now defrag 4K+ clusters. (Took 'em long enough)


45 Posts
Location -
Joined 2001-03-01
Quote:
Messing around in the registry, I noticed the option to "ClearPageFileAtShutdown". It is set by default to "0" (no). What is the purpose of enabling this feature? Are there any performance advantages/disadvantages in doing so?
The purpose is to permit NT to gain C2 security evaluation.

It has no notable impact, other than slowing shutdown times and, of course, clearing the pagefile.

It would be theoretically possible for sensitive information to be left in the pagefile, and hence recoverable. C2 has strict rules on the re-use of resources, so to prevent this kind of behaviour NT makes this option available to you.


45 Posts
Location -
Joined 2001-03-01
Quote:
Security.

The pagefile may contain the data that you were working on. Clearing it at shutdown makes it harder to find the data. I would not enable it. The pagefile will be remade on bootup, re-fragmenting your files.

I don't believe this is true; it clears the pagefile, it doesn't delete it (as I understand it; this was the NT 3.51 behaviour, and I don't see any reason for it to be different).


45 Posts
Location -
Joined 2001-03-01
Quote:
Diskeeper 7 can now defrag 4K+ clusters. (Took 'em long enough)

It wasn't their fault.

The defrag FSCTLs provided by the NTFS driver didn't work for clusters greater than 4 kbytes.


45 Posts
Location -
Joined 2001-03-01
Quote:Originally posted by AlecStaar
(jwallen brought that up and another defragger PerfectDisk AND O&O defrag...)

So, I mentioned Executive Software's Diskeeper! AND Diskeeper DOES DO MFT$ work at boot time, here is a quote from their product's features page on it:

"Frag Guard® (Windows NT and 2000 only). Online prevention of fragmentation in your most critical NT/2000 system files: the Master File Table (MFT) and Paging Files.

Fragmentation of the MFT can seriously impact performance as the operating system has to go through the MFT to retrieve any file on the disk. If the MFT is already fragmented when Diskeeper is installed, Diskeeper can defragment it with a boot-time defragmentation feature, then maintain this consolidated state.

Frag Guard works much the same way with the Paging File. This is a specialized NT file on the disk which acts as an extension of the computer’s memory. When memory fills up, the system can utilize this file as virtual memory. A fragmented Paging file impacts performance when it reads data back into system memory. The greater the fragmentation, the slower vital computer operations will perform."

http://www.execsoft.com/diskeeper/about/diskeeper.asp

Now here's the problem.

Executive Software's most famous product is Diskeeper. Diskeeper is a disk defragger.

As such, it's in Executive Software's interest to make a really big deal about the performance degradation caused by fragmented files.

The thing is... it's not that big a deal.

To determine the number of file reads that are being hindered by fragmentation, go into PerfMon, and add the counter PhysicalDisk\Split I/O/sec, for the disk or disks that you're interested in.

This counter measures the number of reads/writes made to the disk that have to be split into multiple operations (there are two reasons for this: large I/O operations, and I/O operations that are split by fragmented files).

If that counter stays on zero (or close to it) then your level of fragmentation is irrelevant -- because it's not fragmenting I/Os.
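
If you'd rather sample that counter from a script than watch it in PerfMon, here's a rough sketch using the pywin32 PDH bindings (assuming the pywin32 package is installed; the counter path is the same one named above):

import time
import win32pdh  # pywin32's PDH wrapper

query = win32pdh.OpenQuery()
counter = win32pdh.AddCounter(query, r"\PhysicalDisk(_Total)\Split IO/Sec")

win32pdh.CollectQueryData(query)              # rate counters need a first sample to diff against
for _ in range(10):
    time.sleep(1)
    win32pdh.CollectQueryData(query)
    _, value = win32pdh.GetFormattedCounterValue(counter, win32pdh.PDH_FMT_DOUBLE)
    print("Split IO/sec:", value)

win32pdh.CloseQuery(query)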

(OK, I'm glossing over some details; for instance, whenever a piece of information is read from a file the list of clusters that make up that file has to be read. With FAT 12/16/32, it doesn't matter if the clusters are contiguous or not, it still has to read the whole cluster chain. With NTFS it is *very* slightly quicker to read this information when the file is in a contiguous block. The reason for this is that NTFS is extent-based (the file entries say, "This file starts at this cluster and continues for the next X clusters", rather than "this cluster, then this cluster, then this cluster, then this cluster ... then this cluster, that was the last cluster"). And it is *very* slightly faster to read one extent than it is several. Given that the extents themselves are listed contiguously, the overhead is likely unmeasurable except for the most extreme cases (like, a 1 Gbyte file made of 250,000 single cluster extents))
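
To make the extent point concrete, a toy comparison (purely illustrative - the cluster numbers are invented):

# The same 8-cluster file described two ways.
# FAT-style cluster chain: every cluster is listed, contiguous or not.
fat_chain = [100, 101, 102, 103, 104, 105, 106, 107]

# NTFS-style run list (extent-based): (starting cluster, length in clusters).
ntfs_runs = [(100, 8)]

# A fragmented copy of the same file just adds one run per fragment:
ntfs_runs_fragmented = [(100, 5), (250, 3)]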

The other thing to look at is the counter PhysicalDisk\Avg. Disk Bytes/Transfer. This gives a rough indication of how small a fragment has to be in order to cause a problem. At the moment, mine is at about 10 kbytes/transfer. Let's round it up to about 16 kbytes. Now, let's imagine that my pagefile is split up into fragments of 1 Mbyte each.

The way Windows' VM works on x86 is to use 4 kbyte pages; each read into (or out of) the pagefile is done in 4 kbyte chunks (I say "on x86" because the page size is platform-dependent; for instance, IA64 uses 8 kbyte pages). My 1 Mbyte fragment of pagefile thus contains 256 individual pages. Let's say a running program makes a request that causes a hard page fault (i.e. it has to read some information back in from the pagefile), it requires 16 kbytes of information (the average transfer size), and that data is located within my 1 Mbyte fragment. Assuming each location in the fragment is equally likely, then as long as the read starts at one of the first 253 page boundaries, it won't require a split I/O. If it begins on one of the last 3 boundaries, it'll require a split I/O (because the end of the request will be in a different fragment). That's only about a 1.2% probability that fragmentation of my pagefile will require a split I/O. And a 1 Mbyte pagefile fragment is pretty small.

That's only a rough calculation, but it's fairly representative of the truth. Split I/Os are rare, single transfers of more than about 64 kbytes are rare. If your fragments are all much larger than this (say, nothing smaller than a megabyte or two) then fragmentation is highly unlikely to be the cause of any measurable performance degradation.
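
The same back-of-the-envelope numbers, spelled out (using the assumptions from the post above: 4-kbyte pages, a 16-kbyte average transfer, 1-Mbyte pagefile fragments):

page_size     = 4 * 1024            # x86 page size
transfer_size = 16 * 1024           # rounded-up average transfer
fragment_size = 1024 * 1024         # assumed size of each pagefile fragment

pages_per_fragment = fragment_size // page_size    # 256
pages_per_transfer = transfer_size // page_size    # 4
split_starts       = pages_per_transfer - 1        # last 3 starting pages run past the fragment

print(f"{split_starts / pages_per_fragment:.1%}")  # ~1.2%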

Executive Software won't tell you such a thing, of course -- there'd be no money in pointing this out. But that doesn't make it untrue.

Quote:P.S.=> Norton Speedisk by Symantec? It does pagefile defrags DURING Win32 GUI Operations, only one I know that does! BUT, it has a nasty habit of snapping the MFT$ into 2 parts... always! This is why I keep Diskeeper around additionally, to take care of that! apk
The chance of a two-piece MFT mattering a damn is tiny, and would require extremely bad luck.

I wouldn't touch Speed Disk with a bargepole. It has the (unique) ability to be broken even by the tiniest change to the underlying disk format or internal mechanisms. Simply put, I have no trust in it, and no faith in what it does. The Windows documentation explicitly warns against making certain assumptions, but Speed Disk makes them anyway.


45 Posts
Location -
Joined 2001-03-01
Quote:Originally posted by AlecStaar
Yup, sounds JUST LIKE HPFS for Os/2, ext2 for Linux & previous versions of NTFS as well... Extended attributes data stored for files, like last access time & date stamps for example, as well as NTFSCompression & NTFSEncryption attributes as well as STREAMS data!
Actually, the only FS of those that I know works in this way is "previous versions of NTFS". I believe those other FSes have structures "outside" the filesystem (rather than the NTFS way of files *within* the filesystem, despite the NTFS driver hiding their existence) -- I know that HPFS certainly does (it has bitmaps interspersed throughout the disk to describe the 8 Mbyte bands in which data can be stored).

Quote:PLUS HardLinks (2 files have the same name (another scary one, potentially!) HardLinks are when the same file has two names (some directory entries point to the same MFT record). Say a same file has the names A.txt and B.txt: The user deletes file A, file B still is online. Say the user goes and deletes file B, file A remains STILL. Means both names are completely equal in all aspects at the time of creation onwards. Means the file is physically deleted ONLY when the last name pointing to it gets deleted.)!
Reference counted filenames are a POSIX requirement, and are quite useful. Though their behaviour can be initially disconcerting.
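
A quick way to see that behaviour for yourself - a sketch using a modern Python's os.link on an NTFS volume (the underlying Win32 call, CreateHardLink, appeared in Windows 2000):

import os

with open("A.txt", "w") as f:
    f.write("same bytes, two names\n")

os.link("A.txt", "B.txt")        # second directory entry pointing at the same MFT record

os.remove("A.txt")               # the data is still reachable...
print(open("B.txt").read())      # ...through the remaining name

os.remove("B.txt")               # only now is the file actually gone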

Quote:Yup, as I heard & mention above, $MFT can store files in its contents...
That's because all features of a file on NTFS are stored as attributes of that file, be they data or metadata. Any attribute can reside within the MFT entry, and any attribute (I *think* including the name) can be made non-resident (i.e. stored as an extent on the non-MFT portion of the disk) if it grows beyond a certain size. The data streams of a file are no exception to this.

Quote:I think a very dangerous 'bug' exists, because of this & zero-byte files creation: potential for disaster, you may wish to run this by your programmers:
A quote from another developer:
"Each file on NTFS has a rather abstract constitution - it has no data, it has streams. One of the streams has the habitual for us sense - file data. But the majority of file attributes are also streams! Thus we have that the base file nature is only the number in the MFT and the rest is optional. This abstraction can be used for the creation of rather convenient things - for example it is possible to "stick" one more stream to a file, having recorded any data in it - for example information about the author and the file content, as it was made in Windows 2000 (the rightmost tab in file properties which is accessible from the explorer). It is interesting that these additional streams are not visible by standard means: the observed file size is only the size of the main stream, which contains the traditional data. It is possible, for example, to have a file with a zero length, and at its deletion 1 GByte of space is freed just because some program or technology has stuck an additional stream (alternate data) of gigabyte size on it. But actually at the moment the streams are practically not used, so we might not be afraid of such situations, though they are hypothetically possible. Just keep in mind that the file on NTFS is a much deeper and more global concept than it is possible to imagine just observing the disk directories. Well and at last: the file name can consist of any characters, including the full set of national alphabets, as the data is represented in Unicode - a 16-bit representation which gives 65535 different characters. The maximum file name length is 255 characters."
I think that the file name length is an API limitation, not a filesystem limitation, though I could be wrong. AFAIK, file names aren't "special" attributes, though they're the ones that the system defaults to sorting by (it would in principle be possible to have a directory whose contents were listed according to, say, size, or some user-defined attribute, by altering the directory information so that it listed the other attribute as the one to sort by).

Quote:type nul > Drive:\Folder\Filename.Extension can create zero byte files, take up no room,right? WRONG!

$MFT knows they're there & creates metadata surrounding them & forces itself to grow, small growth for each one, but growth! Do that long enough?? TROUBLE!

E.G.-> A program creates zero byte files with diff. names on them (1.txt, 2.txt... n.txt) in an endless LOOP? Watch what happens to the $MFT: grows until there is NO MORE ROOM left for anything else! Reservation zone might stop that & disk quotas, but I am not sure! PERSONALLY, I think it'd keep growing & growing until the disk is full... I do not believe the OS enforces Quotas on the $MFT nor does the NTFS drivers!
Disk quotas can be enforced on USERS in Explorer.exe security tab... I don't know if they can be imposed on SYSTEM user or NTFS driver itself!

Yep, there is the potential for causing a problem here. The few hundred bytes occupied by MFT entries aren't counted towards my disk quota (and I'm not sure named streams or custom file attributes are, either), and it'd be possible to use all disk space in this way. Similarly, it'd be possible to use all the disk space with zero length files on an inode-based filesystem, by simply using all the inodes. This class of problem isn't restricted to NTFS.

Quote:To change the amount of space NTFS reserves for the MFT:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem

Add NtfsMftZoneReservation as a REG_DWORD value & of a range from 1 - 4.

1 is min percentage, 4 is max % used.
It's exceedingly rare to be worth bothering with this.

Quote:* Scary eh? Top that off w/ each of those files possessing a hidden filestream... & you compound the danger! Food for thought for your companies next upgrade... Watch for alternate datastreams in zero byte files & alternatedatastreams period!
There are a number of tools for listing streams in a file (the "normal" Win32 file APIs can't, but the Win32 backup APIs can), so it's not an unsolvable problem.
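
For instance, a minimal sketch (Python on an NTFS volume, where the "file:streamname" path syntax addresses a named stream):

import os

# Create a zero-length file whose named stream still holds data.
with open("empty.txt", "w"):
    pass                                     # main (unnamed) stream stays empty

with open("empty.txt:hidden", "w") as s:     # alternate data stream
    s.write("doesn't show up in the visible file size")

print(os.path.getsize("empty.txt"))          # 0 - only the main stream is counted
print(open("empty.txt:hidden").read())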

Quote:Yes, it is a bitmapped filesystem that MS uses in any of them they use! Ext2 on Linux is same... bitmapped filesystem, most defraggers & people call it "Volume Bitmap".
It isn't a "bitmapped filesystem". It uses a bitmap of the disk to speed the process of locating free space on the disk, but if anything, it's an extent-based filesystem. The use of some kind of bitmap is almost mandatory, as it's too expensive (though perfectly possible) to build the information from file entries each time a free cluster needs to be found.

Quote:I am not aware those currently! I have read about "reparse points" but not about $Usnjnl... is it the 'hidden' folder named "System Volume Information"? I can see it in Explorer.exe but cannot access its contents (must change Explorer's properties to see it)!
No, they aren't. I don't remember where they are (I think in a directory $Extend, but I don't remember). System Volume Information contains the files made by the Content Indexing service.

Quote:Works fine on SP2 2k, & previous ones... & that IS how they skate around patches! They do it independent of the MS defrag API calls.
This is why I wouldn't ever trust the product.

Quote:Good read on that @ sysinternals.com also! The API's MS uses?? Came from Executive Software code! In NT 3.5x you had to patch the kernel to use Diskeeper... MS licensed that code & integrated it into their kernel, & the native defrag in 2k/XP is a VERY BASIC watered-down Diskeeper.
That makes little sense, as the mechanisms that the native defragging uses are not part of the kernel.

Quote:Note, they both run from .msc console shortcuts extensions to Computer Management as well? First time a Symantec Product was NOT the native defrag in a Win32 based OS!
The defragger in Windows 98/98SE/ME is as much an Intel product as it is a Symantec one.

Quote:It's good stuff, has merits others don't... mainly? System Uptime should appeal to Network Admins!
Uptime is for pissing contests. Availability is all that matters, and if you demand high availability, you have a cluster, and so can afford to take a machine down for maintenance.

Quote:No taking down a server for maintenance when Speedisk works... uptime is assured, defrags can take place & users still access their data!
A non-issue, IMO. I'd sooner have a product that is guaranteed not to be broken by minor FS updates or kernel changes (i.e. one that uses the built-in FSCTLs) than one that doesn't require me to reboot occasionally.