jwl812 Posted June 29, 2002
I have heard that partitioning a large drive under XP using NTFS is not a good idea, and that it should be kept as one partition. I have a 40 GB Quantum drive.
Davros Posted June 29, 2002
Nope, that's utter bs. Where do all these weird NTFS myths come from? NTFS is extremely stable, secure, and flexible. Do whatever you want with your partitions, it will handle it very nicely.
Brian Frank Posted June 29, 2002
Yeah. I run a couple rigs with NTFS partitions, and for a while even ran a combo of NTFS/FAT32 partitions (not both on each partition, mind you). If you plan on formatting it as one big drive, NTFS would be your best bet.
pbuckne Posted June 29, 2002
Heard some weird stuff like that before. NTFS is WAY more stable than FAT anything. Period.
ghayes Posted July 19, 2002
"One thing to look out for: Crossing over 4096-byte cluster sizes! Some disk defraggers (like the XP/2k native one) won't cut it with that size! (Maybe the latest defraggers from Diskeeper, PerfectDisk, and O&O account for this shortcoming, but last I knew & tried hands-on? Speed Disk from Norton is the ONLY one I knew of that worked on cluster sizes over 4096 bytes on NTFS-formatted disks!)"

This 4K cluster size limitation with defragmenters is a restriction in Microsoft's defrag APIs under NT4 and Win2k. Under WinXP, Microsoft's defrag APIs fully support defragmenting NTFS partitions with a cluster size less than or equal to 64K. SpeedDisk, under NT4 and Win2k, does NOT use Microsoft's defrag APIs, which is how it is able to get around this restriction.

- Greg/Raxco Software

Disclaimer: I work for Raxco Software, the maker of PerfectDisk (a commercial defrag utility), as a systems engineer in the support department.
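For anyone who wants to check what cluster size a volume is actually using before worrying about this limit, here is a minimal sketch in plain Win32 C (nothing vendor-specific; the drive letter is just an example):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;

        /* GetDiskFreeSpace reports the volume geometry the file system sees.
           "C:\\" is a placeholder - use any volume root. */
        if (!GetDiskFreeSpaceA("C:\\", &sectorsPerCluster, &bytesPerSector,
                               &freeClusters, &totalClusters)) {
            fprintf(stderr, "GetDiskFreeSpace failed: %lu\n", GetLastError());
            return 1;
        }

        /* Cluster size = sectors per cluster * bytes per sector.
           Under NT4/Win2k the defrag APIs only handle clusters up to 4096
           bytes; under WinXP the limit rises to 64K. */
        printf("Cluster size: %lu bytes\n", sectorsPerCluster * bytesPerSector);
        return 0;
    }

This is the same cluster size the defrag APIs care about, since both sit on top of the file system's view of the volume.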
ghayes Posted July 19, 2002
According to Symantec, under Windows XP, SpeedDisk is now using Microsoft's defrag APIs. The following is from Symantec's web site:

"Situation: You are running Speed Disk under Windows XP. Speed Disk may seem to run slower than the 2001 version, and after completion, the drive still shows significant fragmentation.

Solution: Speed Disk for Windows XP does not use the native Speed Disk driver. Instead it uses the Microsoft MoveFile API. This results in less functionality and less thoroughness for Speed Disk, though moves are now handled in a "Microsoft-approved" manner. As a result of the migration away from the Norton Speed Disk driver, higher amounts of fragmentation may remain on the drive after Speed Disk completes. Speed Disk for Windows XP does not touch system files, system folders, or the Master File Table (MFT). In addition, some of the fragmented files are unmovable, such as the _Restore files and the Pagefile. Therefore, higher fragmentation rates may be reported, especially for the System Volume Information folder. However, fragmentation will still be much lower than it was before running Speed Disk, and file placement will be optimized."

Please note that under Windows XP, there is no reason that system files, system folders and the $MFT (all but the first 16 clusters) cannot be moved online - Microsoft's defrag APIs fully support it. Because SpeedDisk under Windows XP is using Microsoft's defrag APIs, there are certain files that it will not be able to defragment, because SpeedDisk doesn't have the ability to perform a boot-time defrag: directories on FATx partitions, the pagefile, the hibernate file and non-$MFT metadata on NTFS partitions.

"use MS/Execsoft API's..."

Just a point of clarification. The Microsoft defrag APIs are not and never have been Executive Software defrag APIs. Executive Software never wrote them and doesn't maintain them - for NT4, Win2k OR WinXP. The story of how Executive Software "wrote" the defrag APIs is an urban myth. The person who helped write those defrag APIs (one of the original developers of the NTFS file system) gets quite a chuckle out of this myth.

In regards to "short stroking" disks... The file system deals in logical clusters and has no idea of the underlying disk technology. The file system doesn't know how many platters your hard drive has, how many read/write heads, or how much onboard cache it might have. It doesn't know if it is IDE or SCSI. It doesn't know if it is RAIDx or anything else. All the file system knows is that each and every partition starts at logical cluster number 0. Whether it is possible to create a partition so that it is located on only one platter of the hard drive, at the fastest part of the platter - I have no idea.

- Greg/Raxco Software
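For the curious: the "MoveFile API" Symantec mentions presumably means the defrag file-move control code FSCTL_MOVE_FILE, not the ordinary MoveFile() rename call. A minimal sketch of how a defragmenter asks the file system to relocate clusters through that interface - the file name, target LCN, and cluster count are made-up examples, and error handling is trimmed to the basics:

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical example: move 16 clusters of somefile.dat, starting
           at the file's first virtual cluster, to logical cluster 500000
           on C:. Requires administrator rights. */
        HANDLE hVol  = CreateFileA("\\\\.\\C:", GENERIC_READ | GENERIC_WRITE,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE,
                                   NULL, OPEN_EXISTING, 0, NULL);
        HANDLE hFile = CreateFileA("C:\\somefile.dat", FILE_READ_ATTRIBUTES,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE,
                                   NULL, OPEN_EXISTING, 0, NULL);
        if (hVol == INVALID_HANDLE_VALUE || hFile == INVALID_HANDLE_VALUE)
            return 1;

        MOVE_FILE_DATA mfd;
        DWORD bytes;
        mfd.FileHandle           = hFile;
        mfd.StartingVcn.QuadPart = 0;       /* first cluster of the file  */
        mfd.StartingLcn.QuadPart = 500000;  /* target spot on the volume  */
        mfd.ClusterCount         = 16;

        /* The file system - not the caller - handles the synchronization
           with the cache and memory manager, which is the whole point of
           going through this API instead of a private driver. */
        if (!DeviceIoControl(hVol, FSCTL_MOVE_FILE, &mfd, sizeof(mfd),
                             NULL, 0, &bytes, NULL))
            fprintf(stderr, "FSCTL_MOVE_FILE failed: %lu\n", GetLastError());

        CloseHandle(hFile);
        CloseHandle(hVol);
        return 0;
    }

Note that everything here is expressed in logical clusters on a single volume - which is also why a defragmenter built on these APIs can't promise anything about platters or physical disk position.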
Alien Posted July 19, 2002
Quote:

According to Symantec, under Windows XP, SpeedDisk is now using Microsoft's defrag APIs. The following is from Symantec's web site:

"Situation: You are running Speed Disk under Windows XP. Speed Disk may seem to run slower than the 2001 version, and after completion, the drive still shows significant fragmentation.

Solution: Speed Disk for Windows XP does not use the native Speed Disk driver. Instead it uses the Microsoft MoveFile API. This results in less functionality and less thoroughness for Speed Disk, though moves are now handled in a "Microsoft-approved" manner."

How is that a "solution"? An explanation, perhaps, but it's not a solution. Isn't there any way to use the Speed Disk driver? I tried fiddling about with the files from the 98 version, but I couldn't get it to work under XP. It says that the 2002 version runs slower than the 2001 version - does this mean that 2001 will run on XP? I think I still have 2001 somewhere.
Admiral LSD Posted July 19, 2002
I don't think so... I believe M$ made a few subtle changes to NTFS between 2k and XP so Speed Disk 2001 is likely to completely screw your disk over.
ghayes Posted July 19, 2002
I agree. It's more of an "excuse" than a solution.

One thing to keep in mind: Symantec is getting out of the defrag business. You currently can't purchase a version that will install/run on a server OS. Eventually, the workstation version will go away as well, as Symantec is moving toward strictly being a security company.

I'm not sure if SystemWorks 2001 will install/run successfully on WinXP. From what I have seen in other forums, people have had quite a bit of trouble getting it to install/run. The other thing to consider is that bypassing Microsoft's defrag APIs requires SpeedDisk to handle on its own the I/O synchronization that occurs between the file system, caching system and memory manager to allow files that are in use to be moved. The version of SpeedDisk in SystemWorks 2001 may not handle things correctly under XP, and there could be issues. If you succeed in getting SystemWorks 2001 installed on WinXP, I'd strongly suggest making sure you have a good backup of things prior to trying to run a defrag pass.

- Greg/Raxco Software
CyberGenX Posted July 20, 2002
Hasn't FAT32 been killed yet? Yeesh, what a dated file system!!! Yes, you can partition the heck out of NTFS, and it IS more stable. Plus it isn't subject to the 4 GB file size limit that FAT32 has.
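For the record, that cap comes from FAT32 storing a file's size in a 32-bit directory-entry field, so the largest file it can represent is 2^32 - 1 bytes. A trivial sketch of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        /* FAT32 keeps the file size in a 32-bit field, so the ceiling is
           2^32 - 1 bytes: one byte shy of 4 GiB. NTFS has no such cap. */
        unsigned long long fat32_max = 0xFFFFFFFFULL;
        printf("FAT32 max file size: %llu bytes (~%.2f GiB)\n",
               fat32_max, fat32_max / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }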
Alien Posted July 20, 2002
Quote:

I don't think so... I believe M$ made a few subtle changes to NTFS between 2k and XP so Speed Disk 2001 is likely to completely screw your disk over.

I don't recall saying that I was using NTFS [ok, so on 1 drive/partition I am, but that's beside the point]; in fact, in numerous posts previously I have stated that I am using FAT32. I think the first time I brought this point up about Speed Disk, I said that I could understand it if I was using NTFS, but I'm not.
clutch Posted July 20, 2002
Hey APK, I think what the g-man is saying is that "short-stroking" shouldn't (and most likely doesn't) work, nor pay off. There's a major difference between logical and physical drive layouts.

I have been learning this more and more lately because of the move MS is making toward unifying their Exchange, SQL, and AD database systems into one single store type that will be based on the next release of SQL (code-named "Yukon" right now). They will probably move even file management to that storage layout as well, and I would imagine that data access and manipulation will be much faster and more reliable (less translation overhead, and fewer things to get patched/updated between the application and the actual file).

It would seem that defragmenters just move and shift how the OS "sees" the files and gets to them, rather than physically moving them across a platter or multiple platters (not to mention if you have extended stripe sets or unusual mount point configs using Dynamic Disks in Win2K or XP). So defraggers work, and work well, but I am not entirely sold on the short-stroking concept either. But I guess if you limit how much can be stored on a hard drive to 50% of capacity, then it might actually run pretty fast since there's not much data to go through.

Again, if I'm wrong on your illustration ghayes, then let me know. I think I am starting to pick it up though...
clutch Posted July 22, 2002
Umm, cool. There are 2 reasons why I follow ghayes' line of thought: 1. He works in the field (or it would definitely appear so from his responses in the past), and 2. It just makes more sense to me.

I know what you are talking about with the outer edge moving faster under the heads (that isn't a new concept to me); however, it would seem to me that the data is simply being given to the drive (physical layout now) by the OS (logical layout) to work with. If the OS was actually instructing where to start partitions and such at a physical level, I could work with what you are stating. But right now, I'm not quite there.
clutch Posted July 22, 2002
Quote:

Again: This is assuming engineers & designers of hard disk logic in the controller firmware designed it so that HDDs work from the outermost/faster/larger-circumference tracks, i.e. that's where partition 0 (the first one) starts... I have faith in them; especially today, where HDD performance is a PRIME concern, they in fact did design it thus. This is not rocket-science-level use of physics. I believe they'd spot that, especially in today's performance-minded world & in an industry they have specialty in: HARD DRIVE DESIGN.

* APK

P.S.=> Bit long-winded & repetitive, but I want the point to strike home... apk

Bingo. That is what I am talking about: whether or not the partition truly starts on the outside across all the platters, AND whether any software package can actually *MOVE* a file to the outer edge (this was mentioned earlier). As for being pros, we are all pros in differing respects in the computer industry, but I yield more to Mr. Hayes because his job is centered around disk defragmenters in particular (my original meaning).
ghayes Posted July 22, 2002
"I WANT TO MAKE SURE THE INFORMATION I HAVE HEARD & BELIEVED OVER THE YEARS IS ACCURATE, ESPECIALLY REGARDING MICROSOFT USING EXECSOFT DEFRAGGER CODE RATHER THAN SYMANTEC STUFF ON 2k/XP (I am certain that's NO rumor) & ALSO THAT IF THE NTFS FILESYSTEM CAN DO THINGS LIKE YOU SAID, WHY DID EXECSOFT USED TO HAVE TO PATCH NT 3.5x TO USE DISKEEPER 1.01 AT THE NT SYSTEM FILES LEVELS?"

There were no native defrag APIs until NT 4.0. Previous versions of NT required patching the operating system/file system to support defragmenting of files - this is what Executive Software did prior to NT 4.0. They had a source code license to the operating system and patched the OS kernel. This caused all sorts of problems, as they really weren't supposed to replace the OS kernel. The end result was that if MS released a service pack/hotfix, it would break Executive's stuff or, even worse, corrupt data. What ended up happening is that MS and ES got together and said, "This isn't working very well. What can we do?" The end result is that ES worked with MS on the defrag API specifications (how to call them, what information to return) and MS actually wrote the defrag APIs. How this got translated into the urban myth that it is - who knows. The fact that ES tells people that they wrote them probably has something to do with it.

In regards to the built-in defragmenter under Win2k: at the time Win2k was in development (YEARS before it was released), there really was only one player in the defrag market for NT - ES. That's why MS partnered with ES to include a stripped-down version in the operating system. In regards to WinXP, ES helped to write the built-in defragmenter - to MS's specifications. MS has sole control/ownership of the code and over the future direction of the built-in defragmenter.

- Greg/Raxco Software
ghayes Posted July 22, 2002
Okay - a challenge for someone. You have 4 logical partitions on a single hard drive (C:, D:, E:, F:). If you have a file that resides on the E: partition starting at logical cluster 100,000, at which physical cluster on the hard drive does it start? Furthermore, on which platter of your hard drive does it reside? If you have an answer, please provide details on how this information was gathered.

- Greg/Raxco Software
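For anyone tempted to take up the challenge: the deepest a user-mode program can dig with the defrag APIs is the file's VCN-to-LCN mapping via FSCTL_GET_RETRIEVAL_POINTERS - and the logical cluster numbers it returns are relative to the start of the E: partition, not the physical disk. A minimal sketch (the file name is a placeholder; a heavily fragmented file would need a larger buffer or repeated calls, which is omitted here):

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Placeholder name - substitute any file on the E: partition. */
        HANDLE hFile = CreateFileA("E:\\somefile.dat", FILE_READ_ATTRIBUTES,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE,
                                   NULL, OPEN_EXISTING, 0, NULL);
        if (hFile == INVALID_HANDLE_VALUE)
            return 1;

        STARTING_VCN_INPUT_BUFFER in = { 0 };  /* start at the first VCN */
        union {
            RETRIEVAL_POINTERS_BUFFER rp;
            BYTE raw[4096];
        } out;
        DWORD bytes, i;

        if (DeviceIoControl(hFile, FSCTL_GET_RETRIEVAL_POINTERS,
                            &in, sizeof(in), &out, sizeof(out),
                            &bytes, NULL)) {
            /* Each extent maps a run of virtual clusters to LOGICAL
               cluster numbers. Logical cluster 0 is the start of E:,
               wherever that partition happens to sit on the disk -
               no physical cluster, no platter, no head. */
            for (i = 0; i < out.rp.ExtentCount; i++)
                printf("extent %lu starts at LCN %lld\n",
                       i, out.rp.Extents[i].Lcn.QuadPart);
        }
        CloseHandle(hFile);
        return 0;
    }

Translating an LCN to a physical location would require knowledge the file system simply doesn't have, which is the point of the challenge.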
ghayes Posted July 22, 2002
"(Granted, it does show it creates "hooks" to some API calls (To the API itself from MS in the filesystem... BUT, it is itself an API since other programs utilize it: Remember, Application Programming Interface/API is just that, a hook to functions you can use. Yes, semantics & about definitions, but still an API... one used in defragmentation by MS products no less & others))"

As you are probably aware, there is a difference between writing an API specification and actually writing the code that does the work. According to the ES web site, they co-developed these APIs - which is true. What is NOT true is that they actually wrote the code that does the file move itself (yes, I've heard the "we wrote the APIs" claim myself many times, and numerous customers have also asked Raxco if it was true).

I know of a certain defrag company that claims that their latest version is certified for Windows 2000 but actually isn't. Most people wouldn't know how to verify this claim and wouldn't know that it was not true... The end result is to not take anything on faith. Verify for yourself.

- Greg/Raxco Software
ghayes Posted July 22, 2002
"Which company's that? Just curious... Norton/Symantec & their Speedisk?"

Nope. Eventually you'll narrow it down. Here's a clue...

http://www.veritest.com/certified/win2000server/CDIOnLine.ASP?WCI=wcIndex&INDEX=INDEX
ghayes Posted July 22, 2002
"The MoveFile API won't do it by itself... that's my point. It needed its apiary extended apparently with Execsoft API calls to make it safe & make it work. Extending the existing API with their work. SO, that given, Execsoft DID WRITE PARTS OF SAID API, & sold it to MS."

The defrag APIs are actually part of the file system. ES didn't write the file system and didn't sell any technology to MS for use in the file system. The defrag APIs also tightly integrate with the memory manager and caching system. That's all MS code - nothing from ES there either...

- Greg/Raxco Software
ghayes Posted July 22, 2002
"Did Execsoft create/write an API that MS bought..."

Where did you get this nonsense? Listen, it is obvious that you really believe that ES wrote the code that performs file moves - and you are certainly free to continue to believe that. I've stated what has been made known to Raxco by the head of the file system development team at Microsoft, as well as by one of the actual developers of MS's defrag APIs. Let's just agree to disagree on who actually wrote the file move code.

- Greg/Raxco Software
ghayes Posted July 22, 2002
Well, I'm scrolling through this crazy conversation looking for where I said that MS bought Executive Software code and I'm not seeing it... I also believe strongly in something - that this conversation is going nowhere - so for me, this conversation is done...

- Greg/Raxco Software