Compatible Support Forums
pr-man

Question about swap files and XP


OK, I have 512 megs of DDR memory, so I don't really need a swap file per se. Can I just set it to the lowest possible size?


Instead of reading that lonnnng trip down "what I would do" lane, understand this: you need a pagefile no matter how much RAM you have. Almost all software and most OSes will need and use this space. You can choose not to use one, but at some point you will understand why you should be using one.

I always wanted to somehow make a RAM drive and use it for the swap file... ;(


Yep, yep.

Some apps, especially games, require a nice big pagefile to play without getting all **tchy about virtual memory.

Heroes 4 is a pain.

Some of the better apps, like Pro/E and SDRC, use a pagefile only as a last resort.


OK, well, what do most people set their fixed swap file to if they already have 512 megs of RAM? My system ran so much smoother and faster with no swapfile at all, but I think after a while that was the cause of a file corruption problem I had.


If you haven't already, turn off the paging executive.

That should keep the HD from using the pagefile until it's necessary.
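For reference, the tweak here is the DisablePagingExecutive value under the Memory Management registry key; something like this would set it (a rough sketch, assuming Python's standard winreg module, run as an administrator; a reboot is needed for it to take effect):

Code:
import winreg

# Memory Management key that holds the DisablePagingExecutive value
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 1 = keep kernel-mode code and drivers resident in RAM instead of paging them
    winreg.SetValueEx(key, "DisablePagingExecutive", 0, winreg.REG_DWORD, 1)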

Quote:
If you haven't already, turn off the paging executive.

That should keep the HD from using the pagefile until it's necessary.


I have heard that this tweak does nothing for 2K and XP; it supposedly only applies to NT4. Which makes perfect sense, because when I use this tweak it doesn't do a damn thing.


No, I haven't heard that. I'm gonna do some research.

Turning it off stopped my HD from constantly caching, so I assumed it still worked.

The IBM drives aren't near as bad as the Seagate drives for that annoying click-click, but my HDs never do anything except load programs.

Of course, my SCSI controllers don't work as well in XP, so that may have something to do with it.

:x


WinXP hammers the disk if you disable the pagefile (and no, it will not forcibly create one for you). However, Photoshop 6 will not even start if you don't have one. So, what I do is use static pagefiles with fixed min and max sizes that are equal to one another, sized by these rules:

Under 128MB RAM: swapfile = 2x RAM

128MB-256MB RAM: swapfile = 1.5x RAM

Over 256MB RAM: swapfile = 256MB

Most of my applications tend to run in RAM anyway, and I usually get at least 512MB of RAM for any serious workstation (1GB for the last 9 that I have brought in). So a 512MB or larger swapfile would be a waste of space that would simply get excessively fragmented with use.
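In code form, those sizing rules come out to something like this (a rough sketch; the helper name is hypothetical and sizes are in MB):

Code:
def recommended_pagefile_mb(ram_mb: int) -> int:
    """Static pagefile size (min == max) per the rules above."""
    if ram_mb < 128:
        return 2 * ram_mb          # under 128MB RAM: 2x RAM
    if ram_mb <= 256:
        return int(1.5 * ram_mb)   # 128-256MB RAM: 1.5x RAM
    return 256                     # over 256MB RAM: flat 256MB

print(recommended_pagefile_mb(512))  # -> 256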


Well, I was not thinking of the L2 thing; I know that's old news. But I do remember a post that listed the most common tweaks people used and why they didn't do jack. I only have 512 megs of RAM, which is not nearly enough for what I do, so I keep my pagefile static at 2 gigs. I don't think any tweak is going to make me use the pagefile less, since my load is usually about 800 to 1200 megs.


I like to work in Photoshop a lot while running Premiere and about 15-20 separate iexplore processes, plus Outlook XP, lots of FTP downloads, music, and a bunch of other crap. It adds up quick: one IE process alone will grab 18-60 megs.


I have a gig of RAM, and I run SQL Server 2K, Progress Database Server, Solidworks 2001 Plus, Outlook XP, IE, VS.NET, and several Adobe products. During all this, I haven't had a reason to go beyond 250MB of swap file space. Maybe it's just luck, then, because it keeps running.


The complexity of the join has little to do with it, since many stored procedures are going to dump everything into a temp db, manipulate it there, and discard it afterward. Also, the size of the DB has to come into play, and unless you have 500MB of data being returned by the query (which you shouldn't, as a properly designed and executed query should return relatively few records), you shouldn't need such a huge pagefile. DB servers should have a ton of RAM and be set up to actively discourage HD usage (e.g. buffers configured properly so you get somewhere around 80-95% buffer hits). If you were moving anything of that size, it should be done through data-mining techniques, and then again many of those servers don't rely on swap space, as they may have in excess of 10GB of RAM.
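As an aside, you can watch that buffer-hit figure yourself: SQL Server 2000 exposes its perf counters in master..sysperfinfo. A rough sketch of reading the ratio (assuming Python with the pyodbc package; the server name in the connection string is made up):

Code:
import pyodbc

# Hypothetical connection string; adjust server/credentials to taste.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=dbserver;Trusted_Connection=yes")
cur = conn.cursor()

# The ratio counter must be divided by its 'base' counter to get a percentage.
cur.execute("""
    SELECT counter_name, cntr_value
    FROM master..sysperfinfo
    WHERE counter_name IN ('Buffer cache hit ratio',
                           'Buffer cache hit ratio base')
""")
counters = {name.strip(): value for name, value in cur.fetchall()}

hit_pct = (100.0 * counters['Buffer cache hit ratio']
           / counters['Buffer cache hit ratio base'])
print(f"Buffer cache hit ratio: {hit_pct:.1f}%")  # aim for roughly 80-95%+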


I just set up an external SCSI RAID drive for my system pagefile and my Photoshop scratch disk today, and so far it is kicking ass. The RAID drive is an all-in-one unit intended for video editing, but since it is only 32 gigs and I already have a 150 gig internal RAID array dedicated as a video drive, I just use it for paging and scratch. I am wondering if there is anything else I can do with it, because using about 2 gigs out of 32 seems kind of wasteful. But then again, I have 330 gigs total in my system now, so it is not like I really need that space.


APK,

 

It looks like you have several things confused into one single idea. So here are a couple of things for you to think about:

 

1. SQL Server has changed a GREAT deal since you were working with it directly, especially in the area of memory management.

 

2. In more recent years you were developing applications against the server, rather than actually working on the server itself. I have been doing both over this time, and it is the job of the developer to limit the traffic that is moved around during application activity (and to limit the damage the user can do, especially with update queries ;)).

 

3. Per SQL Books Online, the server defaults to running as many operations as possible directly in memory. This is why SQL Server doesn't rely so heavily on the OS's perception of free memory, but rather "thinks" for itself. It handles memory differently on each of the supported OSes. For this reason, SQL Server can be (and is) specifically designed to reduce, and in many cases eliminate, the need for the pagefile. Here's a quote:

 

Quote:

Having a lot of physical I/O to the database files is an inherent factor of database software. By default, SQL Server tries to reach a balance between two goals:

- Minimizing or eliminating pagefile I/O to concentrate I/O resources for reads and writes of the database files.

- Minimizing physical I/O to the database files by maximizing the size of the buffer cache.

By default, the SQL Server 2000 editions dynamically manage the size of the address space for each instance. There are differences in the way Windows NT, Windows 2000, Windows 95, and Windows 98 report virtual memory usage to applications. Because of this, SQL Server 2000 uses different algorithms to manage memory on these operating systems.

 

Also, the topic of VMM (Virtual Memory Management) expressly states not to configure SQL Server's memory to exceed physical RAM, as the server will take a large performance hit, and says that leaving it to assign memory dynamically is a better idea. However, I have some systems where that isn't possible (such as SQL and Exchange server on the same box, as with the SBS edition from MS), and there I have to configure the memory usage manually for both SQL and Exchange (BTW, SQL's memory manager is a lot better than Exchange's; Exchange doesn't know how to let go).
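For what it's worth, that manual cap is the 'max server memory (MB)' option set through sp_configure; a rough sketch (same pyodbc assumption as above, and the 2048MB ceiling is just an example):

Code:
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=dbserver;Trusted_Connection=yes",
                      autocommit=True)
cur = conn.cursor()

# 'max server memory (MB)' is an advanced option, so expose it first.
cur.execute("EXEC sp_configure 'show advanced options', 1")
cur.execute("RECONFIGURE")

# Cap SQL Server's buffer pool at 2048 MB (hypothetical value);
# anything else on the box (e.g. Exchange) gets what's left.
cur.execute("EXEC sp_configure 'max server memory (MB)', 2048")
cur.execute("RECONFIGURE")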

 

So, you might want to try working with the more current products, and then evaluate your memory needs. Also, try watching the performance counters with everything installed and running under load for about a week, and then you can size down the pagefile (or add RAM) from there.
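By performance counters I mean the Windows Performance Monitor ones; as a rough stand-in, something like this will sample pagefile pressure over time (a sketch, assuming Python with the third-party psutil package):

Code:
import time
import psutil

# Sample swap/pagefile usage once a minute over the observation window.
while True:
    swap = psutil.swap_memory()
    print(f"pagefile used: {swap.used / 2**20:.0f} MB "
          f"({swap.percent:.1f}% of {swap.total / 2**20:.0f} MB)")
    time.sleep(60)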

 

The pagefile was only used because systems needed more memory to work with than was physically available (4GB in a regular 32-bit environment, notwithstanding special memory managers). If a system could run SQL Server with 512MB of RAM and a 1.5GB swapfile, then why not drop or greatly reduce the swapfile when you go to 2GB of RAM? Why in the world would you want to *increase* it to 2GB+ after adding RAM? In fact, the only specific need listed for a large swapfile on smaller systems is when the Full-Text Search engine is installed with SQL Server, which makes sense to some degree. But that's it, and I imagine even that can be offset with proper memory management.


The tempdb is its own db, and as such has "zero" to do with the pagefile. I am lost as to why you would think otherwise, but no matter. It is a separate db, just like master or any other one, and as such it can be configured and tuned. You can set it to grow as needed, or lock it down (all of the other maintenance options don't apply, as the system handles this one on the fly). For instance, on my main server I keep an IIS log db that all my IIS servers report to. Right now it's at 3.5GB, but the tempdb is just under 20MB. Since the tables are built and dropped as needed, it isn't such a big deal. So, here's another quote:

 

Quote:
tempdb holds all temporary tables and temporary stored procedures. It also fills any other temporary storage needs such as work tables generated by SQL Server. tempdb is a global resource; the temporary tables and stored procedures for all users connected to the system are stored there. tempdb is re-created every time SQL Server is started so the system starts with a clean copy of the database. Because temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down, there is never anything in tempdb to be saved from one session of SQL Server to another.

 

By default, tempdb autogrows as needed while SQL Server is running. Unlike other databases, however, it is reset to its initial size each time the database engine is started. If the size defined for tempdb is small, part of your system processing load may be taken up with autogrowing tempdb to the size needed to support your workload each time you restart SQL Server. You can avoid this overhead by using ALTER DATABASE to increase the size of tempdb.

 

How's that? Also, you might want to consider an MSDN subscription so you can keep up with this software even if your clients cannot, because one day you will run into someone who is up to date, and you will be running on outdated information.
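And to make that last Books Online point concrete, presizing tempdb looks something like this (a sketch; 'tempdev' is the default logical name of tempdb's data file, and the 500MB figure is just an example):

Code:
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=dbserver;Trusted_Connection=yes",
                      autocommit=True)
cur = conn.cursor()

# Presize tempdb's data file so it isn't autogrown from scratch on every
# service restart; 'tempdev' is tempdb's default logical file name.
cur.execute("ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 500MB)")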


Ack, I figured the paragraph in combination with what you already knew would handle it, but I guess not. So, here's the long and short of it:

 

1. You can lock the size of the tempdb (as I mentioned earlier, but that was ignored), which is the same thing you would do with a pagefile. Also, since it writes the file sequentially using a B-tree layout (for faster table scans), dropping the temporary tables should leave much more linear space to put new tables in. And since SQL Server regulates file growth aggressively, can be controlled manually, AND recreates the tempdb every time the service is restarted, most of your comments are moot. This is why some DBAs have automated means of occasionally stopping and restarting the service: to drop any orphaned connections/sessions, to manually free up RAM (which isn't much of an issue with this version), and to throw away the tempdb and rebuild it.

 

2. You can adjust the leading buffer percentage to whatever you want.

 

[tempdb.gif: screenshot of the tempdb file-growth settings dialog]

 

Now, taking a look at all of this, I think it illustrates how a large amount of physical RAM plus proper configuration of a system can let someone reduce or eliminate the pagefile in many cases.


Dude, you're going in loops. The image I posted *IS* the default autotune function. As you can see, there are *MANUAL* controls as well. Now, if you were to restart the db engine, it would delete the existing tempdb file and create a new one. That's it; nothing confusing about it.


Alright, let me guide you through this...

 

Remember the post where I put the little picture in it? Well, it had two numbered responses. And even though I stated that you *could* manually alter the page buffer space, I later stated that this was the default AUTOMATIC setting. So, there are your answers from me. If you don't like them, fine. I'm fairly sure I have covered this topic incredibly thoroughly, and you are simply continuing this just to have something to debate. Your initial point was to defend your position on the use of pagefiles, and now it has moved on to the use of the tempdb in a version of SQL Server that you have never even seen (let alone administered). If you want to know more about cutting-edge db software, join me on the SQL Server beta team, and then you can get all the new toys when they come out.


Look, I did answer them. Working with you on this has been a very trying process. If for some reason you *STILL* insist that I am somehow avoiding your questions, then use the following for an answer:

 

GREEN

 

That's right. So, in reference to both of your questions, my new answer is GREEN. GREEN is the way SQL Server does everything, and we have been keeping it a secret for a very long time. However, I am giving this information out from our secret lab in a trailer park in South Carolina (it doubles as a meth lab; you might have seen me on "COPS" a few times, drunk and with my shirt off, but if you can't remember, here's my picture):

 

[image: redneck-phones3x.jpg]

 

That's me going over the wrap-up code with Bill G. via a secure line. Oh sure, it *looks* like I'm just taking a dump outside of our "trailer", but that toilet is just a cover for the air supply to the lab (this was a practical joke; the programmers didn't think it was so funny).

 

So there you have it, and remember to think GREEN.


Hey all

 

Well, I have read almost all of these posts, and there's some GREAT info in them.

 

Now,

 

At my work I will be getting a new system.

 

Specs will be:

 

- Pentium® 4 processor at 2.40GHz with 533MHz system bus / 512K L2 cache
- 512MB PC800 RDRAM
- Dual 19 in (17.9 in viewable, .24-25AG) P992 FD Trinitron® monitors
- 128MB DDR NVIDIA GeForce4™ Ti 4600 graphics card w/DVI and TV-out
- 80GB Ultra ATA/100 hard drive
- 3.5 in floppy drive
- Microsoft® Windows® XP Professional
- Logitech® optical USB mouse
- 10/100 PCI Fast Ethernet NIC
- 56K PCI telephony modem
- 24x/10x/40x CD-RW drive
- Integrated audio with SoundBlaster Pro/16 compatibility
- Harman Kardon HK-395 speakers with subwoofer
- DVI-VGA adapter to connect 2 CRT monitors to the Ti4600 or Ti4200 video card

 

 

Now, lately I have been doing A LOT of gfx work with PS and 3D Studio. Even though I will have 512MB of PC800 and an 80GB ATA/100 HD:

Would it possibly be a good idea to get a separate SCSI controller, and perhaps a small SCSI drive (5GB or 10GB), and make JUST that my swap/VM disk? And perhaps my PS scratch disk as well?

Would I really benefit a lot, as opposed to making a 5GB partition off the 80GB drive?


With 512MB of RAM, you might as well make a small fixed-size swapfile. I use a gig of RAM on my dev boxes and try to pitch the swapfile altogether. Also, if you already have access to the separate controller and drive, then that might be a good idea. But if you have to spend a fair amount of money on it, I probably wouldn't bother, as the performance increase you would see might be negligible.

