blackpage
Everything posted by blackpage
-
howdy zenarcher,

I informed you via PM that bits of the hardware for the new box have already arrived. If everything goes well, I should be ready, willing and able to tackle the RAID issue over the upcoming weekend. So stay tuned.

cu
bp
-
greetings folks

Our most valuable and most beloved friends from Redmond/US of A have launched a short movie competition in the UK, entitled "Thought Thieves", addressing the hot issue of theft of intellectual property (read about it here). Don't ask me why, but the mere thought of M$ launching competitions 'bout theft of intellectual property triggered my "mockerythalamus" deep within my barren cranial wasteland.

In a nutshell: I stitched together a little short film about the matter, but M$/UK wouldn't let me enter the competition as I'm not a UK resident. Anyway, it's still good for a lil entertainment, to be enjoyed here. Hope you like it. In case it's necessary for the linux guys out there: enable your Flash-player plugin.

greets
blackpage

p.s.: I don't have anything against Windows, I just consider M$ the top-prime-A#1 thief of intellectual property of 'em all. So Windows-worshippers: keep cool, it's just irony
-
heya zenarcher,

curses, curses, I have to say. I didn't think there could possibly be such side effects when you "mv" folders like "/usr/bin", "/bin" and "/sbin". As far as I can see, you actually did create the symbolic reference "/usr" which points to "/raid/usr/". Given that, I'm a bit clueless as to why MDK couldn't find the "mv"-command (which normally lives under "/bin"). A possibility could be that MDK doesn't "follow" symbolic links to special file systems like RAID-volumes for system-critical folders (just like e.g. the Apache webserver doesn't like sym-links to its web-root-directory on FAT32-volumes).

A possible solution could be to add the new locations to the path before beginning the move-procedure. Open a console and verify your current paths this way ...

Code:
user@box# echo $PATH

This will show all current search paths. All you have to do is to add the new locations at the start of this string. This can be accomplished by a command like this ...

Code:
user@box# export PATH="/raid/bin:/raid/usr/bin:/raid/usr/local/bin:/raid/sbin:/raid/usr/sbin:$PATH"

It's important that the new paths are added at the beginning of the string, as this ensures that Mandrake will first look in these new locations when it searches for the commands you need (there's a quick check for that sketched at the end of this post).

Please keep in mind that I'm guessing at this point, and I truly hate having no box around at the mo to verify my babbling (all my spare machines are currently configured as a render-cluster for some huge 3D-Blender project). But as I said, as soon as the new "baby" arrives (Athlon DualCore, yay!) I will give it a good go with SW-RAID.

As it goes for the rcX.d-scripts: we ain't that far yet. So it's possible that some more problems are lurking there, waiting to just "make your day".

good luck
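The quick check mentioned above: after exporting the new PATH, you can make sure the shell really picks up the binaries from the new locations (just a sketch, assuming the folders have already been copied over to /raid) ...

Code:
user@box# hash -r      # clear bash's cache of command locations
user@box# which mv     # should now report the /raid/... path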
-
gidday zenarcher

You can create the RAID array any time. That is: create it once the OS is installed properly and boots flawlessly, which seems to be the case now. Just boot into Mandrake and combine the partitions sda6 and sdb6. It should be no prob anymore now.

have a good day
-
heya zenarcher,

little thing, big trouble. I assumed the partitions would simply be numbered in sequence: 1st partition on drive SATA-0 being "sda0", the second one "sda1" and so on. In reality linux starts counting at 1, and the numbers 5 and up are normally used for logical partitions, which most likely explains the "funky" numbers you're seeing (I didn't dig into that too much as I consider SATA a loss of an approved standard and therefore a pain in the you-know-where). Or maybe the funky numbers come from the VIA controller, I have no idea, but the good thing is: it's no big deal, keep your HDD-setup as it is with the partitioning scheme as laid out.

Disk 1
 1 GB   primary  /dev/sda1  Swap  swap
 9 GB   primary  /dev/sda5  Ext3  /
 67 GB  primary  /dev/sda6  Ext3  none

Disk 2
 1 GB   primary  /dev/sdb1  Swap  none
 9 GB   primary  /dev/sdb5  Ext3  none
 67 GB  primary  /dev/sdb6  Ext3  none

If Mandrake thinks it's got to be sda/b5 and 6, then just let it. As long as the partitions are all there with the right sizes and filesystems, we're cool.

About the MDADM error msg

Well, as said above: you will have to obey the will of the mighty operating system here. Applied to the explanation in my last post that means: "sda0" becomes "sda1", "sda1" becomes "sda5" and "sda2" becomes "sda6" on your puter. Same goes for the "sdb"-partitions. The mdadm-command to create the RAID array would therefore be:

Code:
# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda6 /dev/sdb6

Regarding the command that produced an error in your case ...

Code:
# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb2

Error Msg: "You haven't given enough devices (real or missing) to create this array."

That message is true: you specified that the array consists of 2 devices ("--raid-devices=2") but only listed one, and even that one was invalid due to MDK's SATA-partition numbering scheme. So as I said, it's no big thing. Just use the partition numbers Mandrake has assigned instead of the ones I used in my last post. Those were meant more "logically" (to have any kind of numbering) than as real-world numbers.

Keep up the good work, it looks quite good so far, and keep us informed
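One more thing: once the create-command has run without complaints, a quick sketch of how to verify that the kernel has actually assembled the array (standard commands, nothing Mandrake-specific) ...

Code:
# cat /proc/mdstat          # shows the active md arrays and their state
# mdadm --detail /dev/md0   # shows size, level and member partitions of the new array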
-
heya zenarcher

Ad MDADM: This package is the "frontend" for the linux software-RAID driver "md". The latter has been part of the kernel for quite a while (definitely in the 2.6.x series), so you should be able to use MDADM on your Mandrake box and configure some sweet software-RAID clusters with it. Here's a nice page on mdadm, its invocation from the commandline and its options ...

Link: MDADM explained in easy terms and depth

I think the best news is that this tool seems to have replaced the "raidtools", and setting up raid-drives is now supposed to be a whole lot easier using mdadm instead. If you take a look at the above site you will notice that you can easily test the created RAID arrays, as mdadm lets you start and stop the arrays from the shell too. In your case a procedure as follows is required ...

Note: Before we go any further ... it would be strongly advisable to have some sort of "Live-CD" at hand that you can boot your machine from in case something goes wrong. With a live CD you could still edit the various config files and reset your Mandrake to a usable state.

1. Resetting disk-config in the VIA-controller

Even though you have already set up your box nicely and have Mandrake up and running, I'd suggest you start from scratch again, beginning with the deletion of the RAID array in your controller BIOS.

TIP: You can, of course, try to boot the machine after you have deleted the RAID-setup in your VIA-controller proggie. As it doesn't work properly anyway, chances are good that Mandrake is still there and alive after this step, which would save you at least the time of re-installing the OS again.

2. Partition your drives

If the system still boots, you will have to re-partition your drives so that you can play around with software RAID a bit. If the system won't boot ... well, in that case you'll have to start all over again and do the partitioning from within the Mandrake installer.

2.1 Partition layout

In all cases I'd recommend a "failsafe"-partitioning. That means starting out with a pretty basic disk-setup that Mandrake can surely use. You can launch and tweak software RAID later on. Using whatever partitioning tool is available, create something like that (assuming your two SATA drives are "sda" and "sdb") ...

Code:
Part/Disk 1:
NR  SIZE  TYPE     DEVICE     FILESYS.  MOUNT POINT
01  1 GB  primary  /dev/sda0  Swap      swap
02  9 GB  primary  /dev/sda1  Ext3      /
03  70GB  primary  /dev/sda2  Ext3      none (for now)

Part/Disk 2:
NR  SIZE  TYPE     DEVICE     FILESYS.  MOUNT POINT
01  1 GB  primary  /dev/sdb0  Swap      none (for now)
02  9 GB  primary  /dev/sdb1  Ext3      none (for now)
03  70GB  primary  /dev/sdb2  Ext3      none (for now)

Note: we will not use /dev/sdb1 for a RAID volume. It's gonna be a backup-storage space for the upcoming procedures.

3. Installing the OS

After you have created the partition structure as laid out above, install Mandrake onto the small 9GB partition, named "sda1". Also, keep in mind to install the boot-loader into the MBR of a disk and not at the start of any partition (the installer should use the MBR anyway). After this step you have the OS with all the fancy folders and files installed on "sda1", and the system should boot easily as no RAID is involved yet.

4. Creating the RAID volume

Boot into your Mandrake system and create some software-RAID volumes, using the mdadm utility. In fact you only have to create one volume ...
Code:
RAID volume to create
NR  USING PARTITIONS       SIZE   RAID TYPE  MOUNT POINT
01  /dev/sda2 + /dev/sdb2  140GB  RAID-0     none (for now)

The command to issue to create the RAID device would be ...

Code:
# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

If there are no errors reported you can continue to step 4.1.

4.1 Storing RAID info in mdadm.conf

As pointed out on the above mentioned page, you should utilize a configuration file named "mdadm.conf" which is located either in "/etc" or "/etc/mdadm". You will have to check where it actually is after you have installed the MDADM-package. In case there is none, check if there's at least a folder called "mdadm" under "/etc". If so, create the file there (you can make a symbolic link to that file under /etc later). In the next step, open a suitable text-editor (kate or whatever floats your boat) and add the following text ...

Code:
DEVICE /dev/sda2 /dev/sdb2

This starts a "RAID device description" for mdadm, and all you need is to add the specs for the drive-array. This is accomplished by opening a console and issuing the command ...

Code:
# mdadm --detail --scan

Copy the line beginning with "ARRAY ..." and add it as the second line in the text-file. After that, save the file under "/etc/mdadm/mdadm.conf". Just to be on the safe side, create a sym-link to this conf-file under "/etc" by running the command ...

Code:
# ln -s /etc/mdadm/mdadm.conf /etc/mdadm.conf

4.2 Starting the RAID device

At this point you have created a RAID-device and stored info about it in a config file. All you need to do now is to start the array with the following command ...

Code:
# mdadm -As /dev/md0

5. Integrating the RAID device into your system

If all goes well up to here, you can begin to make your RAID-volume available at boot time. Unfortunately I'm not absolutely sure as to what is necessary for this. It could be that a simple entry in the "/etc/fstab" file is sufficient (and you need to add an entry there anyway). And it could also be that you need to add startup scripts in the "/etc/rc.d" or "/etc/init.d" directories.

5.1 Mounting the RAID-device

For the time being let's start out with creating the mountpoint and the entry for "/etc/fstab". Open the file and add this line ...

Code:
/dev/md0 /raid ext3 defaults 0 0

While editing the fstab-file, add another entry for the backup partition which will be used later ...

Code:
/dev/sdb1 /save ext3 defaults 0 0

... and save it. In the next step create the folders to mount the raid volume and the backup-volume to. Open the console again and launch the command ...

Code:
# mkdir /raid /save

At this point you should be able to "mount" your backup-partition and your RAID array with the commands "mount /dev/sdb1" and "mount /dev/md0". If you get errors about the drive not being formatted properly, it could be necessary to re-format the device. Use whatever tool Mandrake offers for that task, or issue the command "mkfs.ext3 /dev/md0" in the console and re-mount the array with the above mentioned "mount"-command.

5.2 Testing the RAID-device

Now is the time to start the first tests with the new RAID-device. As your first go, copy something over to the RAID volume and see if that works out ok ...

Code:
# cp -R /usr /raid

This will make a copy of /usr on the RAID array.
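Before you reboot in the next step, it can't hurt to take a quick look at whether the copy actually landed on the array and how the array itself is doing (just a sanity-check sketch) ...

Code:
# df -h /raid          # size and usage of the mounted RAID volume
# cat /proc/mdstat     # state of the md0 array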
After the files are copied, do something very "Microsoft-ish": reboot your box to check if the RAID is already available when the system boots (due to the entry in /etc/fstab). If you can access all the files and folders under /raid/usr properly, you can begin to copy over all the folders from your root-drive to the RAID volume.

So now you can populate the RAID-volume with the files 'n folders from the root partition. Do this folder-by-folder and create a symlink to the new location after each folder has successfully been saved and moved.

IMPORTANT: Do not copy or move folders that represent special filesystems (e.g. "/proc", "/dev", "/initrd" or "/mnt"). It's perfectly ok if you stick to the following folders:

/bin, /etc, /home, /lib, /opt, /root, /sbin, /tmp, /usr and /var

These represent all the performance-critical sections of the OS anyway that would benefit from being placed on a fast RAID-0 array. As an example, the command sequence for the folder /usr would look like ...

Code:
# cp -R /usr /save
# mv /usr /raid
# ln -s /raid/usr /usr

What this procedure does, in brief words: the "cp"-line copies the /usr-folder to your backup partition "sdb1" (remember the entry in fstab). The "mv"-cmd moves the /usr-folder to the RAID volume, thus "making space" for the sym-link, which is then created with the "ln -s"-command. Simple as that. After all the copying you should have a system that still boots from an unproblematic SATA drive, but all the system and user files are stored on the fast RAID-0 volume.

6. Bad things that might happen

Possible problems in the above procedure might arise when the system doesn't recognize and start the RAID-volume at boot-time. In such a case the aforementioned addition of a RAID-start-script to an "/etc/init.d/rcX.d"-folder or the file "/etc/inittab" might be necessary. I don't want to discourage you, but in case the RAID-volume fails to initialize at boot-time via the fstab-entry, things could get a bit complex (determining WHAT rcX.d-folder is the right one to store a startup script in, etc.). So if you run into problems, take a break at that point and keep us informed. In a fortnight or so my new workstation should arrive anyway, which would give me the opportunity to investigate the Software-RAID on a practical level with a quick MDK-test installation that I could then document well, with screenshots and all that.

6.1 Restoring an operational status quo

If you stumble into problems, this solution - combined with a Live-CD - will allow you to restore your old system again in almost no time. To do so, boot the Live-CD, mount your regular root-partition (the non-RAID thingie), delete the symlinks to the folders on the RAID-volume and move the saved folders back to "/".

7. "Homework"

If you succeed with the procedure, you can do likewise with the 2 swap partitions by combining them into a single and fast RAID-0 array. To do so, run the "mdadm"-procedure again with the two swap partitions and format the resulting md-device with "mkswap". Don't forget to alter the entry for the swap partition in "/etc/fstab".

I hope this lengthy sermon will lead you to a usable RAID system. I truly would not want you to have to use your wife's puter. We all know what that means: desktop backgrounds, dynamically loaded every 2 minutes from the internet, showing the bubbly behinds of male models, pink or mauve window title-bars and "Shelly Allegro" at "18pt/italic" as menu font.
That - especially in combination with the usual amount of fluffy toys placed on top and at least a dozen screaming yellow "post-it" notes all around the monitor - is more than a man could possibly handle :)

Hope that helps
-
greetings zenarcher

The short version first:

1: Status of your current setup
Yuppers, if you switch from your current RAID setup to Software RAID, you will have to delete the RAID set in the controller-BIOS (results in complete data loss).

2: MDK tools for RAID setup
I don't know what RAID setup utilities come with MDK these days, but you can take it as granted that the kernel comes with SW-RAID support.

As it goes for the relation between drives/partitions, logical volumes and all that ... (lean back, relax, this is going to be lengthy)

What you have found on the MDK forum is about what I was talking about in my first post ("LVM"/"Software RAID"). In fact, "onboard hardware RAID" - as postulated by the motherboard manufacturers - is in no way a hardware solution. It's a marketing buzz-word, and indeed: you'd probably be better off with a genuine linux software-RAID solution. The linux SW-RAID technique is what you might know from Windows as "Dynamic Volumes". Compared to the cheapo-onboard solutions, SW-RAID has some major advantages. The most interesting one is that you can use partitions for your RAID-volumes instead of whole drives. Time for some rotten ASCII-art, I say.

HARDWARE RAID WITH ONBOARD CONTROLLERS

Let's assume you have 2 disks (D1 and D2), and both disks are attached to an onboard controller (CTRL), like your VIA thing. The device chain would be as follows:

Code:
[D1]
    \
     [CTRL]--->[DRIVER]--->[OPERATING SYSTEM]
    /
[D2]

In this setup the CTRL is only used to have something to attach the disk-cables to. The main work is done by the software layer "DRIVER". This driver translates all requests from the operating system to the controller and vice versa. The result is (or should be) a "logical volume" whose size depends on the RAID-type (D1-size + D2-size for RAID-0, or the size of a single (or the smaller) disk for a RAID-1 setup). Also, "onboard RAID" has the limitation that it can only stitch together complete and total physical units (= drives).

The DRIVER-layer mentioned above is where your current problems originate from. An example: in your current RAID-1 config the disks are still accessible as 2 separate drives. This tells us 2 things: a) the DRIVER-layer seems to not work properly, and b) the VIA "RAID"-controller is not too much different from any average non-RAID SATA-controller, as the disks are still available even though the driver is not working. So basically, whatever you create in your controller setup-BIOS: it's all just some structures and stuff the DRIVER-layer is supposed to use to fake the impression that there is indeed some RAID-cluster in the background. The main conclusion we can draw from that is that what is called "onboard hardware RAID" is indeed just a "software RAID", as all the RAID-related work is accomplished by a driver anyway.

SOFTWARE RAID

If onboard RAID is "software" too, the question arises: why not use genuine Software RAID instead? The principle of linux SW-RAID is similar to HW-RAID, with some major differences though. The most important is that you can use partitions (= logical units) instead of whole drives. For the upcoming ASCII-art, let's assume we have two 40GB drives ...

Code:
DISKS
 01 : [||||||||||||||||||||||||||||||||||||||||]
 02 : [||||||||||||||||||||||||||||||||||||||||]
SIZE:  0--------10--------20--------30--------40

With a partitioning utility you could create 2 primary partitions on each drive.
Partition 1 (P1, size: 2GB) would be the swap space ("S") and partition 2 (P2, size: 38GB) would be the data partition ("D") that will hold the operating system and the user files. This results in a disk layout as follows ...

Code:
DISKS  |P1|<---------------- P2 ---------------->|
 01 :  [SSDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD]
 02 :  [SSDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD]
SIZE:   0--------10--------20--------30--------40

With the Linux software RAID-tools you could now build "logical volumes" out of the 4 partitions we have. Let's assume we want to build 2 RAID sets as follows (there's a small mdadm sketch at the very end of this post showing how that would look on the commandline) ...

Code:
D1: P1 --+
         +--> <log.vol. 1, "SWAP", RAID-0>
D2: P1 --+

D1: P2 --+
         +--> <log.vol. 2, "DATA", RAID-1>
D2: P2 --+

As you can see: not only can you "glue" together partitions quite freely (as long as the sizes match), you can also vary the RAID-type of the resulting logical volumes. Besides that, the biggest advantage of linux SW-RAID is that it is embedded seamlessly within the linux-kernel and therefore quite reliable. You can, of course, create as many partitions on each drive as you like and build a handful of logical volumes with these partitions. E.g. you could create some more partitions on the drives in the above example and combine those into an error-tolerant RAID-5 volume while the RAID-0 and RAID-1 volumes stay fully intact and unaffected. In terms of performance a software RAID solution is in no way slower than an onboard-RAID setup. Given that, all the poster you quoted said is true.

Finally, one question is to be answered too: "If Linux SW-RAID is soo groovy, why don't we all use it?". Well, because it's not super-easy to set up. A brief overview of the complexity that awaits you can be found on this website.

I hope this clears up a few things about "partitions" and "software RAID" as I have mentioned it in my first post.

good luck!

p.s.: Nope, I'm not having too much time, but we have a day off over here and so I thought it'd be just grand to abuse the Esselbach forum software as a Desktop Publishing system :)
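The promised mdadm sketch for the two example volumes above. A hedged sketch only: it assumes the two example disks show up as "sda"/"sdb" and their partitions as sda1/sda2 and sdb1/sdb2, which is the example layout, not necessarily anybody's actual box ...

Code:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1    # log.vol. 1, "SWAP", RAID-0
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2    # log.vol. 2, "DATA", RAID-1
# mkswap /dev/md0       # format the swap volume
# mkfs.ext3 /dev/md1    # format the data volume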
-
'lo again,

I see from your lsmod output that the driver module is indeed loaded and also in use. Brings up the question why there are still 2 separate drives available. Anyone? Hello?

Just out of interest: do you still remember what drive/partition you had chosen to boot from after your first attempt to install MDK on the RAID-0 set? For the moment I have to dig the net for further info concerning the VIA RAID chip. Til now all I have found is either VIA-bashing or LVM-related stuff (the latter is interesting, though not our current problem).

I'll be back (and I'm allowed to say that as I'm Austrian too :)
-
heya zenarcher,

first of all, I have to apologize for bits and parts of my first post. I wrote it after 16+ hrs of hardcore programming, and obviously it lacks one or two key infos.

Firstly: my first post made it sound as if it was possible to combine logical partitions into RAID volumes with the RAID-controller-BIOS. The VIA chip we are talking about is most probably not capable of that. What I was actually talking about (and which I simply forgot to mention with at least a single word) is a technique called "LVM", aka "Software RAID". But back to that later ...

The VIA southbridge-chip "VT8237" (as used on your mobo) is only the first part in the onboard-RAID chain. To be able to use this feature at all, a secondary "software-layer" (a driver) is needed. Under Windows this is usually no big deal. Under Linux this driver-layer is a profound source of major setup problems. And if this wasn't bad enough already: almost any onboard RAID chip (be it SIL, VIA or whatever) is error-prone under linux, but the VIA controllers are legendary for a fundamentally horrible linux-compatibility, and at that: the VT8237 is one of the worst. But no use in bashing a rotten onboard chip. Let's see what can be done now ...

Check the loaded modules

First thing to do in your current situation is to check if the VIA driver is loaded properly. Just because the setup proggie says "installed" doesn't mean it's actually "installed properly". Open a console (as root) and type ...

Code:
root@comp# lsmod

This command produces a list of all loaded modules and also provides info about how the modules are utilized. In your case check for list entries that contain the string "sata_<something>", where <something> is the name of the sata-chip vendor. In your case the module to check for should be named "sata_via".

Result A: module is loaded
If you indeed have this module listed, do check if it is referenced/used by any other module. You can determine this by the number and names in the column "used by". If no module uses the via-sata-module, your onboard RAID setup is very close to uselessness. If the module is used and you still see 2 separate drives instead of one large RAID-1-volume, then something else went astray. Unfortunately I don't have a clue as to how to solve that then.

Result B: module is not loaded
If no sata_via thing is found in the lsmod output, then you can try to insert it. To do so, enter the command ...

Code:
root@comp# modprobe sata_via

Two things can happen after this:

a) the computer hangs
To circumnavigate this it would be a good thing to boot the OS with the noapic-parameter and try the modprobe-command again.

b) you get a lot of error messages
Which will either tell you that the module wasn't found or could not be loaded due to some freaky reason (a quick peek into the kernel log is sketched at the end of this post).

Whatever may come up, keep us informed so that we get a clearer picture about the status your puter's in now. There are still a few other things one could do, like installing a third drive and installing the system onto this disk. With such a setup you could compile a custom kernel with the via support built in, which might enable you to add the main drives as RAID volumes later.

In the meantime you might want to inform yourself about this "LVM" thing. If you ask me, it would be the better option anyway, as it is just as performant as the onboard solution but much more stable and less error-prone.
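The kernel-log peek mentioned under b): the console error from modprobe is often rather terse, but the kernel log usually tells a bit more (standard commands, nothing VIA-specific, just a sketch) ...

Code:
root@comp# lsmod | grep sata_via    # is the module there, and is anything listed under "used by"?
root@comp# dmesg | tail -n 20       # any kernel complaints about the module or the controller?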
-
gidday zenarcher

I'm not really much of a guru when it comes to onboard-SATA-RAID issues, as I only get in contact with them good old DPT-SCSI RAID boards or dedicated SATA controller cards, of which we have a good number in the machines here. But anyway ...

The infamous "9"s ...

This usually indicates that the linux-loader (lilo or grub) isn't happy with the partition type/filesystem it wants to read its startup code from. In your case, the bootloader seems to consider the RAID-0 stripe set as something "invalid". Why does it work on a RAID-1 mirror-set then? Basically, my guess is that your installation doesn't utilize SATA-RAID at all. Due to the specs of "mirroring", the disks are nevertheless available ... or at least one disk is. If the mirror set were properly set up, you wouldn't even see your second drive in a regular disk manager (though I admit that this could be possible with SATA-drives; dunno).

What to do now?

Foremost I strongly advise you NOT to install anything important onto a plain RAID-0 stripe set! RAID-0 is only a valid option if you also mirror the stripe set (RAID 0+1). This, of course, needs at least 4 disks (2 x for RAID-0, 2 x to mirror the RAID-0 set). It's also a good idea to use a "plain" boot partition with a standard file system on it (ext3 recommended). This boot part. doesn't need to be overly large, a handful of gigs is more than enough. Let's say you have 2 x 80GB drives, then a partitioning as follows might be handy (all mentioned partitions are to be created on both drives) ...

partition 1: 1-2GB (will be stripe-swap-space later)
partition 2: 1-2GB (ext3, boot partition)
partition 3: the rest (will be the RAID-0 set later; or RAID-1)
(all partitions as "primary")

In your RAID controller BIOS, set up a RAID-0 set by combining both partitions #3. If you're the adventurous kind of person, you can also stitch together the partitions #1 as a RAID-0 swap-volume. Leave the boot-partitions (on both drives) as they are, format those as EXT3 during the OS-setup, and have the OS-setup proggie install the boot-loader to partition 2 on drive 1 (the Mandrake setup-prog should ask you about this towards the end of the setup). As mountpoints you use "swap" for the RAID-set created from both partitions #1, and "/" is the mountpoint for the RAID-0/RAID-1 set that you created from both partitions #3.

This way the bootloader can read its data from a "real", "non-logical" volume, while the system itself resides on whatever RAID-volume you created (keep in mind that RAID-0 doubles the chances for complete and total data-annihilation). With a setup like this, you should at least be able to avoid the "9"s. If the system will make use of the RAID set? Who knows :)

Hope that helps
-
gidday saura

did you install the svgalib packages from within the package-manager? For MDK10 you will need to install ...

- svgalib-1.9.18-1mdk.i586.rpm (the base package)
- libsvgalib1-1.9.18-1mdk.i586.rpm (the libs)
- (optional) libsvgalib1-devel-1.9.18-1mdk.i586.rpm

Search in the package-manager whether those RPMs are tagged as "installed". If not, install them. If MDK10 doesn't come with the RPMs, just d/l the files from one of the many MDK-mirrors (e.g. the SUNET.SE/MDK mirror). The above link also contains the packages for the newer Mandrake releases (10.1).

keep us informed
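btw: if you prefer the console over the package-manager GUI, Mandrake's urpmi should be able to pull the packages in as well (just a sketch; the package names are assumed from the RPM file names above) ...

Code:
# urpmi svgalib libsvgalib1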
-
howdy jarves

finding info and the manual for this mobo should not be too difficult. The board model and the vendor name (e.g. ASUS, ABIT, MSI, Gigabyte etc.) should be somewhere on the mobo. Once you have the vendor name you can go to the respective website and download the manual for the right board. In case it is some sort of OEM-board, you will need to boot it once and write down the info that appears on screen at startup. This info usually contains the motherboard model.

<edited> A quick search brought me to This Website. Your mobo seems to be a PCChips-thingie. </edited>

<edited2> This Page also seems to have useful info: "Model: MB-571 TX Pro II, Chipset: SiS 5598 Chipset, OEM Name: MB-725, 726 & 729, PC100 BXcel" </edited2>

In any case, you should be able to run any P3 cpu with this board, or at least those P3s for which the mobo has the right CPU-slot/socket and frontside bus frequency. P3s used to have an FSB-freq. of 100 and 133 MHz (the "B" and "EB" steppings being the 133 MHz ones, if I remember correctly). This also includes CPUs faster than 450. As you have the slot-version you might also want to check for something called a "Slot 1 -> Socket"-adapter, as most of the faster CPUs came as (F)PGA/socket versions (I think; someone curse them acronyms).

Your major prob will be to _FIND_ a P3 processor at all. We still have a dual P3 database server here, and I think we bought the last 2 P3s in Europe last year. On the other hand (and if you don't mind fearsome encounters): this might be a good chance to check around on eB(etr)ay.

hope that helps
-
howdy deadxerxes

lemme start with the brief version of my reply ...

1: forget about any "onboard"-RAID setup. You can't use RAID 5 with almost any known onboard chip anyway
2: use a dedicated SATA-RAID controller like the Highpoint RocketRAID 1640 for your RAID-setup (it also supports RAID level 5)
3: try to keep the overall number of disks as low as possible (thermal issues and power supply issues)

Now, let's move on to the very, very verbose version of the reply where a few things are to be explained ...

A: DISTROS

Web-server capable Linux distros are in fact as countless as daily hits on an average pr0n site. As it goes for preferences: the distros we are running on our servers are Debian (Woody), Mandrake 10 (mail and ftp only), Mepis and Suse 9.2.

Mepis: The biggest load is handled by Mepis, powering a webserver that delivers no less than up to 6 GB/day, and as any decent vortigaunt would say: "The Mepis excels at all tasks". Still though, it did not detect the SATA-RAID setup properly, though an installation on the non-RAIDed drives was painlessly possible.

Mepis PROS:
+ easy setup
+ easy to manage
+ all required apps are available (apache, php, mysql)
+ good package management
Mepis CONS:
- SATA-RAID detection was not at its prime (around Dec '04)

Debian: The reliable workhorse. Many distros are in fact Debian-based, and if you ever see the uptime statistics of a Debian box you will know why.

Debian PROS:
+ super stable
+ excellent package management
+ all required apps available
Debian CONS:
- limited hardware support in the stable version (Woody)

Suse 9.2: Not that I would ever set up a Suse server myself, but we have a couple of rented servers abroad that all came with Suse 9.2. Lemme pin that down: I don't really like Suse, as it creates a directory structure that can be very confusing ... but it has a few super-huge advantages: it's stable and it runs perfectly together with server management software like PLESK or CONFIXX. As you won't have physical access to the server, and as you are new to linux, something like PLESK would make your days a whole lot easier. So, please, do consider this when you choose a distro. Btw: SW like PLESK also runs with Fedora from what I know. But I'm only guessing, as Fedora is the only distro I've tried that I could never get running on a variety of server setups (FC3).

B: HARDWARE

B.1: Motherboard
Almost any kind of hardware will do for a web server. Given that the only requests will come via the LAN interface, the server won't need to be a prime number-cruncher. My recommendations: don't use Intel-CPUs (not only because of the CPU but because of the mobo-chipset). The last socket 478 mobo I had contact with was the Asus P4P800-E and it was a rather troublesome experience. A nice mobo is the MSI K8N Neo2 Platinum for the Athlon 64+ CPUs, which comes with dual Gigabit controllers and loads of goodies like support for ECC/non-ECC RAM (up to 4GB).

B.2: RAID setup
The RAID thing will be an essential question. Not only do some distros refuse to detect the onboard SATA chips properly (as you could see during your Mandrake installation), most controllers do not even support more than 4 drives. And if this wasn't enough: RAID level 5 is totally unknown to most onboard controllers (they only support levels 0 and 1, the better ones maybe also level "10", which is a mirrored stripe set). Your options in this field are to either ...

1: spend a fortune on a SATA controller that handles up to 8 drives and supports RAID 5 hardware-wise (e.g. the Adaptec 2810SA for standard PCI-slots, which is ~ US $500,-, or the Intel SRCS16 which handles up to 6 drives for around US $320,-)

2: set up Software-RAID, which I don't really recommend, as the setup procedure can get _REALLY_ hairy. But if you manage to get it up and running it will work smoothly

3: use a separate controller card like the Highpoint RocketRAID 1640 (RR 1640 specs) which supports up to 4 drives, bootable arrays and also does RAID 5 (though it's not mentioned on the above site).

Here's a suggestion for a setup that might run nicely:
nr. of disks: 4
-) 4x 200-400GB disks on the RocketRAID

So what you need to find is some distro that supports e.g. that RAID controller out of the box.

B.3: Other considerations
I realize the above suggestion only uses 4 drives instead of the planned 6 drives, but give it a thought: you said you need a case that can be rack-mounted. 19" cases are spiff, with one minor flaw: they can't house too many devices. This is not only a space-specific question, it's more power-supply related. Most 19" rack-cases come with only a 300W PS, and a powerful CPU, 2 gigs of RAM AND 6 drives can lead to troubles. Also consider that the drives will need power cables, and the outlets of those 300W thingies are very limited (cables for 4, maybe 5 devices; and you would need 7 (6x drives, 1x DVD/CD)).

C: SOFTWARE

As it goes for the availability of the required packages, you won't run into any kind of trouble. All major distros have packages for Apache, PHP and MySQL. One thing though: if you haven't set up a web server yet, prepare for one of THE MOST TAXING learning experiences you can imagine. "Web server" not only means "Apache + PHP", it also means "SSH", "iptables", "logrotate", "access and transfer statistics", "Pro- or VsFTP", and - last but not least - the most taxing of all: POP/SMTP/IMAP-server setup. And those are just the very BASICS. If you also plan things like "WebMail" or "WebDAV" you will find your linux-noob-brain spinning at 20000 rpm in no time flat.

A web server is not to be considered proper if it's just stable, it is also crucial that it is secure. Crucial because you can be held responsible if some baddies crack your box and maybe use it as a spam relay, warez or p0rn server. I'm not saying this to scare you, but you seem to be inexperienced in server setups. So I'm just mentioning that the innocent days of the internet, where we used to plug a server into the net and watch funny-looking GOPHER pages, are long gone. If you are interested I can show you the contents of the file "/var/log/messages" of our least accessed server, and you wouldn't believe how massive the attempts to break the box are.

As it goes for configuration questions for the "game server": unfortunately I have no idea what is to be considered in that field, as all I know is the client side of "Counter Strike" from back then, before it became "Steam".

A lot to read, I admit, and at that it's just 0.000005% of what you will have to read in docs and HOWTOs in the near future :)

hope it helps at least a wee bit
-
Computer EXTREMELY slow, but shouldn't be. HELP!
blackpage replied to OldSpiceAP's topic in Everything Linux
gidday ppl

I would almost place a bet on martouf's last two lines. Booting into "noapic"-mode could indeed solve the slow response times (I dimly remember this issue from my Mandrake times); there's a tiny sketch below on how to pass the parameter.

good luck, and if it helps, buy martouf a beer
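The sketch mentioned above (purely an example, assuming the box uses lilo; the image/root lines are just placeholders, not OldSpiceAP's actual setup): for a one-off test you can type "linux noapic" at the lilo boot prompt, or make it permanent in /etc/lilo.conf and re-run "lilo" afterwards ...

Code:
image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    append="noapic"
-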
gidday Jimxugle

re: clustering RAID controllers

To give it away quite frankly: I have no idea if something like that could possibly work. "Feeling"-wise I'd say "no". When it comes down to logical volumes you always need some kind of "supervising" instance. In most cases this is the RAID controller which controls the attached drives. In your suggested setup several drives would be attached to independent RAID controllers. And to make the whole logical volume accessible as one large partition, the 2 RAID controllers would also need some "supervising" instance. The only thing that could possibly handle your setup is a "single-card-multiple-RAID-channels"-controller. LSI e.g. produces RAID controllers with 4 independent RAID channels, so maybe you'd also like to peek into those products.

As it goes for the drive types: the storage system I've mentioned in my last post runs with IDE drives. Just the data-connection to the server uses the SCSI protocol. As far as I know these Transtec thingies come in all flavors, supporting almost any kind of drives (not sure about PATA).

hope that helps
-
howdy Jimxugle

that's some interesting theory you're pointing out there with your "multiply cloned PCI-controllers and USB hubs" machine. Alas, as fine as the results of your computations may sound, there are some limitations:

1) Total capacity vs. partition capacity
As BSchindler pointed out correctly, there is virtually no limit for the number of block devices you can run under an *x-operating sys. Still though, unless you "combine" the individual drives into some big virtual drive ("logical volume", keyword: RAID), you will just have a flock of drives where each partition on each drive needs its dedicated mount point in the *x-file system. Also, the total capacity might be extreme, but the maximum disk space you can access "in one block" will be the size of the largest partition of the largest drive. Not that an average disk size of - let's say - 250GB is to be considered "small", the problem will be to determine WHERE (on which drive/part.) to store a copied DVD. And for that purpose a check of all mounted drives/part. will be necessary (sooner or later). The solution to this would be to use a RAID-thingie that lets you combine multiple drives into one logical storage unit. I'm almost 100% sure that setting up a somewhat usable RAID set with ~450 harddisks on 90 cloned PCI channels, on 5-port USB hubs, will take you around 200 years. Don't worry about that timespan, as the mere attempt to undergo such an adventure will make you immortal amongst the *x-community.

2) PCI capacity
I'm not even mentioning the overhead of data that it would take to control 450 drives, but I could imagine that just the "start/stop disk-unit"-commands alone will exceed the bandwidth of the standard PCI as well as the PCIe/x interface. Keep in mind that harddisks are extremely "chitty-chatty" with their host controllers, and 450 drives can only be compared to something really, really "beyond", like 18 wimmin at a cafe, talking about their hubbies. Well, and apart from the above mentioned "management data" there is still "user data" to shoot through the PCI channels. Even if you used PCI-X 2.0 with a (theoretical) bandwidth of up to 5GB/s, you would still get a super slow storage system as the drives would outmaneuver themselves permanently by requesting and occupying the PCI bus. Also not to mention: the PCI arbiter-chip will probably go nuts and start to organize with other controller chips in unions.

3) Failure tolerance
Normally they say "the more the merrier". Unfortunately this doesn't apply to harddisks. With the proposed 450 disks, chances are good that you will face around 5-20 defective disks per year. Even if you stick to a much lower number of disks: the chances for a complete and total loss of a disk are superlative (even 40 disks mounted as individual drives will make you go crazy).

Solutions
The only thing really helpful would be to use ATA/SATA- or SCSI-RAID storage systems (either rack-mounted or as server-cases). These systems easily go up to a couple of Terabytes (we run a Transtec 4 TB solution here for all of the company's backup purposes), and they integrate smoothly into almost any kind of environment, as they come with their own operating system that offers the logical volume to Windows, Linux, Unix, Mac OSX etc. The data transfer from and to the system is done via dual SCSI lanes, and it has the advantage of being "hot-pluggable" (disk kapoot? get it out and stuff in a new one while the system is up).

Putting it all together, my advice would be to peek into those dedicated storage systems, stack a few of them and thusly obtain the TBs you aim at.

hope this litany helps
-
yo ah.heng

If it's a vfat drive then there can't be too many folder permissions involved, unless you have specified those explicitly in "/etc/fstab" (umask setting e.g.). A possible reason could be that the "Explorer"-app you are using (Konqueror or the GNOME-counterpart) chokes upon the contents of the folders. In other words: if the folders contain media-files (mp3s, vids etc.) the "Explorer"-app might want to parse and "offer" this media content.

An easy way to check if the error comes from the folder itself or the files therein is to browse it on the shell (command: "user@machine # ls /mnt/my_weird_folder"). If the folder is mounted and the "ls"-command doesn't yield anything, then the mount-type or -parameters might be wrong. In this case check the entries in "/etc/fstab" (see above: "umask" settings etc.; there's an example line at the end of this post). If files are being listed, then chances are good that the media-content of the files causes a halt within your graphical filemanager.

Whatever it is: keep us updated.

good luck
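The example line: a possible fstab entry for a vfat partition with the umask-idea mentioned above (purely a sketch, device and mount point are placeholders, not your actual ones) ...

Code:
/dev/hda5  /mnt/windata  vfat  defaults,umask=000  0 0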
-
greetings debianUser

Unfortunately I only have Debian running on servers, but as it goes for X setup, Debian should be fairly similar to any other distro. Therefore: open the X config file from "/etc/X11" (or the respective folder in case it's XOrg) and see what the installer placed there as graphics driver. If it's anything different than "vesa" you might want to give it a go and try the "vesa" driver instead (there's a small snippet at the bottom of this post). If this works it would at least enable X for you, even if the performance would be rotten. But even a slow X with internet access is a million times better than searching for a driver solution in lynx on the console.

Ad "startx"/compilation error msgs: getting to see the err. msgs. would be helpful, as the regular "no screens found" can have around 12^34*6 reasons (misconfigured display, module missing etc.). Also a look at the make-output would reveal some possibly unmet dependencies.

good luck

(edited due to overwhelming nr. of typos :)
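The snippet: the bit to look for / change in the X config file would look roughly like this (just a sketch; the Identifier string is arbitrary, and the file is XF86Config-4 or xorg.conf depending on what your Debian installed) ...

Code:
Section "Device"
    Identifier "Fallback card"
    Driver     "vesa"
EndSection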
-
addendum: Just checked the KDE site and they have a nice explanation of things (except the question "How many stars are in the sky?") here ...

KDE Faq

Chapters 4.9 and 4.13 respectively address your issue.
-
(2nd attempt: somehow pressed return and the form was gone; so if this post appears twice, I apologize in advance)

howdy cheetahman,

you can install KDE to whatever directory you want. This also includes folders on external drives. Prerequisite is of course that you have a fixed mount point for the KDE dir on the external drive. I suggest a procedure as follows ...

1) Install KDE to the fixed hard disk. KDE will likely go into "/opt/kde" or somewhere underneath "/usr" (distro-dependent); see if everything is running smoothly.

2) Leave X and go to the console (also make yourself "root") and move the complete KDE folder to your external drive (e.g. "mv /opt/kde /mnt/myexternaldrive_kde_dir")

3) Create a sym-link to the folder on your external drive ("ln -s /mnt/myexternaldrive_kde_dir /opt/kde")

DONE. This way the link redirects everything to the external harddrive and KDE still thinks it has set up camp in "/opt/kde".

You can also skip the symbolic link. In this case you would have to adjust the environment vars "KDEDIR" and "LD_LIBRARY_PATH" to tell KDE where its home is (there's a small sketch of that below).

hope this helps
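The sketch for the no-symlink variant (a sketch only: the mount point is just the example name from above, and the exports would have to go into a profile/startup script so they survive a logout) ...

Code:
export KDEDIR=/mnt/myexternaldrive_kde_dir
export PATH=$KDEDIR/bin:$PATH
export LD_LIBRARY_PATH=$KDEDIR/lib:$LD_LIBRARY_PATH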
-
greetings pr-man

concerning Yoper: I came across Yoper a couple of months ago and it still is the distro of my choice - as far as workstation configs are concerned (Debian/Mepis for servers). Indeed the distro sheds much light, especially in terms of speed. In fact it is truly the fastest Linux flavor I've worked with so far. But also compiling apps from the tarballs works nicely, and the system seems to be pretty compatible over all (no issues with the ASUS A7N8X mobo, GF6800gt etc.). Installation is pretty easy (compared to Gentoo e.g.) and the X-setup even comes with a graphical interface (SAX) where you can choose the right settings for your monitor/gfx-card.

But where there is much light there also have to be some shadows: the packages provided by the Yoper-servers occasionally come in rather odd conditions or versions. E.g. Apache came only in the 2.0.x version with no PHP-includes, even though I did install the only PHP that was available (PHP5). A few other packages just miraculously died upon installation. Package management and software-installation solidness are in fact the two major issues.

Still though: if you have some Linux routine and the necessary spare time, Yoper is the distro that can be adjusted to top-notch performance the easiest (also compared to Gentoo). As it goes for stability: no probs here either. Currently I'm running KDE, Firefox, Postgres 7.4.x, Apache 1.3/SSL (from tar), PHP 4.3.x (from tar), OpenOffice (V1.3), GIMP 2.2 (from tar), Scribus, Eclipse 3.0/3.1, KDevelop, jEdit 4.2 based upon JRE-SDK 5.0 and various other apps without the slightest problems.

So my advice would be: definitely worth a try (if you're a seasoned Linux-o-holic).

cu
-
Howdy pr9phet

if you actually have files with the suffix "iso" on your discs then you might have burned the CDs improperly. What you need to do is to open the ISO file with a burning proggie and write the content of the ISO-file to the disc. I'm not sure whether the onboard XP-burning program handles ISO files, but in case it doesn't, you might want to check out DeepBurner, which is a quite nifty freeware tool that runs under windows and also processes ISO-files (just select "Burn ISO image" when the app starts). Also keep in mind that you should burn bootable CD-images slowly (I always burn OS isos at 8x speed).

hope that helps
-
@Dapper Dan (wrote ...) "My old TI 42000 GeForce 4 ran perfectly ... I just updated the kernel to 6.2.9* from 6.2.7" Boy oh boy, how time flies! A Ti42000 under kernel 6.2.9. ) Seriously, DapperDan: The time spent here, helping us folks ceaselessly in a 24/7 manner has obviously caused a malfunction in your numeric coprocessing unit ) Lean back, relax, take a break - but do come back (to spill more tips and all that
-
Howdy egorgry

If you want all your internet apps (browser, mail, ftp-progs) under one parent-folder, I'd recommend you install firefox like this ...

1: Create the destination folder ... by opening a console and entering this command ...

Code:
user@box: mkdir -p /usr/local/internet/firefox

The "-p"-switch will create all necessary parent directories too, in case those ain't present. In our case not only the folder "firefox" would be created but also the folder "internet", in case there isn't such a folder already.

2: Install firefox ... by following the steps of the GUI-installer and specifying /usr/local/internet/firefox as target-directory.

3: Create a symbolic link to the firefox-application ... Once more we're gonna do this in the console: enter the command ...

Code:
user@box: ln -s /usr/local/internet/firefox/firefox /usr/bin/firefox

to create a symbolic link to firefox in /usr/bin. This folder should always be in the path, so this oughta make finding the firefox-executable much easier. A simple "firefox" in a console will then launch the browser. Alternatively you can also create a desktop-shortcut to /usr/bin/firefox (see the little example at the end of this post) to have the browser available via mouse click, or enter the realm of menu-editing and place a shortcut therein.

hope that helps
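The little example for the mouse-click variant: a minimal desktop-shortcut file could look like this (just a sketch; save it e.g. as "firefox.desktop" on your desktop or drop it into your menu-folder) ...

Code:
[Desktop Entry]
Type=Application
Name=Firefox
Exec=/usr/bin/firefox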
-
linux-secure not available after install
blackpage replied to linuxheadache01's topic in Linux Security
why thanx for the great info. Til today I was under the illusion Linux was some sort of OS that stringently does what I tell it to do. And if I'd tell Mandrake (which I have installed around 40 times on many, many machines without major troubles) something weird, it would respond with an equally weird result. That's the formula: you tell Linux what to do, and Linux does it, no "if"s, no "when"s, no anything - coda.

I'm saying it clearly that I'm only speaking for myself now: but if I were interested in reading vague accusations and naive whining, then I'd have the SCO site in my bookmarks, and not linuxcompatible.org. Every decent forum user here knows that Linux has flaws here and there, some minor ones, some major ones. But that's a thing that goes for all OS'. And if my recent encounters with the mindbogglingly idiotic XP-security subsystem made me act like you, I'd spend the rest of my days complaining over at ntcompatible.

If you are experiencing troubles with any hardware or software components of whatever Linux-distro, or if you are disoriented by the different paradigm that Linux carries, then you can come here and ask for help anytime - and you can bet your linux-dissatisfied a** that you will be welcomed and everybody here will also gladly help you, including myself, no matter how superficial or profound the problem might be. That has been the deal ever since I came here some months ago, and that should be the deal in the future.

p.s: and "yes", I also have Macs, and the OS is indeed sweet and just the right thing if you are into music or graphics. But guess what: OSX is based on BSD, yet another Unix-flavor. The world just isn't fair, ey?