DE FRAG

Discussion in 'Computer Corner' started by pamsdish, Nov 16, 2008.

  1. borrowers

    borrowers Gardener

    Joined:
    Jul 28, 2007
    Messages:
    2,615
    Ratings:
    +48
    Just had a look and it doesn't seem to be there anymore!? I am only up to Adobe Acrobat 7, so it seems I need to update that.

    It (my PC!) says I have 232 GB total and 199 GB free, so I suppose that's good, isn't it? That will include everything on my husband's 'side', won't it? He has lots of games etc. on his log-in part of the PC.

    I need to go back to a beginners class for all of this I think. Anyway thank you, all of you.

    cheers
     
  2. Larkshall

    Larkshall Gardener

    Joined:
    Oct 29, 2006
    Messages:
    584
    Ratings:
    +14
    Me too!
     
  3. Kristen

    Kristen Under gardener

    Joined:
    Jul 22, 2006
    Messages:
    17,534
    Gender:
    Male
    Location:
    Suffolk, UK
    Ratings:
    +12,668
    "I use Ubuntu no defrag"

    I don't see why Ubuntu would remove the need for Defrag - but I know diddly-squat about Ubuntu.

    A file grows and needs more space; if adjacent contiguous space isn't available, an "extension" is allocated on some other bit of free disk. To retrieve that file, the drive's read head then has to "jump" to fetch the extra piece. Defragging moves the two pieces so they are contiguous again, avoiding the "head jump".
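
    A toy sketch of that cost, in Python - not any real filesystem, and the block addresses are invented:

    ```python
    # Toy model of the "head jump" cost (illustrative only - not any real
    # filesystem, and the block addresses are invented).

    def head_travel(blocks):
        """Total head movement needed to read a file's blocks in order."""
        return sum(abs(b - a) for a, b in zip(blocks, blocks[1:]))

    # File A was written contiguously at blocks 100-103...
    contiguous = [100, 101, 102, 103]

    # ...then grew, but block 104 was taken, so the "extension" landed in
    # free space at block 900.
    fragmented = [100, 101, 102, 103, 900, 901]

    print(head_travel(contiguous))  # 3   - no jumps
    print(head_travel(fragmented))  # 797 - one big jump to the extension
    ```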

    Isn't that the same for all operating systems?
     
  4. Larkshall

    Larkshall Gardener

    Joined:
    Oct 29, 2006
    Messages:
    584
    Ratings:
    +14
    Your comment caused me to search my system for a "de-frag" facility; I couldn't find one. It would seem that it's not necessary. If I ever got to the stage where I thought it was necessary, I would copy all data to a USB HDD, then do a re-install, which would completely wipe the internal HDD and give a clean start. Then I would copy the files I want back to the internal HDD. I don't think this occasion will arise, as I save most data files to a USB HDD anyway. I even have a USB HDD for my laptop, which is the size of a pocket calculator and holds 160 GB. I keep several old HDDs (10 GB) for archiving data.
     
  5. Kristen

    Kristen Under gardener

    Joined:
    Jul 22, 2006
    Messages:
    17,534
    Gender:
    Male
    Location:
    Suffolk, UK
    Ratings:
    +12,668
    " I couldn't find one"

    When Windows NT came out, defrag was touted as unnecessary (and in the main I think it is unnecessary on disks using the NTFS file system).

    But on the many-multi-GB databases we use, we take the trouble to ensure that the whole database file is contiguous, and thus reduce disk read/write head travel.

    That's not to say that Unix doesn't have a far better algorithm for file placement, but my recollection from the Windows NT and NTFS days is that Microsoft head-hunted a key architect of disk storage systems of that time (from memory, a designer from DEC's PDP line or somesuch), so I reckon its roots are much the same as the Unix approach.

    So the absence of a defrag utility doesn't mean, to me, that it is not needed; but I think the average user defragging their Windows, or Unix, system is a waste of time. Windows will create a file contiguously if possible, and unless that file gets extended copiously (as a growing database file might), and possibly putting aside directory blocks (which might grow haphazardly and benefit from being contiguous), all the huge pictures-from-camera and Word-documents-with-embedded-everything won't benefit from defragging one jot - whether Unix or Windows! All IMHO of course!!
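
    For what it's worth, on Linux you can see how many pieces a file is in without any defrag tool. A minimal sketch, assuming the e2fsprogs `filefrag` utility is installed, and using a made-up filename:

    ```python
    import subprocess

    # Minimal sketch, Linux only: count how many extents (contiguous runs)
    # a file occupies, using the 'filefrag' tool from e2fsprogs. 1 extent
    # means the file is fully contiguous. The filename is made up.
    def extent_count(path):
        out = subprocess.run(["filefrag", path],
                             capture_output=True, text=True).stdout
        # filefrag prints e.g. "big_database.dat: 17 extents found"
        return int(out.rsplit(":", 1)[1].split()[0])

    if extent_count("big_database.dat") > 1:
        print("fragmented - maybe schedule a defrag window")
    ```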
     
  6. terrier

    terrier Gardener

    Joined:
    Oct 1, 2007
    Messages:
    1,519
    Ratings:
    +12
  7. Kristen

    Kristen Under gardener

    Joined:
    Jul 22, 2006
    Messages:
    17,534
    Gender:
    Male
    Location:
    Suffolk, UK
    Ratings:
    +12,668
    That leads to: http://en.wikipedia.org/wiki/Ext3#Disadvantages

    "That being said, as the Linux System Administrator Guide states, "Modern Linux filesystem(s) keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system."[15] While ext3 is more resistant to file fragmentation than the FAT filesystem, nonetheless ext3 filesystems can and do get fragmented over time.[16] Consequently the successor to the ext3 filesystem, ext4, includes a filesystem defragmentation utility and support for extents (contiguous file regions)."


    As I said earlier, I don't think it matters much for regular users, but for heavy usage on large files that are extended haphazardly it is (presumably) still an issue that system administrators have to consider.
     
  8. Dave W

    Dave W Total Gardener

    Joined:
    Feb 6, 2006
    Messages:
    6,143
    Gender:
    Male
    Occupation:
    Anything I fancy and can afford!
    Location:
    Tay Valley
    Ratings:
    +3,035
    What happens when the average punter ends up with lots of small blocks of scattered free space from deleted files? Is defragging still a waste of time?
    Is there something about NTFS or the Ubuntu FS that reduces mechanical disc head movement times? I don't understand how a disc with scattered free space can be made to accommodate large files in contiguous blocks without defragging once in a while. Maybe Ubuntu does it 'on the fly'?

    Does anyone have any data on comparative read/write times of mechanical HDDs vs solid-state media? It looks like very large capacity memory cards will be with us and affordable in the very near future.

    Edit - Looks like some of my questions were being answered even as I typed!
     
  9. Larkshall

    Larkshall Gardener

    Joined:
    Oct 29, 2006
    Messages:
    584
    Ratings:
    +14
    Very good, that does explain why the ext3 FS is less prone to fragmentation. I avoid saving data to the ext3 partition by saving most data to a USB VFAT (FAT32) HDD. If fragmentation ever gets to be a problem, the VFAT disk can be de-fragged on a Windows system. All files on that disk can be read and written in either OS.
     
  10. Kristen

    Kristen Under gardener

    Joined:
    Jul 22, 2006
    Messages:
    17,534
    Gender:
    Male
    Location:
    Suffolk, UK
    Ratings:
    +12,668
    Interesting question.

    With Windows NT I was brought up to believe that you should never fill the disk more than 80% (or somesuch - too old now to remember exactly!). The 20% of "elbow room" enabled more intelligent allocation of files.

    I think the same is probably true of Mr and Mrs Average's home computer. If they don't fill the disk, then deleting small files and creating new ones still leaves sufficient elbow room that contiguous space is always available (well, nearly always, YKWIM). The biggest threat is a big file which grows all the time, and I don't think those sorts of files exist much on the average home PC (and if they do, they are probably not time-critical to retrieve).

    "Is there something about NTFS or the Ubuntu FS, that reduces mechanical disc head movement times"

    Dunno enough to speculate. I remember when "elevator seeking" came in, though ... the operating system is reading File A at disk addresses 1, 1000, 2000 and 3000 (say), and just after reading the block at location 1000 the O/S gets another request to read something from address 1500 - so it grabs that on the way to fetching the next block at location 2000. I thought that was pretty trendy at the time!
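
    A minimal sketch of that ordering trick in Python - just the request-ordering logic, with the same addresses as the example above:

    ```python
    # Sketch of "elevator" ordering: service pending block requests in
    # address order along the head's direction of travel, so a new request
    # can be picked up "on the way".
    def elevator_order(head, pending):
        ahead = sorted(r for r in pending if r >= head)           # on the way up
        behind = sorted((r for r in pending if r < head), reverse=True)
        return ahead + behind                                     # then sweep back

    # Head has just read block 1000; blocks 2000 and 3000 are queued, and a
    # new request for 1500 arrives:
    print(elevator_order(1000, [2000, 3000, 1500]))  # [1500, 2000, 3000]
    ```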

    Computers have way more memory than they used to, so lots of disk is cached in memory - avoiding having to re-read it at all. That helps.

    "Maybe Ubuntu does it 'on the fly'?"

    Indeed, entirely possible. Although I'm not sure I like the sound of that! There are two issues I perceive:

    1) For a large database we check what the fragmentation is. If we think making the whole database contiguous will improve performance, we schedule downtime for it. Shifting 100 GB of fragments around can take tens of minutes. Having that happen at a time not of my choosing could lead to timeouts for the users of the database.

    2) When I commit a database transaction I absolutely, 100%, must be able to be certain that the data is stored on the disk. A power failure immediately after that must not be able to cause my data to go missing! (Assume I'm processing a cheque and have just moved the money from one account to another. The first account must be debited and the second credited, not one-or-the-other (sadly!))

    So any behind-the-scenes background defragmentation would have to honour that requirement too - I reckon that would make for some pretty tricky programming!
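
    To make requirement (2) concrete: a minimal Python sketch of a durable write. The filename and record are invented, and a real database would layer journalling on top:

    ```python
    import os

    # Sketch of a durable "commit": the write isn't done until it has been
    # forced out of the OS caches onto the disk itself. A real database
    # layers journalling/write-ahead logging on top of this; the filename
    # and record here are invented.
    def durable_append(path, record):
        with open(path, "ab") as f:
            f.write(record)
            f.flush()              # push Python's buffer to the OS
            os.fsync(f.fileno())   # force the OS to put it on the disk
        # only now is it safe to report the transaction as committed

    durable_append("ledger.dat", b"debit A 100; credit B 100\n")
    ```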
     
  11. Larkshall

    Larkshall Gardener

    Joined:
    Oct 29, 2006
    Messages:
    584
    Ratings:
    +14
    Information input into a computer via the keyboard goes into the keyboard queue. The "interrupt" handler then pulls it out in "packages" and sends it to where it is to go. The interrupt works far faster than anyone can use a keyboard, even on an 8-bit processor, so in between times the computer does other, less important things if they are waiting. A lot of computer information is moved by "packet switching" - amateur radio enthusiasts will know what this is. The Internet works in the same way, and packets can go by various routes to get to their destination, not all continuously by the same route. The digital telephone system works slightly differently: when you dial a number, the controlling exchange "hunts" for a vacant line, so although you dial a Brighton number from London, if the first vacant line is via Birmingham it will use that. Yet it is done in a fraction of a second.
     
  12. Kristen

    Kristen Under gardener

    Joined:
    Jul 22, 2006
    Messages:
    17,534
    Gender:
    Male
    Location:
    Suffolk, UK
    Ratings:
    +12,668
    Indeed, the computer can detect "idle time" and set about doing something useful - finding aliens for SETI, or looking for cancer cures, even.

    Trouble is, if it embarks on a defrag of 100 GB (I choose that size on the basis that the task might take 10 minutes or more), and then there is a call for heavy demand on that file (let's take my database example, where it is mission-critical that changes are committed to disk in a power-fail-recoverable manner), then the in-progress defrag is going to get in the way of that!

    Hmmm ... so I have a big file, and I want a bit more space on the end. But there are 10 spreadsheet files and a dozen word-processing files there. Let's move them out of the way! Chances are they aren't being used (the O/S will know that), and then we can avoid fragmentation.

    I could flag files I want to be able to grow contiguously, so that the idle time is used to move stuff out of the way. The French government bought the land where the Channel Tunnel terminal was going to be whenever it came up for sale - so by the time the two nations had finished doing their entente cordiale and were ready to start work, they must have already owned most of it. "Fragmentation avoidance" - there you go! Every computer should have one! Form an orderly queue please ... ;)
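
    That "grow contiguously" flag more or less exists: you can ask the filesystem to reserve a file's space up front. A minimal sketch, assuming a Unix-like system (the filename and size are made up):

    ```python
    import os

    # Sketch of "fragmentation avoidance": reserve a file's space up front,
    # giving the filesystem its best chance to allocate one contiguous run
    # instead of scattering extensions as the file grows. posix_fallocate
    # is Unix-only; the fallback merely sets the file's size. Filename and
    # size are made up.
    def reserve_space(path, size_bytes):
        with open(path, "wb") as f:
            if hasattr(os, "posix_fallocate"):
                os.posix_fallocate(f.fileno(), 0, size_bytes)
            else:
                f.seek(size_bytes - 1)
                f.write(b"\0")

    reserve_space("growing_database.dat", 100 * 1024**3)  # claim 100 GB now
    ```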
     