Defragmentation

Hi all,
What is defragmentation in Essbase, and what are the optimization techniques for dealing with it?
Thanks & regards

http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_dbag/daprcset.htm#daprcset1015335
John
http://john-goodwin.blogspot.com/
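A quick way to check whether a BSO cube is actually fragmented is its average clustering ratio (1.00 means no fragmentation). A minimal MaxL sketch you can run through essmsh - the login and app.db name below are placeholders, so adjust for your environment:
/* placeholder credentials and app.db name */
login admin password on localhost;
/* the data_block statistics include the average clustering ratio */
query database Sample.Basic get dbstats data_block;
logout;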

Similar Messages

  • Is DSC Citadel compatible with the Windows 7 disk defragmentation utility?

    Is DSC Citadel compatible with the Windows 7 disk defragmentation utility? I'm seeing gaps in trace data, and noticed Win7's defrag utility was scheduled to run weekly (default setting for Win7). I didn't see such gaps when running DSC 8.6 on XP.
    - Andrew

    Hello Andrew Johnson,
    Based on your description, the Windows 7 disk defragmentation utility may be limiting write operations to the Citadel database. Does your current setup defragment part of your hard drive on a weekly basis? To gather more information, you could turn off this utility and check whether the gaps in trace data still occur.
    Paul-B
    Applications Engineer
    National Instruments

  • Reorganization of the database (defragmentation of tables).

    Hi All ,
    Can anyone tell me approximately how much time is required for a reorganization of the database (defragmentation of tables)?
    The DB size is 300 GB. I think there is a package called DBMS_REDEFINITION for this job; correct me if I am wrong.
    OS version: AIX 5L. Database: 11g.
    Our client wants to carry out this task over a weekend and needs an approximate time window for it.
    Thanks in advance

    You're on 11g, so you're presumably using locally managed tablespaces (LMTs). If so, that makes it practically impossible to have "fragmentation" in tables, at least for any practical definition of "fragmentation". So my first response would be 0 time, the tables don't need to be defragmented.
    If you plan on using the DBMS_REDEFINITION package, be aware that you'll need enough space for another copy of each table (i.e. potentially another 300 GB of disk space if you plan on running in parallel).
    Justin
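    If you do decide to go the DBMS_REDEFINITION route anyway, the call sequence looks roughly like the sketch below, run from SQL*Plus as a suitably privileged user. It is an outline only, not a tested script; the schema and table names are made up, and the interim table must be created first. As noted above, plan for roughly a full extra copy of the table's space while the redefinition runs.
    -- hypothetical names; the interim table must exist before starting
    CREATE TABLE scott.big_table_interim AS SELECT * FROM scott.big_table WHERE 1 = 0;
    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TABLE');
      DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
      -- copy indexes, triggers, constraints and grants onto the interim table
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INTERIM',
                                              DBMS_REDEFINITION.CONS_ORIG_PARAMS,
                                              TRUE, TRUE, TRUE, FALSE, l_errors);
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
    END;
    /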

  • Online Defragmentation Not completing

    I am running Exchange 2007 and have noticed that online defragmentation is not completing during the default allocated time of 1-5 am. My current database is around 50 GB, and for the past couple of weeks it has always stopped at 4:59:59 and stated that it will resume during the next maintenance window (event ID 704). I also receive event ID 9871 at the same time, stating that there is an online maintenance overlap, but the only other database I have is the public folder database, which is very small and always completes.
    I can see from my SAN view that the IOPS are consistent during this maintenance window and then drop at 4:59. Do I need to extend the maintenance window? Thanks.

    I would.  What I usually recommend is to schedule maintenance during the week for times outside business hours when backups aren't running, and then schedule it for the full stretch from Friday night to Monday morning.  That ought to be long enough for it to complete at least once a week.
    Ed Crowley MVP "There are seldom good technological solutions to behavioral problems."

  • Database defragmentation or creating a new database?

    Hi,
    We want to reclaim the whitespace in our Exchange database, so do you recommend creating a second database and moving the mailboxes over there, or shall we go with offline defragmentation?
    Database size: 400GB.
    Mailbox count: 300.
    Thoughts and recommendations?

    How much free space is in your database?  If it's under 20%, then I'd stay as you are.  If it's between 20% and 40%, it becomes a judgment call.  If it's over 40%, then I'd create a new database and move the mailboxes - primarily because this will allow your users to keep access unless their specific mailbox is being moved (and even then, access is available for most of the move anyway), and you will both remove whitespace and eliminate any bad items you may have in the database.  Just keep in mind that when you move the mailboxes, any recoverable mailboxes will remain on the original database unless you re-associate them with accounts so you can move them.  (And if you re-associate them with accounts and move them, their "deleted on" date counter will be reset to the date you move them, if you delete them again.)
    Taking the database offline so you can compact it will leave your users without email access for the duration of the process.  And if you have a DAG and multiple database copies, you will need to reseed each of them after the process is complete.
    If you create a new database, with associated copies on other servers, the move mailbox process will automatically reseed the database.
    Ever since Exchange 2000, when multiple databases became available on a single server, I have been leery of running an offline defrag.  And since Exchange 2010 and its move requests, I am adamantly against doing one - we reclaimed nearly 24 TB across our 24 databases (each was nearly 2 TB, and each is now under 1 TB), all using the process I suggest above.

  • Question about "fast fsck", ext4 and defragmentation status [SOLVED]

    I'm trying to use fsck to do a de facto fragmentation check of an ext4 partition. I'm running fsck from a live CD (SysRescue 1.15) to check one of my ext4 partitions. The ext4 partition is unmounted, of course.
    The check goes amazingly fast, but it doesn't give me any info about the percentage of non-contiguous inodes, which I understand to be the same as the percentage of fragmentation (true?). I'm thinking this is because of the new "fast fsck" feature of ext4, as detailed below.
    My question: can I force a "slow fsck" in order to get a complete check, including the inode-contiguity info? Or is there another way to get at the fragmentation status using fsck?
    Thanks.
    FWIW, here's the info on "fast fsck" from the excellent http://kernelnewbies.org/Ext4 page:
    2.7. Fast fsck
    Fsck is a very slow operation, especially the first step: checking all the inodes in the file system. In Ext4, at the end of each group's inode table will be stored a list of unused inodes (with a checksum, for safety), so fsck will not check those inodes. The result is that total fsck time improves from 2 to 20 times, depending on the number of used inodes (http://kerneltrap.org/Linux/Improving_f … ds_in_Ext4). It must be noticed that it's fsck, and not Ext4, who will build the list of unused inodes. This means that you must run fsck to get the list of unused inodes built, and only the next fsck run will be faster (you need to pass a fsck in order to convert a Ext3 filesystem to Ext4 anyway). There's also a feature that takes part in this fsck speed up - "flexible block groups" - that also speeds up filesystem operations.
    Last edited by dhave (2009-02-17 22:09:49)

    Ranguvar wrote:
    Woot! http://fly.isti.cnr.it/cgi-bin/dwww/usr … z?type=man
    fsck.ext4 -E fragcheck /dev/foo
    Thanks, Ranguvar. I had read the man page for fsck.ext3, but I hadn't run across the page for fsck.ext4. The link was helpful.
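    For completeness, a typical invocation is sketched below (the device name is just an example, and the filesystem must be unmounted). A plain forced check prints the "% non-contiguous" figure at the end, and -E fragcheck additionally lists the discontiguous files:
    # example device only; -f forces a full check, -n keeps it read-only
    umount /dev/sdb1
    fsck.ext4 -f -n /dev/sdb1
    fsck.ext4 -f -n -E fragcheck /dev/sdb1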

  • Defragmentation in 10g

    hi friends,
    Database: 10.2.0.3
    OS: windows 2003 server
    I have a tablespace (locally managed, with segment space management auto) which is set as the default for a user.
    A few days back data was loaded into the tables, for which I increased the datafile size.
    Yesterday, data was purged from the tables residing on this tablespace. Because of this activity, fragmentation will occur in this tablespace.
    Do we need to perform defragmentation on this tablespace (using expdp & impdp) to reclaim space, or is there another way in 10g?
    thanks in advance.

    Oracle Devotee wrote:
    Hi Satish, I do agree with your point, but we need to reclaim the space back to disk again; that space needs to be used by the archive destination (we have less disk space now).
    So, do you anticipate that this table (or any other object) will never again need that space? Not much point in jumping through all the hoops necessary to free it at the OS level, just to have to come back and re-allocate it next week.
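    If, knowing that, you still need to hand the space back for the archive destination, a 10g approach is an online segment shrink followed by a datafile resize. The sketch below uses made-up object and file names, and the resize only succeeds if no extents sit beyond the new size:
    -- hypothetical names; SHRINK SPACE needs an ASSM tablespace, which this one is
    ALTER TABLE app_user.big_table ENABLE ROW MOVEMENT;
    ALTER TABLE app_user.big_table SHRINK SPACE;
    -- only the datafile resize actually returns space to the OS
    ALTER DATABASE DATAFILE 'D:\ORADATA\ORCL\USERS01.DBF' RESIZE 2G;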

  • Defragmentation at OS level

    Hi,
    we use Oracle 8i on Windows NT 4.5. If I defragment the HDD at the OS level, will there be any effect on the database? Please advise.

    This is one of those operations that almost certainly has no upside and could certainly have a major downside.
    First off, I would find it unlikely that Oracle database files are highly fragmented at the OS level. Generally, database files are allocated in chunks of hundreds of MB or GB-- fragmentation generally happens when you have lots of operations which cause files to grow and shrink, which is not what you see with Oracle.
    In theory, if the database is shut down while the OS defrag utility is run, you should not damage your database. Realistically, though, given the negligible benefit, the potential cost is way, way too high.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Calc time vs. defragmentation

    I have a database with an average cluster ratio of .44. If I export and reload my data, it goes to 1.0, but as soon as I calc it goes back to .44. Under my current settings this calc takes a mere 5.7 seconds to run, and retrieval time is fine. In an effort to improve the cluster ratio, I played with my dense/sparse settings, changed my time dimension to sparse, and was able to get a .995 cluster ratio after calculation; the problem is that the calc script now runs for 127 seconds, which is 22x longer.
    I know that either calc time is minimal by Essbase standards, but I'm still curious which way is "optimal". I would think it is always best to take the enhanced performance over the academic issue of cluster ratio, but I'm concerned about the point at which this becomes more than an academic question. How important is the cluster ratio, and what implications are there for having a database that is more fragmented? Are there other things besides calc and retrieval time, which maybe I'm not seeing on the surface, that I should be concerned with? Since defragmentation should improve performance, is it worth sacrificing some calc performance for less fragmentation? Of course, as this database grows, this will become more of an issue.
    Any input, thoughts and comments would be appreciated.

    Just my humble opinion: everybody's data has a different natural sparsity, and rather than thinking in terms of "fragmentation", think in terms of the nature of your data. If you made EVERY dimension sparse except for Accounts, and had only one member in Accounts, your database would consist solely of single-cell data blocks that are 100% populated - as dense as you can get. The trade-off is that you would have a HUGE number of these small, highly compact data blocks, and your calc times would be enormous. As a general rule, you can take each of your densest dimensions in turn and make them dense in the outline until your data blocks approach 80k in size. The trade-off is that not all cells in each data block will be populated, but you'll have fewer data blocks and your calcs will zoom. Your goal is not simply to minimize the number of data blocks, or to minimize the data block size, or to maximize the block density. Your goal is to reach a compromise position that maximizes the utility of the database.
    A good approach is to hit a nice compromise spot in terms of sparse/dense settings and then begin optimizing your calcs and considering converting highly sparse stored dimensions to attributes and such. These changes can make a tremendous impact on calc time. We just dropped the calc time on one database from 14 hours to 45 minutes and didn't even touch the dense/sparse settings.
    -dan
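    As a rough sanity check on the "approach 80k" guideline (the member counts below are invented): a BSO block holds one 8-byte cell for every combination of stored members across the dense dimensions, so:
    # hypothetical dense-dimension member counts, 8 bytes per stored cell
    echo $(( 40 * 13 * 2 * 8 ))    # 40 Accounts x 13 Periods x 2 Scenarios -> 8320 bytes (~8 KB)
    echo $(( 80 * 1024 / 8 ))      # an ~80 KB block corresponds to about 10240 stored dense cells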

  • Defragmentation of /var/lib/pacman/ (reiserfs)

    On my laptop I have run Arch since 0.4, and I now have over 1200 packages installed. I used ReiserFS and have now realized that it accumulated a certain amount of fragmentation over time and is accessing /var/lib/pacman/ very slowly (that's why pacman sometimes takes very long to do anything) ... on top of this, the HDD in this laptop is not the fastest ...
    Now, just for fun, I tried something I didn't really believe would help, but it helped a lot (pacman is now much faster) - what I did:
    [root@Asteraceae lib]# pwd
    /var/lib
    [root@Asteraceae lib]# cp -r pacman/ pacman_cpy/
    [root@Asteraceae lib]# rm -r pacman
    [root@Asteraceae lib]# mv pacman_cpy/ pacman/
    This consolidated the big collection of small files scattered across my HDD into one place, and now pacman is much faster.

    apeiro wrote:
    Wow, that's a good (and simple!) idea, Damir.
    It might be a good thing to include as a pacman option itself.
    pacman --dbmaint or some such.
    Thank you. This really helped with the speed of pacman, but I feel it is more a workaround (a very effective one!) than the solution. However, as it really solves the problem, I can live with this workaround ;-)
    Yet another idea towards a solution is to use a tarball to store everything and a ramdrive to work with these files: extract everything from the tarball to the ramdrive and mount it at /var/lib/pacman at boot, then copy everything back to the tarball whenever something is updated (e.g. when pacman is run). This would keep it simple but reduce the accumulation of fragmentation to a minimum (on the HDD, the whole db would be a single file that is only used as a mirror of the ramdrive).
    I don't mind including such a workaround in pacman, but other people will say that this is not a good solution (one of my colleagues at the uni already laughed at me for using such "primitive" methods against fragmentation, but I told him that it worked great and is very effective, and I don't care whether it is "primitive" or not as long as it does the job). There will also be others who claim that this is unnecessary, since everybody can do the cp -r, rm and mv by hand.
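    For anyone repeating the trick, a slightly safer variant of the same copy-and-replace (run only while pacman itself is not running) preserves ownership, permissions and timestamps with cp -a:
    # same rewrite-in-place idea as above, but keeping file attributes intact
    cd /var/lib
    cp -a pacman pacman_cpy
    rm -rf pacman
    mv pacman_cpy pacman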

  • Table Defragmentation

    Hi ,
    Recently we performed table defragmentation on a 9i Oracle database on the Linux platform. We defragmented around 20 tables; the size of 10 tables decreased, 5 tables were not affected, and to my surprise 5 tables grew by 8 MB each. Defragmentation should not increase the size. We even queried dba_tab_modifications and, to my surprise, there were no inserts, updates or deletes on those tables, so I am just wondering how the table size increased.
    Please, Oracle gurus, give your thoughts and inputs on this.
    Thanks & Regards ,
    Edited by: 967462 on 24 Oct, 2012 11:08 AM

    John and Jonathan have covered it pretty well (read Jonathan's article then read it again-- very good).
    What you did was, basically, to attempt to get every block in every table full to roughly the PCTFREE setting of the table (glossing over the fact that you likely have a few partially filled blocks). If your starting point was a table that had lots of mostly empty blocks (that would have been filled up by subsequent INSERT statements), the size of the table will go down. If your starting point was a table that had lots of mostly full blocks (i.e. blocks where there was less free space than the PCTFREE setting), the size of the table will go up.
    If your PCTFREE is set correctly and unless you have some pathological patterns in your data flows-- for example, you're only ever doing direct-path loads so empty blocks are never reused-- or unless you need to permanently reclaim some space following a large, one-time DELETE, it should be very rare that there should be any reason to do an ALTER TABLE MOVE for "defragmentation". Tables don't get fragmented so there is nothing to defragment. You might do a lot of work to reuse space today that was going to get reused organically in the very near future but that's generally the best case.
    Justin
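    For reference, the move-based "defragmentation" being discussed looks like the sketch below (object names are made up). Note that a MOVE leaves the table's indexes UNUSABLE, so they have to be rebuilt afterwards, which is part of why it is rarely worth doing:
    -- hypothetical names; run from SQL*Plus
    ALTER TABLE app_user.orders MOVE;
    -- a MOVE changes rowids, so every index on the table goes UNUSABLE
    SELECT index_name FROM dba_indexes
    WHERE  table_owner = 'APP_USER' AND table_name = 'ORDERS' AND status = 'UNUSABLE';
    ALTER INDEX app_user.orders_pk REBUILD;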

  • Decimals mismatch after all data defragmentation

    Hi,
    We did a defragmentation of Essbase cubes by exporting, clearing and importing all data. There is a difference in the decimals. Just to avoid re-aggregating the cubes, we did a full data export rather than going for a level 0 export and aggregation. Is there a way to avoid this difference in decimals?
    Before defragmentation - OTHER EXPEN     -98177.2799999957
    After defragmentation - OTHER EXPEN     -98177.2799999976
    Regards,
    Ragav.

    Just to share my experience: I have also encountered a similar issue where the rounding is different when I change the outline order. There is no dynamic calc, simply aggregation; some members are tagged +, some -.
    The rounding difference is noticeable when the numbers are very big, e.g. in the billions, and the difference is from the 5th decimal onwards. The rounding difference is insignificant compared to the amount itself, so it is not causing an issue.
    I also notice similar behaviour in Excel when I sum very large amounts. There is a rounding difference. So I guess this has something to do with how binary floating point works.
    Read the following article:
    http://support.microsoft.com/kb/78113
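    The same IEEE-754 effect the article describes is easy to reproduce on the command line: with double precision, the result of a sum depends on the order in which the terms are added, which is exactly what changing the outline (and therefore the aggregation order) does:
    awk 'BEGIN { printf "%.20f\n", 0.1 + 0.2 }'
    # prints 0.30000000000000004441
    awk 'BEGIN { printf "%.17g  %.17g\n", (0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3) }'
    # prints 0.60000000000000009  0.59999999999999998 - same terms, different order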

  • Cluster ratio & Defragmentation

    Scenario:
    Before: cluster ratio = 1, index cache = 300 MB
    After: after running a calc script (CALC DIM on the maximum number of dimensions is used), the cluster ratio is 0.68 and the index cache has increased.
    I am running the calc script as part of a test and I am not loading any data. The data is already aggregated to the higher levels, and I am still running the calc script for aggregation.
    According to the DBAG:
    Fragmentation is unused disk space. Fragmentation is created when Essbase
    writes a data block to a new location on disk and leaves unused space in the former location of the data block.
    Block size increases because data from a data load or calculation is appended to the blocks; the blocks must therefore be written to the end of a data file.
    What does the statement "data from a calculation is appended to the blocks" mean, given that my higher-level blocks are already present?
    Does this mean that when I run a calculation my block size increases?
    If yes, why does it increase?
    If yes, why is my cluster ratio still at 0.68 the next time I run the calc script, and why doesn't it decrease further?
    NOTE: After fragmentation, all blocks are marked as dirty. Does this have anything to do with the decrease in the cluster ratio after running the calc script (all blocks are marked as clean)? I run the calc script again (blocks are marked as clean) but it still calculates all the blocks because Intelligent Calculation is off. After this calc as well, does it append the calculated values to the blocks?
    What is the logic behind this?
    I am not using Intelligent Calculation, but my SET CLEARUPDATESTATUS is AFTER.

    Yes, defragmentation (restructuring) does throw away old (free for reuse) blocks.
    You must differentiate a little bit. dirty blocks != clean blocks != blocks available for reuse.
    Have a look at the documentation for restructuring; that's when Essbase runs through all the blocks, tries to align them back into the best order for retrieval, and throws away the blocks available for reuse. That's also the reason Essbase should have at least twice as much free space on the drive as your cube's size.
    1 - I keep on calculating again and again, so new blocks get added again and again, even though my blocks are clean, increasing my data file?
    Yes - for Intelligent Calculation turned off AND certain conditions which influence the stepping of a calculation (committed access mode, parallel calc, cache size changes between calcs, ...).
    (or)
    2 - Is it that my blocks are clean, so it doesn't add any new blocks for my further calculation?
    Again yes :-) - for Intelligent Calculation turned on AND the conditions which influence the stepping of the calculation NOT being present.
    If the block is relocated, the former position becomes free for reuse
    Does this mean the block holds both the old and the new values, which increases the size of the block, so it needs to be relocated because the existing space is not sufficient for it?
    Quite so, but no single block holds both new and old data; each block holds its own data. A new block is created for the new data, and the old block remains untouched apart from being flagged as available for reuse.
    The flagging for reuse does not mean the space is really lost, as it could be reused by a block which fits into it. But I do not know whether reuse needs an exact match or is a less-than-or-equal comparison. In the latter case (which I would guess is the one used), small gaps would still be present because not all of the space for the block would be used up.
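    If the goal is simply to get rid of the blocks flagged for reuse without an export and reload, an explicit restructure does that. A minimal MaxL sketch, with a placeholder login and app.db name, run through essmsh:
    /* placeholder credentials and app.db; a forced restructure rewrites the
       .pag/.ind files and discards the blocks marked free for reuse */
    login admin password on localhost;
    alter database Sample.Basic force restructure;
    logout;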

  • Hyperion User Access and Defragmentation

    1. How do I create a new user in Shared Services and assign groups, and what is the process after that in Hyperion Planning 11.1.2.1? Please send me screenshots if you have any.
    2. What are the steps of the defragmentation process in Hyperion? I exported the data, cleared the data and then imported the data; what do I do after that for the aggregation? Please help me.
    Thanks & Regards

    Hi
    Defragmentation in Essbase includes the steps below (a rough MaxL sketch follows after the lists):
    Step 1: Export the level-0 data
    Step 2: Clear the cube
    Step 3: Load the data
    Step 4: Run a CALC ALL to aggregate the data
    Or else you can do the following steps if you have a smaller volume of data:
    Step 1: Export all-level data
    Step 2: Clear the cube/database
    Step 3: Load the exported all-level data
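    A rough MaxL sketch of the first procedure, run through essmsh. The login, app.db name and file name are placeholders, and the exact import grammar should be checked against the MaxL reference for your release:
    /* placeholders throughout; the default calc is CALC ALL unless it has been changed */
    login admin password on localhost;
    export database Sample.Basic level0 data to data_file 'lev0.txt';
    alter database Sample.Basic reset data;
    import database Sample.Basic data from server text data_file 'lev0.txt' on error abort;
    execute calculation default on database Sample.Basic;
    logout;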
    Hope it helps
    Thanks
    Ramya

  • [SOLVED] Measuring defragmentation on ext4

    Other than unmounting a filesystem and running fsck, is there a way to get the percentage of fragmentation on an ext4 filesystem?
    I'm still running Arch kernel 2.6.28, so I'm not using e4defrag yet.
    Thanks.
    Last edited by dhave (2009-02-17 21:50:06)

    http://bbs.archlinux.org/viewtopic.php?id=65647 :?
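    If unmounting is the sticking point, filefrag from e2fsprogs works on a mounted filesystem and reports how many extents an individual file occupies, though it is per-file rather than a whole-filesystem percentage. The path below is just an example:
    # per-file fragmentation report on a mounted ext4 filesystem; may need root
    filefrag -v /home/user/large.file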

  • Defragmentation and Optimization  OS X Tiger

    Hi, which of the following utilities does the best job of
    1) hard drive defragmentation and
    2) hard drive optimization?
    A) iDefrag or
    B) TechTool Pro 4

    There is no good reason to do what you want to do, because OS X does not need to be defragged like a Windows OS. Those "utilities" are unnecessary and depend on newcomers for a customer base. TechTool can be useful for disk repair and salvage, but only if something is seriously wrong with the hardware.
    I don't know what you mean by optimization. Possibly you mean using a separate partition for virtual memory files; if so, you would be better off investing the money in more RAM. You can keep track of memory swapfiles by watching the /private/var/vm folder over the course of several days (see below). A good way to optimize is to run your OS in one partition and users in another, which is what I do.
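    A simple way to do the swapfile-watching suggested above is to list that directory from Terminal every so often and note how many swapfiles there are and how big they get:
    # count and size of the VM swapfiles Mac OS X has created
    ls -lh /private/var/vm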
