DBMS_STATS fails on large partitioned tables.

Hi,
I need to run DBMS_STATS on a partitioned table with partitions as large as 300 GB, and it fails. Please help me work out a strategy to collect stats.
Thanks,
Kp

"it fails" and "there no errors and just doesnt complete and have to stop" are not the same thing.
Does it fail ?
OR
Does it take long to run (and you haven't waited for it to complete) ?
With very large Partitioned Tables you need to be sure of the GRANULARITY, DEGREE, METHOD_OPT and CASCADE options of the Gather_Stats that you need to / want to run.
I think those questions are enough; the question is whether you can answer them. We have no idea of the options you use, the type and number of partitions, whether all the partitions have changing data, how many indexes exist, whether the indexes are Global or Local, how many PQ operators are available and in use, etc.
So, we cannot answer your question.
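For reference, a partition-level gather on a table like this might look something like the sketch below (schema, table and partition names are placeholders; DEGREE and METHOD_OPT need to be tuned to your environment):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'MYSCHEMA',             -- placeholder schema
    tabname     => 'MY_BIG_TABLE',         -- placeholder table
    partname    => 'P_2012_10',            -- gather one partition at a time
    granularity => 'PARTITION',            -- partition-level stats only, no global recompute
    degree      => 8,                      -- parallel degree, if enough PQ slaves are available
    method_opt  => 'FOR ALL COLUMNS SIZE AUTO',
    cascade     => TRUE);                  -- also gather stats on the (local) index partitions
END;
/
Run per partition, that at least bounds how much work each call does and lets you see where the time goes.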
Hemant K Chitale
http://hemantoracledba.blogspot.com

Similar Messages

  • Deadlock made by another user! (partition table)

    I have a question about a deadlock!
    Our situation is as follows:
    User "A" made a partitioned table, ACNT_WONJANG
    (without any trigger, function, or procedure).
    When "B" - another user - tried to drop one of its partitions,
    a deadlock was raised,
    but "A" dropped its own partition without problems.
    What can I do?
    This is the trace file:
    /oracle/home/admin/ACNT/udump/ora_44478_acnt.trc
    Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
    With the Partitioning option
    JServer Release 8.1.7.0.0 - Production
    ORACLE_HOME = /oracle/home
    System name: AIX
    Node name: acnt
    Release: 3
    Version: 4
    Machine: 000C962D4C00
    Instance name: ACNT
    Redo thread mounted by this instance: 1
    Oracle process number: 15
    Unix process pid: 44478, image: oracle@acnt (TNS V1-V3)
    *** SESSION ID:(16.394) 2001-10-04 15:00:41.829
    A self-deadlock among DDL and parse locks
    is detected. In most cases, this self-deadlock
    is handled internally.
    This should be reported to Oracle Support
    ONLY IF an error is signalled back to the
    user on a command-line or screen.
    The following information may aid in finding
    the problem.
    ORA-04020: deadlock detected while trying to lock object
    F03P.ACNT_WONJANG
    session: 440786b4 request: X
    LIBRARY OBJECT HANDLE: handle=43108348
    name=F03P.ACNT_WONJANG
    hash=76b93583 timestamp=NULL
    namespace=TABL/PRCD/TYPE flags=KGHP/TIM/SML/[02000000]
    kkkk-dddd-llll=0000-0001-0001 lock=S pin=S latch=0
    lwt=43108360[43108360,43108360] ltm=43108368[43108368,43108368]
    pwt=43108378[43108378,43108378] ptm=431083d0[431083d0,431083d0]
    ref=43108350[43108350,43108350] lnd=431083dc[4310824c,425b7ec4]
    LIBRARY OBJECT: object=431080d0
    flags=NEX[0002] pflags= [00] status=VALD load=0
    DATA BLOCKS:
    data# heap pointer status pins change
    0 431082d8 43108154 I/P/A 0 NONE
    HEAP DUMP OF DATA BLOCK 0:
    HEAP DUMP heap name="library cache" desc=0x431082d8
    extent sz=0x224 alt=32767 het=8 rec=9 flg=2 opc=0
    parent=30000030 owner=431080d0 nex=0 xsz=0x0
    EXTENT 0
    Chunk 431080c0 sz= 196 perm "perm "
    alo=196
    431080C0 500000C5 00000000 00000000 000000C4 [P...............]
    431080D0 43108348 431080D4 431080D4 431080DC [C..HC...C...C...]
    431080E0 431080DC 00000000 00000000 00020100 [C...............]
    431080F0 00000000 00000000 00000000 00000000 [................]
    43108100 43108144 00000000 00000000 00000000 [C..D............]
    43108110 00000000 00000000 00000000 00000000 [................]
    Repeat 2 times
    43108140 00000000 431082D8 00000000 43108154 [....C.......C..T]
    43108150 00000000 00000000 00000000 00000000 [................]
    Repeat 1 times
    43108170 00000000 00000000 00000019 00000000 [................]
    43108180 00000000 [....]
    Total heap size = 196
    FREE LISTS:
    Bucket 0 size=0
    Total free space = 0
    UNPINNED RECREATABLE CHUNKS (lru first):
    PERMANENT CHUNKS:
    Chunk 431080c0 sz= 196 perm "perm "
    alo=196
    Permanent space = 196


  • Creating index on large partitioned table

    Is anyone aware of a method for telling how far along the creation of an index on a large partitioned table is? The statement I am executing is like this:
    CREATE INDEX "owner"."new_index"
    ON "owner"."mytable"(col_1, col_2, col_3, col_4)
    PARALLEL 8 NOLOGGING ONLINE LOCAL;
    This is a two-node RAC system on Windows 2003 x64, using ASM. There are more than 500,000,000 rows in the table, and I'd estimate that each row is about 600-1000 bytes in size.
    Thank you.

    you can track the progress from v$session_longops:
    select substr(SID || ',' || SERIAL#, 1, 8)                 "sid,srl#",
           substr(OPNAME || '>' || TARGET, 1, 50)              op_target,
           substr(trunc(SOFAR / TOTALWORK * 100) || '%', 1, 5) progress,
           TIME_REMAINING                                      rem,
           ELAPSED_SECONDS                                     elapsed
    from   v$session_longops
    where  SOFAR != TOTALWORK
    order  by sid;
    hth

  • Select count from large fact tables with bitmap indexes on them

    Hi..
    I have several large fact tables with bitmap indexes on them, and when I do a SELECT COUNT(*) from these tables I get a different result than when I do a SELECT COUNT(*), column_one FROM the table GROUP BY column_one. I don't have any NULL values in these columns. Is there a patch or a one-off that can rectify this?
    Thx

    You may have corruption in the index if the queries ...
    Select /*+ full(t) */ count(*) from my_table t
    ... and ...
    Select /*+ index_combine(t my_index) */ count(*) from my_table t;
    ... give different results.
    Look at metalink for patches, and in the meantime drop-and-recreate the indexes or make them unusable then rebuild them.
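    A rough sketch of the rebuild route (the index and table names here are just placeholders):
    -- mark the suspect bitmap index unusable, then rebuild it
    ALTER INDEX my_bitmap_index UNUSABLE;
    ALTER INDEX my_bitmap_index REBUILD;
    -- or drop and recreate it outright
    DROP INDEX my_bitmap_index;
    CREATE BITMAP INDEX my_bitmap_index ON my_fact_table (column_one);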

  • How to set dbms_stats parameters for a single table

    Hi,
    I see that dbms_stats has the following procedure:
    PROCEDURE SET_PARAM
    Argument Name Type In/Out Default?
    PNAME VARCHAR2 IN
    PVAL VARCHAR2 IN
    Is there a way to change the parameters only for a single table?
    I need to set METHOD_OPT=>'FOR ALL COLUMNS SIZE 1' only for a specific table...

    I'm sorry, mate. It looks like setting individual table preferences was introduced in 11g (and doesn't seem to work all that well).
    You can still:
    1. Explicitly specify any of the supported parameters by using DBMS_STATS.GATHER_TABLE_STATS() for the individual table and run it alone.
    2. Write a PL/SQL wrapper for, let's say, DBMS_STATS.GATHER_SCHEMA_STATS/GATHER_DATABASE_STATS that would gather the stats for the whole schema but ignore this particular table. Then gather the stats for the table with the METHOD_OPT parameter of your choice, which could be different from the one used for the rest of the schema.
    This can be achieved by locking that table's stats with DBMS_STATS.LOCK_TABLE_STATS and running GATHER_SCHEMA_STATS with force=>FALSE (which is the default); that parameter makes the procedure ignore any tables with locked stats. As the last step of the wrapper you can execute DBMS_STATS.GATHER_TABLE_STATS for the table in question with the desired METHOD_OPT and force=>TRUE.
    It's a little more work, but may solve your problem.
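    Something along these lines, as a sketch only (schema and table names are placeholders):
    BEGIN
      -- lock stats on the one table so the schema-wide gather skips it
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MY_TABLE');
      -- force => FALSE (the default) honours the lock
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MYSCHEMA', force => FALSE);
      -- now gather the special table with its own METHOD_OPT, overriding the lock
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'MYSCHEMA',
        tabname    => 'MY_TABLE',
        method_opt => 'FOR ALL COLUMNS SIZE 1',
        force      => TRUE);
    END;
    /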
    Max
    Edited by: Max Seleznev on Nov 28, 2012 6:21 PM
    Edited by: Max Seleznev on Nov 28, 2012 6:22 PM

  • Splitting Large internal tables

    Hi All,
    How can I split a large internal table into smaller ones with a fixed number of lines?
    The total number of lines is not known and is subject to vary.
    Regards,
    Naba

    I am not sure about your requirements; you could try something like the solution below.
    itab contains all the entries, let us say 3000, to be split into itab1, itab2 and itab3.
    Number of entries per split = 3000 / n (3000 / 3) = 1000.
    split_val    = 1000.
    n_split_from = 1.
    n_split_to   = split_val.
    APPEND LINES OF itab FROM n_split_from TO n_split_to TO itab1.
    n_split_from = n_split_from + split_val.
    n_split_to   = n_split_to + split_val.
    APPEND LINES OF itab FROM n_split_from TO n_split_to TO itab2.
    n_split_from = n_split_from + split_val.
    n_split_to   = n_split_to + split_val.
    APPEND LINES OF itab FROM n_split_from TO n_split_to TO itab3.
    Regards
    Sasi

  • Failed to create MySQL tables. Installation aborted. BOE XI 3.1 Linux

    I tried to do a full install of BOE XI 3.1 Linux on Red Hat 5 with at least the minimum requirements, and it goes through fine but then at the end errors out with "Failed to create MySQL tables. Installation aborted." Any ideas on what is causing this and how to fix it?

    Hello Fred,
    I recommend posting this query to the BusinessObjects Enterprise Administration (BI Platform) forum.
    This forum is dedicated to topics related to administration and configuration of BusinessObjects Enterprise, BusinessObjects Edge, and Crystal Reports Server.
    It is monitored by qualified technicians and you will get a faster response there.
    Also, all BOE Administration queries remain in one place and thus can be easily searched in one place.
    Best regards,
    Falk

  • How to manage large partitioned table

    Dear all,
    we have a large partitioned table with 126 columns and 380 GB of data, not indexed. Can anyone tell me how to manage it, because the queries are now taking more than 5 days?
    looking forward for your reply
    thank you

    Hi,
    You can store the partitions of a partitioned table in separate tablespaces. This allows you to:
    Reduce the possibility of data corruption in multiple partitions
    Back up and recover each partition independently
    Control the mapping of partitions to disk drives (important for balancing I/O load)
    Improve manageability, availability, and performance
    Remember, as the doc states:
    The maximum number of partitions or subpartitions that a table may have is 1024K-1.
    Lastly you can use SQL*Loader and the import and export utilities to load or unload data stored in partitioned tables. These utilities are all partition and subpartition aware.
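    For example, a minimal sketch of range partitions placed in their own tablespaces, with a local index (all object names here are made up):
    CREATE TABLE sales_hist (
      sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2006 VALUES LESS THAN (TO_DATE('01-01-2007','DD-MM-YYYY')) TABLESPACE ts_2006,
      PARTITION p2007 VALUES LESS THAN (TO_DATE('01-01-2008','DD-MM-YYYY')) TABLESPACE ts_2007
    );
    -- a LOCAL index is partitioned along with the table, so it can be built and rebuilt per partition
    CREATE INDEX sales_hist_dt_ix ON sales_hist (sale_date) LOCAL;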
    Document Reference:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/partiti.htm
    Adith

  • Logical sql firing larger aggregate table instead of smaller one

    Hi
    When we process a request containing one particular column alone, along with some time dimension, say month or year, the logical SQL is hitting the larger aggregate table instead of the smaller aggregate table. Please help us resolve this issue.
    The OracleBI version we are using is 10.1.3.4.1
    Thanks.

    Hi,
    Try posting in the OLAP forum.
    OLAP
    Thanks, Mark

  • Optimisation of NN lookup for large roads table

    We have a large roads table (approx 2.5M records) for the UK. We have vehicles for which we need to locate the nearest road for approx 1-5k vehicle locations at a time. The roads table has an R-Tree index on it.
    The roads data is in British National Grid - the vehicle locations are in WGS-84.
    Using a standard SDO_NN query, this takes approx 250 ms per lookup on a twin 1 GHz PIII with 1.5 GB RAM and a SCSI 10k RAID 5 disk array - clearly not a very good result. Does anybody have any ideas on how to optimise this search further? I would try using a filter, but when I include this in the single query, it degrades performance. The ideal situation would be to do a filter which is used globally for the whole 1-5k vehicle batch, but I'm not sure how to do this in Java/VB.

    Hi,
    What David says is correct. You should always use the hint
    otherwise the spatial index may not be used and that may return
    an error. However, in your case, it does not return an error.
    So I would guess the spatial index is being used.
    With regard to performance:
    1. If there are multiple neighbors for each road, there are
    multiple candidates to be evaluated and the query will be slow.
    Spatial queries currently are evaluated using the minimum
    resources for memory, etc. so as to work in all situations and
    they do not perform anything fancy with 1G memory or twin processors.
    Parallel query is an item for future releases.
    One thing that could be done in your situation is:
    cache some of the table data (use the buffer_pool parameter for the table)
    and pose queries within the same county or region one after another,
    so that queries which are likely to share neighboring data
    can take advantage of the buffering.
    2. In addition to the above, in 9.2 nearest-neighbor performance should
    have improved substantially. This has to do with some internal
    optimizations minimizing the number of table fetches. So in concept,
    this would behave close to filter which does not fetch any geometries
    from the table (only uses the index) and fetches only as many
    candidate geometries as needed assuming a limited memory.
    Without doing any tuning described in 1, you should see improvements.
    3. An alternative is to use filter and sdo_distance.
    Use sdo_filter by creating a buffer around the query geometry and
    identify all data in the buffered_query. Evaluate the sdo_geom.sdo_distance
    between the query geometry and each of the result data and order them
    based on the distance and select the k-nearest. The efficiency of this
    method depends upon the selectivity of the filter operation (with the
    buffered query). If the filter returns nearly 2 or 3 times the number
    of neighbors needed, this method would perform great.
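    A rough sketch of option 3 (the table, column and bind names are assumptions, and the buffer distance and tolerance must match your data and units):
    SELECT road_id
    FROM (SELECT r.road_id,
                 SDO_GEOM.SDO_DISTANCE(r.geom, :vehicle_pt, 0.05) dist
          FROM   roads r
          WHERE  SDO_FILTER(r.geom,
                            SDO_GEOM.SDO_BUFFER(:vehicle_pt, 500, 0.05),
                            'querytype=WINDOW') = 'TRUE'
          ORDER BY dist)
    WHERE ROWNUM <= 1;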
    Hope that helps,
    - Ravi.

  • Large editable tables

    I have a datatable with input fields assigned to each column. The user can edit values freely and then submit the changes for update. Because of the size of the list it appears that a huge amount of processing is required even though only a couple of rows are actually modified by the user.
    Is there a "best practices" for handling large editable tables that anyone is aware of?
    Thanks

    Presuming an Oracle backend database (you didn't say):
    Partitioning would seem a quick (and expensive $$$$) win. If you partition on date, be sure to use the same date key to perform 'partition pruning', which is to say give the optimizer a chance to throw away all unrequired partitions before it starts going to disk. When you say it 'didn't improve performance much', you need a partition-wise join for the partitioning to become effective at query time - did you do this? For example, a dashboard prompt with a filter on week/month would help (assuming your week/month time-dimension column is joined on the partition key).
    Is your data model a star schema? Do you have dimension metadata in your database? I'd be looking at Materialized Views and at some sort of aggregate rollup query rewrite, whereby your queries are rewritten at run time to the MView and do not hit your fact table.
    As for caching, OBIEE caching is the last resort after you exhaust every other alternative; it's not there to speed up queries and you would be hiding the problem - what happens when a user drills down or needs to run an ad-hoc report that is not cached?
    I would start with understanding Oracle execution plans, review your data model (with a view to speedy data extraction) and look at what the Oracle DB gives you for data warehousing:
    parallelism is your friend!
    star transformation,
    Mviews (table or Cube based)
    OLAP
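    As an illustration of the query-rewrite idea, a sketch of an aggregate Materialized View (the fact and dimension names are hypothetical):
    CREATE MATERIALIZED VIEW sales_month_mv
      ENABLE QUERY REWRITE
    AS
    SELECT t.month_key,
           SUM(f.amount) AS total_amount
    FROM   sales_fact f, time_dim t
    WHERE  t.time_key = f.time_key
    GROUP  BY t.month_key;
    -- month-level queries against sales_fact can then be rewritten by the optimizer to hit this MView instead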

  • Large volume tables in SAP

    Hello All,
    Does anyone have a list of all large-volume tables (tables which might create a problem in SELECT queries) present in SAP?

    Hi Nirav,
    There is no specific list as such. But irrespective of the amount of data in the table, if you provide the full primary key in the SELECT query there will be no issue with the SELECT.
    Still, if you want to check the largest tables, use transaction DB02.
    Regards,
    Atish

  • Failed to read the table scslmthresh: error = 113

    I'm continuously getting the above error (Failed to read the table scslmthresh: error = 113) on a new Sun Cluster 3.2 on two T5240s with a 2540 as their storage. I've searched for this error but I cannot find any information on it. Does anyone know what this is?
    java[1935]: [ID 513556 daemon.warning] Failed to read the table scslmthresh: error = 113
    I do have the cluster telemetry services running and a few storage resources, but nothing big yet. This error happens about every 2 hours, but the timing is not exactly consistent.
    thanks for any insight.

    I figured out that this error definitely has to do with reading/writing information to the telemetry resource/data service, so I removed mine completely and re-installed it using the clsetup (option 8, then option 2). That seems to have removed the error. I'm guessing a Java patch changed something that the telemetry service didn't like.

  • Failed to Create RBS tables in Content DataBase when installed RBS

    I'd configured RBS for SQL Server 2008 R2, but it failed to create the RBS tables in the content database after I ran the following with PowerShell:
    msiexec /qn /lvx* rbs_install_log.txt /i RBS-x64.msi TRUSTSERVERCERTIFICATE=true FILEGROUP=PRIMARY DBNAME="WSS_Content" DBINSTANCE="DBInstanceName" FILESTREAMFILEGROUP=RBSFilestreamProvider FILESTREAMSTORENAME=FilestreamProvider_1
    The install log file had the error entry: "Executing op: ActionStart(Name=FixFilestreamStoreConfig,,)
    Information 2769. The installer has encountered an unexpected error. The error code is 2769. Custom Action CreateFilesNoUI did not close 21 MSIHANDLEs."
    I found some information on this website and was told to enable "Named Pipes", but it doesn't work for me.
    Can anyone help solve this problem? Many thanks!

    Finally, I found the solution.
    First make sure you have enabled "Named Pipes" and that your SQL "RemoteDacEnable" setting is "TRUE". Then, when you run the command in PowerShell:
    msiexec /qn /lvx* rbs_install_log.txt /i RBS-x64.msi TRUSTSERVERCERTIFICATE=true FILEGROUP=PRIMARY DBNAME="WSS_Content" DBINSTANCE="DBInstanceName" FILESTREAMFILEGROUP=RBSFilestreamProvider FILESTREAMSTORENAME=FilestreamProvider_1
    the point is: if your DBINSTANCE is the default one, then use "localhost" as the name and not "MSSQLSERVER".
    More importantly, if you change anything, you must uninstall RBS first and then reinstall it.
    I read a blog saying that if your rbs_install_log.txt is smaller than 1 MB, there must be something wrong with your installation even if the log says that RBS installed successfully.
    Hope this helps. Thanks!

  • Exception: "Failed to save MapiAclTable table" using Exfolder.exe

    To Forum,
    I currently have a user who is trying to share their Calendar with a fellow
    employee and is getting "The modified permissions cannot be saved."
    within Outlook 2010. I was able to replicate the issue on another Outlook 2010
    client. I did some research and discovered the Exfolder.exe tool for Exchange
    2010, connected to the database where the user resides, and opened the calendar folder
    permissions; the Default and Anonymous entries were listed there, but they are not listed in
    the user's Outlook calendar permissions. I then proceeded to clear the ACLs and
    add another Exchange user and received "Failed to save MapiAclTable
    table." (hoping the Default and Anonymous entries would re-populate into the user's calendar
    permissions).
    The only information on Google relating to this problem appears to be about Exchange
    public folders, not a user's calendar ACL or DACL permissions.
    Nick

    Hi,
    Glad to know that you have found the solution. Thanks for your generous sharing : )
    Thanks
    Mavis
    Mavis Huang
    TechNet Community Support
