Skip gather stats for a partition | table from the gather_stats_job in 10g

Hi all,
Can we skip gathering statistics for a table, or for one partition of a partitioned table, in the GATHER_STATS_JOB in Oracle 10g?
(That partition is stored in an offline datafile, so GATHER_STATS_JOB raises errors when it runs on schedule.)
Thanks.

GATHER_TABLE_STATS defaults to GRANULARITY 'AUTO', which includes both Global and Partition statistics. Global statistics have to be computed across all the partitions, so Oracle will attempt to read every partition for this!
You need to run GATHER_TABLE_STATS with GRANULARITY 'PARTITION', naming each partition, i.e. run it once for each of the online partitions.
See:
SQL> create table XYZ (col_1  number, col_2 varchar2(5))
  2  partition by range (col_1)
  3  (partition P1 values less than (10) tablespace HEMANT,
  4  partition P2 values less than (100) tablespace USERS)
  5  /
Table created.
SQL> insert into XYZ values (5,'Five');
1 row created.
SQL> insert into XYZ values (50,'Fifty');
1 row created.
SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL');
PL/SQL procedure successfully completed.
SQL> select partition_name, tablespace_name , num_rows, sample_size from user_tab_partitions
  2  where table_name = 'XYZ'
  3  /
PARTITION_NAME                 TABLESPACE_NAME                  NUM_ROWS SAMPLE_SIZE
P1                             HEMANT                                  1           1
P2                             USERS                                   1           1
SQL>
SQL> exec dbms_stats.lock_table_stats('','XYZ');
PL/SQL procedure successfully completed.
SQL> alter tablespace HEMANT offline;
Tablespace altered.
SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL');
BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL'); END;
ERROR at line 1:
ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "SYS.DBMS_STATS", line 13159
ORA-06512: at "SYS.DBMS_STATS", line 13179
ORA-06512: at line 1
SQL>
SQL> exec dbms_stats.unlock_table_stats('','XYZ');
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.lock_partition_stats('','XYZ','P1');
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL');
BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL'); END;
ERROR at line 1:
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2:
'/usr/oracle/oradata/MONDB/datafile/o1_mf_hemant_7d6m8zkx_.dbf'
ORA-06512: at "SYS.DBMS_STATS", line 13159
ORA-06512: at "SYS.DBMS_STATS", line 13179
ORA-06512: at line 1
SQL>
SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2');
BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2'); END;
ERROR at line 1:
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2:
'/usr/oracle/oradata/MONDB/datafile/o1_mf_hemant_7d6m8zkx_.dbf'
ORA-06512: at "SYS.DBMS_STATS", line 13159
ORA-06512: at "SYS.DBMS_STATS", line 13179
ORA-06512: at line 1
SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2',granularity=>'GLOBAL AND PARTITION');
BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2',granularity=>'GLOBAL AND PARTITION'); END;
ERROR at line 1:
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2:
'/usr/oracle/oradata/MONDB/datafile/o1_mf_hemant_7d6m8zkx_.dbf'
ORA-06512: at "SYS.DBMS_STATS", line 13159
ORA-06512: at "SYS.DBMS_STATS", line 13179
ORA-06512: at line 1
SQL>
SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2',granularity=>'PARTITION');
PL/SQL procedure successfully completed.
SQL>
Hemant K Chitale
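Putting the demo together: once the offline partition is known, the scheduled work can be replaced by a per-partition loop that simply never names it. A minimal sketch, assuming P1 sits in the offline tablespace HEMANT and XYZ is in the current schema:

BEGIN
  FOR p IN (SELECT partition_name
              FROM user_tab_partitions
             WHERE table_name = 'XYZ'
               AND tablespace_name <> 'HEMANT')  -- skip the offline tablespace
  LOOP
    DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => USER,
      tabname          => 'XYZ',
      partname         => p.partition_name,
      estimate_percent => 100,
      granularity      => 'PARTITION');  -- partition stats only, no global pass
  END LOOP;
END;
/

Global statistics stay stale until the tablespace is back online. To stop the automatic GATHER_STATS_JOB from touching the table at all in the meantime, lock the table's statistics with DBMS_STATS.LOCK_TABLE_STATS as in the demo; the automatic job skips objects with locked statistics.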

Similar Messages

  • Unable to wipe ZFS partition table from the disk

    I used an SD card as part of a ZFS zpool made of three SD cards, without a partition table. ZFS was managing the entire devices, not just partitions.
    I subsequently retired this zpool but did not run "zpool destroy". This worked for two of the cards, but it seems as if one of the SD cards just can't shake the zfs_member marker, no matter what I do.
    So far I have tried the following, multiple times, on several different machines (including two without ZoL installed, so the zfs cache file is not an issue here):
    1. dd the entire device with zeros. Four times.
    2. zpool labelclear -f /dev/sdc
    3. create new msdos partition table in gparted and fdisk
    4.
    $ mkfs.btrfs -f /dev/sdc
    $ mount /dev/sdc /mnt/usb
    mount: unknown filesystem type 'zfs_member'
    5. Windows format, and MiniTool Partition Wizard (the Windows equivalent of gparted).
    As you can see, none of those methods wrote over the zfs data. It remains intact and invulnerable to anything I tried.
    I am out of ideas. It looks like google is out of ideas, too.

    Yeah, somehow while writing the first post I missed that wipefs also does absolutely nothing.
    # wipefs /dev/sdc
    offset type
    0x23000 zfs_member [raid]
    LABEL: SD
    UUID: 9662645799256520897
    # wipefs /dev/sdc -o 0x23000
    /dev/sdc: 8 bytes were erased at offset 0x00023000 (zfs_member): 0c b1 ba 00 00 00 00 00
    # wipefs /dev/sdc
    offset type
    0x23000 zfs_member [raid]
    LABEL: SD
    UUID: 9662645799256520897

  • Gather Stats on Newly Partitioned Table

    I partitioned an existing table containing 92 million rows. The method was dbms_redefinition, whereby I started the redefinition and then added the indexes and constraints last. After partitioning, I did not gather stats on any of the partitions that were created and I did not analyze any of the indexes. Then I loaded an additional 4 million records into one of the partitions of the newly partitioned table. I ran DBMS_STATS gathering on this particular partition and it took over 15 hours. Normally it only takes 4 hours to gather stats on the individual partitions, so I stopped it after 15 hours. When I monitored it while it was running, it looked like it was spending a really long time gathering stats on the indexes. Is this normal for a newly partitioned table? Is there something I can do to prevent it from taking so long when I run gather stats? Oracle Version 10.2.0.4

    -- Gather PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(
        ownname     => upper(v_table_owner),
        tabname     => upper(v_table_name),
        partname    => v_table_partition_name,
        estimate_percent => 20,
        cascade     => FALSE,
        granularity => 'PARTITION');
    -- Gather GLOBAL INDEX Statistics
    for i in (select * from sys.dba_indexes
              where table_owner = upper(v_table_owner)
                and table_name  = upper(v_table_name)
                and partitioned = 'NO'
              order by index_name)
    loop
        SYS.DBMS_STATS.gather_index_stats(
            ownname => upper(v_table_owner),
            indname => i.index_name,
            estimate_percent => 20,
            degree  => NULL);
    end loop;
    -- Gather SUB-PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(
        ownname     => upper(v_table_owner),
        tabname     => upper(v_table_name),
        partname    => v_table_subpartition_name,
        estimate_percent => 20,
        cascade     => TRUE,
        granularity => 'ALL');

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB Version is:
    BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    In our application, users upload files, resulting in inserts into a table. A file can contain anywhere from 10,000 to 1 million records.
    I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables if huge files are uploaded simultaneously. After gathering stats, the cost drops and the queries execute faster.
    We have 2 such tables which grow based on user file uploads. I would like to schedule a job to gather stats on these two tables during an off-peak hour, apart from the nightly automated Oracle job.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    create or replace PROCEDURE p_manual_gather_table_stats AS
        TYPE ttab IS TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
        ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(
                ownname          => USER,
                tabname          => ltab(i),
                estimate_percent => dbms_stats.auto_sample_size,
                method_opt       => 'for all indexed columns size auto',
                degree           => dbms_stats.auto_degree,
                cascade          => TRUE);
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.

  • How to enable Oracle Advanced Compression for an existing partitioned table

    Hi All,
    I have to enable Oracle Advanced Compression for an existing table which is partitioned by range and subpartitioned by hash.
    Oracle version: 11.2.0.2.0
    Please point me to any relevant doc or share any experience.
    Thanks in advance.

    "could not see any text for how to enable oracle advance compression for EXIST partitioned table."
    RTFM. From the resource above:
    "How do I compress an existing table?
    There are multiple options available to compress existing tables. For offline compression, one could use an ALTER TABLE table_name MOVE COMPRESS statement. A compressed copy of an existing table can be created by using CREATE TABLE table_name COMPRESS FOR ALL OPERATIONS AS SELECT *. For online compression, Oracle's online redefinition utility can be used. More details for online redefinition are available here."

  • Datapump skipping partitioned tables in the database

    I ran expdp on Oracle 10.2.0.4.0 on the AIX 5.6 platform. The export runs well, exporting rows in the database, but when it comes to partitioned tables it exports no rows for any of them. When I run a normal exp/imp, the partitioned tables are exported with all their rows.
    I used the following commands:
    expdp system/****** dumpfile=export_data.dmp directory=DATA_PUMP_DIR full=y logfile=export_dump.log
    Output for expdp on partitioned table:
    . . exported "SCOTT"."DEPT":"DEPT_2003_P1" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P10" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P11" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P12" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P2" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P3" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P4" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P5" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P6" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P7" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P8" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P9" 0 KB 0 rows
    And for exp:
    exp system/****** file=export_dump.dmp full=y log=export_log1.log
    Result from the export log for partitioned tables:
    . . exporting partition DEPT_2005_P1 881080 rows exported
    . . exporting partition DEPT_2005_P2 1347780 rows exported
    . . exporting partition DEPT_2005_P3 2002962 rows exported
    . . exporting partition DEPT_2005_P4 2318227 rows exported
    . . exporting partition DEPT_2005_P5 3122371 rows exported
    . . exporting partition DEPT_2005_P6 3916020 rows exported
    . . exporting partition DEPT_2005_P7 4217100 rows exported
    . . exporting partition DEPT_2005_P8 4125915 rows exported
    . . exporting partition DEPT_2005_P9 1913970 rows exported
    . . exporting partition DEPT_2005_P10 1100156 rows exported
    . . exporting partition DEPT_2005_P11 786516 rows exported
    . . exporting partition DEPT_2005_P12 822976 rows exported
    I am not sure about this behaviour from Data Pump. My database is more than 800 GB and we want to migrate it from AIX to Linux.
    Thanks

    Sorry, I just copied and pasted some extracts from my exp and expdp logs:
    For testing purposes I tried to run a Data Pump export of only 1 partitioned table in the database and it goes through, but when I do the same in a full Data Pump export these partitioned tables are exported with no rows.
    Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 August, 2011 12:18:47
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** dumpfile=DEPT.dmp tables=scott.dept logfile=dept1.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 48.50 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    Processing object type TABLE_EXPORT/TABLE/RLS_POLICY
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SCOTT"."DEPT":"DEPT_2009_P6" 1.452 GB 7377736 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P7" 1.363 GB 6935687 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P6" 1.304 GB 6656096 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P7" 1.410 GB 7300618 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P7" 1.296 GB 6641073 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P6" 1.328 GB 6863885 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P6" 1.158 GB 6568075 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P5" 1.141 GB 5801822 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P5" 1.162 GB 6027466 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P7" 1.100 GB 6214680 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P6" 1.106 GB 5762303 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P5" 1.133 GB 5859492 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P5" 1.001 GB 5664315 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P5" 1.023 GB 5229356 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P8" 1.078 GB 5549666 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P8" 940.3 MB 5171379 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P8" 989.0 MB 4920276 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P8" 918.6 MB 4553523 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P6" 821.0 MB 5220879 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P4" 766.6 MB 3832262 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P8" 747.9 MB 4753538 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P7" 741.8 MB 4708242 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P4" 734.2 MB 3713567 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P7" 661.4 MB 4217100 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P8" 647.1 MB 4125915 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P4" 677.8 MB 3428887 rows
    I also tried to run a normal schema-by-schema export with the conventional exp system/password command and got my dump file, which is about 300GB. When I run the imp system/password command and specify fromuser=<system> and touser=<schemas_in_the_dumpfile> separated by commas, it just comes up with this message:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set
    Import terminated successfully without warnings.
    No tables are imported.
    If I specify imp system/password file=dept_export.dmp full=y log=dept_imp.log with the same dumpfile, it imports data from the dumpfile into my database.
    I am not sure what could be wrong with my dumpfile or my imp command and its parameters.

  • I am passing a range table from the method of an OData service to an FM, but in the FM the range table is becoming initial. What would be the reason?

    I am passing a range table from the method of an OData service to an FM, but in the FM the range table is becoming initial. What would be the reason for this?

    Vinod, can you share details on how you are sending it and how you are reading it.

  • Can I gather object statistics on large tables at the same time?

    We have large partitioned tables, to the tune of 3-4 billion rows, and they have no object statistics. Can I gather object statistics on several of them at the same time, for example 4-5 large tables at once? I need to gather them in batches because we have several of those large tables and I have to schedule the gathering carefully, so I want to start with 4 tables. I'm wondering if gathering statistics on the above tables will be intrusive or will impact performance while it is running.
    Alex

    What version are you running? If you are running 11g they are automatically gathered via the autotask job DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC which, depending on your window, will normally run daily at 10 pm. I'm very surprised that no stats have been gathered, as this collects stats on all new tables, or whenever more than 10% of a table's rows have changed.
    See the following links:
    http://www.oracle-base.com/articles/11g/automated-database-maintenance-task-management-11gr1.php
    http://docs.oracle.com/cd/B28359_01/server.111/b28274/stats.htm
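    As a rough illustration (not from the reply), a manual gather on one such table with a capped parallel degree to bound its footprint; the owner, table name, and degree of 4 are hypothetical:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'APP',
        tabname          => 'BIG_PART_TAB',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        degree           => 4,      -- cap the parallel slaves to limit the load
        granularity      => 'AUTO',
        cascade          => TRUE);  -- include the indexes
    END;
    /

    Gathering is essentially reads plus sorting, so several of these running at once compete for I/O and temp space; that is the main thing to weigh when running 4-5 concurrently.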

  • Need to create a procedure which searches for and drops tables from the db.

    Dear Gurus,
    I need to create a procedure which first checks for the tables and then drops them if found in the database. For example, if I have 5 tables, my procedure should first check their existence and then drop all those 5 tables from the database. Actually, I have to attach this procedure to Report Builder, so please keep the above scenario in mind. Your input will be highly appreciated.
    hare krishna
    Alok

    Dropping 5 tables each time a user hits the report!!! (according to my understanding)
    I would like to share my experience. My group developed many complex reports; we used Oracle jobs to run the complex queries from time to time, according to our business requirements, and stored the results in a final table. Just for viewing at the front-end level, we used a simple select statement.
    -aijaz
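    If the check-then-drop procedure is still needed for the report, a minimal sketch; the five table names are placeholders:

    CREATE OR REPLACE PROCEDURE drop_tables_if_exist AS
      TYPE t_names IS TABLE OF VARCHAR2(30);
      l_tables t_names := t_names('TAB1','TAB2','TAB3','TAB4','TAB5');
      l_found  PLS_INTEGER;
    BEGIN
      FOR i IN 1 .. l_tables.COUNT LOOP
        SELECT COUNT(*) INTO l_found
          FROM user_tables
         WHERE table_name = l_tables(i);   -- existence check, current schema only
        IF l_found > 0 THEN
          EXECUTE IMMEDIATE 'DROP TABLE "' || l_tables(i) || '"';
        END IF;
      END LOOP;
    END drop_tables_if_exist;
    /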

  • Access 2013 crashes when I export a table from Access

    Hi!
    Access 2013 always crashes when I export a table from Access to the Symfoware database.
    1. The conditions under which Access 2013 crashes are as follows.
    1) Create a table in Access 2013.
    2) Use the ODBC driver of the Symfoware database to export the table from Access to Symfoware.
    2. The environment is as follows.
    1) Access 2013 x64 (Version: 15.0.4420.1017)
    2) Win2008R2 x64
    3) Symfoware V11.1 x64
    3. The application log from Win2008R2 is as follows.
    Log Name:      Application
    Source:        Application Error
    Date:          2014/04/18 16:21:06
    Event ID:      1000
    Task Category: (100)
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      WIN-29UTU2AIK6J
    Description:
    Faulting application name: MSACCESS.EXE, version: 15.0.4420.1017, time stamp: 0x50674523
    Faulting module name: ACECORE.DLL, version: 15.0.4420.1017, time stamp: 0x506742b7
    Exception code: 0xc0000005
    Fault offset: 0x0000000000171f36
    Faulting process ID: 0xb6c
    Faulting application start time: 0x01cf5ad638668c5b
    Faulting application path: C:\Program Files\Microsoft Office\Office15\MSACCESS.EXE
    Faulting module path: C:\Program Files\Common Files\Microsoft Shared\OFFICE15\ACECORE.DLL
    Report ID: 00e87957-c6ca-11e3-ad2c-0050568d2ced
    Event XML:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Application Error" />
        <EventID Qualifiers="0">1000</EventID>
        <Level>2</Level>
        <Task>100</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-04-18T07:21:06.000000000Z" />
        <EventRecordID>3442</EventRecordID>
        <Channel>Application</Channel>
        <Computer>WIN-29UTU2AIK6J</Computer>
        <Security />
      </System>
      <EventData>
        <Data>MSACCESS.EXE</Data>
        <Data>15.0.4420.1017</Data>
        <Data>50674523</Data>
        <Data>ACECORE.DLL</Data>
        <Data>15.0.4420.1017</Data>
        <Data>506742b7</Data>
        <Data>c0000005</Data>
        <Data>0000000000171f36</Data>
        <Data>b6c</Data>
        <Data>01cf5ad638668c5b</Data>
        <Data>C:\Program Files\Microsoft Office\Office15\MSACCESS.EXE</Data>
        <Data>C:\Program Files\Common Files\Microsoft Shared\OFFICE15\ACECORE.DLL</Data>
        <Data>00e87957-c6ca-11e3-ad2c-0050568d2ced</Data>
      </EventData>
    </Event>
    4. When Access crashed, I got the dump file. Then I used WinDbg to get some information from the dump files.
    The details are as follows.
    FAULTING_IP: 
    +58872faf03d6dd84
    00000000`00000000 ??              ???
    EXCEPTION_RECORD:  00000000001723c0 -- (.exr 0x1723c0)
    ExceptionAddress: 000007fee3951f36 (ACECORE+0x0000000000171f36)
       ExceptionCode: c0000005 (Access violation)
      ExceptionFlags: 00000000
    NumberParameters: 2
       Parameter[0]: 0000000000000000
       Parameter[1]: 0000000000000000
    Attempt to read from address 0000000000000000
    FAULTING_THREAD:  00000000000010b8
    DEFAULT_BUCKET_ID:  WRONG_SYMBOLS
    PROCESS_NAME:  MSACCESS.EXE
    ADDITIONAL_DEBUG_TEXT:  
    Use '!findthebuild' command to search for the target build information.
    If the build information is available, run '!findthebuild -s ; .reload' to set symbol path and load symbols.
    MODULE_NAME: ACECORE
    FAULTING_MODULE: 0000000076eb0000 ntdll
    DEBUG_FLR_IMAGE_TIMESTAMP:  506742b7
    ERROR_CODE: (NTSTATUS) 0x80000003 - {
    EXCEPTION_CODE: (NTSTATUS) 0x80000003 (2147483651) - {
    MOD_LIST: <ANALYSIS/>
    CONTEXT:  0000000000171ed0 -- (.cxr 0x171ed0)
    rax=0000000001be3598 rbx=0000000000000000 rcx=01cf5d1b0f402a5c
    rdx=0000000009cfad30 rsi=0000000006688ef0 rdi=0000000001be35d0
    rip=000007fee3951f36 rsp=0000000000172490 rbp=00000000001727a0
     r8=0000000009b409d0  r9=0000000006688ef0 r10=0000000000000000
    r11=0000000000000246 r12=0000000000000000 r13=0000000001be19f0
    r14=0000000009cfad30 r15=0000000000000000
    iopl=0         nv up ei pl nz na po nc
    cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010206
    ACECORE+0x171f36:
    000007fe`e3951f36 4d3927          cmp     qword ptr [r15],r12 ds:00000000`00000000=????????????????
    Resetting default scope
    PRIMARY_PROBLEM_CLASS:  WRONG_SYMBOLS
    BUGCHECK_STR:  APPLICATION_FAULT_WRONG_SYMBOLS
    LAST_CONTROL_TRANSFER:  from 000007fee395c095 to 000007fee3951f36
    STACK_TEXT:  
    00000000`00172490 000007fe`e395c095 : ffffffff`ffffffff 00000000`00000000 00000000`01bddaa2 ffffffff`ffffffff : ACECORE+0x171f36
    00000000`001727b0 000007fe`e388f8eb : 000007fe`e399dc10 000007fe`e399dc30 000007fe`e399dc50 00000000`000000f9 : ACECORE+0x17c095
    00000000`00172b50 000007fe`e388f590 : 00000000`06688ef0 00000000`00000004 00000000`000000f9 00000000`00000000 : ACECORE+0xaf8eb
    00000000`00172ba0 000007fe`e388f406 : 00000000`00000004 00000000`000000f9 00000000`11cb1130 00000000`000000f9 : ACECORE+0xaf590
    00000000`00172c30 000007fe`e391aee7 : 00000000`00000000 00000000`00173198 00000000`06688ef0 00000000`11cb0998 : ACECORE+0xaf406
    00000000`00172c70 000007fe`e3933a72 : 00000000`00400100 00000000`00000002 00000000`00000001 00000000`06688ef0 : ACECORE+0x13aee7
    00000000`00172e50 000007fe`e384e5b1 : 00000000`00000000 00000000`11cb0020 00000000`06688ef0 00000000`00000001 : ACECORE+0x153a72
    00000000`00172e90 000007fe`e38358d1 : 00000000`000007ff 00000000`06688ef0 00000000`01bd99f0 00000000`00000000 : ACECORE+0x6e5b1
    00000000`00173280 000007fe`e38718eb : 00000000`000007ff 00000000`06688ef0 00000000`11cb0000 00000000`00000000 : ACECORE+0x558d1
    00000000`00173340 00000001`3f3b0cd2 : 00000000`0a0945c0 00000000`000000fe 00000000`000007ff 00000000`00000000 : ACECORE+0x918eb
    00000000`00173430 00000001`3f3b0b9b : 00000000`00173730 00000000`001741e0 00000000`065e5b00 00000000`00000100 : MSACCESS!CreateIExprSrvObj+0x16af32
    00000000`00173630 00000001`3f3afcea : 00000000`00173730 00000000`00000000 00000000`00000000 00000000`065e5b00 : MSACCESS!CreateIExprSrvObj+0x16adfb
    00000000`001736a0 00000001`3f983140 : 00000000`00174401 00000000`0a3235e0 00000000`0a319aa0 00000000`00179fc8 : MSACCESS!CreateIExprSrvObj+0x169f4a
    00000000`00174290 00000001`3f3aabeb : 00000000`00000000 00000000`00000000 00000000`00179f38 00000000`00000000 : MSACCESS!FUniqueIndexTableFieldEx+0x26ab4
    00000000`00174530 00000001`3f3a9670 : 00000000`001778e0 00000000`00179f38 00000000`00000031 00000000`00000000 : MSACCESS!CreateIExprSrvObj+0x164e4b
    00000000`00177870 00000001`3f3a8c1b : 00000000`00000010 00000000`108c5990 003d0044`00570050 00000000`74ac3f69 : MSACCESS!CreateIExprSrvObj+0x1638d0
    00000000`001779d0 00000001`3f5450df : 00000000`0a0945c0 00000000`0000009c 00000000`0a0945c0 00000000`00000000 : MSACCESS!CreateIExprSrvObj+0x162e7b
    00000000`00179e30 00000001`3f64e915 : 00000000`43ed8000 00000000`00000000 00000000`3f800000 00000000`00000000 : MSACCESS!FillADT+0x5830b
    00000000`0017c450 00000001`3f33b172 : 00000000`00000000 00000000`0006075e 00000000`0017e370 00000000`00000000 : MSACCESS!IdsComboFillOfActidIarg+0xaddf1
    00000000`0017d730 00000001`3f33a9ef : 00000000`0017e0a0 00000000`00000b86 00000000`00000000 00000000`0fea0000 : MSACCESS!CreateIExprSrvObj+0xf53d2
    00000000`0017d790 00000001`3f33a20c : 00000000`06620400 00000000`00000000 00000000`00000001 00000000`0017e420 : MSACCESS!CreateIExprSrvObj+0xf4c4f
    00000000`0017e400 00000001`3f6019cd : 0071023b`007a000a 00000000`00374a30 00000000`00000005 00000000`0000000c : MSACCESS!CreateIExprSrvObj+0xf446c
    00000000`0017e850 00000000`06706fd5 : e2504700`aa00ee81 00005d1b`0f2363ea 00000000`0017e9e0 00000001`3f200000 : MSACCESS!IdsComboFillOfActidIarg+0x60ea9
    00000000`0017e8f0 00000000`0a30c378 : 00000000`00000000 00000000`00000008 00000000`067069ac 00000000`00000000 : 0x6706fd5
    00000000`0017e960 00000000`00000000 : 00000000`00000008 00000000`067069ac 00000000`00000000 00000000`00000008 : 0xa30c378
    FOLLOWUP_IP: 
    ACECORE+171f36
    000007fe`e3951f36 4d3927          cmp     qword ptr [r15],r12
    SYMBOL_STACK_INDEX:  0
    SYMBOL_NAME:  ACECORE+171f36
    FOLLOWUP_NAME:  MachineOwner
    IMAGE_NAME:  ACECORE.DLL
    STACK_COMMAND:  .cxr 0x171ed0 ; kb
    BUCKET_ID:  WRONG_SYMBOLS
    FAILURE_BUCKET_ID:  WRONG_SYMBOLS_80000003_ACECORE.DLL!Unknown
    WATSON_STAGEONE_URL:  http://watson.microsoft.com/StageOne/MSACCESS_EXE/15_0_4420_1017/50674523/unknown/0_0_0_0/bbbbbbb4/80000003/00000000.htm?Retriage=1
    Followup: MachineOwner
    5. When Access crashed, I used ODBC tracing to get a trace log of the ODBC API.
    The ODBC API calls made by Access before it crashed are as follows.
    I also analysed the log, and I did not find anything abnormal.
    Test            1174-10f0
    EXIT  SQLGetData  with return code 0 (SQL_SUCCESS)
    HSTMT               0x0000000009F7C490
    UWORD                      
     6 
    SWORD                      
    -8 <SQL_C_WCHAR>
    PTR                 0x00000000002C1F20 [  
       12] "LENGTH"
    SQLLEN                    62
    SQLLEN *            0x00000000002C1E38 (12)
    Test            1174-10f0
    ENTER SQLGetData 
    HSTMT               0x0000000009F7C490
    UWORD                      
    10 
    SWORD                      
    99 <SQL_C_DEFAULT>
    PTR                 <unknown type>
    SQLLEN                     4
    SQLLEN *            0x00000000002C1E38
    Test            1174-10f0
    EXIT  SQLGetData  with return code 0 (SQL_SUCCESS)
    HSTMT               0x0000000009F7C490
    UWORD                      
    10 
    SWORD                      
    99 <SQL_C_DEFAULT>
    PTR                 <unknown type>
    SQLLEN                     4
    SQLLEN *            0x00000000002C1E38 (-1)
    Test            1174-10f0
    ENTER SQLFetch 
    HSTMT               0x0000000009F7C490
    Test            1174-10f0
    EXIT  SQLFetch  with return code 100 (SQL_NO_DATA_FOUND)
    HSTMT               0x0000000009F7C490
    Test            1174-10f0
    ENTER SQLFreeStmt 
    HSTMT               0x0000000009F7C490
    UWORD                      
     0 <SQL_CLOSE>
    Test            1174-10f0
    EXIT  SQLFreeStmt  with return code 0 (SQL_SUCCESS)
    HSTMT               0x0000000009F7C490
    UWORD                      
     0 <SQL_CLOSE>
    6. I also tested with Access 2010, and with SQL Server/Oracle.
    The results are as follows.
    1) With Access 2010 (x86 or x64) and Access 2013 x86, exporting the table to the Symfoware database succeeds. Only with Access 2013 x64 does Access crash.
    2) With Access 2010 (x86 or x64) and Access 2013 (x86 or x64) against SQL Server/Oracle, it always succeeds.
    Based on everything described above, I wonder if this is a bug in Access 2013.
    Could anyone help me?
    Thanks for any help.

    George Zhao,
    Thank you for your help. I have already installed the latest patches for Office 2013, but it doesn't help; Access still goes down. And the ODBC driver for the Symfoware database is the newest driver available.
    Now I wonder whether it is a bug in the Symfoware ODBC driver, in which case I should fix it, but I have analysed the ODBC trace log and do not find anything wrong, so I think maybe it is a bug in Access.
    I have seen many reports about Access crashing on the internet.

  • Code for reading particular fields from a file placed on the application server

    hi,
    I need code for reading particular fields from a file placed on the application server into an internal table.

    Hi,
    Use the GUI_UPLOAD FM to upload the file into your internal table:
    DATA : FILE_TABLE TYPE FILE_TABLE OCCURS 0,
           fwa TYPE FILE_TABLE,
           FILENAME TYPE STRING,
           RC TYPE I,
           itab TYPE TABLE OF STRING. " target table for the file contents (added)
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>FILE_OPEN_DIALOG
      EXPORTING
        WINDOW_TITLE            = 'Open File'
        " optional parameters (DEFAULT_EXTENSION, DEFAULT_FILENAME, FILE_FILTER,
        " INITIAL_DIRECTORY, MULTISELECTION, WITH_ENCODING) omitted
      CHANGING
        FILE_TABLE              = FILE_TABLE
        RC                      = RC
        " optional parameters (USER_ACTION, FILE_ENCODING) omitted
      EXCEPTIONS
        FILE_OPEN_DIALOG_FAILED = 1
        CNTL_ERROR              = 2
        ERROR_NO_GUI            = 3
        NOT_SUPPORTED_BY_GUI    = 4
        OTHERS                  = 5.
    IF SY-SUBRC <> 0.
      MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
              WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    READ TABLE FILE_TABLE INDEX 1 INTO fwa.
    FILENAME = fwa-FILENAME.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename                = filename
        FILETYPE                = 'DAT'
      TABLES
        data_tab                = itab
      EXCEPTIONS
        file_open_error         = 1
        file_read_error         = 2
        no_batch                = 3
        gui_refuse_filetransfer = 4
        invalid_type            = 5
        OTHERS                  = 6.
    IF sy-subrc <> 0.
      MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
              WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    Regards,
    Balakumar.G
    Reward Points if helpful.

  • Can we find the number of rows in a table from the dump file

    Hi All,
    Can we find the number of rows in a table from the dump file without importing the table into the database?
    Please let me know ,if any option is there.
    Thanks,
    Kumar.

    Try to import with the option SHOW=Y; that should list the contents of the dump file without actually importing it.
    Nicolas.
    Oops, sorry, that doesn't show the number of rows...

  • What gotchas should we watch out for when porting code from the Adobe SDK to the Acrobat SDK?

    What gotchas should we watch out for when porting code from the Adobe SDK to the Acrobat SDK?
    ... and the other way around?
    I have found some evidence that the preprocessor variable PLUGIN seems to prevent macros from includes from being defined. The NPROC and SPROC constructs seem to be involved (partners in crime, if you will).
    -Ramon
    ps: Please see my related thread "What is the difference between xxProcs.h and xxCalls.h?"

    Here's a gotcha that I bumped into:
    extern "C" HINSTANCE gHINSTANCE;
    I found it in successfully developed code on the Windows/plugin side. It is something I had never used on the APDFL side, AND my Windows linker is complaining about several missing gXXXX symbols.
    IOW: it sounds like the above statement is the solution to my linking problems. Somehow APDFL seems to take care of the C vs. C++ details.
    -Ramon

  • Export dump only 1000 tables from the schema which contains 3000 tables.

    Hi,
    I have a requirement to export a dump of only particular 1000 tables from a schema which contains 3000 tables.
    As I want to take the dump, I need to mention the list of tables in the "TABLES" parameter, but the syntax won't allow 1000 tables.
    Kindly guide me on how to proceed to take the dump of only those particular 1000 tables.
    Thanks in advance.
    Thanks,
    Orahar.

    "I have a requirement to export a dump of only particular 1000 tables from a schema which contains 3000 tables. As I want to take the dump, I need to mention the list of tables in the "TABLES" parameter, but the syntax won't allow 1000 tables. Kindly guide me on how to proceed."
    You haven't mentioned the Oracle release version.
    If you're using 10g, you could use Data Pump export/import to achieve this. It's not a straightforward way.
    Check the Metalink note "Export/Import DataPump Parameters INCLUDE and EXCLUDE - How to Load and Unload Specific Objects" (341733.1),
    under section 9, "Exporting or Importing a large number of objects"; a sketch of that approach is below.
    HTH
    -Anantha
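    Sketching the approach from that section of the note (all object names are placeholders): stage the wanted table names in a helper table, then let INCLUDE reference it through a subquery, which sidesteps the limit on listing 1000 names on the command line.

    SQL> CREATE TABLE expdp_tab_list (table_name VARCHAR2(30));
    SQL> -- insert the 1000 wanted names, e.g. selected from user_tables

    Contents of the parameter file export.par:

    schemas=ORAHAR
    dumpfile=orahar_1000.dmp
    logfile=orahar_1000.log
    include=TABLE:"IN (SELECT table_name FROM orahar.expdp_tab_list)"

    $ expdp system/****** parfile=export.par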

  • How to restore one table from the previous backup in 9.2.0.8 version.

    Hi,
    How to restore one table from the previous backup in 9.2.0.8 version.
    Thanks
    -Ganga

    Hi,
    What is the table you want to restore?
    Using export/import is supported with Oracle Apps database (for full database exp/imp, and certain schemas like custom ones). For the Apps schema, I believe it is not supported due to object dependencies and integrity constraints.
    Regards,
    Hussein
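    For the export/import route mentioned above, a hedged sketch of the classic 9i table-mode commands; schema and table names are placeholders, and the source must first be a restored or cloned database that still contains the wanted data:

    $ exp system/****** file=one_tab.dmp tables=SCOTT.DEPT
    $ imp system/****** file=one_tab.dmp fromuser=SCOTT touser=SCOTT tables=DEPT ignore=y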
