CTAS statement?

Hello -
I am running 10.2.0.3. I ran a create table as select statement to create a new table with qualified data, and found that the default value of a column was not carried over to the new table. I thought the create table as select statement would build the table using the DDL of the source table, and that DDL includes the default value. Are default values not included in a "create table as select" statement?
Thanks,
Mike

Hi,
No, CTAS does not carry over column default values (NOT NULL constraints are the only constraints it copies). You have to re-add the defaults afterwards with ALTER TABLE ... MODIFY.
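For example (a minimal sketch; the table and column names are made up):
create table t_copy as select * from t_source;
-- the DEFAULT on t_source.status is not copied, so re-add it:
alter table t_copy modify (status default 'OPEN');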
Regards

Similar Messages

  • Create table as select (CTAS) statement is taking a very long time.

    Hi All,
    One of my procedures runs a create table as select statement every month.
    Usually it finishes in 20 minutes for 6,172,063 records, and in about 1 hour for 13,699,067.
    But this time it is taking forever, even for just 38,076 records.
    When I checked, all it is doing is using CPU - no I/O.
    I did a count(*) using the same query and it returned results fine,
    but the CTAS keeps going.
    I'm using Oracle 10.2.0.4.
    The main table temp_ip has 38,076 records,
    table nhs_opcs_hier has 26,769 records,
    and table nhs_icd10_hier has 49,551 records.
    Query is as follows:
    create table analytic_hes.temp_ip_hier as
    select b.*, (select nvl(max(hierarchy), 0)
    from ref_hd.nhs_opcs_hier a
    where fiscal_year = b.hd_spell_fiscal_year
    and a.code in
    (primary_PROCEDURE, secondary_procedure_1, secondary_procedure_2,
    secondary_procedure_3, secondary_procedure_4, secondary_procedure_5,
    secondary_procedure_6, secondary_procedure_7, secondary_procedure_8,
    secondary_procedure_9, secondary_procedure_10,
    secondary_procedure_11, secondary_procedure_12)) as hd_procedure_hierarchy,
    (select nvl(max(hierarchy), 0) from ref_hd.nhs_icd10_hier a
    where fiscal_year = b.hd_spell_fiscal_year
    and a.code in
    (primary_diagnosis, secondary_diagnosis_1,
    secondary_diagnosis_2, secondary_diagnosis_3,
    secondary_diagnosis_4, secondary_diagnosis_5,
    secondary_diagnosis_6, secondary_diagnosis_7,
    secondary_diagnosis_8, secondary_diagnosis_9,
    secondary_diagnosis_10, secondary_diagnosis_11,
    secondary_diagnosis_12, secondary_diagnosis_13,
    secondary_diagnosis_14)) as hd_diagnosis_hierarchy
    from analytic_hes.temp_ip b
    Any help would be greatly appreciated

    Hello
    This is a bit of a wild card I think, because it's going to require 15 full scans of the temp_ip table to unpivot the diagnosis and procedure codes, so it's likely this will run slower than the original. However, as this is a temporary table, I'm guessing you might have some control over its structure, or at least have the ability to sack it and try something else. If you are able to alter the table structure, you could make the query much simpler and most likely much quicker. I think you need a list of procedure codes for the fiscal year and a list of diagnosis codes for the fiscal year. I'm building those through the big list of UNION ALL branches below, but you may have a more efficient way to do it based on the core tables you're populating temp_ip from. Anyway, here it is (as far as I can tell this will do the same job):
    WITH codes AS
    (   SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.primary_procedure    procedure_code,
            bd.primary_diagnosis    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_1    procedure_code,
            bd.secondary_diagnosis_1    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_2    procedure_code,
            bd.secondary_diagnosis_2    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_3    procedure_code,
            bd.secondary_diagnosis_3    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_4    procedure_code,
            bd.secondary_diagnosis_4    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_5    procedure_code,
            bd.secondary_diagnosis_5    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_6    procedure_code,
            bd.secondary_diagnosis_6    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_7    procedure_code,
            bd.secondary_diagnosis_7    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_8    procedure_code,
            bd.secondary_diagnosis_8    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_9    procedure_code,
            bd.secondary_diagnosis_9    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_10    procedure_code,
            bd.secondary_diagnosis_10    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_11    procedure_code,
            bd.secondary_diagnosis_11    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            bd.secondary_procedure_12    procedure_code,
            bd.secondary_diagnosis_12    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            NULL                         procedure_code,
            bd.secondary_diagnosis_13    diagnosis_code
        FROM
            temp_ip bd
        UNION ALL
        SELECT
            bd.primary_key_column_s,
            bd.hd_spell_fiscal_year,
            NULL                         procedure_code,
            bd.secondary_diagnosis_14    diagnosis_code
        FROM
            temp_ip bd
    ), hd_procedure_hierarchy AS
    (   SELECT
            NVL (MAX (a.hierarchy), 0) hd_procedure_hierarchy,
            a.fiscal_year
        FROM
            ref_hd.nhs_opcs_hier a,
            codes pc
        WHERE
            a.fiscal_year = pc.hd_spell_fiscal_year
        AND
            a.code = pc.procedure_code
        GROUP BY
            a.fiscal_year
    ), hd_diagnosis_hierarchy AS
    (   SELECT
            NVL (MAX (a.hierarchy), 0) hd_diagnosis_hierarchy,
            a.fiscal_year
        FROM
            ref_hd.nhs_icd10_hier a,
            codes pc
        WHERE
            a.fiscal_year = pc.hd_spell_fiscal_year
        AND
            a.code = pc.diagnosis_code
        GROUP BY
            a.fiscal_year
    )
    SELECT b.*, a.hd_procedure_hierarchy, c.hd_diagnosis_hierarchy
      FROM analytic_hes.temp_ip b
           LEFT OUTER JOIN hd_procedure_hierarchy a
              ON (a.fiscal_year = b.hd_spell_fiscal_year)
           LEFT OUTER JOIN hd_diagnosis_hierarchy c
              ON (c.fiscal_year = b.hd_spell_fiscal_year)
    HTH
    David

  • Tablespace issue when running CTAS

    I get the following error when I run the CTAS statement below:
    ORA-01652: unable to extend temp segment by 8192 in tablespace ROTL_DATA
    create table ROTL.test3 tablespace rotl_data nologging as select * from ROTL.prodrecs3;
    I seem to get that even after creating it on 3 different tablespaces. I used a query to get the following information:
    TABLESPACE_NAME  CUR_USE_MB  CUR_SZ_MB  CUR_PRCT_FULL  FREE_SPACE_MB  MAX_SZ_MB  OVERALL_PRCT_FULL
    ROTL_DATA               198     16,384              1         16,186     16,384                  1
    What could be causing my problem and how can I fix it?
    Query used for extracting tablespace data:
    select tablespace_name,
    round(sum(total_mb)-sum(free_mb),2) cur_use_mb,
    round(sum(total_mb),2) cur_sz_mb,
    round((sum(total_mb)-sum(free_mb))/sum(total_mb)*100) cur_prct_full,
    round(sum(max_mb) - (sum(total_mb)-sum(free_mb)),2) free_space_mb,
    round(sum(max_mb),2) max_sz_mb,
    round((sum(total_mb)-sum(free_mb))/sum(max_mb)*100) overall_prct_full
    from
    (select tablespace_name,sum(bytes)/1024/1024 free_mb,0 total_mb,
    0 max_mb from DBA_FREE_SPACE where tablespace_name='ROTL_DATA' group by tablespace_name
    union select tablespace_name,0 current_mb,sum(bytes)/1024/1024 total_mb,
    sum(decode(maxbytes, 0, bytes, maxbytes))/1024/1024 max_mb
    from DBA_DATA_FILES where tablespace_name='ROTL_DATA' group by tablespace_name)
    a group by tablespace_name;

    What could be causing my problem and how can I fix it?
    01652, 00000, "unable to extend temp segment by %s in tablespace %s"
    // *Cause:  Failed to allocate an extent of the required number of blocks for
    //          a temporary segment in the tablespace indicated.
    // *Action: Use the ALTER TABLESPACE ADD DATAFILE statement to add one or more
    //          files to the tablespace indicated.
    Note that during a CTAS the new table is first built as a temporary segment in the target tablespace, which is why the error names ROTL_DATA rather than your TEMP tablespace: the statement needs an extent of 8192 blocks there and cannot find enough contiguous free space.
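    A sketch of the suggested action (the file name, path and sizes here are hypothetical):
    alter tablespace rotl_data
      add datafile 'E:\oradata\rotl\rotl_data02.dbf' size 1g
      autoextend on next 256m maxsize 16g;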

  • CTAS with objects

    I want to resequence my table, but one of the fields is a VARRAY type, which I can't GROUP BY in my CTAS statement - it's something like this:
    create new_table as (
    select a, b, varray_attribute
    from old_table
    group by b, a, varray_attribute)
    When I try it I get an 'inconsistent datatypes' error.
    Is there some way around this?

  • I/O Write Performance

    Hello,
    we are currently experiencing heavy I/O problems while performing proof-of-concept
    testing for one of our customers. Our setup is as follows:
    HP ProLiant DL380 with 24GB RAM and 8 15k 72GB SAS drives.
    An HP P400 RAID controller with 256MB cache in RAID0 mode was used.
    Win 2k8 R2 was installed on C: (a single physical drive) and the database on E:
    (= two physical drives in RAID0, 128k stripe size).
    With the remaining 5 drives, read and write tests were performed using RAID0 with a varying number of drives.
    I/O performance, as measured with ATTO Disk Benchmark, increased linearly with the number of drives used, as expected.
    We expected to see this increased performance in the database too, and performed the following tests:
    - a full table scan (FTS) of 3 different tables (hint: /*+ FULL (s) NOCACHE (s) */)
    - a CTAS statement.
    The system was used exclusively for testing.
    The tables used:
    Table 1: 312 col, 12,248 MB, 11,138,561 rows, avg len 621 bytes
    Table 2: 159 col, 4288 MB, 5,441,171 rows, avg len 529 bytes
    Table 3: 118 col, 360MB, 820,259 rows, avg len 266 bytes
    The FTS improved as expected: with 5 physical drives in a RAID0, a throughput of
    420MB/s was achieved.
    In the write test, on the other hand, we were not able to achieve any improvement.
    The CTAS statement always runs at about 5000 - 6000 blocks/s (80MB/s).
    But when we tried running several CTAS statements in different sessions, the overall speed increased as expected.
    Further tests showed that the write speed also seems to depend on the number of columns. 80MB/s was only
    possible with Tables 2 and 3; with Table 1, only 30MB/s was measured.
    Is this maybe just an incorrectly set parameter?
    What we already tried:
    - changing db_writer_processes to 4 and then to 8
    - manual configuration of the PGA and SGA sizes
    - setting DB_BLOCK_SIZE to 16k
    - setting FILESYSTEMIO_OPTIONS to setall
    - checking that Resource Manager is really disabled
    Thanks for any help.
    V$PARAMETERS
    1     lock_name_space     
    2     processes     150
    3     sessions     248
    4     timed_statistics     TRUE
    5     timed_os_statistics     0
    6     resource_limit     FALSE
    7     license_max_sessions     0
    8     license_sessions_warning     0
    9     cpu_count     8
    10     instance_groups     
    11     event     
    12     sga_max_size     14495514624
    13     use_large_pages     TRUE
    14     pre_page_sga     FALSE
    15     shared_memory_address     0
    16     hi_shared_memory_address     0
    17     use_indirect_data_buffers     FALSE
    18     lock_sga     FALSE
    19     processor_group_name     
    20     shared_pool_size     0
    21     large_pool_size     0
    22     java_pool_size     0
    23     streams_pool_size     0
    24     shared_pool_reserved_size     93952409
    25     java_soft_sessionspace_limit     0
    26     java_max_sessionspace_size     0
    27     spfile     C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILEORATEST.ORA
    28     instance_type     RDBMS
    29     nls_language     AMERICAN
    30     nls_territory     AMERICA
    31     nls_sort     
    32     nls_date_language     
    33     nls_date_format     
    34     nls_currency     
    35     nls_numeric_characters     
    36     nls_iso_currency     
    37     nls_calendar     
    38     nls_time_format     
    39     nls_timestamp_format     
    40     nls_time_tz_format     
    41     nls_timestamp_tz_format     
    42     nls_dual_currency     
    43     nls_comp     BINARY
    44     nls_length_semantics     BYTE
    45     nls_nchar_conv_excp     FALSE
    46     fileio_network_adapters     
    47     filesystemio_options     
    48     clonedb     FALSE
    49     disk_asynch_io     TRUE
    50     tape_asynch_io     TRUE
    51     dbwr_io_slaves     0
    52     backup_tape_io_slaves     FALSE
    53     resource_manager_cpu_allocation     8
    54     resource_manager_plan     
    55     cluster_interconnects     
    56     file_mapping     FALSE
    57     gcs_server_processes     0
    58     active_instance_count     
    59     sga_target     14495514624
    60     memory_target     0
    61     memory_max_target     0
    62     control_files     E:\ORACLE\ORADATA\ORATEST\CONTROL01.CTL, C:\ORACLE\FAST_RECOVERY_AREA\ORATEST\CONTROL02.CTL
    63     db_file_name_convert     
    64     log_file_name_convert     
    65     control_file_record_keep_time     7
    66     db_block_buffers     0
    67     db_block_checksum     TYPICAL
    68     db_ultra_safe     OFF
    69     db_block_size     8192
    70     db_cache_size     0
    71     db_2k_cache_size     0
    72     db_4k_cache_size     0
    73     db_8k_cache_size     0
    74     db_16k_cache_size     0
    75     db_32k_cache_size     0
    76     db_keep_cache_size     0
    77     db_recycle_cache_size     0
    78     db_writer_processes     1
    79     buffer_pool_keep     
    80     buffer_pool_recycle     
    81     db_flash_cache_file     
    82     db_flash_cache_size     0
    83     db_cache_advice     ON
    84     compatible     11.2.0.0.0
    85     log_archive_dest_1     
    86     log_archive_dest_2     
    87     log_archive_dest_3     
    88     log_archive_dest_4     
    89     log_archive_dest_5     
    90     log_archive_dest_6     
    91     log_archive_dest_7     
    92     log_archive_dest_8     
    93     log_archive_dest_9     
    94     log_archive_dest_10     
    95     log_archive_dest_11     
    96     log_archive_dest_12     
    97     log_archive_dest_13     
    98     log_archive_dest_14     
    99     log_archive_dest_15     
    100     log_archive_dest_16     
    101     log_archive_dest_17     
    102     log_archive_dest_18     
    103     log_archive_dest_19     
    104     log_archive_dest_20     
    105     log_archive_dest_21     
    106     log_archive_dest_22     
    107     log_archive_dest_23     
    108     log_archive_dest_24     
    109     log_archive_dest_25     
    110     log_archive_dest_26     
    111     log_archive_dest_27     
    112     log_archive_dest_28     
    113     log_archive_dest_29     
    114     log_archive_dest_30     
    115     log_archive_dest_31     
    116     log_archive_dest_state_1     enable
    117     log_archive_dest_state_2     enable
    118     log_archive_dest_state_3     enable
    119     log_archive_dest_state_4     enable
    120     log_archive_dest_state_5     enable
    121     log_archive_dest_state_6     enable
    122     log_archive_dest_state_7     enable
    123     log_archive_dest_state_8     enable
    124     log_archive_dest_state_9     enable
    125     log_archive_dest_state_10     enable
    126     log_archive_dest_state_11     enable
    127     log_archive_dest_state_12     enable
    128     log_archive_dest_state_13     enable
    129     log_archive_dest_state_14     enable
    130     log_archive_dest_state_15     enable
    131     log_archive_dest_state_16     enable
    132     log_archive_dest_state_17     enable
    133     log_archive_dest_state_18     enable
    134     log_archive_dest_state_19     enable
    135     log_archive_dest_state_20     enable
    136     log_archive_dest_state_21     enable
    137     log_archive_dest_state_22     enable
    138     log_archive_dest_state_23     enable
    139     log_archive_dest_state_24     enable
    140     log_archive_dest_state_25     enable
    141     log_archive_dest_state_26     enable
    142     log_archive_dest_state_27     enable
    143     log_archive_dest_state_28     enable
    144     log_archive_dest_state_29     enable
    145     log_archive_dest_state_30     enable
    146     log_archive_dest_state_31     enable
    147     log_archive_start     FALSE
    148     log_archive_dest     
    149     log_archive_duplex_dest     
    150     log_archive_min_succeed_dest     1
    151     standby_archive_dest     %ORACLE_HOME%\RDBMS
    152     fal_client     
    153     fal_server     
    154     log_archive_trace     0
    155     log_archive_config     
    156     log_archive_local_first     TRUE
    157     log_archive_format     ARC%S_%R.%T
    158     redo_transport_user     
    159     log_archive_max_processes     4
    160     log_buffer     32546816
    161     log_checkpoint_interval     0
    162     log_checkpoint_timeout     1800
    163     archive_lag_target     0
    164     db_files     200
    165     db_file_multiblock_read_count     128
    166     read_only_open_delayed     FALSE
    167     cluster_database     FALSE
    168     parallel_server     FALSE
    169     parallel_server_instances     1
    170     cluster_database_instances     1
    171     db_create_file_dest     
    172     db_create_online_log_dest_1     
    173     db_create_online_log_dest_2     
    174     db_create_online_log_dest_3     
    175     db_create_online_log_dest_4     
    176     db_create_online_log_dest_5     
    177     db_recovery_file_dest     c:\oracle\fast_recovery_area
    178     db_recovery_file_dest_size     4322230272
    179     standby_file_management     MANUAL
    180     db_unrecoverable_scn_tracking     TRUE
    181     thread     0
    182     fast_start_io_target     0
    183     fast_start_mttr_target     0
    184     log_checkpoints_to_alert     FALSE
    185     db_lost_write_protect     NONE
    186     recovery_parallelism     0
    187     db_flashback_retention_target     1440
    188     dml_locks     1088
    189     replication_dependency_tracking     TRUE
    190     transactions     272
    191     transactions_per_rollback_segment     5
    192     rollback_segments     
    193     undo_management     AUTO
    194     undo_tablespace     UNDOTBS1
    195     undo_retention     900
    196     fast_start_parallel_rollback     LOW
    197     resumable_timeout     0
    198     instance_number     0
    199     db_block_checking     FALSE
    200     recyclebin     on
    201     db_securefile     PERMITTED
    202     create_stored_outlines     
    203     serial_reuse     disable
    204     ldap_directory_access     NONE
    205     ldap_directory_sysauth     no
    206     os_roles     FALSE
    207     rdbms_server_dn     
    208     max_enabled_roles     150
    209     remote_os_authent     FALSE
    210     remote_os_roles     FALSE
    211     sec_case_sensitive_logon     TRUE
    212     O7_DICTIONARY_ACCESSIBILITY     FALSE
    213     remote_login_passwordfile     EXCLUSIVE
    214     license_max_users     0
    215     audit_sys_operations     FALSE
    216     global_context_pool_size     
    217     db_domain     
    218     global_names     FALSE
    219     distributed_lock_timeout     60
    220     commit_point_strength     1
    221     global_txn_processes     1
    222     instance_name     oratest
    223     service_names     ORATEST
    224     dispatchers     (PROTOCOL=TCP) (SERVICE=ORATESTXDB)
    225     shared_servers     1
    226     max_shared_servers     
    227     max_dispatchers     
    228     circuits     
    229     shared_server_sessions     
    230     local_listener     
    231     remote_listener     
    232     listener_networks     
    233     cursor_space_for_time     FALSE
    234     session_cached_cursors     50
    235     remote_dependencies_mode     TIMESTAMP
    236     utl_file_dir     
    237     smtp_out_server     
    238     plsql_v2_compatibility     FALSE
    239     plsql_warnings     DISABLE:ALL
    240     plsql_code_type     INTERPRETED
    241     plsql_debug     FALSE
    242     plsql_optimize_level     2
    243     plsql_ccflags     
    244     plscope_settings     identifiers:none
    245     permit_92_wrap_format     TRUE
    246     java_jit_enabled     TRUE
    247     job_queue_processes     1000
    248     parallel_min_percent     0
    249     create_bitmap_area_size     8388608
    250     bitmap_merge_area_size     1048576
    251     cursor_sharing     EXACT
    252     result_cache_mode     MANUAL
    253     parallel_min_servers     0
    254     parallel_max_servers     135
    255     parallel_instance_group     
    256     parallel_execution_message_size     16384
    257     hash_area_size     131072
    258     result_cache_max_size     72482816
    259     result_cache_max_result     5
    260     result_cache_remote_expiration     0
    261     audit_file_dest     C:\ORACLE\ADMIN\ORATEST\ADUMP
    262     shadow_core_dump     none
    263     background_core_dump     partial
    264     background_dump_dest     c:\oracle\diag\rdbms\oratest\oratest\trace
    265     user_dump_dest     c:\oracle\diag\rdbms\oratest\oratest\trace
    266     core_dump_dest     c:\oracle\diag\rdbms\oratest\oratest\cdump
    267     object_cache_optimal_size     102400
    268     object_cache_max_size_percent     10
    269     session_max_open_files     10
    270     open_links     4
    271     open_links_per_instance     4
    272     commit_write     
    273     commit_wait     
    274     commit_logging     
    275     optimizer_features_enable     11.2.0.3
    276     fixed_date     
    277     audit_trail     DB
    278     sort_area_size     65536
    279     sort_area_retained_size     0
    280     cell_offload_processing     TRUE
    281     cell_offload_decryption     TRUE
    282     cell_offload_parameters     
    283     cell_offload_compaction     ADAPTIVE
    284     cell_offload_plan_display     AUTO
    285     db_name     ORATEST
    286     db_unique_name     ORATEST
    287     open_cursors     300
    288     ifile     
    289     sql_trace     FALSE
    290     os_authent_prefix     OPS$
    291     optimizer_mode     ALL_ROWS
    292     sql92_security     FALSE
    293     blank_trimming     FALSE
    294     star_transformation_enabled     TRUE
    295     parallel_degree_policy     MANUAL
    296     parallel_adaptive_multi_user     TRUE
    297     parallel_threads_per_cpu     2
    298     parallel_automatic_tuning     FALSE
    299     parallel_io_cap_enabled     FALSE
    300     optimizer_index_cost_adj     100
    301     optimizer_index_caching     0
    302     query_rewrite_enabled     TRUE
    303     query_rewrite_integrity     enforced
    304     pga_aggregate_target     4831838208
    305     workarea_size_policy     AUTO
    306     optimizer_dynamic_sampling     2
    307     statistics_level     TYPICAL
    308     cursor_bind_capture_destination     memory+disk
    309     skip_unusable_indexes     TRUE
    310     optimizer_secure_view_merging     TRUE
    311     ddl_lock_timeout     0
    312     deferred_segment_creation     TRUE
    313     optimizer_use_pending_statistics     FALSE
    314     optimizer_capture_sql_plan_baselines     FALSE
    315     optimizer_use_sql_plan_baselines     TRUE
    316     parallel_min_time_threshold     AUTO
    317     parallel_degree_limit     CPU
    318     parallel_force_local     FALSE
    319     optimizer_use_invisible_indexes     FALSE
    320     dst_upgrade_insert_conv     TRUE
    321     parallel_servers_target     128
    322     sec_protocol_error_trace_action     TRACE
    323     sec_protocol_error_further_action     CONTINUE
    324     sec_max_failed_login_attempts     10
    325     sec_return_server_release_banner     FALSE
    326     enable_ddl_logging     FALSE
    327     client_result_cache_size     0
    328     client_result_cache_lag     3000
    329     aq_tm_processes     1
    330     hs_autoregister     TRUE
    331     xml_db_events     enable
    332     dg_broker_start     FALSE
    333     dg_broker_config_file1     C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR1ORATEST.DAT
    334     dg_broker_config_file2     C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR2ORATEST.DAT
    335     olap_page_pool_size     0
    336     asm_diskstring     
    337     asm_preferred_read_failure_groups     
    338     asm_diskgroups     
    339     asm_power_limit     1
    340     control_management_pack_access     DIAGNOSTIC+TUNING
    341     awr_snapshot_time_offset     0
    342     sqltune_category     DEFAULT
    343     diagnostic_dest     C:\ORACLE
    344     tracefile_identifier     
    345     max_dump_file_size     unlimited
    346     trace_enabled     TRUE

    961262 wrote:
    The tables used:
    Table 1: 312 col, 12,248 MB, 11,138,561 rows, avg len 621 bytes
    Table 2: 159 col, 4288 MB, 5,441,171 rows, avg len 529 bytes
    Table 3: 118 col, 360MB, 820,259 rows, avg len 266 bytes
    The FTS improved as expected: with 5 physical drives in a RAID0, a throughput of
    420MB/s was achieved.
    In the write test, on the other hand, we were not able to achieve any improvement.
    The CTAS statement always runs at about 5000 - 6000 blocks/s (80MB/s).
    But when we tried running several CTAS statements in different sessions, the overall speed increased as expected.
    Further tests showed that the write speed also seems to depend on the number of columns. 80MB/s was only
    possible with Tables 2 and 3; with Table 1, only 30MB/s was measured.
    If multiple CTAS statements can produce higher total write throughput, this tells you that it is the production of the data that is the limit, not the writing. Notice in your example that nearly 75% of the time of the CTAS was CPU, not I/O.
    The thing about the number of columns is that Table 1 has exceeded the critical 254-column limit - this means Oracle has chained every row internally into two pieces; this introduces lots of extra CPU-intensive operations (consistent gets, table access by rowid, heap block compress), so the CPU time could have gone up significantly, resulting in a lower throughput that you are interpreting as a write problem (a quick check is sketched below).
    One other thought - if you are currently doing the CTAS as "create as select from {real SAP table}", there may be other side effects that you're not going to see. I would do "create test clone of real SAP table", then "create as select from clone", to try to eliminate any such anomalies.
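    As a quick way to confirm the intra-row chaining described above (a sketch; the table name stands in for your Table 1 - note that ANALYZE populates CHAIN_CNT, which DBMS_STATS does not):
    analyze table table1 compute statistics;
    select table_name, num_rows, chain_cnt
    from   user_tables
    where  table_name = 'TABLE1';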
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: <b><em>Oracle Core</em></b>

  • SQL Query produces different results when inserting into a table

    I have an SQL query which produces different results when run as a plain query compared to when it is run as an INSERT INTO table SELECT ...
    The query is:
    SELECT   mhldr.account_number
    ,        NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
    ,        COUNT(1) num_apps
    FROM     app_parties ap
    ,        (
    SELECT   accsta.account_number
    ,        actply.party_sysid
    ,        RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
    FROM     activity_players actply
    ,        account_status accsta
    WHERE    1 = 1
    AND      actply.table_id (+) = 'ACCGRP'
    AND      actply.acttyp_code (+) = 'MHLDRM'
    AND      NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
    AND      actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
    ) mhldr
    WHERE    1 = 1
    AND      ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
    GROUP BY mhldr.account_number;
    The INSERT INTO code:
    TRUNCATE TABLE applicant_summary;
    INSERT /*+ APPEND */
    INTO     applicant_summary
    (  account_number
    ,  main_borrower_status
    ,  num_apps
    )
    SELECT   mhldr.account_number
    ,        NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
    ,        COUNT(1) num_apps
    FROM     app_parties ap
    ,        (
    SELECT   accsta.account_number
    ,        actply.party_sysid
    ,        RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
    FROM     activity_players actply
    ,        account_status accsta
    WHERE    1 = 1
    AND      actply.table_id (+) = 'ACCGRP'
    AND      actply.acttyp_code (+) = 'MHLDRM'
    AND      NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
    AND      actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
    ) mhldr
    WHERE    1 = 1
    AND      ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
    GROUP BY mhldr.account_number;
    When run as a query, this code consistently returns 2 for the num_apps field (for a certain group of accounts), but when run as an INSERT INTO command, the num_apps field is logged as 1. I have secured the tables used within the query to ensure that nothing is changing the data in the underlying tables.
    If I run the query as a cursor FOR loop with an INSERT INTO the applicant_summary table inside the loop, I get the same results in the table as when I run it as a standalone query.
    I would appreciate any suggestions for what could be causing this odd behaviour.
    Cheers,
    Steve
    Oracle database details:
    Oracle Database 10g Release 10.2.0.2.0 - Production
    PL/SQL Release 10.2.0.2.0 - Production
    CORE 10.2.0.2.0 Production
    TNS for 32-bit Windows: Version 10.2.0.2.0 - Production
    NLSRTL Version 10.2.0.2.0 - Production
    Edited by: stevensutcliffe on Oct 10, 2008 5:26 AM
    Edited by: stevensutcliffe on Oct 10, 2008 5:27 AM

    stevensutcliffe wrote:
    Yes, using COUNT(*) gives the same result as COUNT(1).
    I have found another example of this kind of behaviour:
    Running the following INSERT statements produces different values for the total_amount_invested and num_records fields. It appears that adding the additional aggregation (MAX(amount_invested)) is causing problems with the other aggregated values.
    Again, I have ensured that the source data and destination tables are not being accessed / changed by any other processes or users. Is this potentially a bug in Oracle?
    Just as a side note, these are not INSERT statements but CTAS statements.
    The only non-bug explanation for this behaviour would be a potential query rewrite happening only under particular circumstances (but not always) in the lower integrity modes "trusted" or "stale_tolerated". So if you're not aware of any corresponding materialized views, your QUERY_REWRITE_INTEGRITY parameter is set to the default of "enforced", and your explain plan doesn't show any "MAT_VIEW REWRITE ACCESS" lines, I would consider this a bug.
    Since you're running on 10.2.0.2 it's not unlikely that you hit one of the various "wrong result" bugs that exist(ed) in Oracle. I'm aware of a particular one I've hit in 10.2.0.2 when performing a parallel NESTED LOOP ANTI operation which returned wrong results, but only in parallel execution. Serial execution was showing the correct results.
    If you're performing parallel ddl/dml/query operations, try to do the same in serial execution to check if it is related to the parallel feature.
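    For example (a sketch of forcing serial execution for the session before re-running the statement):
    alter session disable parallel ddl;
    alter session disable parallel dml;
    alter session disable parallel query;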
    You could also test whether omitting the "APPEND" hint changes anything, but these are still just workarounds for buggy behaviour.
    I suggest to consider installing the latest patch set 10.2.0.4 but this requires thorough testing because there were (more or less) subtle changes/bugs introduced with [10.2.0.3|http://oracle-randolf.blogspot.com/2008/02/nasty-bug-introduced-with-patch-set.html] and [10.2.0.4|http://oracle-randolf.blogspot.com/2008/04/overview-of-new-and-changed-features-in.html].
    You could also open a SR with Oracle and clarify if there is already a one-off patch available for your 10.2.0.2 platform release. If not it's quite unlikely that you are going to get a backport for 10.2.0.2.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Problem creating a table with a subquery and a dblink

    Hello!!
    I have a little problem. When I create a table with a subquery and a dblink, for example:
    CREATE TABLE EXAMPLE2 AS SELECT * FROM EXAMPLE1@DBLINK
    the table definition is changed. Fields with a type of CHAR or VARCHAR2 are created with a size three times bigger than in the original table in the remote database. Fields of type DATE and NUMBER are not changed. For example, if the original table in database 1 has a field of type CHAR(1), it is created in the local database with a type of CHAR(3), and a VARCHAR2(5) field is created as VARCHAR2(15).
    Database 1 has a WE8DEC character set.
    Database 2 has a AL32UTF8 character set.
    Could it be related to the difference in character sets?
    What can I do to make Oracle use the same table definition when creating a table in this way?
    Thanks!!

    That is related to character sets, and probably necessary if you want all the data in the remote table to be able to fit in the new table.
    When you declare a column VARCHAR2(5), by default, you're allocating 5 bytes of storage. In a single-byte character set, which I believe WE8DEC is, that also happens to equate to 5 characters. In a multi-byte character set like AL32UTF8, though, 1 character may require up to 3 bytes of storage, so you'd need to allocate 15 bytes to store that data. That's what's going on here.
    You could still store all the data if you create the table locally by explicitly requesting 5 characters of storage by declaring the column VARCHAR2(5 CHAR). You could also set the NLS_LENGTH_SEMANTICS parameter to CHAR rather than BYTE before creating the table, but I believe that both of these will only apply when you're explicitly defining columns as part of your CREATE TABLE. I don't believe either will apply to CTAS statements.
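    A sketch of that workaround (hypothetical column names; the point is to define the columns yourself with character-length semantics and then copy the data):
    create table example2
    ( c1 char(1 char)
    , c2 varchar2(5 char)
    );
    insert into example2
    select c1, c2 from example1@dblink;
    commit;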
    Justin

  • Migrate Partition Tables

    What is the best way to migrate partitioned tables from one Oracle instance to another? I noticed that when using a CTAS statement or the import/export utility, Oracle migrates the table as a regular table and not as a partitioned table.

    CTAS can be used to create a partitioned table, but you need to specify the partitioning in the CREATE TABLE part of the statement.
    Use DBMS_METADATA.GET_DDL to extract the DDL for the table, like this:
    SELECT dbms_metadata.get_ddl('TABLE', table_name) FROM user_tables;
    Then modify it into your CTAS statement.
    For example:
    create table part_table
      partition by range (timestamp)
      (partition aug15 values less than (to_date('16-aug-2008','DD-mon-YYYY'))
      ,partition aug16 values less than (to_date('17-aug-2008','DD-mon-YYYY'))
      ,partition aug17 values less than (to_date('18-aug-2008','DD-mon-YYYY'))
      ,partition aug18 values less than (to_date('19-aug-2008','DD-mon-YYYY'))
      ,partition aug21 values less than (to_date('22-aug-2008','DD-mon-YYYY'))
      ,partition aug22 values less than (to_date('23-aug-2008','DD-mon-YYYY'))
      ,partition other values less than (MAXVALUE)
      ) tablespace USERS
    as select * from non_part_table@my_dblink;

  • Materialized view refresh is very slow in Oracle 10g

    We are migrating our database from Oracle8i to Oracle10g,
    but in our data warehouse we are finding that refreshing a materialized view is very slow.
    Because of this we changed to the following:
    1. exec DBMS_MVIEW.REFRESH('MVIEW', atomic_refresh => FALSE);
    Our two parameters are the same as in Oracle8i:
    query_rewrite_enabled = TRUE
    query_rewrite_integrity = enforced
    Please suggest how to make it fast.
    Thanks
    Chetan

    ChetanS wrote:
    We are migrating our database from Oracle8i to Oracle10g,
    but in our data warehouse we are finding that refreshing a materialized view is very slow.
    Because of this we changed to the following:
    1. exec DBMS_MVIEW.REFRESH('MVIEW', atomic_refresh => FALSE);
    Our two parameters are the same as in Oracle8i:
    query_rewrite_enabled = TRUE
    query_rewrite_integrity = enforced
    Please suggest how to make it fast.
    Chetan,
    depending on your MV definition it might not be the refresh DML but the query itself that is taking substantially longer than in your 8i environment. Maybe you want to compare the performance of your query by simply running a CTAS statement using the MV query (a sketch follows below), to find out whether it's a DDL/DML/fast-refresh issue or a query-related performance issue.
    Do you use the FAST refresh method, i.e. do you have MV logs on the base tables or is this a complete refresh?
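    A sketch of that comparison (the defining query below is just a stand-in for your MV's actual query):
    create table mv_query_check nologging as
    select deptno, sum(sal) total_sal
    from   emp
    group  by deptno;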
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Loading SQL Tuning Sets

    On a 10.2.0.4 database, is it possible to load SQL Tuning Sets with SQL from a text file? I can't find anything in the documentation mentioning this, but I could have simply missed it, and this facility would be very useful to us.
    Our reporting tool is MicroStrategy, which operates by building lots of temporary tables. Extracting the SQL via the Top Activity EM page puts the CTAS statement into the tuning set, which (I believe) is ignored by SQL Access Advisor. Extracting the SQL to a text file, editing it to remove the "create table ... as", and executing it in order to load the tuning set is very long-winded for a set of queries; being able to load the tuning set directly with a set of SQL SELECT statements from a text file would help significantly.
    Many thanks for any advice
    Pete

    I did not see a direct way to create the STS from a text file. From the docs:
    "The standard sources for populating an STS are the workload repository, another STS, or the cursor cache"
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sql_tune.htm#i34915
    But you could try a couple of things:
    1) create one tuning task per statement, using DBMS_SQLTUNE.CREATE_TUNING_TASK, which can use plain text as input: http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sql_tune.htm#CHDJDHGE
    2) create the STS with queries from cursor cache: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_sqltun.htm#i1010615
    Both approaches need some PL/SQL programming, but it can be useful if you can identify your SQL statements in some way.
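    A minimal sketch of option 1 (the statement text and task name are placeholders):
    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                  sql_text  => 'SELECT /* from_text_file */ * FROM dual',
                  task_name => 'text_file_stmt_1',
                  scope     => 'COMPREHENSIVE');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
    END;
    /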
    Regards.

  • Primary Key in Create as Select

    In Oracle, how do we specify a primary key in a create table as select statement? I ran a simple statement but received an ORA-00933 error. Do I have to make it two steps, with the table created first and then an alter table to add the key?
    Thanks,
    CREATE TABLE Test as
    SELECT *
    FROM Source a
    WHERE a.sys=100
    CONSTRAINT PK1 PRIMARY KEY (AccountID)

    Do I have to make it two steps, with the table created first and then an alter table to add the key?
    Only if you wish to use the * notation. If you're prepared to specify the columns you can do it in a single step.
    SQL> create table my_emp
      2      (empno constraint myemp_pk primary key
      3      , ename
      4      , sal)
      5  as select empno
      6      , ename
      7      , sal
      8  from
      9  emp
    10  /
    Table created.
    SQL> select * from my_emp
      2  /
         EMPNO ENAME             SAL
          7369 CLARKE            800
          7499 VAN WIJK         1600
          7521 PADFIELD         1250
          7566 ROBERTSON        2975
          7654 BILLINGTON       1250
          7698 SPENCER          2850
          7782 BOEHMER          2450
          7788 RIGBY            3000
          7839 SCHNEIDER        5000
          7844 CAVE             1500
          7876 KULASH           1100
          7900 HALL              950
          7902 GASPAROTTO       3000
          7934 KISHORE          1300
    14 rows selected.
    SQL>
    Cheers, APC
    blog: http://radiofreetooting.blogspot.com
    Edited by: APC on Apr 9, 2009 1:55 PM
    Forgot to include the CTAS statement. Doh!

  • Got ORA-1555, but session is running

    I ran a CTAS statement on my database. After 1 hour, I received an ORA-01555 error in my alert log. But if I check the database, I can see that the session is still active and that the size of the temp segment is growing. How is that possible? What am I missing here?
    Please advise.
    Update: Any advice out there?

    Are you sure that the session that got the ORA-01555 error was the session running the CTAS statement? Might you have caused a different session to encounter the error?
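    For example, you could check which session actually owns the growing temp segment (a sketch):
    select s.sid, s.serial#, s.sql_id, u.segtype, u.blocks
    from   v$session s
    join   v$sort_usage u on u.session_addr = s.saddr;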
    Justin

  • Speed up CREATE TABLE AS

    Are there tricks to speeding up the inserts performed by CREATE TABLE AS?
    We are having performance problems with these. The embedded query runs quickly when wrapped with a SELECT COUNT(*) FROM, but the insert takes a long time (minutes for the former, hours for the latter, for 16k rows).
    I see that SQL*Loader has many options to speed up bulk loads. Can any of these be applied to CREATE TABLE AS?
    The table we are creating does have one CLOB column, if that is relevant.
    Thanks,
    Steve

    steve-pcbi wrote:
    sorry that i am kind of ignorant. how do i trace a session?
    There is a good walk-through of the options in the [Oracle FAQ on tracing|http://www.orafaq.com/wiki/SQL_Trace]
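    For example, one common approach (a sketch; the FAQ covers the alternatives):
    -- trace your own session, including wait events and bind values
    exec DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
    -- ... run the CTAS here ...
    exec DBMS_SESSION.SESSION_TRACE_DISABLE;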
    yes, i believe the CTAS plan is the same as the SELECT plan (will check tomorrow). If it is not, is there a general way to force it to be so?
    The two plans should generally be the same, but if you could verify that, it would be quite useful. Without knowing what the plans are and figuring out why the optimizer was generating different plans, it's a bit hard to provide a general way to force one query to have the same plan as another, short of telling you to fully hint the CTAS statement, which would generally be a major pain.
    Justin

  • SLES9 SP2: SCSI error, internal target failure, buffer I/O error

    Apr 3 10:40:41 hostname kernel: SCSI error : <2 0 0 0> return code = 0x10000
    Apr 3 10:40:41 hostname kernel: end_request: I/O error, dev sdb, sector 24047
    Apr 3 10:40:41 hostname kernel: Buffer I/O error on device sdb1, logical block 2998
    Apr 3 10:40:41 hostname kernel: lost page write due to I/O error on sdb1
    Can anyone tell me what happened? I checked the disk and
    it had no medium errors. After remounting it runs fine, but this will happen again.

  • How to migrate data from MS Access to Oracle?

    How do I migrate data from MS Access to Oracle?

    You can use a heterogeneous services (HS) connection to MS Access from Oracle and still use plain SQL or PL/SQL (or even a single CTAS statement) to load directly into Oracle tables (a sketch follows below). It's transparent, quick and clean, as opposed to writing complex control files.
    You might have to set the ODBC connection to your Access database and add the details to Oracle listener.
    Please let me know if you are not sure how to set up the connection.
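    Once the link is in place, the load itself can be a one-liner (a sketch; the link and table names are made up):
    create table customers as
    select * from "Customers"@access_link;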
