Very high data block waits

I have one table xxxx in tablespace tbx, and tbx has only one datafile. The table xxxx is 890MB with 14 million records, and the datafile is 2048MB. The table is frequently accessed with inserts, deletes, and selects, and the system spends a lot of time waiting on this datafile. If I create a new tablespace abc with 20 datafiles of about 100MB each, would it help reduce the data block wait count? PCTFREE/PCTUSED are 10/40 respectively.
Can anyone please advise on how to resolve this?

I am looking at Oracle statistics. We use SAN technology with RAID 0+1, striped across all disks.
First I use this query to get the wait statistics:
select time, count, class
from v$waitstat
order by time, count;
From the query above I got the result below; the database has only been up since 02/17/2004, about 4 days ago:
TIME COUNT CLASS
0 0 sort block
0 0 save undo block
0 0 save undo header
0 0 free list
0 0 bitmap block
0 0 unused
0 0 system undo block
0 0 system undo header
0 0 bitmap index block
10 10 extent map
48 656 undo header
271 853 undo block
301 730 segment header
780382 1214405 data block
Then I use this query to find which datafile is being hit the most:
select count, file#, name
from x$kcbfwait, v$datafile
where indx + 1 = file#
order by count desc;
The query above returned:
COUNT     FILE#     NAME
473324     121     /xx/xx_ycm_tbs_03_01.dbf
104179     120     /xx/xx_ycm_tbs_02_01.dbf
93336     118     /xx/xx_idx_tbs_03_01.dbf
93138     119     /xx/xx_idx_tbs_03_02.dbf
80289     90     /xx/xx_datafile67.dbf
64044     108     /xx/xx_ycm_tbs_01_01.dbf
61485     41     /xx/xx_datafile25.dbf
61103     21     /xx/xx_datafile8.dbf
57329     114     /xx/xx_ycm_tbs_01_02.dbf
29338     5     /xx/xx_datafile02.dbf
29101     123     /xx/xx_idx_tbs_04_01.dbf
File# 121 is the only datafile in its tablespace, and that tablespace holds only one table. File# 120 is the same situation: it is the only datafile in another tablespace, which also holds only one table.
At the same time, using top on Solaris, I see iowait ranging between 5% and 25% during busy hours.
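
One way to narrow down which segment is behind the data block waits is to look at segment-level statistics. A diagnostic sketch, assuming Oracle 9i or later where v$segment_statistics exists (the freelists value at the end is illustrative only, as a common remedy for insert/delete contention on freelist-managed tables):

-- which segments accumulate the most buffer busy waits
select owner, object_name, object_type, value
from v$segment_statistics
where statistic_name = 'buffer busy waits'
order by value desc;

-- if the hot table is contended on concurrent inserts/deletes,
-- extra free lists (or moving to an ASSM tablespace) usually helps
-- more than adding datafiles; the value 4 is illustrative
alter table xxxx storage (freelists 4);

Splitting the tablespace into 20 datafiles helps only if the file itself is the I/O bottleneck; contention on blocks inside one hot segment is better addressed at the segment level.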

Similar Messages

  • I am receiving bills from my carrier with very high data usage. I read books from apple store. Do ibooks use gb once purchased?


To reduce data usage, you should put the iPad in Aeroplane Mode to stop all background activity when you are not using it.

  • Very high data traffic generated by Nokia Messagin...

    Is anyone else seeing really high data use on their phone log following last week's Messaging outage?
    Up until last week's outage, the data traffic generated by Messaging was about 5kb an hour (with no new emails sent or received). Presumably this small amount of data traffic results from the Nokia email server "pinging" the phone periodically to keep the connection open.
But after last week's outage and the issues with passwords being requested etc etc, I saw that the data traffic generated by the Messaging app has gone through the roof - about 400-500kb an hour!  Again, this is with no new emails sent or received - so theoretically it is just the server pinging the phone to keep the idle connection open!
    I reinstalled Nokia Messaging on my phone (E61i) - no difference - the data generated by the Messaging app is still 400 - 500 kb an hour (it stops when I take Messaging offline).  
I have friends roaming in Ireland and their phones have had the same problem - the data traffic generated by their Messaging app is sky high, and they have had to shut it down because of the high cost of data whilst roaming.
    I also newly installed Messaging on another phone (N95) that I have, using a brand new email account and a new T-mobile SIM card, and guess what - that phone also has sky high data traffic due to Messaging.
What is going on??  Can someone at Nokia Messaging please sort this out?  Anyone roaming with Nokia Messaging enabled is facing huge data charges from their network.

    The data traffic occurs in phones which are on a home network (mine in the UK) and on a roaming network (my friends traveling in Ireland).  Interestingly, I've now checked with another friend who is roaming in Italy and they also use Messaging but haven't seen any high packet data issues with their phone.
    The data traffic shows up in the Packet Log, as well as on the actual T-Mobile network account (plus the charges for the data!)  Accessing a T-Mobile UK account online shows all packet data use - in this case, 120-150 kb every 15 minutes, whereas before last week it was 1-2kb every 15 minutes.  
    When I "Go Offline" with Nokia Messaging, the high data traffic stops, both on the Packet Log and network account, so I am 100% certain this is due to Messaging.

  • Why does Lightroom 4.3 use very high cpu when waiting for user to click OK?

    I saved metadata for 52 photos. 
    Lightroom 4.3 couldn't save metadata for 2 of the photos and opened a window to tell me so. (It had no problem saving metadata on second attempt.)
    While it waited for my response and wasn't doing any work, it was using over 50% of my CPU.
    Why?
    Windows 7 is 32-bit Professional.
    Processor: Intel Core i5 750 Processor 2.66 GHz
    Here's my system info as reported by Lightroom. 
    Lightroom version: 4.3 [865747]
    Operating system: Windows 7 Business Edition
    Version: 6.1 [7601]
    Application architecture: x86
    System architecture: x86
    Logical processor count: 4
    Processor speed: 2.7 GHz
    Built-in memory: 3579.4 MB
    Real memory available to Lightroom: 716.8 MB
    Real memory used by Lightroom: 710.9 MB (99.1%)
    Virtual memory used by Lightroom: 847.9 MB
    Memory cache size: 2.0 MB
    Maximum thread count used by Camera Raw: 4
    System DPI setting: 96 DPI
    Desktop composition enabled: Yes
    Displays: 1) 1920x1080
    Application folder: C:\Program Files\Adobe\Adobe Photoshop Lightroom 4.3
    Library Path: D:\Users\Calvin\Pictures\My Lightroom\lightroom\lightroom4\lightroom4.lrcat
    Settings Folder: C:\Users\Calvin\AppData\Roaming\Adobe\Lightroom
    Adapter #1: Vendor : 10de
        Device : 640
        Subsystem : c9593842
        Revision : a1
        Video Memory : 1007
    AudioDeviceIOBlockSize: 1024
    AudioDeviceName: Speakers (Realtek High Definition Audio)
    AudioDeviceNumberOfChannels: 2
    AudioDeviceSampleRate: 44100
    Build: Uninitialized
    Direct2DEnabled: false
    GL_ALPHA_BITS: 0
    GL_BLUE_BITS: 8
    GL_GREEN_BITS: 8
    GL_MAX_3D_TEXTURE_SIZE: 2048
    GL_MAX_TEXTURE_SIZE: 8192
    GL_MAX_TEXTURE_UNITS: 4
    GL_MAX_VIEWPORT_DIMS: 8192,8192
    GL_RED_BITS: 8
    GL_RENDERER: GeForce 9500 GT/PCIe/SSE2
    GL_SHADING_LANGUAGE_VERSION: 3.30 NVIDIA via Cg compiler
    GL_VENDOR: NVIDIA Corporation
    GL_VERSION: 3.3.0
    OGLEnabled: true
    OGLPresent: true
    GL_EXTENSIONS: GL_ARB_arrays_of_arrays GL_ARB_base_instance GL_ARB_blend_func_extended GL_ARB_clear_buffer_object GL_ARB_color_buffer_float GL_ARB_compatibility GL_ARB_compressed_texture_pixel_storage GL_ARB_conservative_depth GL_ARB_copy_buffer GL_ARB_copy_image GL_ARB_debug_output GL_ARB_depth_buffer_float GL_ARB_depth_clamp GL_ARB_depth_texture GL_ARB_draw_buffers GL_ARB_draw_elements_base_vertex GL_ARB_draw_instanced GL_ARB_ES2_compatibility GL_ARB_ES3_compatibility GL_ARB_explicit_attrib_location GL_ARB_explicit_uniform_location GL_ARB_fragment_coord_conventions GL_ARB_fragment_layer_viewport GL_ARB_fragment_program GL_ARB_fragment_program_shadow GL_ARB_fragment_shader GL_ARB_framebuffer_no_attachments GL_ARB_framebuffer_object GL_ARB_framebuffer_sRGB GL_ARB_geometry_shader4 GL_ARB_get_program_binary GL_ARB_half_float_pixel GL_ARB_half_float_vertex GL_ARB_imaging GL_ARB_instanced_arrays GL_ARB_internalformat_query GL_ARB_internalformat_query2 GL_ARB_invalidate_subdata GL_ARB_map_buffer_alignment GL_ARB_map_buffer_range GL_ARB_multisample GL_ARB_multitexture GL_ARB_occlusion_query GL_ARB_occlusion_query2 GL_ARB_pixel_buffer_object GL_ARB_point_parameters GL_ARB_point_sprite GL_ARB_program_interface_query GL_ARB_provoking_vertex GL_ARB_robust_buffer_access_behavior GL_ARB_robustness GL_ARB_sampler_objects GL_ARB_seamless_cube_map GL_ARB_separate_shader_objects GL_ARB_shader_bit_encoding GL_ARB_shader_objects GL_ARB_shader_texture_lod GL_ARB_shading_language_100 GL_ARB_shading_language_420pack GL_ARB_shading_language_include GL_ARB_shading_language_packing GL_ARB_shadow GL_ARB_stencil_texturing GL_ARB_sync GL_ARB_texture_border_clamp GL_ARB_texture_buffer_object GL_ARB_texture_buffer_range GL_ARB_texture_compression GL_ARB_texture_compression_rgtc GL_ARB_texture_cube_map GL_ARB_texture_env_add GL_ARB_texture_env_combine GL_ARB_texture_env_crossbar GL_ARB_texture_env_dot3 GL_ARB_texture_float GL_ARB_texture_mirrored_repeat GL_ARB_texture_multisample GL_ARB_texture_non_power_of_two GL_ARB_texture_query_levels GL_ARB_texture_rectangle GL_ARB_texture_rg GL_ARB_texture_rgb10_a2ui GL_ARB_texture_storage GL_ARB_texture_storage_multisample GL_ARB_texture_swizzle GL_ARB_texture_view GL_ARB_timer_query GL_ARB_transpose_matrix GL_ARB_uniform_buffer_object GL_ARB_vertex_array_bgra GL_ARB_vertex_array_object GL_ARB_vertex_attrib_binding GL_ARB_vertex_buffer_object GL_ARB_vertex_program GL_ARB_vertex_shader GL_ARB_vertex_type_2_10_10_10_rev GL_ARB_viewport_array GL_ARB_window_pos GL_ATI_draw_buffers GL_ATI_texture_float GL_ATI_texture_mirror_once GL_S3_s3tc GL_EXT_texture_env_add GL_EXT_abgr GL_EXT_bgra GL_EXT_bindable_uniform GL_EXT_blend_color GL_EXT_blend_equation_separate GL_EXT_blend_func_separate GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_compiled_vertex_array GL_EXT_Cg_shader GL_EXT_depth_bounds_test GL_EXT_direct_state_access GL_EXT_draw_buffers2 GL_EXT_draw_instanced GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_framebuffer_blit GL_EXT_framebuffer_multisample GL_EXTX_framebuffer_mixed_formats GL_EXT_framebuffer_multisample_blit_scaled GL_EXT_framebuffer_object GL_EXT_framebuffer_sRGB GL_EXT_geometry_shader4 GL_EXT_gpu_program_parameters GL_EXT_gpu_shader4 GL_EXT_multi_draw_arrays GL_EXT_packed_depth_stencil GL_EXT_packed_float GL_EXT_packed_pixels GL_EXT_pixel_buffer_object GL_EXT_point_parameters GL_EXT_provoking_vertex GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_shader_objects GL_EXT_separate_specular_color GL_EXT_shadow_funcs GL_EXT_stencil_two_side 
GL_EXT_stencil_wrap GL_EXT_texture3D GL_EXT_texture_array GL_EXT_texture_buffer_object GL_EXT_texture_compression_dxt1 GL_EXT_texture_compression_latc GL_EXT_texture_compression_rgtc GL_EXT_texture_compression_s3tc GL_EXT_texture_cube_map GL_EXT_texture_edge_clamp GL_EXT_texture_env_combine GL_EXT_texture_env_dot3 GL_EXT_texture_filter_anisotropic GL_EXT_texture_integer GL_EXT_texture_lod GL_EXT_texture_lod_bias GL_EXT_texture_mirror_clamp GL_EXT_texture_object GL_EXT_texture_shared_exponent GL_EXT_texture_sRGB GL_EXT_texture_sRGB_decode GL_EXT_texture_storage GL_EXT_texture_swizzle GL_EXT_timer_query GL_EXT_vertex_array GL_EXT_vertex_array_bgra GL_EXT_import_sync_object GL_IBM_rasterpos_clip GL_IBM_texture_mirrored_repeat GL_KHR_debug GL_KTX_buffer_region GL_NV_blend_square GL_NV_conditional_render GL_NV_copy_depth_to_color GL_NV_copy_image GL_NV_depth_buffer_float GL_NV_depth_clamp GL_NV_ES1_1_compatibility GL_NV_explicit_multisample GL_NV_fence GL_NV_float_buffer GL_NV_fog_distance GL_NV_fragment_program GL_NV_fragment_program_option GL_NV_fragment_program2 GL_NV_framebuffer_multisample_coverage GL_NV_geometry_shader4 GL_NV_gpu_program4 GL_NV_half_float GL_NV_light_max_exponent GL_NV_multisample_coverage GL_NV_multisample_filter_hint GL_NV_occlusion_query GL_NV_packed_depth_stencil GL_NV_parameter_buffer_object GL_NV_parameter_buffer_object2 GL_NV_path_rendering GL_NV_pixel_data_range GL_NV_point_sprite GL_NV_primitive_restart GL_NV_register_combiners GL_NV_register_combiners2 GL_NV_shader_buffer_load GL_NV_texgen_reflection GL_NV_texture_barrier GL_NV_texture_compression_vtc GL_NV_texture_env_combine4 GL_NV_texture_expand_normal GL_NV_texture_multisample GL_NV_texture_rectangle GL_NV_texture_shader GL_NV_texture_shader2 GL_NV_texture_shader3 GL_NV_transform_feedback GL_NV_vertex_array_range GL_NV_vertex_array_range2 GL_NV_vertex_buffer_unified_memory GL_NV_vertex_program GL_NV_vertex_program1_1 GL_NV_vertex_program2 GL_NV_vertex_program2_option GL_NV_vertex_program3 GL_NVX_conditional_render GL_NVX_gpu_memory_info GL_SGIS_generate_mipmap GL_SGIS_texture_lod GL_SGIX_depth_texture GL_SGIX_shadow GL_SUN_slice_accum GL_WIN_swap_hint WGL_EXT_swap_control

    I don't think so. 
    I have enough experience with LR to know what to expect for various activities. 
    Previews for all photos had been rendered long before I initiated the Save Metadata.
    Metadata for 50 of the 52 files had already been written. 
The high CPU had gone on for at least 5 minutes. I switched to another app immediately after initiating the Save Metadata for the 52 files and only went back to LR when I heard my CPU fan running for a long time.
    When I got back to LR, I saw the "Could not save metadata" window.  I clicked "Show in Library" and OK.  As soon as I did that, CPU usage went back to normal. 
I've experienced the exact same scenario before, where LR can't save metadata for all photos, and I've never had LR get stuck consuming a very large amount of CPU.
    As a test, I selected 118 other photos, changed metadata for all of them and then selected all and saved metadata.  LR took about 20 seconds to save metadata for all 118 and LR CPU usage never went above 17%.  The difference is that LR did not show the "Could not save metadata" window.

  • Deadlocks and very high wait times

We are seeing a very high number of deadlocks in the system. The deadlock traces all show 'enq: TX - row lock contention' with wait times of around 2929700+ seconds, e.g.:
    last wait for 'enq: TX - row lock contention' blocking sess=0x70000006d85e1b8 seq=55793 wait_time=2929704 seconds since wait started=4
    name|mode=54580006, usn<<16 | slot=1d0010, sequence=705f
    Dumping Session Wait History
    for 'enq: TX - row lock contention' count=1 wait_time=2929704
    name|mode=54580006, usn<<16 | slot=1d0010, sequence=705f
    for 'latch: enqueue hash chains' count=1 wait_time=1649
    address=70000006dbb4a20, number=13, tries=0
    for 'enq: TX - row lock contention' count=1 wait_time=2929708
    name|mode=54580006, usn<<16 | slot=1d0010, sequence=705f
    for 'SQL*Net message from client' count=1 wait_time=101740
    driver id=54435000, #bytes=1, =0
    for 'SQL*Net message to client' count=1 wait_time=1
    driver id=54435000, #bytes=1, =0
    for 'direct path write temp' count=1 wait_time=921
    file number=fb, first dba=6521b, block cnt=2
    for 'SQL*Net more data from client' count=1 wait_time=3
    driver id=54435000, #bytes=10, =0
    for 'SQL*Net more data from client' count=1 wait_time=5
    driver id=54435000, #bytes=1e, =0
    for 'SQL*Net more data from client' count=1 wait_time=10
    driver id=54435000, #bytes=2c, =0
    for 'SQL*Net more data from client' count=1 wait_time=5
    driver id=54435000, #bytes=3a, =0
Any ideas on how to resolve this?
    Thanks
    Surya

Sorry for the typo; it's the ORA-00060 error we are seeing. Here is the deadlock graph:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Oracle Label Security, OLAP, Data Mining Scoring Engine
    and Real Application Testing options
    ORACLE_HOME = /orasw/product/10.2.0.4.0
    System name: AIX
    Node name: spda5001
    Release: 3
    Version: 5
    Machine: 00074D5AD400
    Instance name: IAMS01P1
    Redo thread mounted by this instance: 1
    Oracle process number: 21
    Unix process pid: 2306444, image: oracle@spda5001
    *** 2011-12-24 05:05:39.885
    *** SERVICE NAME:(IAMS01P) 2011-12-24 05:05:39.884
    *** SESSION ID:(443.2130) 2011-12-24 05:05:39.884
    DEADLOCK DETECTED ( ORA-00060 )
    [Transaction Deadlock]
    The following deadlock is not an ORACLE error. It is a
    deadlock due to user error in the design of an application
    or from issuing incorrect ad-hoc SQL. The following
    information may aid in determining the deadlock:
    Deadlock graph:
    ---------Blocker(s)-------- ---------Waiter(s)---------
    Resource Name process session holds waits process session holds waits
    TX-00080020-000c3957 21 443 X 58 391 X
    TX-001d0010-0000705f 58 391 X 21 443 X
    session 443: DID 0001-0015-0000002E session 391: DID 0001-003A-00000081
    session 391: DID 0001-003A-00000081 session 443: DID 0001-0015-0000002E
    Rows waited on:
    Session 391: obj - rowid = 0001098B - AAATtpAAGAAADROAAD
    (dictionary objn - 67979, file - 6, block - 13390, slot - 3)
    Session 443: obj - rowid = 00010B25 - AAARRwAAGAAAAdgAAN
    (dictionary objn - 68389, file - 6, block - 1888, slot - 13)
    Information on the OTHER waiting sessions:
    Session 391:
    pid=58 serial=16572 audsid=52790041 user: 93/IAMS_USR
    O/S info: user: , term: , ospid: 1234, machine: mac3023
    program:
    Current SQL Statement:
    update spt_identity set created=:1, modified=:2, owner=:3, assigned_scope=:4, assigned_scope_path=:5, extended1=:6, extended2=:7, extended3=:8, extended4=:9, extended5=:10, extended6=:11, extended7=:12, extended8=:13, extended9=:14, extended10=:15, extended11=:16, extended12=:17, extended13=:18, extended14=:19, extended15=:20, extended16=:21, extended17=:22, extended18=:23, extended19=:24, extended20=:25, name=:26, description=:27, protected=:28, iiqlock=:29, attributes=:30, manager=:31, display_name=:32, firstname=:33, lastname=:34, email=:35, manager_status=:36, inactive=:37, last_login=:38, last_refresh=:39, password=:40, password_expiration=:41, password_history=:42, bundle_summary=:43, assigned_role_summary=:44, correlated=:45, auth_question_lock_start=:46, failed_auth_question_attempts=:47, controls_assigned_scope=:48, certifications=:49, activity_config=:50, preferences=:51, history=:52, scorecard=:53, uipreferences=:54, attribute_meta_data=:55, workgroup=:56 where id=:57
    End of information on OTHER waiting sessions.
    Current SQL statement for this session:
    update spt_workflow_case set created=:1, modified=:2, owner=:3, assigned_scope=:4, assigned_scope_path=:5, stack=:6, attributes=:7, launcher=:8, host=:9, launched=:10, completed=:11, progress=:12, percent_complete=:13, type=:14, messages=:15, name=:16, description=:17, complete=:18, target_class=:19, target_id=:20, target_name=:21, workflow=:22 where id=:23
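
In the meantime, a quick way to watch live row-lock blocking is a sketch like the following, assuming 10g's blocking_session column in v$session:

-- sessions currently stuck behind another session's row lock
select sid, serial#, blocking_session, seconds_in_wait, sql_id
from v$session
where blocking_session is not null;

This only shows who is blocking whom at the moment it runs; the deadlock graph above still points at two transactions locking rows in spt_identity and spt_workflow_case in opposite orders, which is an application locking-order issue.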

  • Very high log file sequential read and control file sequential read waits?

I have a 10.2.0.4 database with 5 Streams capture processes running to replicate data to another database. However, I am seeing very high log file sequential read and control file sequential read waits from the capture processes. This is causing slowness, as the database is spending so much time on these wait events. From the AWR report:
    Elapsed: 20.12 (mins)
    DB Time: 67.04 (mins)
And from the top 5 wait events:
Event                          Waits     Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                   1,712                 42.6
log file sequential read       99,909       683        7         17.0              System I/O
log file sync                  49,702       426        9         10.6              Commit
control file sequential read  262,625       384        1          9.6              System I/O
db file sequential read        41,528       378        9          9.4              User I/O
Oracle Support hasn't been much help, other than wasting 10 days of my time telling me to try this and try that.
Do you have Streams running in your environment, and are you experiencing these waits? Have you done anything to resolve them?
    Thanks

    Welcome to the forums.
There is insufficient information in what you have posted to know whether your analysis of the situation is correct, or anything about your Streams environment.
    We don't know what you are replicating. Not size, not volume, not type of capture, not rules, etc.
    We don't know the distance over which it is being replicated ... 10 ft. or 10 light years.
    We don't have any AWR or ASH data to look at.
    etc. etc. etc. If this is what you provided Oracle Support it is no wonder they were unable to help you.
    To diagnose this problem, if one exists, requires someone on-site or with a very substantial body of data which you have not provided. The first step is to fill in the answers to all of the obvious first level questions. Then we will likely come back with a second level of questioning.
But when you do ... do not post here. Your questions are not "Database General"; they are specific to Streams, and there is a Streams forum specifically for them.
    Thank you.
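
For anyone chasing similar capture-side waits, one starting point might be a sketch like this, assuming the standard v$streams_capture view in 10.2 (heavy log file sequential read is expected for capture processes, since they mine redo; the question is whether they are keeping up):

-- what each capture process is doing and how much it has processed
select capture_name, state, total_messages_captured
from v$streams_capture;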

  • A very basic question regarding data block

    Hi All,
I have a very basic question concerning data blocks in Oracle Forms 10g.
I want to make a view-only screen (only query allowed; no update, insert, or delete).
I'll have 6-7 fields on the screen, but the fields are not all from a single table.
For example, let's say the field names to display on the screen are f1, f2, f3, f4.
Of these, f1 and f2 will come from table A, and f3 and f4 will come from table B.
Now, my question: is it possible to create a data block using the Data Block Wizard for such a situation if we select the 'create data block from table' option?
If not, can you please suggest an approach to do this?
    Regards,
    Navnit

Hello,
First write your query, then set the data block's properties. Just change the properties below:
Query Data Source Type = From Clause Query
Query Data Source Name = (paste the query here)
Now add the block ITEMs and name them according to the query's column names; show the columns on the canvas and run.
Best regards
    skyniazi
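
A minimal sketch of the kind of query that would go into Query Data Source Name for the example above (the join condition a.id = b.a_id is hypothetical; substitute the real relationship between tables A and B):

select a.f1, a.f2, b.f3, b.f4
from a, b
where a.id = b.a_id

The block then acts as a read-only view over the join; for a query-only screen, also set the block's Insert Allowed, Update Allowed, and Delete Allowed properties to No.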

  • Very high "Control File" IOStat -- Reads: Data = 70G

    What could cause the IOStat value for "Control File -- Reads: Data" to be very high relative to the IOStat value "Data File" for example? In looking at one AWR report I see a value of 70G for "Control File -- Reads: Data" while the "Data File -- Reads: Data" value is only 20G.

user11976449 wrote:
What could cause the IOStat value for "Control File -- Reads: Data" to be very high relative to the IOStat value "Data File" for example? In looking at one AWR report I see a value of 70G for "Control File -- Reads: Data" while the "Data File -- Reads: Data" value is only 20G.

Post the results from the following SQL:
SELECT * FROM V$VERSION;
Over what duration was the AWR report taken?
Please post FORMATTED excerpts from the AWR report so we can see for ourselves what you report.
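
A sketch of one way to produce those excerpts, assuming the standard dbms_workload_repository package (the snapshot IDs are placeholders to fill in from dba_hist_snapshot):

SELECT * FROM V$VERSION;

-- generate the text AWR report for a chosen snapshot range
select output
from table(dbms_workload_repository.awr_report_text(
       &dbid, &inst_num, &begin_snap, &end_snap));

The dbid and instance number come from v$database and v$instance; the IOStat sections of the resulting report are the excerpts the responder is asking for.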

  • Arch wait on sendreq is always very high

    Hi all,
we have a Data Guard (DG) system:
a primary and 1 standby, both 10.2.0.3. The ARCH wait on SENDREQ is always very high,
and I can't see any recommendation from Grid Control.
Should I worry about that?
    Thanks

    Hi,
there is documentation here:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_transport.htm#i1227137
    http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_DataGuardNetworkBestPractices.pdf
    Regards,
    Tom
    http://oracledba.cz

  • Very high cpu utilization with mq broker

    Hi all,
I see very high CPU utilization (400% on an 8-CPU server) when I connect consumers to OpenMQ. It increases by close to 100% for every consumer I add. Slowly the consumers come to a halt, as the producers are sending messages at a good rate too.
    Environment Setup
    Glassfish version 2.1
    com.sun.messaging.jmq Version Information Product Compatibility Version: 4.3 Protocol Version: 4.3 Target JMS API Version: 1.1
Cluster set up using persistent storage. Snippet from the broker log:
    Java Runtime: 1.6.0_14 Sun Microsystems Inc. /home/user/foundation/jdk-1.6/jre [06/Apr/2011:12:48:44 EDT] IMQ_HOME=/home/user/foundation/sges/imq [06/Apr/2011:12:48:44 EDT] IMQ_VARHOME=/home/user/foundation/installation/node-agent-server1/server1/imq [06/Apr/2011:12:48:44 EDT] Linux 2.6.18-164.10.1.el5xen i386 server1 (8 cpu) user [06/Apr/2011:12:48:44 EDT] Java Heap Size: max=394432k, current=193920k [06/Apr/2011:12:48:44 EDT] Arguments: -javahome /home/user/foundation/jdk-1.6 -Dimq.autocreate.queue=false -Dimq.autocreate.topic=false -Dimq.cluster.masterbroker=mq://server1:37676/ -Dimq.cluster.brokerlist=mq://server1:37676/,mq://server2:37676/ -Dimq.cluster.nowaitForMasterBroker=true -varhome /home/user/foundation/installation/node-agent-server1/server1/imq -startRmiRegistry -rmiRegistryPort 37776 -Dimq.imqcmd.user=admin -passfile /tmp/asmq5711749746025968663.tmp -save -name clusterservercom -port 37676 -bgnd -silent [06/Apr/2011:12:48:44 EDT] [B1004]: Starting the portmapper service using tcp [ 37676, 50, * ] with min threads 1 and max threads of 1 [06/Apr/2011:12:48:45 EDT] [B1060]: Loading persistent data...
I followed the steps in http://middlewaremagic.com/weblogic/?p=4884 to narrow it down to the threads that were causing the high CPU. Both were around 94%.
    Following is the stack for those threads.
    "Thread-jms[224]" prio=10 tid=0xd635f400 nid=0x5665 runnable [0xd18fe000] java.lang.Thread.State: RUNNABLE at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918) - locked <0xf3d35730> (a java.util.Collections$SynchronizedMap) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422) at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644) at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170) at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None
    "Thread-jms[214]" prio=10 tid=0xd56c8000 nid=0x566c waiting for monitor entry [0xd2838000] java.lang.Thread.State: BLOCKED (on object monitor) at com.sun.messaging.jmq.jmsserver.data.TransactionInformation.isConsumedMessage(TransactionList.java:2544) - locked <0xdbeeb538> (a com.sun.messaging.jmq.jmsserver.data.TransactionInformation) at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918) - locked <0xe4c9abf0> (a java.util.Collections$SynchronizedMap) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422) at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644) at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170) at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None
    "Thread-jms[213]" prio=10 tid=0xd65be800 nid=0x5670 runnable [0xd1a28000] java.lang.Thread.State: RUNNABLE at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918) - locked <0xe4c4bad8> (a java.util.Collections$SynchronizedMap) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422) at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644) at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170) at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None
    Any ideas will be appreciated.
    --

    Thanks ak, for the response.
Yes, the messages are consumed in transactions. I set imq.txn.reapLimit=200 in the Start Arguments in the JVM configuration.
I verified that it is being set in the log.txt file for the broker:
-Dimq.autocreate.queue=false -Dimq.autocreate.topic=false -Dimq.txn.reapLimit=250
It did not make any difference. Do I need to set this property somewhere else?
As far as upgrading MQ is concerned, I am using GlassFish 2.1, and I think MQ 4.3 is packaged with it. Can you suggest a safe way to upgrade to OpenMQ 4.5 in a running environment? I can bring down the cluster temporarily. Can I just change a jar file somewhere to use MQ 4.5?
Here is a snippet of the consumer code.
I create the Connection in @PostConstruct and close it in @PreDestroy, so that I don't have to do it every time.
private ResultMessage[] doRetrieve(String username, String password, String jndiDestination,
        String filter, int maxMessages, long timeout, RetrieveType type)
        throws InvalidCredentialsException, InvalidFilterException, ConsumerException {
    // Resources
    Session session = null;
    try {
        if (log.isTraceEnabled()) log.trace("Creating transacted session with JMS broker.");
        session = connection.createSession(true, Session.SESSION_TRANSACTED);
        // Locate bound destination and create consumer
        if (log.isTraceEnabled()) log.trace("Searching for named destination: " + jndiDestination);
        Destination destination = (Destination) ic.lookup(jndiDestination);
        if (log.isTraceEnabled()) log.trace("Creating consumer for named destination " + jndiDestination);
        MessageConsumer consumer = (filter == null || filter.trim().length() == 0)
                ? session.createConsumer(destination)
                : session.createConsumer(destination, filter);
        if (log.isTraceEnabled()) log.trace("Starting JMS connection.");
        connection.start();
        // Consume messages
        if (log.isTraceEnabled()) log.trace("Creating retrieval containers.");
        List<ResultMessage> processedMessages = new ArrayList<ResultMessage>(maxMessages);
        BytesMessage jmsMessage = null;
        for (int i = 0; i < maxMessages; i++) {
            // Attempt message retrieve
            if (log.isTraceEnabled()) log.trace("Attempting retrieval: " + i);
            switch (type) {
                case BLOCKING:
                    jmsMessage = (BytesMessage) consumer.receive();
                    break;
                case IMMEDIATE:
                    jmsMessage = (BytesMessage) consumer.receiveNoWait();
                    break;
                case TIMED:
                    jmsMessage = (BytesMessage) consumer.receive(timeout);
                    break;
            }
            // Process retrieved message
            if (jmsMessage != null) {
                if (log.isTraceEnabled()) log.trace("Message retrieved\n" + jmsMessage);
                // Extract message
                if (log.isTraceEnabled()) log.trace("Extracting result message container from JMS message.");
                byte[] extracted = new byte[(int) jmsMessage.getBodyLength()];
                jmsMessage.readBytes(extracted);
                // Decompress message
                if (jmsMessage.propertyExists(COMPRESSED_HEADER) && jmsMessage.getBooleanProperty(COMPRESSED_HEADER)) {
                    if (log.isTraceEnabled()) log.trace("Decompressing message.");
                    extracted = decompress(extracted);
                }
                // Done processing message
                if (log.isTraceEnabled()) log.trace("Message added to retrieval container.");
                String signature = jmsMessage.getStringProperty(DIGITAL_SIGNATURE);
                processedMessages.add(new ResultMessage(extracted, signature));
            } else {
                if (log.isTraceEnabled()) log.trace("No message was available.");
            }
        }
        // Package return container
        if (log.isTraceEnabled()) log.trace("Packing retrieved messages to return.");
        ResultMessage[] collectorMessages = new ResultMessage[processedMessages.size()];
        for (int i = 0; i < collectorMessages.length; i++)
            collectorMessages[i] = processedMessages.get(i);
        if (log.isTraceEnabled()) log.trace("Returning " + collectorMessages.length + " messages.");
        return collectorMessages;
    } catch (NamingException ex) {
        sessionContext.setRollbackOnly();
        log.error("Unable to locate named queue: " + jndiDestination, ex);
        throw new ConsumerException("Unable to locate named queue: " + jndiDestination, ex);
    } catch (InvalidSelectorException ex) {
        sessionContext.setRollbackOnly();
        log.error("Invalid filter: " + filter, ex);
        throw new InvalidFilterException("Invalid filter: " + filter, ex);
    } catch (IOException ex) {
        sessionContext.setRollbackOnly();
        log.error("Message decompression failed.", ex);
        throw new ConsumerException("Message decompression failed.", ex);
    } catch (GeneralSecurityException ex) {
        sessionContext.setRollbackOnly();
        log.error("Message decryption failed.", ex);
        throw new ConsumerException("Message decryption failed.", ex);
    } catch (JMSException ex) {
        sessionContext.setRollbackOnly();
        log.error("Unable to consume messages.", ex);
        throw new ConsumerException("Unable to consume messages.", ex);
    } catch (Throwable ex) {
        sessionContext.setRollbackOnly();
        log.error("Unexpected error.", ex);
        throw new ConsumerException("Unexpected error.", ex);
    } finally {
        try {
            if (session != null) session.close();
        } catch (JMSException ex) {
            log.error("Unexpected error.", ex);
        }
    }
}
    Thanks for your help.

  • Very High DPC Latency

I've been searching for months and haven't found a solution. I have a Dell 14R, and I've got very high DPC latency. I've disabled Intel SpeedStep; I don't know if it was the cause, but it helped a little. I still have some high DPC latency.
Here's the DPC conclusion:
    CONCLUSION
    Your system seems to have difficulty handling real-time audio and other tasks. You may experience drop outs, clicks or pops due to buffer underruns. One problem may be related to power management, disable CPU throttling settings in Control Panel and BIOS setup.
    Check for BIOS updates. 
    LatencyMon has been analyzing your system for  0:00:50  (h:mm:ss) on all processors.
    SYSTEM INFORMATION
    Computer name:                                        ALEF
    OS version:                                           Windows 8 , 6.2, build: 9200 (x64)
    Hardware:                                             Inspiron 5437, Dell Inc., 01PN4H
    CPU:                                                  GenuineIntel Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz
    Logical processors:                                   4
    Processor groups:                                     1
    RAM:                                                  6048 MB total
    CPU SPEED
    Reported CPU speed:                                   1596,0 MHz
    Measured CPU speed:                                   180,0 MHz (approx.)
    Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.
    WARNING: the CPU speed that was measured is only a fraction of the CPU speed reported. Your CPUs may be throttled back due to variable speed settings and thermal issues. It is suggested that you run a utility which reports your actual CPU frequency and temperature. 
    MEASURED INTERRUPT TO USER PROCESS LATENCIES
    The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a hardware request from the moment the interrupt service routine started execution. This includes the scheduling and execution of a DPC routine, the
    signaling of an event and the waking up of a usermode thread from an idle wait state in response to that event.
    Highest measured interrupt to process latency (µs):   1572,913052
    Average measured interrupt to process latency (µs):   22,366048
    Highest measured interrupt to DPC latency (µs):       1565,856753
    Average measured interrupt to DPC latency (µs):       11,824275
     REPORTED ISRs
    Interrupt service routines are routines installed by the OS and device drivers that execute in response to a hardware interrupt signal.
    Highest ISR routine execution time (µs):              364,749373
    Driver with highest ISR routine execution time:       ndis.sys - NDIS (Especificação de Interface de Driver de Rede), Microsoft Corporation
    Highest reported total ISR routine time (%):          0,106754
    Driver with highest ISR total time:                   ndis.sys - NDIS (Especificação de Interface de Driver de Rede), Microsoft Corporation
    Total time spent in ISRs (%)                          0,133510
    ISR count (execution time <250 µs):                   21131
    ISR count (execution time 250-500 µs):                0
    ISR count (execution time 500-999 µs):                3
    ISR count (execution time 1000-1999 µs):              0
    ISR count (execution time 2000-3999 µs):              0
    ISR count (execution time >=4000 µs):                 0
    REPORTED DPCs
    DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process to utilize the CPU while it is interrupted until the DPC has finished execution.
    Highest DPC routine execution time (µs):              747,735589
    Driver with highest DPC routine execution time:       ndis.sys - NDIS (Especificação de Interface de Driver de Rede), Microsoft Corporation
    Highest reported total DPC routine time (%):          0,297310
    Driver with highest DPC total execution time:         ndis.sys - NDIS (Especificação de Interface de Driver de Rede), Microsoft Corporation
    Total time spent in DPCs (%)                          0,652473
    DPC count (execution time <250 µs):                   263660
    DPC count (execution time 250-500 µs):                0
    DPC count (execution time 500-999 µs):                253
    DPC count (execution time 1000-1999 µs):              0
    DPC count (execution time 2000-3999 µs):              0
    DPC count (execution time >=4000 µs):                 0
     REPORTED HARD PAGEFAULTS
    Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted
    and blocked from execution.
    NOTE: some processes were hit by hard pagefaults. If these were programs producing audio, they are likely to interrupt the audio stream resulting in dropouts, clicks and pops. Check the Processes tab to see which programs were hit.
    Process with highest pagefault count:                 explorer.exe
    Total number of hard pagefaults                       1115
    Hard pagefault count of hardest hit process:          493
    Highest hard pagefault resolution time (µs):          12225544,827694
    Total time spent in hard pagefaults (%):              247,355044
    Number of processes hit:                              22
     PER CPU DATA
    CPU 0 Interrupt cycle time (s):                       0,761402
    CPU 0 ISR highest execution time (µs):                331,166667
    CPU 0 ISR total execution time (s):                   0,090588
    CPU 0 ISR count:                                      7756
    CPU 0 DPC highest execution time (µs):                678,561404
    CPU 0 DPC total execution time (s):                   0,432507
    CPU 0 DPC count:                                      129558
    CPU 1 Interrupt cycle time (s):                       1,093584
    CPU 1 ISR highest execution time (µs):                364,749373
    CPU 1 ISR total execution time (s):                   0,177768
    CPU 1 ISR count:                                      13378
    CPU 1 DPC highest execution time (µs):                650,636591
    CPU 1 DPC total execution time (s):                   0,566066
    CPU 1 DPC count:                                      21399
    CPU 2 Interrupt cycle time (s):                       1,091097
    CPU 2 ISR highest execution time (µs):                0,0
    CPU 2 ISR total execution time (s):                   0,0
    CPU 2 ISR count:                                      0
    CPU 2 DPC highest execution time (µs):                747,735589
    CPU 2 DPC total execution time (s):                   0,292306
    CPU 2 DPC count:                                      112231
    CPU 3 Interrupt cycle time (s):                       0,461547
    CPU 3 ISR highest execution time (µs):                0,0
    CPU 3 ISR total execution time (s):                   0,0
    CPU 3 ISR count:                                      0
    CPU 3 DPC highest execution time (µs):                338,436090
    CPU 3 DPC total execution time (s):                   0,020591
    CPU 3 DPC count:                                      725

    Hi,
    Please refer to the article below:
    http://blog.tune-up.com/windows-insights/title-poor-jerky-performance-fixing-unacceptably-high-dpc-latency-issues/
    Andy Altmann
    TechNet Community Support

  • Data Concurrency and Consistency ( SCN , DATA block)

Hi guys, I am getting very confused about how Oracle implements consistency/multiversioning with regard to the SCN in a data block and the transaction list (ITL) in the data block.
I will list out what I know so you can gauge where I am.
When a SELECT statement is issued, the SCN for the query is determined. Then blocks with a higher SCN are rebuilt from the RBS.
Q1) Is the SCN in the block implied here different from the SCNs in the transaction list of the block? Where is this SCN stored? Where is the transaction list stored? How is the SCN of the block related to the SCNs in the transaction list of the block?
Q2) Can someone tell me what happens to the block SCN and the transaction list
of the block when a transaction starts to update a row in the block?
Q3) If the block SCN reflects the latest change made to the block, and if the SCN of the block is higher than the SCN of the SELECT query, it means the block has changed since the start of the SELECT query, but it DOESN'T mean that the row (data) the SELECT query requires has changed.
So why can't Oracle just check whether the row has changed and, only if it has, rebuild the block from the RBS?
Q4) When Oracle compares the block SCN, does it only scan the block SCN, or does it also search through the transaction list, or does it do both? And why?
Q5) Is the transaction SCN the same as the transaction ID? Which is stored in the RBS, the transaction SCN or the ID?
Q6) In short, I am confused about the relationship between the block SCN and the transaction list SCNs: their location, their usage, how the block SCN and the transaction list are used during a SELECT, and their link with the RBS.
Can any gurus give me a clearer view of what is actually happening?

    Hi Aman
Hmm, agreed. So when commit is issued, what happens at that time?
Simply put:
- The SCN for the transaction is determined.
- The transaction is marked as committed in the undo header (the commit SCN is also stored in the undo header).
- If fast cleanout takes place, the commit SCN is also stored in the ITL. If not, the ITL (i.e. the modified data blocks) is not modified.
So at commit, Oracle will replace the begin SCN in the ITL with this SCN, and this will tell that the block is finally committed, is it?
The ITL does not contain the begin SCN. The undo header (specifically the transaction table) contains it.
I am lost here. In the ITL, is the SCN the transaction SCN or the commit SCN?
As I just wrote, the ITL contains (if the cleanout occurred) the commit SCN.
This sounds like high RBA information? What is RBA?
Commit SCN: this is the SCN associated with a committed transaction.
Begin SCN: this is the SCN at which a transaction started.
Transaction SCN: as I wrote, IMO this is the same as the commit SCN.
Also, please explain what exactly the ITL stores.
If you print an ITL slot, you see the following information:
    BBED> print ktbbhitl[0]
    struct ktbbhitl[0], 24 bytes     @44
          struct ktbitxid, 8 bytes    @44
             ub2 kxidusn              @44       0x0009
             ub2 kxidslt              @46       0x002e
             ub4 kxidsqn              @48       0x0000fe77
          struct ktbituba, 8 bytes    @52
             ub4 kubadba              @52       0x00800249
             ub2 kubaseq              @56       0x3ed6
             ub1 kubarec              @58       0x4e
          ub2 ktbitflg                @60       0x2045 (KTBFUPB)
          union _ktbitun, 2 bytes     @62
             b2 _ktbitfsc             @62       0
             ub2 _ktbitwrp            @62       0x0000
      ub4 ktbitbas                @64       0x06f4c2a3

- ktbitxid --> XID, the transaction holding the ITL slot
    - ktbituba --> UBA, used to locate the undo information
    - ktbitflg --> flags (active, committed, cleaned out, ...)
    - _ktbitfsc --> free space generated by this transaction in this block
    - _ktbitwrp+ktbitbas --> commit SCN
    HTH
    Chris
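
For readers without BBED, a block dump shows similar ITL information. A minimal sketch, assuming the standard dbms_rowid package (the table name t and the predicate are placeholders):

-- find the file and block holding a row of interest
select dbms_rowid.rowid_relative_fno(rowid) file_no,
       dbms_rowid.rowid_block_number(rowid) block_no
from t
where id = :some_id;

-- write the block, including its ITL slots, to a trace file
alter system dump datafile &file_no block &block_no;

The Itl section near the top of the dump lists each slot's XID, UBA, flags, and SCN/fsc values, matching the fields printed above.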

  • Which perspectives I should consider about Av Rd(ms) is very high just for

    db version: 11.1.7
    os: RH linux 5.5
I was looking at the file I/O stats in an AWR report covering one hour. The Av Rd(ms) values for the files are all around 10 or under, except for one data file, whose Av Rd(ms) is very high (38325.25). Even the Av Rd(ms) values of the other data files on the same mount point as that file are normal, so I think this must be caused by the application; however, I cannot find a clue.
Could you please give me some perspectives to consider and research? Thanks so much!

    RLUO wrote:
    db version: 11.1.7
    os: RH linux 5.5
I was looking at the file I/O stats in an AWR report covering one hour. The Av Rd(ms) values for the files are all around 10 or under, except for one data file, whose Av Rd(ms) is very high (38325.25). Even the Av Rd(ms) values of the other data files on the same mount point as that file are normal, so I think this must be caused by the application; however, I cannot find a clue. Could you please give me some perspectives to consider and research? Thanks so much!

Look at the v$event_histogram report for anything to do with file reads. It's possible that you will find that a single read request got an extremely high time - I've seen odd glitches occasionally, with a single block read taking (apparently) several weeks to complete - and that you can ignore the side effects.
    Regards
    Jonathan Lewis
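
A minimal sketch of the v$event_histogram check suggested above (the event filter is illustrative):

select event, wait_time_milli, wait_count
from v$event_histogram
where event like 'db file%read'
order by event, wait_time_milli;

A handful of waits in a very high bucket, against an otherwise normal distribution, would support the one-off-glitch explanation.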

  • Query Execution/Elapsed Time and Oracle Data Blocks

    Hi,
    I have created 3 tables with one column only. As an example Table 1 below:
    SQL> create table T1 ( x char(2000));
    So 3 tables are created in this way i.e. T1,T2 and T3.
    T1 = in the default database tablespace of 8k (11g v11.1.0.6.0 - Production) (O.S=Windows).
    T2 = I created in a Tablespace with Blocksize 16k.
    T3 = I created in a Tablespace with Blocksize 4k. In the same Instance.
Each table has approx. 500 rows (so the table sizes are the same in all cases, to test query execution time). As these 3 tables are created with different data block sizes, the allocated number of data blocks differs in each case:
T1 =  8k = 256 blocks = 00:00:04.76 (query execution time / elapsed time)
T2 = 16k = 121 blocks = 00:00:04.64
T3 =  4k = 490 blocks = 00:00:04.91
Table access is FULL, i.e. I have used select * from table_name; in all 3 cases. No index, nothing.
My question is: why is the query execution time nearly the same in all 3 cases? Oracle has to read all the data blocks in each case to fetch the records, and there is a big difference in the allocated number of blocks.
In the 16k block size example, Oracle has to read just 121 blocks, and it takes nearly the same time as reading 490 blocks in the 4k case.
This is just one example of different data block sizes. I have around 40 tables in each block size tablespace and the results are nearly the same. It's very strange to me, because there is a big difference in the number of allocated blocks but the execution time is almost the same; the only difference is in milliseconds.
    I'll highly appreciate the expert opinions.
    Bundle of thanks in advance.
    Best Regards,

Hi Chris,
No, I'm not using separate databases; it's an 8k database with non-standard block sizes of 16k and 4k.
Actually I wanted to test the elapsed time for these 3 tables, so I tried to create tables of the same size.
I equalized them by creating one-column tables with char(2000).
555MB is the figure I wanted to use for all 3 tables (no special figure, just big enough to exceed the RAM used by my db at startup, to be sure records are not retrieved from the cache).
So the row size with overhead is 2006 * 290,000 rows = 581,740,000 bytes / 1024 = 568,105KB / 1024 = 555MB.
Through this calculation I thought it would be the total table size, so I created the same number of rows in all 3 block sizes.
If that's wrong, then what a mess, because I've been calculating table sizes this way for the last few months.
Can you please explain a little how you found the table sizes for the different block sizes? Though I understood how you calculated the size in MB from these 3 block sizes:
T8K  =  97177 blocks =  759MB (97177 * 8 = 777416KB / 1024 = 759MB)
T16K =  41639 blocks =  650MB
BT4K = 293656 blocks = 1147MB
Calculating the size of a table is new to me. Can you please tell me how many rows I should create in each of these 3 tables to make them equal in MB, to test the elapsed time?
Then I'll run my test again and put the results here, because if I've calculated the table sizes wrongly there is no need to talk about elapsed time; first I must equalize the table sizes properly.
SQL> select sum(bytes)/1024/1024 "Size in MB" from dba_segments
  2  where segment_name = 'T16K';

Size in MB
----------
       655

Is the SQL above correct for calculating the size? Is it a correct alternative to your method of calculating the size?
I created the same table again, with everything the same, and the result is:

SQL> select num_rows, blocks from user_tables where table_name = 'T16K';

NUM_ROWS     BLOCKS
  290000      41703

64 more blocks are allocated this time, so maybe that's why it's showing a total size of 655 instead of 650.
    Thanks alot for your help.
    Best Regards,
    KAm

  • How to handle very high volume msgs in XI

I have a scenario where 6 million msgs/hour (XML) have to be picked from the source and sent to the target.
How do we handle this high volume of data? Is there any good practice?
Please suggest.

Hi Anubhav,
If you just have to send the source file to the target system without doing any mapping, then don't use XI for it, because with such a high volume of msgs XI will start putting your msgs in wait status, and the Java engine may go down because of insufficient memory.
So better to have an FTP utility on the target system: connect to the source system with it and move the files from source to target with this FTP utility.
But if you have to do a mapping from source msg to target msg, then use XI; try doing the mapping with graphical mapping and don't go for ABAP mapping, to reduce memory usage for your scenario.
Thanks,
Rajeev Gupta
