High I/O waits

Any thoughts about why, when Streams capture processes are started, the disk activity pegs I/O waits constantly high? It pretty much slows down the whole database server.
This is the state of the CPU when we have the Streams instance up, with no DML activity:
CPU states: 0.4% idle, 17.9% user, 4.5% kernel, 77.2% iowait, 0.0% swap
Any suggestions on parameter tweaking, or other recommendations, would be truly appreciated.

Hi
I have found this also, and have just raised an ITAR on it. On our new 10g servers the Capture process constantly re-reads the whole of the current log file. On one of our servers it is reading 110MB of data per second. As soon as we stop the 2 capture processes, the IO drops to nothing.
Is this exactly what you are seeing? Does anyone out there have any ideas on a fix?
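As a quick sanity check on our side, one way to confirm what each capture process is doing and how much it has read (a sketch against the 10g v$streams_capture view; exact columns can vary by patch level):

```sql
-- Show each capture process, its current state, and message throughput.
-- STATE values such as 'CAPTURING CHANGES' or 'WAITING FOR REDO' indicate
-- whether the process is actively mining the log it keeps re-reading.
SELECT capture_name,
       state,
       total_messages_captured
FROM   v$streams_capture;
```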
Thanks

Similar Messages

  • High I/O wait observed in Linux based Oracle Database Server

    Hi,
    We have just migrated Oracle Database from Solaris Server to Linux VM [ESX] server.
    We have observed high I/O wait issues while database queries are running on the Linux VM; these were essentially zero on Solaris. The same database ran with no I/O wait on the Solaris physical server.
    In the same context, I would like suggestions on the points below:
    - Recommendations for running Oracle on VM based Linux.
    - Recommendations from ESX Host side
    Please suggest.

    user558914 wrote:
    We have just migrated Oracle Database from Solaris Server to Linux VM [ESX] server.
    We have observed that there is high I/O wait issues while database query is running on Linux VM, which was ideally zero in case of Solaris. The same Database was running with no i/o wait on solaris physical server.What did you expect? A virtualised I/O subsystem to respond and perform like a real one?
    That would a very unrealistic expectation. And as comparisons go, as sensible as comparing the taste of an apple with the odour of the colour blue.
    Forget about comparisons. Only marketing, sales and the idiot believe the cr@p that introducing several s/w layers between the the target (e.g. sector on spinning rust) and the destination (e.g. Oracle) makea the path between target and destination, faster.
    To optimise the virtualised target, you need to make the path as short as possible. If your virtual disk is for example a file on a cooked file system on the host, then you are introducing the host's complete I/O layer for accessing that virtual drive. If your virtual disk is an actual (raw) partition or drive on the host, then path is faster - passing through the host kernel as direct I/O and bypassing the host's cache and file system drivers.
    I suggest that when you setup your virtualised environment, you do proper stress testing of the various configurations of a virtualised I/O subsystem, using something like fio.
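    For example, a minimal fio job file for an Oracle-like 8K random read test might look like the following (all values and the file path are illustrative placeholders; tune them to your environment):

```ini
; randread-8k.fio - sketch of an Oracle-style 8K random read test.
; The filename and sizes below are placeholders, not recommendations.
[oracle-like-randread]
; random single-block reads, direct I/O to bypass the host page cache
rw=randread
bs=8k
direct=1
; asynchronous I/O on Linux, with 16 I/Os kept in flight
ioengine=libaio
iodepth=16
size=1g
runtime=60
filename=/u01/fio-testfile
```

    Run the same job file against each candidate configuration (file-backed virtual disk, raw partition, etc.) so the comparison is like for like.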

  • High buffer busy wait

    hi all ,
    How do I resolve high buffer busy waits in my DB?

    Simple: reduce your I/O requirements.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1860222500346889715
    So you need to find out what's causing your high I/O requirements, and then see if you can fix that.
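    To see which segments are actually accumulating those waits, a starting point (assuming segment-level statistics are being collected, i.e. STATISTICS_LEVEL is at least TYPICAL) might be:

```sql
-- Segments ranked by buffer busy waits since instance startup
SELECT owner,
       object_name,
       object_type,
       value AS buffer_busy_waits
FROM   v$segment_statistics
WHERE  statistic_name = 'buffer busy waits'
ORDER  BY value DESC;
```

    Once you know the segment, you can work out whether the contention is on data blocks, segment headers, or undo, and address the cause rather than the symptom.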

  • High Buffer Busy Wait due to Concurrent INSERTS

    Hi All,
    One of my OLTP database is running on 11.1.0.7 (11.1.0.7.0 - 64bit Production) with RHEL 5.4.
    On a frequent basis I am observing 'BUFFER BUSY WAITS', and last time I tried to capture some dictionary information to dig into the waits.
    1. Session Waits:
              Oracle                                                  Sec                                     Hash
    Sid,Serial User     OS User  Svr-Pgm    Wait Event      State-Seq   Wt Module                  Cmnd       Value          P1          P2   P3
    633,40830 OLTP_USE fateadm  21646-orac buffer busy wai Wtng-9999    1 ORDERS             ISRT  3932487748         384     1863905    1
    647, 1761 OLTP_USE fateadm  22715-orac buffer busy wai Wtng-3837    0 ORDERS             ISRT  3932487748         384     1863905    1
    872, 5001 OLTP_USE fateadm  21836-orac buffer busy wai Wtng-9999    1 ORDERS             ISRT  3932487748         384     1863905    1
    702, 1353 OLTP_USE fateadm  21984-orac buffer busy wai Wtng-9999    1 ORDERS             ISRT  3932487748         384     1863905    1
    337,10307 OLTP_USE fateadm  21173-orac buffer busy wai Wtng-9999    1 ORDERS             ISRT  3932487748         384     1863905    1
    751,43016 OLTP_USE fateadm  21619-orac buffer busy wai Wtng-9999    1 ORDERS             ISRT  3932487748         384     1863905    1
    820,17959 OLTP_USE fateadm  21648-orac buffer busy wai Wtng-9999    0 ORDERS             ISRT  3932487748         384     1863905    1
    287,63359 OLTP_USE fateadm  27053-orac buffer busy wai Wtng-9999    0 ORDERS             ISRT  3932487748         384     1863905    1
    629, 1653 OLTP_USE fateadm  22468-orac buffer busy wai Wtng-9999    1 ORDERS             ISRT  3932487748         384     1863905    1
    788,14160 OLTP_USE fateadm  22421-orac buffer busy wai Wtng-9999    0 ORDERS             ISRT  3932487748         384     1863905    1
    615, 4580 OLTP_USE fateadm  21185-orac buffer busy wai Wtng-9999    0 ORDERS             ISRT  3932487748         384     1863905    1
    525,46068 OLTP_USE fateadm  27043-orac buffer busy wai Wtng-9034    1 ORDERS             ISRT  3932487748         384     1863905    1
    919,23243 OLTP_USE fateadm  21428-orac buffer busy wai Wtng-6340    1 ORDERS             ISRT  3932487748         384     1863906    1
    610,34557 OLTP_USE fateadm  21679-orac buffer busy wai Wtng-6422    1 ORDERS             ISRT  3932487748         384     1863906    1
    803, 1583 OLTP_USE fateadm  21580-orac buffer busy wai Wtng-6656    1 ORDERS             ISRT  3932487748         384     1863906    1
    781, 1523 OLTP_USE fateadm  21781-orac buffer busy wai Wtng-9999    0 ORDERS             ISRT  3932487748         384     1863906    1
    369,11005 OLTP_USE fateadm  21718-orac buffer busy wai Wtng-9999    0 ORDERS             ISRT  3932487748         384     1863906    1
    823,35800 OLTP_USE fateadm  21148-orac buffer busy wai Wtng-9999    1 ORDERS             ISRT  3932487748         384     1863906    1
    817, 1537 OLTP_USE fateadm  22505-orac buffer busy wai Wtng-9999    1 ORDERS             ISRT  3932487748         384     1863906    1
    579,54959 OLTP_USE fateadm  22517-orac buffer busy wai Wtng-9999    0 ORDERS             ISRT  3932487748         384     1863906    1
    591,33597 OLTP_USE fateadm  27027-orac buffer busy wai Wtng-9999    1 ORDERS             ISRT  3932487748         384     1863906    1
    481, 3031 OLTP_USE fateadm  21191-orac buffer busy wai Wtng-3502    1 ORDERS             ISRT  3932487748         384     1863906    1
    473,24985 OLTP_USE fateadm  22629-orac buffer busy wai Wtng-9999    0 ORDERS             ISRT  3932487748         384     1863906    1
    868, 3984 OLTP_USE fateadm  27191-orac buffer busy wai Wtng-9999    0 ORDERS             ISRT  3932487748         384     1863906    1
    select owner,segment_name,segment_type from dba_extents where    file_id = 384 and   1863905 between block_id and block_id + blocks -1;
    OWNER                          SEGMENT_NAME                                                                      SEGMENT_TYPE
    ORDER                          ORDER_DETAILS                                                                      TABLE
    select TABLE_NAME,PARTITIONED,ini_trans ,degree,compression,FREELISTS from dba_TABLES WHERE TABLE_NAME='ORDER_DETAILS';
    TABLE_NAME                     PAR  INI_TRANS DEGREE                         COMPRESS  FREELISTS
    ORDER_DETAILS                   NO           1          1                     ENABLED           1
    Tablespace is not ASSM managed !
      select
       object_name,
       statistic_name,
       value
    from
       V$SEGMENT_STATISTICS
    where
       object_name = 'ORDER_DETAILS';
    OBJECT_NAME              STATISTIC_NAME                                                        VALUE
    ORDER_DETAILS             logical reads                                                     487741104
    ORDER_DETAILS             buffer busy waits                                                   4715174
    ORDER_DETAILS             db block changes                                                  200858896
    ORDER_DETAILS             physical reads                                                    143642724
    ORDER_DETAILS             physical writes                                                    20581330
    ORDER_DETAILS             physical reads direct                                              55239903
    ORDER_DETAILS             physical writes direct                                             19500551
    ORDER_DETAILS             space allocated                                                  1.6603E+11
    ORDER_DETAILS             segment scans                                                          9727
    ORDER_DETAILS is a ~153 GB non-partitioned table.
    It seems it is not a READ BY OTHER SESSIONS wait but BUFFER BUSY due to write-write contention inside the same block. I have never observed cache buffers chains waits, ITL waits, or high wait times on db file sequential/scattered reads. The table contains one PK (a composite index on 3 columns) which seems to be highly fragmented. This non-partitioned global index has 3182037735 rows and a blevel of 4.
    BHAVIK_DBA.FATE1NA>select index_name,status,num_rows,blevel,pct_free,ini_trans,clustering_factor from dba_indexes where index_name='IDX_ORDERS';
    INDEX_NAME                     STATUS     NUM_ROWS     BLEVEL   PCT_FREE  INI_TRANS CLUSTERING_FACTOR
    IDX_ORDERS                      VALID    3182037735          4          2          2        2529462377
    1 row selected.
    One of the index column value is being populated by sequence. (Monotonically increasing value)
    SEGMENT_NAME                                                                              MB
    IDX_ORDERS                                                             170590.438
    Index size is greater than table size! The tuning goal here is to reduce buffer busy waits and thus commit latencies.
    I think I need to increase FREELISTS and PCT_FREE to address this issue, but I am not confident whether it will solve the issue or not.
    Can I ask for any help here?

    Hi Jonathan,
    Many thanks for your detailed write-up. I was expecting you!
    Your post gave a lot of information and wisdom that has made me think for the last couple of hours, which is the reason for the delay in replying.
    I have visited your index explosion posts a couple of times, and that scenario gave me the insight that concurrent DML (INSERTs) is the cause of the index fragmentation in my case.
    Let me also take the opportunity to ask you to shed more light on some of the points you have highlighted.
    if you can work out the number of concurrent inserts that are really likely to happen at any one instant then a value of freelists in the range of concurrency/4 to concurrency/2 is probably appropriate.
    May I ask how you derived this formula? I don't want to miss a learning opportunity here!
    Note - with multiple freelists, you may find that you now get buffer busy waits on the segment header block.
    I did not quite get this point. Can you shed more light, please? What part of the segment header block is going to cause contention (BBW on SEGMENT HEADER) across all concurrent inserts?
    The solution to this is to increase the number of freelist groups (making sure that freelists and freelist groups have no common factors).
    My prod DB is a non-RAC environment. Can I use FREELIST GROUPS here? With my limited knowledge I did not get what "common factors" you are referring to here.
    The reads could be related to leaf block splits, but there are several possible scenarios that could lead to that pattern of activity - so the next step is to find out which blocks are being read. Capture a sample of the waits, then query dba_extents for the extent_id, file_id, and block_id (don't run that awful query with the "block_id + blocks" predicate) and cross-check the list of blocks to see if they are typically the first couple of blocks of an extent or randomly scattered throughout extents. If the former, the problem is probably related to ASSM; if the latter, it may be related to failed probes on index leaf block reuse (i.e. after large-scale deletes).
    I have a 10046 trace file with me (some samples below) that may give some information. However, since the issue was critical, I killed the insert process and rebuilt both indexes. Because the index was rebuilt, I am no longer able to find any information in dba_extents.
    select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=42 and block_id=1109331;
    no rows selected
    select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=42 and block_id=1109395 ;
    no rows selected
    select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=42 and block_id=1109459;
    no rows selected
    select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=10 and block_id=1107475;
    no rows selected
    select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=10 and block_id=1107539;
    no rows selected
    select object_name,object_Type from dba_objects where object_id=17599;
    no rows selected
    WAIT #4: nam='db file sequential read' ela= 49 file#=42 block#=1109331 blocks=1 obj#=17599 tim=1245687162307379
    WAIT #4: nam='db file sequential read' ela= 59 file#=42 block#=1109395 blocks=1 obj#=17599 tim=1245687162307462
    WAIT #4: nam='db file sequential read' ela= 51 file#=42 block#=1109459 blocks=1 obj#=17599 tim=1245687162307538
    WAIT #4: nam='db file sequential read' ela= 49 file#=10 block#=1107475 blocks=1 obj#=17599 tim=1245687162307612
    WAIT #4: nam='db file sequential read' ela= 49 file#=10 block#=1107539 blocks=1 obj#=17599 tim=1245687162307684
    WAIT #4: nam='db file sequential read' ela= 198 file#=10 block#=1107603 blocks=1 obj#=17599 tim=1245687162307905
    WAIT #4: nam='db file sequential read' ela= 88 file#=10 block#=1107667 blocks=1 obj#=17599 tim=1245687162308016
    WAIT #4: nam='db file sequential read' ela= 51 file#=10 block#=1107731 blocks=1 obj#=17599 tim=1245687162308092
    WAIT #4: nam='db file sequential read' ela= 49 file#=10 block#=1107795 blocks=1 obj#=17599 tim=1245687162308166
    WAIT #4: nam='db file sequential read' ela= 49 file#=10 block#=1107859 blocks=1 obj#=17599 tim=1245687162308240
    WAIT #4: nam='db file sequential read' ela= 52 file#=10 block#=1107923 blocks=1 obj#=17599 tim=1245687162308314
    WAIT #4: nam='db file sequential read' ela= 57 file#=42 block#=1109012 blocks=1 obj#=17599 tim=1245687162308395
    WAIT #4: nam='db file sequential read' ela= 52 file#=42 block#=1109076 blocks=1 obj#=17599 tim=1245687162308470
    WAIT #4: nam='db file sequential read' ela= 98 file#=42 block#=1109140 blocks=1 obj#=17599 tim=1245687162308594
    WAIT #4: nam='db file sequential read' ela= 67 file#=42 block#=1109204 blocks=1 obj#=17599 tim=1245687162308686
    WAIT #4: nam='db file sequential read' ela= 53 file#=42 block#=1109268 blocks=1 obj#=17599 tim=1245687162308762
    WAIT #4: nam='db file sequential read' ela= 54 file#=42 block#=1109332 blocks=1 obj#=17599 tim=1245687162308841
    WAIT #4: nam='db file sequential read' ela= 55 file#=42 block#=1109396 blocks=1 obj#=17599 tim=1245687162308920
    WAIT #4: nam='db file sequential read' ela= 54 file#=42 block#=1109460 blocks=1 obj#=17599 tim=1245687162308999
    WAIT #4: nam='db file sequential read' ela= 52 file#=10 block#=1107476 blocks=1 obj#=17599 tim=1245687162309074
    WAIT #4: nam='db file sequential read' ela= 89 file#=10 block#=1107540 blocks=1 obj#=17599 tim=1245687162309187
    WAIT #4: nam='db file sequential read' ela= 407 file#=10 block#=1107604 blocks=1 obj#=17599 tim=1245687162309618
    TKPROF for the above trace:
    INSERT into
                     order_rev
                     (aggregated_revenue_id,
                      legal_entity_id,
                      gl_product_group,
                      revenue_category,
                      warehouse_id,
                      tax_region,
                      gl_product_subgroup,
                      total_shipments,
                      total_units_shipped,
                      aggregated_revenue_amount,
                      aggregated_tax_amount,
                      base_currency_code,
                      exchange_rate,
                      accounting_date,
                      inventory_owner_type_id,
                      fin_commission_structure_id,
                      seller_of_record_vendor_id,
                      organizational_unit_id,
                      merchant_id,
                      last_updated_date,
                      revenue_owner_type_id,
                      sales_channel,
                      location)
                     VALUES
                     (order_rev.nextval,:p1,:p2,:p3,:p4,:p5,:p6,:p7,:p8,:p9,:p10,:p11,:p12,to_date(:p13, 'dd-MON-yyyy'),:p14,:p15,:p16,:p17,:p18,sysdate,:p19,:p20,:p21)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute    613      5.50      40.32      96672     247585     306916         613
    Fetch        0      0.00       0.00          0          0          0           0
    total      613      5.50      40.32      96672     247585     306916         613
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 446
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                    164224        0.04         62.33
      SQL*Net message to client                     613        0.00          0.00
      SQL*Net message from client                   613        0.03          0.90
      latch: cache buffers chains                     8        0.00          0.00
      latch: object queue header operation            2        0.00          0.00
    Is there any other way to find out the culprit between the two scenarios you have listed (ASSM / failed probes on index leaf block reuse)?
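    For reference, a sketch of the freelist change being discussed in this thread. The values are illustrative for roughly 24 concurrent inserters, following the concurrency/4 to concurrency/2 guideline; this applies only to freelist-managed (non-ASSM) tablespaces, and the effect on an existing segment is version-specific, so test it first:

```sql
-- Illustrative only: FREELISTS 7 and FREELIST GROUPS 2 share no common factor.
-- Verify the behaviour for your Oracle version on a test system before
-- applying anything like this to production.
ALTER TABLE order_details STORAGE (FREELISTS 7 FREELIST GROUPS 2);
```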

  • Why does Lightroom 4.3 use very high cpu when waiting for user to click OK?

    I saved metadata for 52 photos. 
    Lightroom 4.3 couldn't save metadata for 2 of the photos and opened a window to tell me so. (It had no problem saving metadata on second attempt.)
    While it waited for my response and wasn't doing any work, it was using over 50% of my CPU.
    Why?
    Windows 7 is 32-bit Professional.
    Processor: Intel Core i5 750 Processor 2.66 GHz
    Here's my system info as reported by Lightroom. 
    Lightroom version: 4.3 [865747]
    Operating system: Windows 7 Business Edition
    Version: 6.1 [7601]
    Application architecture: x86
    System architecture: x86
    Logical processor count: 4
    Processor speed: 2.7 GHz
    Built-in memory: 3579.4 MB
    Real memory available to Lightroom: 716.8 MB
    Real memory used by Lightroom: 710.9 MB (99.1%)
    Virtual memory used by Lightroom: 847.9 MB
    Memory cache size: 2.0 MB
    Maximum thread count used by Camera Raw: 4
    System DPI setting: 96 DPI
    Desktop composition enabled: Yes
    Displays: 1) 1920x1080
    Application folder: C:\Program Files\Adobe\Adobe Photoshop Lightroom 4.3
    Library Path: D:\Users\Calvin\Pictures\My Lightroom\lightroom\lightroom4\lightroom4.lrcat
    Settings Folder: C:\Users\Calvin\AppData\Roaming\Adobe\Lightroom
    Adapter #1: Vendor : 10de
        Device : 640
        Subsystem : c9593842
        Revision : a1
        Video Memory : 1007
    AudioDeviceIOBlockSize: 1024
    AudioDeviceName: Speakers (Realtek High Definition Audio)
    AudioDeviceNumberOfChannels: 2
    AudioDeviceSampleRate: 44100
    Build: Uninitialized
    Direct2DEnabled: false
    GL_ALPHA_BITS: 0
    GL_BLUE_BITS: 8
    GL_GREEN_BITS: 8
    GL_MAX_3D_TEXTURE_SIZE: 2048
    GL_MAX_TEXTURE_SIZE: 8192
    GL_MAX_TEXTURE_UNITS: 4
    GL_MAX_VIEWPORT_DIMS: 8192,8192
    GL_RED_BITS: 8
    GL_RENDERER: GeForce 9500 GT/PCIe/SSE2
    GL_SHADING_LANGUAGE_VERSION: 3.30 NVIDIA via Cg compiler
    GL_VENDOR: NVIDIA Corporation
    GL_VERSION: 3.3.0
    OGLEnabled: true
    OGLPresent: true
    GL_EXTENSIONS: GL_ARB_arrays_of_arrays GL_ARB_base_instance GL_ARB_blend_func_extended GL_ARB_clear_buffer_object GL_ARB_color_buffer_float GL_ARB_compatibility GL_ARB_compressed_texture_pixel_storage GL_ARB_conservative_depth GL_ARB_copy_buffer GL_ARB_copy_image GL_ARB_debug_output GL_ARB_depth_buffer_float GL_ARB_depth_clamp GL_ARB_depth_texture GL_ARB_draw_buffers GL_ARB_draw_elements_base_vertex GL_ARB_draw_instanced GL_ARB_ES2_compatibility GL_ARB_ES3_compatibility GL_ARB_explicit_attrib_location GL_ARB_explicit_uniform_location GL_ARB_fragment_coord_conventions GL_ARB_fragment_layer_viewport GL_ARB_fragment_program GL_ARB_fragment_program_shadow GL_ARB_fragment_shader GL_ARB_framebuffer_no_attachments GL_ARB_framebuffer_object GL_ARB_framebuffer_sRGB GL_ARB_geometry_shader4 GL_ARB_get_program_binary GL_ARB_half_float_pixel GL_ARB_half_float_vertex GL_ARB_imaging GL_ARB_instanced_arrays GL_ARB_internalformat_query GL_ARB_internalformat_query2 GL_ARB_invalidate_subdata GL_ARB_map_buffer_alignment GL_ARB_map_buffer_range GL_ARB_multisample GL_ARB_multitexture GL_ARB_occlusion_query GL_ARB_occlusion_query2 GL_ARB_pixel_buffer_object GL_ARB_point_parameters GL_ARB_point_sprite GL_ARB_program_interface_query GL_ARB_provoking_vertex GL_ARB_robust_buffer_access_behavior GL_ARB_robustness GL_ARB_sampler_objects GL_ARB_seamless_cube_map GL_ARB_separate_shader_objects GL_ARB_shader_bit_encoding GL_ARB_shader_objects GL_ARB_shader_texture_lod GL_ARB_shading_language_100 GL_ARB_shading_language_420pack GL_ARB_shading_language_include GL_ARB_shading_language_packing GL_ARB_shadow GL_ARB_stencil_texturing GL_ARB_sync GL_ARB_texture_border_clamp GL_ARB_texture_buffer_object GL_ARB_texture_buffer_range GL_ARB_texture_compression GL_ARB_texture_compression_rgtc GL_ARB_texture_cube_map GL_ARB_texture_env_add GL_ARB_texture_env_combine GL_ARB_texture_env_crossbar GL_ARB_texture_env_dot3 GL_ARB_texture_float GL_ARB_texture_mirrored_repeat GL_ARB_texture_multisample 
GL_ARB_texture_non_power_of_two GL_ARB_texture_query_levels GL_ARB_texture_rectangle GL_ARB_texture_rg GL_ARB_texture_rgb10_a2ui GL_ARB_texture_storage GL_ARB_texture_storage_multisample GL_ARB_texture_swizzle GL_ARB_texture_view GL_ARB_timer_query GL_ARB_transpose_matrix GL_ARB_uniform_buffer_object GL_ARB_vertex_array_bgra GL_ARB_vertex_array_object GL_ARB_vertex_attrib_binding GL_ARB_vertex_buffer_object GL_ARB_vertex_program GL_ARB_vertex_shader GL_ARB_vertex_type_2_10_10_10_rev GL_ARB_viewport_array GL_ARB_window_pos GL_ATI_draw_buffers GL_ATI_texture_float GL_ATI_texture_mirror_once GL_S3_s3tc GL_EXT_texture_env_add GL_EXT_abgr GL_EXT_bgra GL_EXT_bindable_uniform GL_EXT_blend_color GL_EXT_blend_equation_separate GL_EXT_blend_func_separate GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_compiled_vertex_array GL_EXT_Cg_shader GL_EXT_depth_bounds_test GL_EXT_direct_state_access GL_EXT_draw_buffers2 GL_EXT_draw_instanced GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_framebuffer_blit GL_EXT_framebuffer_multisample GL_EXTX_framebuffer_mixed_formats GL_EXT_framebuffer_multisample_blit_scaled GL_EXT_framebuffer_object GL_EXT_framebuffer_sRGB GL_EXT_geometry_shader4 GL_EXT_gpu_program_parameters GL_EXT_gpu_shader4 GL_EXT_multi_draw_arrays GL_EXT_packed_depth_stencil GL_EXT_packed_float GL_EXT_packed_pixels GL_EXT_pixel_buffer_object GL_EXT_point_parameters GL_EXT_provoking_vertex GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_shader_objects GL_EXT_separate_specular_color GL_EXT_shadow_funcs GL_EXT_stencil_two_side GL_EXT_stencil_wrap GL_EXT_texture3D GL_EXT_texture_array GL_EXT_texture_buffer_object GL_EXT_texture_compression_dxt1 GL_EXT_texture_compression_latc GL_EXT_texture_compression_rgtc GL_EXT_texture_compression_s3tc GL_EXT_texture_cube_map GL_EXT_texture_edge_clamp GL_EXT_texture_env_combine GL_EXT_texture_env_dot3 GL_EXT_texture_filter_anisotropic GL_EXT_texture_integer GL_EXT_texture_lod GL_EXT_texture_lod_bias 
GL_EXT_texture_mirror_clamp GL_EXT_texture_object GL_EXT_texture_shared_exponent GL_EXT_texture_sRGB GL_EXT_texture_sRGB_decode GL_EXT_texture_storage GL_EXT_texture_swizzle GL_EXT_timer_query GL_EXT_vertex_array GL_EXT_vertex_array_bgra GL_EXT_import_sync_object GL_IBM_rasterpos_clip GL_IBM_texture_mirrored_repeat GL_KHR_debug GL_KTX_buffer_region GL_NV_blend_square GL_NV_conditional_render GL_NV_copy_depth_to_color GL_NV_copy_image GL_NV_depth_buffer_float GL_NV_depth_clamp GL_NV_ES1_1_compatibility GL_NV_explicit_multisample GL_NV_fence GL_NV_float_buffer GL_NV_fog_distance GL_NV_fragment_program GL_NV_fragment_program_option GL_NV_fragment_program2 GL_NV_framebuffer_multisample_coverage GL_NV_geometry_shader4 GL_NV_gpu_program4 GL_NV_half_float GL_NV_light_max_exponent GL_NV_multisample_coverage GL_NV_multisample_filter_hint GL_NV_occlusion_query GL_NV_packed_depth_stencil GL_NV_parameter_buffer_object GL_NV_parameter_buffer_object2 GL_NV_path_rendering GL_NV_pixel_data_range GL_NV_point_sprite GL_NV_primitive_restart GL_NV_register_combiners GL_NV_register_combiners2 GL_NV_shader_buffer_load GL_NV_texgen_reflection GL_NV_texture_barrier GL_NV_texture_compression_vtc GL_NV_texture_env_combine4 GL_NV_texture_expand_normal GL_NV_texture_multisample GL_NV_texture_rectangle GL_NV_texture_shader GL_NV_texture_shader2 GL_NV_texture_shader3 GL_NV_transform_feedback GL_NV_vertex_array_range GL_NV_vertex_array_range2 GL_NV_vertex_buffer_unified_memory GL_NV_vertex_program GL_NV_vertex_program1_1 GL_NV_vertex_program2 GL_NV_vertex_program2_option GL_NV_vertex_program3 GL_NVX_conditional_render GL_NVX_gpu_memory_info GL_SGIS_generate_mipmap GL_SGIS_texture_lod GL_SGIX_depth_texture GL_SGIX_shadow GL_SUN_slice_accum GL_WIN_swap_hint WGL_EXT_swap_control

    I don't think so. 
    I have enough experience with LR to know what to expect for various activities. 
    Previews for all photos had been rendered long before I initiated the Save Metadata.
    Metadata for 50 of the 52 files had already been written. 
    The high CPU had gone on for at least 5 minutes. I switched to another app immediately after initiating the Save Metadata for the 52 files and only went back to LR when I heard my CPU fan running for a long time.
    When I got back to LR, I saw the "Could not save metadata" window.  I clicked "Show in Library" and OK.  As soon as I did that, CPU usage went back to normal. 
    I've experienced the exact same scenario, where LR can't save metadata for all photos and I've never had LR get stuck consuming a very large amount of CPU.
    As a test, I selected 118 other photos, changed metadata for all of them and then selected all and saved metadata.  LR took about 20 seconds to save metadata for all 118 and LR CPU usage never went above 17%.  The difference is that LR did not show the "Could not save metadata" window.

  • Extremely high (98%) roll wait time

    I am running an ECC 6.0 EHP4 server on Solaris 5.10, Oracle 10.2.0.4.
    My users experience extremely high roll wait time when they execute transactions like PA20/PA30. It takes about 20 minutes to load.
    I ran STAD and found that the majority of the time is spent in roll wait time. How do we know what caused the wait time?
    e.g. a user executes TCODE PA20 and waits at the hourglass for about 20 minutes before the record is shown.
    Other transactions ran smoothly.
    Your help is appreciated.

    After a RFC trace, it showed this RFC statement having very high duration:
    RFC Statement
    Function module name      HTTP_GET
    Source IP Address         a.b.c.d
    Source Server             ourSECCServer
    Destination IP Address    a.b.c.d (same ip as source)
    Destination Server        ourSECCServer
    Client/Server             Client
    Conversation ID
    RFC Trace Rec. Status     4
    Sent Bytes                9.476
    Received Bytes            3.353
    Total Sent Bytes
    Total Received Bytes
    ABAP program name         SAPLSFTP
    RFC Time                  224881.681
    Any ideas?

  • Very high data block waits

    I have one table xxxx in a tablespace tbx, and the tablespace tbx has only 1 datafile. The table xxxx is 890MB with 14 million records. The datafile size is 2048MB. This table is frequently accessed with insert/delete/select. The system spends a lot of time waiting on this datafile. If I create a new tablespace abc with 20 datafiles of about 100MB each, would it help reduce the data block wait count? The PCTFREE/PCTUSED is 10/40 respectively.
    Can anyone please advise me on how to resolve this?

    I am looking at Oracle statistics. We use SAN technology with RAID 0+1 striped across all disks.
    First I use this query to get the wait statistics:
    select time, count, class
    from v$waitstat
    order by time, count;
    From the query above I got this result; the database has only been up since 02/17/2004, about 4 days ago:
    TIME COUNT CLASS
    0 0 sort block
    0 0 save undo block
    0 0 save undo header
    0 0 free list
    0 0 bitmap block
    0 0 unused
    0 0 system undo block
    0 0 system undo header
    0 0 bitmap index block
    10 10 extent map
    48 656 undo header
    271 853 undo block
    301 730 segment header
    780382 1214405 data block
    Then I use this query to find which datafile is being hit the most:
    select count, file#, name
    from x$kcbfwait, v$datafile
    where indx + 1 = file#
    order by count desc;
    The query above returned:
    COUNT     FILE#     NAME
    473324     121     /xx/xx_ycm_tbs_03_01.dbf
    104179     120     /xx/xx_ycm_tbs_02_01.dbf
    93336     118     /xx/xx_idx_tbs_03_01.dbf
    93138     119     /xx/xx_idx_tbs_03_02.dbf
    80289     90     /xx/xx_datafile67.dbf
    64044     108     /xx/xx_ycm_tbs_01_01.dbf
    61485     41     /xx/xx_datafile25.dbf
    61103     21     /xx/xx_datafile8.dbf
    57329     114     /xx/xx_ycm_tbs_01_02.dbf
    29338     5     /xx/xx_datafile02.dbf
    29101     123     /xx/xx_idx_tbs_04_01.dbf
    File# 121 is the only datafile in its tablespace, and that tablespace holds only one table. File# 120 is the same: it is in another tablespace, with only one table in that tablespace.
    At the same time, using top on Solaris, I see iowait ranging between 5-25% during busy hours.

  • High virtual circuit wait event

    Hi,
    in my 11g Enterprise Edition database I have a problem with some sessions that are almost always in the virtual circuit wait event. What is this wait event, and how can I troubleshoot it?
    IMPORTANT: I'm not using XDB or APEX
    Edited by: Insaponata on Jan 9, 2011 8:00 AM

    From an AWR report based on the last hour of work I see:
    Top 5 Timed Foreground Events
    Event     Waits     Time(s)     Avg wait (ms)     % DB time     Wait Class
    virtual circuit wait     95,038     16,056     169     263.91     Network
    DB CPU          305          5.02     
    SQL*Net message from dblink     42,432     48     1     0.79     Network
    db file sequential read     21,990     48     2     0.78     User I/O
    db file scattered read     26,021     36     1     0.59     User I/O
    Do you think this situation is normal? If not, how can I troubleshoot it?
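    Virtual circuit waits are tied to shared server connections, so one place to start troubleshooting (assuming shared server is configured) is dispatcher and shared server utilisation:

```sql
-- Dispatcher busy percentage: consistently high values suggest
-- too few dispatchers for the connection load.
SELECT name,
       status,
       ROUND(busy / NULLIF(busy + idle, 0) * 100, 2) AS busy_pct
FROM   v$dispatcher;

-- Shared server processes and the requests they have handled
SELECT name,
       status,
       requests
FROM   v$shared_server;
```

    If the affected sessions do not need shared server at all, connecting them via a dedicated server (SERVER=DEDICATED in the client's TNS entry) should remove this wait event for those sessions.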

  • High query IO wait

    I have a query that incurs a huge amount of IO waits:
    table address (ENT_ID, ADDR1, ADDR2,CITY, STATE, zip, COUNTRY, ADDR_ID, ACCT_ID, VALID_FROM_DT, VALID_THRU_DT, STAT, addr_hash, sys_delete_dt, mhh,ADDR_TYPE)
    single reversed indexes on ADDR_HASH, acct_id, addr_id
    Composite PK on ENT_ID, ADDR_TYPE, ACCT_ID, STAT
    here is the query:
    SELECT ENTITY_ID, ADDR1, CITY, STATE, zip, COUNTRY, ADDR_ID, ACCT_ID, VALID_FROM_DT, VALID_THRU_DT FROM ADDRESS WHERE ADDR_HASH = :1 AND SYS_DELETE_DT IS NULL;
    The query uses the right index on addr_hash to obtain TABLE ACCESS BY INDEX ROWID over the index range scan.
    Please advise what can I do to optimize this query?
    Thanks a lot,mj

    Yes, it was analyzed earlier yesterday, immediately before the processes started hitting the DB. - 10.1.0.4 / AIX 5.3
    I do not see anything wrong with the execution plan that would require such long I/O waits... Please let me know what I have overlooked.
    The IO wait time is measured by the query:
    select distinct sql_text, sql_id, elapsed_time, cpu_time, user_io_wait_time
    from sys.v_$sqlarea
    order by 5 desc;
    Execution Plan
    0      SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=1 Card=1 Bytes=323)
    1    0   TABLE ACCESS (BY INDEX ROWID) OF 'ADDRESS' (TABLE) (Cost=1 Card=1 Bytes=323)
    2    1     INDEX (RANGE SCAN) OF 'IX_ADDR_HASH' (INDEX) (Cost=1 Card=1)
    Statistics
    1 recursive calls
    0 db block gets
    4 consistent gets
    1 physical reads
    0 redo size
    449 bytes sent via SQL*Net to client
    235 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    0 rows processed
    Thanks a lot,mj
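    Since v$sqlarea only shows cumulative totals, it can help to see the actual row-source statistics for one execution. A hedged sketch using DBMS_XPLAN.DISPLAY_CURSOR, which should be available on 10.1 (requires the gather_plan_statistics hint or STATISTICS_LEVEL=ALL; the bind value is illustrative):

    ```sql
    -- Run the statement once with run-time statistics collection.
    SELECT /*+ gather_plan_statistics */
           ENTITY_ID, ADDR1, CITY, STATE, zip, COUNTRY,
           ADDR_ID, ACCT_ID, VALID_FROM_DT, VALID_THRU_DT
      FROM ADDRESS
     WHERE ADDR_HASH = :1
       AND SYS_DELETE_DT IS NULL;

    -- Then show actual rows, buffers and physical reads per plan step
    -- for the last execution in this session.
    SELECT *
      FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    ```

    Comparing the actual buffer gets and physical reads per step against the optimizer's Cost=1 estimate shows whether the index range scan really is cheap, or whether the table access by rowid is scattering reads across many blocks.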

  • Statspack: High log file sync timeouts and waits

    Hi all,
    Please see an extract from our statpack report:
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                            % Total
    Event                            Waits       Time (s)        Ela Time
    log file sync                  349,713        215,674           74.13
    db file sequential read     16,955,622         31,342           10.77
    CPU time                                       21,787            7.49
    direct path read (lob)          92,762          8,910            3.06
    db file scattered read       4,335,034          4,439            1.53

                                                   Total Wait  Avg wait  Waits
    Event                            Waits  Timeouts  Time (s)      (ms)   /txn
    log file sync                  349,713   150,785   215,674       617    1.8
    db file sequential read     16,955,622         0    31,342         2   85.9
    I hope the above is readable. I'm concerned with the very high number of waits and timeouts, particularly around the log file sync event. From reading around I suspect that the disk our redo log sits on isn't fast enough.
    1) Is this conclusion correct? Are these timeouts excessively high (70% seems high...)?
    2) I see high waits on almost every other event (but no timeouts). Is this pointing towards an incorrect database setup (given our very high load of 160 executes/second)?
    Any help would be much appreciated.
    Jonathan

    log file sync                  349,713   150,785   215,674       617    1.8
    db file sequential read     16,955,622         0    31,342         2   85.9
    What time frame does this report cover?
    It looks like your disk storage can't keep up with the volume of I/O requests from your database.
    The first things to look at are the I/O-intensive SQL statements in your database. Are these SQLs doing unnecessary full table scans?
    Find out the hot blocks and the objects they belong to.
    Check the v$session_wait view.
    Is there any other suspicious activity going on on your server, such as programs other than Oracle doing heavy I/O? Are there any core dumps occurring?
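    The checks suggested above can be sketched against the standard dynamic views. This is a rough starting point, not a tuned diagnostic; adjust column names for your Oracle version:

    ```sql
    -- 1) Most I/O-intensive SQL, by cumulative disk reads and
    --    disk reads per execution.
    SELECT sql_id, disk_reads, executions,
           ROUND(disk_reads / GREATEST(executions, 1)) AS reads_per_exec
      FROM v$sqlarea
     ORDER BY disk_reads DESC;

    -- 2) What sessions are actively waiting on right now
    --    (wait_time = 0 means the wait is still in progress).
    SELECT sid, event, p1, p2, p3, seconds_in_wait
      FROM v$session_wait
     WHERE wait_time = 0;
    ```

    For log file sync specifically, the other thing worth checking is the commit rate: very frequent small commits drive this event up regardless of disk speed.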

  • Buffer busy wait, 1st level bmp

    Hi All !
    OS: Linux redhat 5
    DB: 11gr2
    Block size: 8K
    In an application we use, I can see high buffer busy waits over various periods.
    I collected some info during this event.
    SQL_HASH_VALUE   FILE#   BLOCK#   REASON
    769132182            6    17512        8
    3983195767           6    17512        8
    769132182            6    17512        8
    3240261994           6    17512        8
    3240261994           6    17512        8
    3240261994           6    17512        8
    769132182            6    17512        8
    ... I have a total of 35 sessions
    File 6 / block 17512 =
    TABLESPACE_NAME   SEGMENT_TYPE   OWNER    SEGMENT_NAME
    GBSLOB            LOBSEGMENT     GBSASP   SYS_LOB0000017961C00006$$
    The SQL statements are both inserts and updates to the same large table; BLOBs are involved (insert/update).
    The BLOBs use SecureFiles.
    AWR reports this for a short period
    Buffer busy waits is the top wait event
    Buffer Wait Statistics  DB/Inst: GGBISP01/ggbisp01  Snaps: 20925-20926
    -> ordered by wait time desc, waits desc
    Class                    Waits  Total Wait Time (s)  Avg Time (ms)
    1st level bmb          574,636               17,118             30
    free list               20,538                   70              3
    undo header             41,150                    7              0
    data block                 263                    1              3
    undo block                  18                    0              0
    I'm trying to find more details about this wait event; I believe it is related to the usage of ASSM.
    Can anyone explain more about when "1st level bmb" is seen?
    Thank you !
    Best regards
    Magnus Johansson

    MaJo wrote:
    SQL_HASH_VALUE      FILE#     BLOCK#     REASON
    769132182          6      17512          8
    3983195767          6      17512          8
    769132182          6      17512          8
    3240261994          6      17512          8
    3240261994          6      17512          8
    3240261994          6      17512          8
    769132182          6      17512          8
    ... I have total 35 sessions
    File6 / block 17512 =
    TABLESPACE_NAME                SEGMENT_TYPE       OWNER                          SEGMENT_NAME
    GBSLOB                         LOBSEGMENT         GBSASP                         SYS_LOB0000017961C00006$$
    The sql are both inserts and updates to the same large table, blobs are involved (insert/update)
    blobs using securefile
    AWR reports this for a short period
    Buffer busy waits is the top wait event
    Buffer Wait Statistics          DB/Inst: GGBISP01/ggbisp01  Snaps: 20925-20926
    -> ordered by wait time desc, waits desc
    Class                    Waits Total Wait Time (s)  Avg Time (ms)
    1st level bmb          574,636              17,118             30
    free list               20,538                  70              3
    undo header             41,150                   7              0
    data block                 263                   1              3
    undo block                  18                   0              0
    -------------------------------------------------------------
    I'm trying to find more details about this wait event, I believe it is related to usage of ASSM.
    Can anyone explain more when 1st level bmp is seen ?
    Your AWR shows an interesting mix of ASSM and freelist group blocks - are you running ASSM ?
    1st level bitmap blocks (bmb) are the blocks in a segment (usually the first one or two of each extent) that show the availability of free space in the other data blocks. Each bitmap block can identify up to 256 other blocks (the last time I checked), although you have to have a fairly large data segment before you reach this level of mapping.
    If you have a high rate of concurrent inserts and updates on a LOB column then you may be running into code that frequently updates bitmap blocks to show that data blocks have changed from empty to full. It's also possible that you've run into one of the many bugs that appeared when you mixed ASSM with LOB segments - you haven't given the exact version of 11.2, but you might want to check the latest versions and any bug reports for patches to your version.
    Regards
    Jonathan Lewis
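    For anyone reproducing the diagnosis above: the file#/block# pairs reported by v$session_wait can be mapped back to the owning segment the way the original poster did. A minimal sketch (note that a full dba_extents scan can be slow on a large database):

    ```sql
    -- Find the segment that owns a given file#/block# pair
    -- (values taken from the buffer busy wait's P1/P2 above).
    SELECT owner, segment_name, segment_type, tablespace_name
      FROM dba_extents
     WHERE file_id = 6
       AND 17512 BETWEEN block_id AND block_id + blocks - 1;
    ```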

  • Wait event "virtual circuit wait" in wait class "Network" was consuming sig

    Hello,
    We are facing this problem when 2 queries try to run at the same time.
    The first query takes longer to finish, so the 2nd has to wait for the 1st to finish before it starts. It seems the bottleneck is in the network rather than the server.
    I want to make sure before I start testing the network.
    I get following :
    Wait event "virtual circuit wait" in wait class "Network" was consuming significant database time. 98.4
    Wait class "Network" was consuming significant database time.
    and recommendations is stated as :
    Investigate the cause for high "virtual circuit wait" waits with P1 ("circuit#") value "21" and P2 ("type") value "2".
    I am checking OEM.
    Thanks,
    Shashi.

    Hello Sybrand,
    Can you suggest some changes to test?
    Here is my shared server config :
    SQL> show parameter SHARED
    NAME                              TYPE         VALUE
    hi_shared_memory_address          integer      0
    max_shared_servers                integer
    shared_memory_address             integer      0
    shared_pool_reserved_size         big integer  135895449
    shared_pool_size                  big integer  0
    shared_server_sessions            integer
    shared_servers                    integer      1
    Thanks,
    Shashi.
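    With shared_servers = 1, a single long-running call serialises every other shared-server session, which matches the symptom of the 2nd query waiting for the 1st. As a hedged experiment (values are illustrative, not a recommendation), raising the shared server counts should show whether the lone server process is the bottleneck:

    ```sql
    -- Allow more shared server processes so one long call cannot
    -- monopolise the only server; tune the numbers to your workload.
    ALTER SYSTEM SET shared_servers = 5;
    ALTER SYSTEM SET max_shared_servers = 20;
    ```

    The other half of the test is to connect the same client with SERVER=DEDICATED in the connect descriptor; if the problem disappears on a dedicated connection, the shared-server configuration rather than the network is the cause.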

  • Tablespace Replication Problem - high disk I/O

    Hi.
    I'm doing some R&D on Oracle Streams. I have set up tablespace replication between two 10g R2 instances, and data seems to replicate between the two. These instances have no applications running off them apart from OEM and queries I run using SQL Developer and SQL*Plus.
    The problem I'm seeing is that since setting up and switching on this replication config, disk I/O is high. I'm using Windows Performance Monitor to look at:
    - % Disk time = 100%
    - Avg Disk Writes/sec = 20
    - Avg Disk Reads/sec = 30
    - CPU % = 1
    - % Committed Mem = 40%
    To me this just looks/sounds wrong.
    This has been like this for about 24hrs.
    OEM ADDM report says "Investigate the cause for high "Streams capture: waiting for archive log" waits. Refer to Oracle's "Database Reference" for the description of this wait event. Use given SQL for further investigation." I haven't found any reference to this anywhere.
    Anybody got any ideas on how to track this one down? Where in my db's can I look for more info?
    Platform details:
    (P4, 1GB RAM, IDE disk) x 2
    Windows Server 2003 x64 SP1
    Oracle 10.2.0.1 Enterprise x64
    Script used to setup replication:
    set echo on;
    connect streamadmin/xxxx;
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => '"STM_QT"',
        queue_name  => '"STM_Q"',
        queue_user  => '"STREAMADMIN"');
    END;
    /
    --connect streamadmin/xxxx@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=oratest2.xxxx.co.uk)(PORT=1521)))(CONNECT_DATA=(SID=oratest2.xxxx.co.uk)(server=DEDICATED)));
    connect streamadmin/xxxx@oratest2;
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => '"STM_QT"',
        queue_name  => '"STM_Q"',
        queue_user  => '"STREAMADMIN"');
    END;
    /
    connect streamadmin/xxxx;
    create or replace directory "EMSTRMTBLESPCEDIR_0" AS 'D:\ORACLE\DATA\ORATEST1';
    DECLARE 
        t_names DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN   
    t_names(1) := '"ADFERO_TS"';
    DBMS_STREAMS_ADM.MAINTAIN_TABLESPACES(
       tablespace_names             => t_names,
       source_directory_object       => '"DPUMP_DIR"',
       destination_directory_object  => '"DPUMP_DIR"',
       destination_database          => 'ORATEST2.xxxx.CO.UK' ,
       setup_streams                 => true,
       script_name                    => 'Strm_100407_1172767271909.sql',
       script_directory_object        => '"DPUMP_DIR"',
       dump_file_name                 => 'Strm_100407_1172767271909.dmp',
       capture_name                  => '"STM_CAP"',
       propagation_name              => '"STM_PROP"',
       apply_name                    => '"STM_APLY"',
       source_queue_name             => '"STREAMADMIN"."STM_Q"',
       destination_queue_name        => '"STREAMADMIN"."STM_Q"',
       log_file                      => 'Strm_100407_1172767271909.log',
       bi_directional                => true);
    END;
    /

    OK, I don't know why this didn't work before, but here are the results.
    select segment_name, bytes from dba_segments where owner='SYSTEM' and segment_name like 'LOGMNR%' ORDER BY bytes desc
    SEGMENT_NAME                                                                      BYTES                 
    LOGMNR_RESTART_CKPT$                                                              14680064              
    LOGMNR_OBJ$                                                                       5242880               
    LOGMNR_COL$                                                                       4194304               
    LOGMNR_I2COL$                                                                     3145728               
    LOGMNR_I1OBJ$                                                                     2097152               
    LOGMNR_I1COL$                                                                     2097152               
    LOGMNR_RESTART_CKPT$_PK                                                           2097152               
    LOGMNR_ATTRIBUTE$                                                                 655360                
    LOGMNRC_GTCS                                                                      262144                
    LOGMNR_I1CCOL$                                                                    262144                
    LOGMNR_CCOL$                                                                      262144                
    LOGMNR_CDEF$                                                                      262144  
    LOGMNR_USER$                                                                      65536  
    160 rows selected
    select segment_name, bytes from dba_extents where segment_name=upper( 'logmnr_restart_ckpt$' );
    SEGMENT_NAME                                                                      BYTES                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              65536                 
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    LOGMNR_RESTART_CKPT$                                                              1048576               
    29 rows selected
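    The LOGMNR_RESTART_CKPT$ growth above is Streams LogMiner checkpoint data. One hedged thing to try on 10.2 is shortening the checkpoint retention for the capture process created by the setup script (STM_CAP); the default retention is 60 days, so checkpoint data, and the archived logs needed to recompute it, accumulate for a long time:

    ```sql
    -- Keep Streams checkpoint data for 7 days instead of the
    -- 10.2 default of 60; run as the Streams administrator.
    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name              => 'STM_CAP',
        checkpoint_retention_time => 7);
    END;
    /
    ```

    This is a sketch based on the capture name from the script above; it reduces checkpoint bookkeeping but will not by itself fix a capture process that is stuck waiting for an archived log, so the ADDM "waiting for archive log" finding is still worth chasing separately.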

  • Wait time frequent insert

    Hello
    This was a question on my exam and I am still not sure about the answer.
    You notice there is a very high percentage of wait time for contention events in your RAC database, which has frequent insert operations.
    Which two recommendations may reduce this problem?
    a-) shorter transactions
    b-) increasing sequence cache sizes
    c-) using reverse key indexes
    d-) uniform and large extent sizes
    e-) automatic segment space management
    f-) smaller extent sizes
    Any suggestions?

    Again, what exam are we talking about? If you're talking about one of the Oracle certification exams, be aware that you're not allowed to post exam questions on public forums or to discuss them with other candidates. That's a violation of your certification agreement.
    Assuming we are not talking about an Oracle certification exam, I'd point out to the instructor that any or all of the answers might be correct depending on the wait event.
    Justin
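    As an illustration of option b-): with frequent inserts across RAC instances, a small sequence cache forces constant cross-instance updates of the dictionary entry, which shows up as contention waits. The sequence name here is hypothetical:

    ```sql
    -- Larger cache: each instance reserves 1000 values at a time,
    -- so the dictionary row for the sequence is touched far less often.
    ALTER SEQUENCE my_app_seq CACHE 1000;
    ```

    The trade-off is gaps in the generated numbers (cached values are lost on shutdown), which is usually acceptable for surrogate keys.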

  • High Update Response Time ticket from solman

    Hi Experts,
    We have monitoring set up from Solman. We are getting an alert for ECC prod that there is a high update response time. Can you please let me know how to fix it?
    Thanks,
    Asad

    Hi Asad,
    How many update work processes do you have? Check whether you see a high amount of wait time for the update processes.
    You may increase their number if you have enough memory.
    Go to SM13, then Go to -> Administration of Update System,
    then Goto -> Statistics.
    Response Times: Rules of Thumb - CCMS Monitoring - SAP Library
    Less than 1 second is recommended, though sometimes you may breach it.
    Thanks,
    Manu
