Controlfile on ASM performance question

We are seeing Controlfile Enqueue performance spikes. The options under consideration are to move the control file to a separate disk group (needs an outage), or to add some disks (from different LUNs) to the same disk group, which is the approach I prefer. It seems like a slow disk is causing this issue.
Second question: can the snapshot controlfile be placed on ASM storage?

The following points may help:
- Separating the control file into another disk group may make things even worse if the total number of disks in the new disk group is insufficient.
- Control file contention issues usually have nothing to do with the storage throughput you have; they come from the number of operations that require different levels of exclusion on the control files.
- Since all control file copies are updated together, a problem that sometimes occurs is that one copy sits on slower storage than the others (different storage tiers can cause this). Please check that this is not the issue; the query sketch below is one way to see where the time is going.
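If it helps, a quick way to confirm whether the waits are I/O-related at all is to look at the controlfile wait events from the database side. A minimal sketch using the standard v$system_event view (figures are cumulative since instance startup, times in centiseconds):

    -- Controlfile-related wait events. Long "control file sequential read" /
    -- "control file parallel write" waits point at slow controlfile I/O;
    -- "enq: CF - contention" points at serialization on the CF enqueue.
    SELECT event,
           total_waits,
           time_waited,                      -- centiseconds
           ROUND(average_wait, 2) AS avg_cs  -- centiseconds per wait
    FROM   v$system_event
    WHERE  event LIKE 'control file%'
       OR  event LIKE 'enq: CF%'
    ORDER  BY time_waited DESC;

If the write events dominate and one controlfile copy lives on slower disks, that supports the mismatched-copy theory above; if the CF enqueue itself dominates, adding disks will not help much.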
Regards,
Husnu Sensoy

Similar Messages

  • Duplicating controlfile in ASM

    Hello all,
    I've just created a 10g standalone database with just one controlfile. I have just one disk group, but I want to multiplex the controlfile using an RMAN copy.
    Can someone tell me the steps to duplicate the controlfile in ASM?
    One controlfile will reside in +DG/<DID>/controlfile and the other in +DG/<DID>/datafile, both in the same DG.
    rman copy from to
    alter system set control_files='+DG/<DID>/controlfile/ct1.f','+DG/<DID>/datafile/ct2.f' scope=spfile
    Thanks

    See the Oracle docs:
    Performing RMAN Recovery: Advanced Scenarios
    "RMAN uses the autobackup format and DBID to determine where to hunt for the control file autobackup. If one is found, RMAN restores the control file to all control file locations listed in the CONTROL_FILES initialization parameter."

  • Simple performance question

    Simple performance question, put the simplest way possible: assume I have an int[][][][][] matrix and a boolean add. The array is several dimensions deep.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract a constant value from each element in the array.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios.
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][iiii] += constant;
                        else
                            matrix[i][ii][iii][iiii] -= constant;
    }

    private void process() {
        if (add)
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][iiii] += constant;
        else
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][iiii] -= constant;
    }
    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case n^d conditions are checked, whereas in the second, only 1. It is, however, less elegant, but I am willing to do it for a significant improvement.

    erjoalgo wrote:
    I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro optimization?
    Almost certainly not; the main reason being that
    matrix[i][ii][iii][...] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    but I will follow amickr advice and not worry about it.
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston

  • BPM performance question

    Guys,
    I do understand that ccBPM is very resource hungry, but what I was wondering is this:
    Once you use BPM, does an extra step decrease the performance significantly, or does it just need slightly more resources?
    More specifically, we have quite complex mapping in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing so from a performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
    In SXMB_ADM you can set the timeout higher for sync processing.
    Go to Integration Processing in SXMB_ADM and set the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more waiting now: SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
    Make sure that your calling system does not have a timeout lower than the one you set in XI; otherwise yours will go on and finish while your partner may end up sending the message twice.
    When you go for BPM, the whole workflow has to come into action. So, for example, where your mapping lasts < 1 sec without BPM, in a BPM the transformation step can last 2 seconds plus one second for the mapping (that's just an example). The workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than having the same scenario without BPM.
    See the links below:
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
    BPM performance- data aggregation persistance
    Regards
    Chilla..

  • Multiplexing Controlfiles in ASM

    Hi,
    How can we multiplex control files in ASM?
    And how can we find the free space in disk groups?
    Please let me know.
    Thanks a lot.

    Hi;
    Please see below notes which could be helpful for your issue:
    How To Move Controlfile To ASM [ID 468458.1] << check step 6 and also the reference part
    Also see:
    ASM Technical Best Practices [ID 265633.1]
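    For the free-space part of the question, a minimal query sketch against the standard ASM views (run it on the ASM instance or on a database instance that mounts the groups):

    -- Total vs. free space per disk group; USABLE_FILE_MB additionally
    -- accounts for redundancy and the disk-failure reserve (10gR2+).
    SELECT name, total_mb, free_mb, usable_file_mb,
           ROUND(free_mb / total_mb * 100, 1) AS pct_free
    FROM   v$asm_diskgroup;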
    Regards
    Helios

  • ASM performance evaluation

    Dear All:
    We have 8 LUNs of 100GB each distributed across a disk group for our data/index storage. We ran massive update/insert statements on tables built with indexes, so there were numerous db file sequential reads.
    We ran ADDM reports based on snapshots taken during these updates, and Oracle alerted on I/O throughput:
    *** The average response time for single block reads was 99 milliseconds.
    *** Average datafile I/O throughput was 720K per second for reads and 1.2M per second for writes.
    *** Even though we have already implemented ASM, Oracle's recommendation was to engage ASM, which is confusing.
    *** On some index files, the average response time is >500 milliseconds for a single block.
    Now what is meant by the average response time per single block? Is it the database block or the stripe size for the disk group (1 MB, I assume)?
    On the dbconsole ASM performance tab, we can see the response time and throughput in MB per second. Is there any script to get that info? And what is an ideal benchmark for ASM throughput and single block average response time?
    Any help, advise, suggestion will be highly appreciated.

    user12018084 wrote:
    On the dbconsole, ASM performance TAB, we could see the response time, throughput in MB per second. Is there any script to get those info? And what is an ideal benchmark for ASM throughput, single block average response time.
    I do not understand how you can benchmark ASM using I/O.
    I/O is directly dependent on your I/O subsystem: the driver, the hardware, the cabling, the storage array, the speed of the disks. For example, if you use a PCI-X HCA card for your connectivity to a storage system and not PCI-E, you have a very specific maximum throughput imposed on your I/O layer, which would not exist if you used PCI-E instead. Or your storage array could be implementing RAID5 without ASIC support, resulting in a hefty overhead for parity calculation on write I/O. Or your RAID10 storage array could have 5 striped sets across 2 disks per set, instead of a single set across 10 disks.
    And now you want to measure I/O at the ASM level (btw, who - which process - actually writes to disk?).. and then blame ASM for poor I/O performance?
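    That said, to answer the "is there any script" part: the single-block figure that ADDM quotes refers to one database block (the "db file sequential read" wait), not to the 1 MB ASM allocation unit. A minimal sketch to pull per-datafile averages from the database instance (v$filestat times are in centiseconds, cumulative since startup):

    -- Average single-block read / write latency per datafile, in ms.
    SELECT d.name,
           f.phyrds, f.phywrts,
           ROUND(f.readtim  / NULLIF(f.phyrds, 0)  * 10, 1) AS avg_read_ms,
           ROUND(f.writetim / NULLIF(f.phywrts, 0) * 10, 1) AS avg_write_ms
    FROM   v$filestat f
           JOIN v$datafile d ON d.file# = f.file#
    ORDER  BY avg_read_ms DESC NULLS LAST;

    Anything consistently above roughly 10-20 ms for single-block reads on conventional disks is usually worth investigating at the storage layer.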

  • How do I copy a controlfile to ASM

    I have a database in which the datafiles are in an ASM instance, but two controlfiles are multiplexed to normal ext3 Linux filesystems.
    How do I copy the controlfile to ASM to multiplex the controlfile there too? Can I use RMAN convert to do this?
    Regs
    johnnie d

    If your disk group is DATA1, then set control_files='+DATA1/control01.ctl' in the init.ora / spfile (include your existing controlfile paths as well if you want to keep copies outside ASM).
    Startup nomount your database using that init.ora file, then:
    RMAN> CONNECT TARGET /
    RMAN> RESTORE CONTROLFILE FROM '<your_location>';
    RMAN> ALTER DATABASE MOUNT;
    RMAN> ALTER DATABASE OPEN;
    See Metalink doc ID 252219.1.
    Thanks.

  • Swing performance question: CPU-bound

    Hi,
    I've posted a Swing performance question to the java.net performance forum. Since it is a Swing performance question, I thought readers of this forum might also be interested.
    Swing CPU-bound in sun.awt.windows.WToolkit.eventLoop
    http://forums.java.net/jive/thread.jspa?threadID=1636&tstart=0
    Thanks,
    Curt

    You obviously don't understand the results, and the first reply to your posting on java.net clearly explains what you missed.
    The event queue is using Thread.wait to sleep until it gets some more events to dispatch. You have incorrectly diagnosed the sleep waiting as your performance bottleneck.

  • Xcontrol: performance question (again)

    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans per second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I also have only 0 to 1% CPU load.
    Is there a way to reduce the CPU load when using XControls?
    If there isn't, and if this is not a problem with my installation but a known issue, I think this would be a potential point for NI to fix in a future update of LV.
    Regards,
    soranito
    Message Edited by soranito on 04-04-2010 08:16 PM
    Message Edited by soranito on 04-04-2010 08:18 PM
    Attachments:
    XControl_performance_test.zip ‏60 KB

    soranito wrote:
    Hello,
    I've got a little performance question regarding xcontrols. I observed rather high cpu-load when using xcontrols. To investigate it further, I built a minimal xcontrol (boolean type) which only writes the received boolean-value to a display-element in it's facade (see attached example). When I use this xcontrol in a test-vi and write to it with a rate of 1000 booleans / second, I get a cpu-load of about 10%. When I write directly to a boolean display element instead of the xcontrol,I have a load of 0 to 1 %. The funny thing is, when I emulate the xcontrol functionality with a subvi, a subpanel and a queue (see example), I only have 0 to 1% cpu-load, too.
    Okay, I think I understand the question now. You want to know why an equivalent XControl boolean consumes 10x more CPU resource than the LV base package boolean?
    Okay, try opening the project from my reply yesterday. I don't have access to LV at my desk, so let's try this. Open up your XControl facade.vi. Notice how I separated your data event into two events? Go to the data change event: when looping back the action, set isDataChanged (part of the data change cluster) to FALSE, while for the data input (the one displayed on your facade.vi front panel) set isDataChanged to TRUE. This will limit the number of times the facade loops. It will not drop your CPU from 10% to 0%, but it should drop a little, just enough to give you a short-term solution. If that doesn't work, just play around with the loopback statement. I can't remember the exact method.
    Yeah, I agree an XControl shouldn't be overconsuming system resources. I think XControl is still in its primitive form and I'm not sure if NI is planning on investing more time in bug fixes or enhancements. IMO, I don't think XControl is quite ready for primetime yet. Just too many issues that need improvement.
    Message Edited by lavalava on 04-06-2010 03:34 PM

  • Multiplex Controlfiles in ASM

    I have been trying to figure out how to multiplex a controlfile that is in an ASM instance. Normally I would simply shut down the DB, copy an existing controlfile to my desired location, update the pfile or spfile with the new location, and start the DB back up. Since my existing controlfile is in ASM, though, I can't do the copy part.
    I have tried using RMAN to restore a controlfile, but I can't seem to get the syntax right.
    I have seen the thread Re: How to multiplexing control files in ASM instance??? However, I really don't want to issue a RESETLOGS just to be able to multiplex my existing controlfile. In that thread there is another suggestion that avoids a RESETLOGS, but I could not get it to work. Any suggestions? Thanks.

    I found the answer I was looking for. If anyone else had this question, the answer is in metalink document id 345180.1
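    For anyone finding this later, the procedure in that note boils down to something like the sketch below, and it does not need a RESETLOGS because the restore source is the current controlfile rather than a backup (the disk group and file names here are placeholders; follow the note itself for your version):

    SQL>  ALTER SYSTEM SET control_files=
              '+DATA/mydb/controlfile/ctl01.f',
              '+FRA/mydb/controlfile/ctl02.f' SCOPE=SPFILE;
    SQL>  SHUTDOWN IMMEDIATE
    SQL>  STARTUP NOMOUNT

    RMAN> CONNECT TARGET /
    RMAN> RESTORE CONTROLFILE FROM '<path_of_existing_controlfile>';
    RMAN> ALTER DATABASE MOUNT;
    RMAN> ALTER DATABASE OPEN;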

  • MBP with 27" Display performance question

    I'm looking for advice on improving the performance, if possible, of my MacBook Pro and new 27" Apple display combination. I'm using a 13" MacBook Pro 2.53GHz with 4GB RAM and NVIDIA GeForce 9400M graphics, and I have 114GB of the 250GB of HD space available. What I'm really wondering is whether this is enough spec to run the 27" display easily. Apple says it is, and it does work, but I suspect that I'm working at the limit of what my MBP is capable of. My main applications are Photoshop CS5 with Camera Raw and Bridge. Everything works, but I sometimes get lock-ups and things are basically a bit jerky. Is the bottleneck my 2.53GHz processor or the graphics card? I have experimented with the OpenGL settings in Photoshop and tried closing all unused applications. Does anyone have any suggestions for tuning things, and is there a feasible upgrade for the graphics card if such a thing would make a difference? I have recently started working with 21MB RAW files, which I realise isn't helping. Any thoughts would be appreciated.
    Matt.

    I just added a gorgeous 24" LCD to my MBP setup (the G5 is not happy). The answer to your question is yes. Just go into Displays preferences and drag the menu bar over to the 24"; this will make the 24" the primary display and the MBP the secondary when connected.

  • Performance question about 11.1.2 forms at runtime

    hi all,
    Currently we are investigating a Forms/Reports migration from 10 to 11.
    Initially we were using v11.1.1.4 as the baseline for the migration. Now we are looking at 11.1.2.
    We have the impression that performance has decreased significantly between these two releases.
    To give an example:
    A wizard screen contains an image alongside a number of items for entering details. In 11.1.1.4 this screen shows up immediately. In 11.1.2 you see the image rolling out on the canvas while the properties of the items seem to be set during this event.
    I saw that a number of features were added to be able to tune performance, which ... need processing too.
    I get the impression that a large number of events are communicated over the network during the 'build' of the client-side view of the screen. If I recall correctly, during the migration from 6 to 9 events were bundled before being transmitted over the network, so that delays couldn't come from network round trips. I have the impression that this has been reversed and things are communicated between client and server as they arrive, rather than being bundled.
    My questions are:
    - is anyone out there experiencing the same kind of behaviour?
    - if so, is there some kind of property (or properties) that exists to control the behaviour and improve performance?
    - are there properties for performance monitoring that are set but which cause the slowness as a kind of side effect, and which could perhaps be unset?
    Your feedback will be dearly appreciated.
    Greetings,
    Jan.

    The profile can't be changed although I suspect if there was an issue then banding the line would be something they could utilise if you were happy to do so.
    It's all theoretical right now until you get the service installed. Don't forget there's over 600000 customers now on FTTC and only a very small percentage of them have faults. It might seem like lots looking on this forum but that's only because forums are where people tend to come to complain.

  • How can I add a new controlfile in ASM?

    In ASM I can't copy any file. How can I create a new online controlfile? Thanks!

    You can use "alter database backup controlfile to trace" command to get the control file contents.
    Now you can use CREATE CONTROLFILE:
    1) If you use RESETLOGS, use the file-creation form for the log file specification.
    2) If you use NORESETLOGS, use the file-reference form, pointing at the existing log files.
    Why do you want to create a new control file? Is it for multiplexing purposes?
    Reference: Chapter 12 of the Oracle Database Administrator's Guide 10gR2;
    search for the title "Creating a Control File in ASM" ...
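    As a small illustration of the first step (the AS clause and the path below are just an example; adjust for your environment):

    SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/recreate_ctl.sql';

    The generated script contains both a NORESETLOGS and a RESETLOGS version of the CREATE CONTROLFILE statement; edit the file names in it (for example to point at '+DISKGROUP') before running the version you need.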
    Hope this will help you.
    Regards,
    Neeraj

  • Editing stills with motion effects, performance questions.

    I am editing a video in FCE that consists solely of still photos.
    I am creating motion effects (pans and pullbacks, etc.) and dissolve transitions, and overlaying titles. It will be played back on DVD on a 16:9 monitor (standard DVD, not Blu-ray hi-def). Some questions:
    What is the best FCE setup to use for image quality: DV-NTSC? DV-NTSC Anamorphic? Or is it HDV-1080i or 720p30, even though it won't be played back as hi-def?
    How do I best avoid the squiggly-line problem with pan moves, etc.?
    On my G5, 2GB RAM, single-processor machine I seem to be having performance problems with playback: slow to render, dropping frames, etc.
    Thanks for any help!

    Excellent summary MacDLS, thanks for the contribution.
    A lot of the photos I've taken on my camera are 3072 X 2304 (resolution 314) .jpegs.
    I've heard it said that jpegs aren't the best format for Motion, since they're a compressed format.
    If you're happy with the jpegs, Motion will be, too.
    My typical project could either be 1280 X 720 or SD. I like the photo to be a lot bigger than the
    canvas size, so I have room to do crops and grows, and the like. Is there a maximum dimension
    that I should be working with?
    Yes and no. Your originals are 7,000,000 pixels. Your video working space only displays about 950,000 pixels at any single instant.
    At that project size, your stills are almost 700% larger than the frame. This will tax any system as you add more stills. 150% is more realistic in terms of processing overhead, and I try to only import HUGE images that I know are going to be tightly cropped by zooming in. You need to understand that a 1300x800 section of your original is as far as you can zoom in before the pixels reach 100% of their size; if you zoom in further, all you get are bigger pixels. The trade-off you make is that if you zoom way out on your source image, you've thrown away 75% of its content to scale it to fit the video format; you lose much, much more if you go to SD.
    Finally, the manual says that d.p.i doesn't matter in Motion, so does this mean that it's worth
    actually exporting my 300 dpi photos to 72 dpi before working with them in Motion?
    Don't confuse DPI with resolution. Your video screen will only show about 900,000 pixels in HD and about 350,000 pixels in SD, regardless of how many pixels there are in your original.
    bogiesan

  • 9 shared objects performance question

    I have 9 shared objects, 8 of which contain dynamic data which I cannot really consolidate into a single shared object because the shared object often has to be cleared. My question is: what performance issues will I experience with this number of shared objects? I may be wrong in thinking that 9 shared objects is a lot. Anybody with any experience using multiple shared objects, please respond.

    I've used many more than 9 SO's in an application without issue. I suppose what it really comes down to is how many clients are connected to those SO's and how often each one is being updated.

Maybe you are looking for

  • How to know the customer item in Basket in Adventure Works sample db

    Dear all, in the AW sample DB there is a ShopingCartItem table where users can place products they want to buy. From that ShopingCartItem there seems to be no information about the user ID that placed the items, or is there somewhere? regards

  • Booting off external hard drive

    I am trying to boot off an external Seagate 160GB USB hard drive. I used SuperDuper to create a full system backup to my drive. When I reboot, I should hold option, right? When I hold option, the grey screen comes up when booting after the chime and

  • I'm using BAPI_INB_DELIVERY_CHANGE instead of BDC to update the database

    Can anyone help me in using BAPI_INB_DELIVERY_CHANGE? I got stuck with it...

  • Could not save...disk is full

    I installed a new 1.5 terabyte hard drive and when I try to save either Photoshop or Lightroom files I get this error message. I do not get the message saving a file in a Sony or Microsoft program.  Is this something to do with Adobe programs? thanks

  • Is there any way to have Universe SDK running in UNIX/Linux system?

    Hi, The prospect needs to change the Universe connection source dynamically in batch mode within production environment. So they developed a program with Universe COM SDK. However, we found that neither VC nor VB code developed in Universe COM SDK ca