ORA-02393: exceeded call limit on CPU usage -- concept understanding is required

In our system, CPU_PER_CALL is set to 1.5 hours for reporting users.
I can see that some queries run for 10-15 hours and complete successfully, while some queries fail exactly after 1.5 hours.
I want to understand what CPU_PER_CALL means. On what basis is it calculated (fetch, execute, parse)? How does a query accumulate this time?
With the same profile settings, some queries run for 10 hours while others fail after 1.5 hours.
Regards
Sourabh Gupta

The short answer is that different queries wait on different sorts of events. Let's assume that the only 2 wait events in the world are waits for CPU and waits for I/O (there are many other types of waits but most reporting queries will primarily be waiting for these two resources). If you have a query that runs for 15 hours but spends 14.5 hours waiting on I/O and only 0.5 hours on the CPU doing comparisons and/or calculations, the CPU usage for that query is only 0.5 hours. Another query might run for 1.51 hours and do 0.01 hours of I/O and spend 1.5 hours on the CPU calculating various aggregate values for that data. The second query would use 1.5 hours of CPU (and thus exceed your CPU_PER_CALL) while the first query would only use a third as much CPU.
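If it helps to see that split for a live session, here is a minimal sketch (assumes 10g or later and a placeholder SID of 123; the 'DB CPU' statistic is the portion that counts against the CPU limits, while I/O shows up as wait events such as 'db file scattered read'):

-- Rough breakdown of where a session's time went (SID 123 is a placeholder).
SELECT stat_name, ROUND(value / 1000000) AS seconds
FROM   v$sess_time_model
WHERE  sid = 123
AND    stat_name IN ('DB time', 'DB CPU');

-- Time spent waiting; I/O waits do not count towards CPU_PER_CALL.
SELECT event, ROUND(time_waited_micro / 1000000) AS seconds_waited
FROM   v$session_event
WHERE  sid = 123
ORDER  BY time_waited_micro DESC;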
Oracle profiles allow you to specify a number of different limits, so you can cap CPU usage (CPU_PER_CALL / CPU_PER_SESSION), I/O usage (LOGICAL_READS_PER_CALL / LOGICAL_READS_PER_SESSION), or a combination of the two (COMPOSITE_LIMIT).
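For reference, a sketch of how such limits are typically declared (the profile name, user name, and values below are illustrative only; CPU limits are specified in hundredths of a second, logical reads in blocks, and RESOURCE_LIMIT must be TRUE for profile limits to be enforced):

-- Illustrative profile; 540000 hundredths of a second = 1.5 hours of CPU per call.
CREATE PROFILE reporting_users LIMIT
  cpu_per_call              540000
  cpu_per_session           UNLIMITED
  logical_reads_per_call    UNLIMITED
  logical_reads_per_session UNLIMITED
  composite_limit           UNLIMITED;

-- Profile resource limits are only enforced when this is TRUE.
ALTER SYSTEM SET resource_limit = TRUE;

-- Assign the profile to a (hypothetical) reporting user.
ALTER USER report_user PROFILE reporting_users;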
Justin

Similar Messages

  • ORA-02393 Exceeded Call Limit on CPU Usage

    I have created a Profile and attached it to a user, in this example:
    Create Profile percall
    Limit
    CPU_PER_CALL 10
    IDLE_TIME 5;
    I have attached it to one user - USER1
    When USER1 runs a SQL Statement -
    SELECT COUNT(*) FROM TABLE1 A WHERE A.EFFDT = (SELECT MAX(B.EFFDT) FROM TABLE1 B WHERE B.EMPLID = A.EMPLID AND B.EFFDT <= SYSDATE);
    I get an error (which I want to receive): ORA-02393 Exceeded Call Limit on CPU Usage.
    The SQL statement shows up in the table DBA_COMMON_AUDIT_TRAIL, but it is recorded as a success even though the user received the ORA-02393 error.
    What I want is a way for a DBA to be able to report on those ORA-02393 errors. I don't see any entries in the log files, and I don't notice any errors in the Oracle tables.
    I would like to be able to show the user (after a week, when they bring up the issue) what the SQL statement was and why it exceeded the CPU usage. Ideally the error would place the SQL statement in a table, or at least write it to an error log, so that I can verify that THIS is the statement which exceeded the CPU usage.
    Thank you
    Aaron

    Can you modify the procedure in which the SELECT resides?
    If so, trap and log the error.
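    A minimal sketch of that approach, assuming the SELECT can be wrapped in PL/SQL (the log table and statement text are hypothetical; ORA-02393 corresponds to error code -2393):
    -- Hypothetical table to record statements that hit the CPU limit.
    CREATE TABLE cpu_limit_errors (
      logged_at DATE,
      username  VARCHAR2(30),
      err_text  VARCHAR2(4000),
      sql_text  VARCHAR2(4000)
    );
    DECLARE
      e_cpu_limit EXCEPTION;
      PRAGMA EXCEPTION_INIT(e_cpu_limit, -2393);  -- ORA-02393
      v_cnt NUMBER;
    BEGIN
      SELECT COUNT(*) INTO v_cnt FROM table1;     -- the guarded statement
    EXCEPTION
      WHEN e_cpu_limit THEN
        INSERT INTO cpu_limit_errors
        VALUES (SYSDATE, USER, SQLERRM, 'SELECT COUNT(*) FROM table1');
        COMMIT;
        RAISE;
    END;
    /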

  • Exceeded session limit on CPU usage

    Hi All,
    We are getting this message while generating some reports. Please see the error text below. For the time being we have bumped the session limit to unlimited to take care of the problem. But the question is: "Is there a way available in MII to refresh (cycle) the data source connection?" so that the DB session limit can be kept unaltered.
    When we searched this on different forums, we found a solution which we already implemented (bumping the session limit to unlimited). But we are looking for a solution from the MII side.
    Any help will be appreciated
    Regards,
    Rajesh.
    Error Text:
    Error occurred while processing data stream, A SQL Error has occurred on query, ORA-02392: exceeded session limit on CPU usage, you are being logged off . com.lighthammer.Illuminator.logging.LHException: Error occurred while processing data stream, A SQL Error has occurred on query, ORA-02392: exceeded session limit on CPU usage, you are being logged off . at com.lighthammer.Illuminator.logging.ErrorHandler.handleError(Unknown Source) at com.lighthammer.Illuminator.logging.ErrorHandler.handleError(Unknown Source) at com.lighthammer.Illuminator.connectors.Proxy.Proxy.processRequest(Unknown Source) at com.lighthammer.Illuminator.services.handlers.IlluminatorService.processRequest(Unknown Source) at com.lighthammer.Illuminator.services.ServiceManager.runQuery(Unknown Source) at com.lighthammer.Illuminator.servlet.Illuminator.service(Unknown Source) at javax.servlet.http.HttpServlet.service(HttpServlet.java:856) at com.lighthammer.Illuminator.servlet.ServletRunner.run(Unknown Source) at com.lighthammer.Illuminator.servlet.ServletRunner.runAsXmlQuery(Unknown Source) at com.lighthammer.xacute.actions.illuminator.queries.IlluminatorQueryObject.LoadDocument(Unknown Source) at com.lighthammer.xacute.actions.illuminator.queries.IlluminatorQueryObject.Invoke(Unknown Source) at com.lighthammer.xacute.core.Action.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.Conditional.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Execute(Unknown Source) at com.lighthammer.Illuminator.connectors.Xacute.XacuteRequestHandler.processQueryRequest(Unknown Source) at com.lighthammer.Illuminator.connectors.Xacute.XacuteRequestHandler.QueryRequest(Unknown Source) at com.lighthammer.Illuminator.connectors.Xacute.XacuteConnector.processRequest(Unknown Source) at com.lighthammer.Illuminator.services.handlers.IlluminatorService.processRequest(Unknown Source) at com.lighthammer.Illuminator.services.ServiceManager.runQuery(Unknown Source) at com.lighthammer.Illuminator.servlet.Illuminator.service(Unknown Source) at javax.servlet.http.HttpServlet.service(HttpServlet.java:856) at com.newatlanta.servletexec.SERequestDispatcher.forwardServlet(SERequestDispatcher.java:638) at com.newatlanta.servletexec.SERequestDispatcher.forward(SERequestDispatcher.java:236) at com.newatlanta.servletexec.SERequestDispatcher.internalForward(SERequestDispatcher.java:283) at com.newatlanta.servletexec.SEFilterChain.doFilter(SEFilterChain.java:96) at com.lighthammer.cms.system.CMSFilter.doFilter(Unknown Source) at com.newatlanta.servletexec.SEFilterChain.doFilter(SEFilterChain.java:60) at 
com.newatlanta.servletexec.ApplicationInfo.filterApplRequest(ApplicationInfo.java:2159) at com.newatlanta.servletexec.ApplicationInfo.processApplRequest(ApplicationInfo.java:1823) at com.newatlanta.servletexec.ServerHostInfo.processApplRequest(ServerHostInfo.java:937) at com.newatlanta.servletexec.ServletExec.ProcessRequest(ServletExec.java:1091) at com.newatlanta.servletexec.ServletExec.ProcessRequest(ServletExec.java:973) at com.newatlanta.servletexec.ServletExecService.processServletRequest(ServletExecService.java:167) at com.newatlanta.servletexec.ServletExecService.Run(ServletExecService.java:204) at com.newatlanta.servletexec.HttpServerRequest.run(HttpServerRequest.java:487)

    Hi,
    Kindly try out the below option from the database side.
    Error : ORA-02392: exceeded session limit on CPU usage, you are being logged off
    Cause : An attempt was made to exceed the maximum CPU usage allowed by the CPU_PER_SESSION clause of the user profile.
    Action : If this happens often, ask the database administrator to increase the CPU_PER_SESSION limit of the user profile.
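    A sketch of what that action can look like on the database side (profile and user names are placeholders; CPU_PER_SESSION is specified in hundredths of a second, and RESOURCE_LIMIT must be TRUE for profile limits to be enforced at all):
    -- Find which profile the failing account actually uses (MII_USER is a placeholder).
    SELECT username, profile FROM dba_users WHERE username = 'MII_USER';
    -- Raise the limit on that profile (value in hundredths of a second) ...
    ALTER PROFILE mii_profile LIMIT cpu_per_session 1080000;
    -- ... or, as a stop-gap, remove it entirely, which is what was done here.
    ALTER PROFILE mii_profile LIMIT cpu_per_session UNLIMITED;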
    If you are looking for a solution on the MII side:
    Check with the SAP MII administrator on the log files.
    Check the Data Server tab for configuration details (e.g. Pool Size, Pool Max, etc.).
    Kindly let us know the version of SAP MII.
    Thanks
    Rajesh Sivaprakasam.

  • ORA-02394: exceeded session limit on IO usage

    I have one SQL statement that took a long time to run and got this:
    ORA-02394: exceeded session limit on IO usage, you are being logged off
    When I checked the profile options:
    SQL> select PROFILE, RESOURCE_NAME, LIMIT from dba_profiles where RESOURCE_NAME='LOGICAL_READS_PER_SESSION';
    PROFILE    RESOURCE_NAME                  LIMIT
    DEFAULT    LOGICAL_READS_PER_SESSION      UNLIMITED
    SQL> select PROFILE, RESOURCE_NAME, LIMIT from dba_profiles where RESOURCE_NAME='LOGICAL_READS_PER_CALL';
    PROFILE    RESOURCE_NAME                  LIMIT
    DEFAULT    LOGICAL_READS_PER_CALL         UNLIMITED
    Is there anything we can do here, given that both of these limits are set to UNLIMITED in the DEFAULT profile?

    Hello,
    Oracle 8.1.7.4 is the final patchset of the last release for Oracle 8i, so it's a rather
    stable version.
    Do you have a way to tune this query so that it can run faster?
    Are the optimizer statistics correct and up to date?
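    If the statistics are in doubt, a minimal sketch of checking and refreshing them (schema and table names are placeholders; on 8i either ANALYZE or DBMS_STATS can be used):
    -- When were the statistics last gathered? (SCOTT/BIG_TABLE are placeholders.)
    SELECT table_name, num_rows, last_analyzed
    FROM   dba_tables
    WHERE  owner = 'SCOTT' AND table_name = 'BIG_TABLE';
    -- Refresh them, including the indexes.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT',
                                    tabname => 'BIG_TABLE',
                                    cascade => TRUE);
    END;
    /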
    Best regards,
    Jean-Valentin

  • Skype vdeo call and high cpu usage on Mac

    Hello,
    I have seen many posts around skype video call causing extremely high CPU usage which in turn causes a lot of heat and fan starts to go crazy. This does not seem to happen on Windows. I have trawled various forums and tried everything but to no avail.
    Finally, the workaround that worked for me was simple: during a call, when CPU usage is going through the roof and the fans are going crazy, just share your screen with the caller for a few seconds and then unshare it. CPU usage dramatically falls off and stabilises at around 38% to 40%. I have noticed this on my MacBook Pro (2013, 2.3GHz, 16GB RAM) running Yosemite.
    This is definitely a problem with Skype. Guys, please fix it.

    Did the sharing and unsharing of screen stabilise your CPU usage? I also tried deleting the Library/Caches/com.skype* directory and whilst it helped initially, in the long run it did not work. The only thing that worked for me was sreen share and unshare.
    Personally I am just getting more and more disappointed with OS X. The way they have implemented HiDPI support with scaling (downscaling and then upscaling) is just insane and a lot of applications struggle. I am sure this is not the root cause of the high CPU usage issue for Skype. It's not long before I come off the Apple ecosystem altogether.

  • Limit the CPU usage of applications to keep OS functional ?

    Hello,
    I would first like to say that I am not knowledgeable in programming or in computer science vocabulary. So I will say things as I think they are, sorry about that.
    My problem is:
    When using any applications, there is always a moment where the application will freeze the computer.
    In my understanding, it is because all the CPU is directed towards this application.
    So the simple solution that comes with this thinking is: if I limit the CPU an application can use, then I can keep my finder working even if the application crashes.
    I understand that I might totally be off-track and in that case, I would love to hear an explanation on how it really works.
    Thank you !

    Good stuff guys. I guess what I was not considering is that if any archived logs are overwritten, it makes recovery from the last full backup perilous. Here are some more facts about the DB that may help determining the best solution:
    All raster content is stored in a raster tablespace. We load raster content (satellite imagery, etc.) from disk or FTP delivery, but we do not do any editing to it. So, once it is in Oracle it is static, and a daily or recurring backup of the raster content may not make sense. It may make sense to only back up this tablespace when new loads occur. Also, the raster tablespace makes up over 95% of our DB size (~220GB).
    All other content (mainly geospatial vector information) is stored in 2 other tablespaces and this is the content that is edited routinely and this is the content I need backed up often....and it is small...no more than a few gigs.
    So, what if I ran the DB in archive log mode, except for the raster tablespace, did nightly backups of everything but the raster TS, and backed up the raster TS only when new content is loaded? Does this seem like a good idea? And if so, what would be the best way to periodically back up a >200GB tablespace?
    This will make redo much MUCH smaller and I can handle removing the archived logs once a backup is complete.
    Thanks again. It is fun learning more about this stuff!!
    Thanks!
    Message was edited by:
    [email protected]

  • ORA-02393 in an APEX PL/SQL Procedure

    Hi,
    I have a PL/SQL procedure in APEX that runs queries against a remote Database using a DB link. I do multiple calls to the DB in a FOR LOOP query. The procedure is failing with the error: "ORA-02393: exceeded call limit on CPU usage"
    I was wondering if this could be caused by the way that APEX handles db sessions. How or when does APEX close the db session? Or is it leaving it open?
    Do you have any suggestions on how to close the DB session so I don't run into this error?
    Appreciate your help.
    Alejandro

    You want to add this to your DAD
    PlsqlMaxRequestsPerSession 100
    where 100 is the maximum number of requests a database session will serve before closing. 100 is just an example; the highest value that does not trigger the issue is what you want.
    By default, Mod_plsql uses a connection pool, and the default value is 1000 page serves prior to closing the session. PlsqlMaxRequestsPerSession will set that to the value specified.
    Anton

  • CPU usage by Essbase Server

    Hello Experts,
    I have a query regarding CPU usage by the Essbase Server: can it be limited to a certain % of the whole server by a setting?
    Thanks in advance.
    Regards,
    Sudhir

    Hi,
    Is this not the same question as: Query about limiting the essbase application use of CPU and RAM?
    There is no Essbase-specific configuration to limit the CPU usage; depending on your OS you could look at trying to limit the CPU usage of a process, but I am not sure how well that would work in practice.
    Another thread, Re: "Dedicated" CPU for Essbase service, may be useful to you; it is not the same question, but it is about processor usage.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Is there any message when a free-version customer exceeds the 250MB limit?

    Is there any error or warning message when a customer exceeds the 250MB limit of SW usage?
    Is there any way for the SW administrator to know that a user has exceeded the 250MB limit of SW usage?
    If there is any message or warning, is it possible to receive it by email?
    Best Regards

    Hi Ryota,
    sorry for the belated response. If a user is above the activity quota, they will not be able to create a new activity anymore. They will still be able to change existing activities though. Uploading new files may not be possible if the storage quota is exceeded. A user can see the current quota status in his/her settings. StreamWork administrators are not aware of the quota status of free users.
    In case of a professional or enterprise account the organization administrator can see the quota usage of each of his/her user in the administration panel. I am not aware of another notification (we assume that users will talk to their administrator).
    HTH
    Simon

  • CPU usage by SophosWebIntelligence

    SophosWebIntelligence uses up to 90% and more of CPU when using Safari and visiting different sites, resulting in fans speeding up and unbearable noise. MacBook Pro Mid 2009, 2.8 GHz Intel Core 2 Duo, 8GB RAM, OS X 10.9.5. Any solution available for this?

  • FullOffline Backup - ORA-19566: exceeded limit of 0 corrupt blocks for file

    Dear SAP gurus,
    I am getting an error from the DBA Planning Calendar every time the job for "Full Offline backup" is run. As you can see from the log, it is always on the same file, "/oracle/SHD/sapdata4/sr3_16/sr3.data16".
    The oracle error is the following:
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/SHD/sapdata4/sr3_16/sr3.data16
    I found SAP Note 969192 - RMAN Backup of SYSTEM tablespace terminates with ORA-19566,
    but it does not apply because that note is about the SYSTEM tablespace and not PSAPSR3.
    Please find below the log:
    BR0051I BRBACKUP 7.00 (46)
    BR0055I Start of database backup: begomwsv.ffd 2011-08-17 10.01.37
    BR0484I BRBACKUP log file: /oracle/SHD/sapbackup/begomwsv.ffd
    BR0477I Oracle pfile /oracle/SHD/102_64/dbs/initSHD.ora created from spfile /oracle/SHD/102_64/dbs/spfileSHD.ora
    BR0101I Parameters
    Name                           Value
    oracle_sid                     SHD
    oracle_home                    /oracle/SHD/102_64
    oracle_profile                 /oracle/SHD/102_64/dbs/initSHD.ora
    sapdata_home                   /oracle/SHD
    sap_profile                    /oracle/SHD/102_64/dbs/initSHD.sap
    backup_mode                    FULL
    backup_type                    offline_force
    backup_dev_type                disk
    backup_root_dir                /mnt/backup/oracle/SHD
    compress                       no
    disk_copy_cmd                  rman
    cpio_disk_flags                -pdcu
    exec_parallel                  0
    rman_compress                  no
    system_info                    shdadm/orashd eccdev01 Linux 2.6.16.60-0.87.1-smp #1 SMP Wed May 11 11:48:12 UTC 2011 x86_64
    oracle_info                    SHD 10.2.0.4.0 8192 17654 1114483454 eccdev01 UTF8 UTF8
    sap_info                       700 SAPSR3 0002LK0003SHD0011Y01548735220015Maintenance_ORA
    make_info                      linuxx86_64 OCI_102 Jan 29 2010
    command_line                   brbackup -u / -jid FLLOF20110817100136 -c force -t offline_force -m full -p initSHD.sap
    BR0116I ARCHIVE LOG LIST before backup for database instance SHD
    Parameter                      Value
    Database log mode              No Archive Mode
    Automatic archival             Disabled
    Archive destination            /oracle/SHD/oraarch/SHDarch
    Archive format                 %t_%s_%r.dbf
    Oldest online log sequence     17651
    Next log sequence to archive   17654
    Current log sequence           17654            SCN: 1114483454
    Database block size            8192             Thread: 1
    Current system change number   1114501246       ResetId: 664011854
    BR0118I Tablespaces and data files
    BR0202I Saving /oracle/SHD/sapdata3/sr3_15/sr3.data15
    BR0203I to /mnt/backup/oracle/SHD/begomwsv/sr3.data15 ...
    #FILE..... /oracle/SHD/sapdata3/sr3_15/sr3.data15
    #SAVED.... /mnt/backup/oracle/SHD/begomwsv/sr3.data15  #1/15
    BR0280I BRBACKUP time stamp: 2011-08-17 10.28.42
    BR0063I 15 of 48 files processed - 44100.117 of 121180.346 MB done
    BR0204I Percentage done: 36.39%, estimated end time: 11:15
    BR0001I ******************________________________________
    BR0202I Saving /oracle/SHD/sapdata4/sr3_16/sr3.data16
    BR0203I to /mnt/backup/oracle/SHD/begomwsv/sr3.data16 ...
    BR0278E Command output of 'SHELL=/bin/sh /oracle/SHD/102_64/bin/rman nocatalog':
    Recovery Manager: Release 10.2.0.4.0 - Production on Wed Aug 17 10:28:42 2011
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    RMAN>
    RMAN> connect target *
    connected to target database: SHD (DBID=1683093070, not open)
    using target database control file instead of recovery catalog
    RMAN> *end-of-file*
    RMAN>
    host command complete
    RMAN> 2> 3> 4> 5> 6>
    allocated channel: dsk
    channel dsk: sid=223 devtype=DISK
    executing command: SET NOCFAU
    Starting backup at 17-AUG-11
    channel dsk: starting datafile copy
    input datafile fno=00019 name=/oracle/SHD/sapdata4/sr3_16/sr3.data16
    released channel: dsk
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on dsk channel at 08/17/2011 10:30:30
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/SHD/sapdata4/sr3_16/sr3.data16
    RMAN>
    Recovery Manager complete.
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.30
    BR0279E Return code from 'SHELL=/bin/sh /oracle/SHD/102_64/bin/rman nocatalog': 1
    BR0536E RMAN call for database instance SHD failed
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.30
    BR0506E Full database backup (level 0) using RMAN failed
    BR0222E Copying /oracle/SHD/sapdata4/sr3_16/sr3.data16 to/from /mnt/backup/oracle/SHD/begomwsv failed due to previous errors
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0307I Shutting down database instance SHD ...
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0308I Shutdown of database instance SHD successful
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0304I Starting and opening database instance SHD ...
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.47
    BR0305I Start and open of database instance SHD successful
    Do you guys have any idea how to solve this issue?
    Thanks in advance, Marc

    Hi,
    I am getting an error from the DBA Planning Calendar every time the job ...
    So when was your last successful backup of this datafile? Check whether it is still available.
    If that was some time ago, and you may currently be without any usable backup, take a backup without RMAN at once,
    so you have at least something to work with in case you get additional errors right now.
    Then you need to find out which object is affected. You are on the right track already: you need the statement
    that queries dba_extents to check which object the block belongs to.
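    For reference, a sketch of that lookup (the file number 19 is taken from the RMAN log above; the block number is a placeholder for the value reported by dbv):
    -- Map a reported block to its segment.
    SELECT owner, segment_name, segment_type
    FROM   dba_extents
    WHERE  file_id = 19
    AND    12345 BETWEEN block_id AND block_id + blocks - 1;
    -- If no rows come back, check whether the block lies in free space instead.
    SELECT tablespace_name
    FROM   dba_free_space
    WHERE  file_id = 19
    AND    12345 BETWEEN block_id AND block_id + blocks - 1;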
    Has the DB been recovered recently, so the block might possibly belong to an index created with nologging ?
    (this could be the case on BW systems).
    If the last good backup of that file is still available and the redologs belonging to this backup up to current time are as well, you could try to recover that file. But I'd do this only after a good backup without rman and by not destroying the original file.
    If the last good backup was an rman backup, you can do a verify restore of that datafile in advance, to check if the corruption is really not inside the file to be restored.
    Check out the -w (verify) option of brrestore first, to understand how it works.
    (I am not sure if this is already available in version 7.00; maybe you need to switch to 7.10 or 7.20.)
    brrestore -c -m /oracle/SHD/sapdata4/sr3_16/sr3.data16  -b xxxxxxxx.ffr -w only_rmv
    You should do a dbv check of that file as well, to see whether it gives more information, i.e. whether more blocks are
    affected. RMAN stops right after the first corruption, but usually you have a couple of those in a row, especially if they are
    zeroed ones. (This one would also work with version 7.00 brtools.)
    brbackup -c -u / -t online -m /oracle/SHD/sapdata4/sr3_16/sr3.data16 -w only_dbv
    Good luck.
    Volker

  • Call Park service causing high CPU usage

    Hi,
    I've got a Lync 2013 pool with 4 servers running. Everything is working fine, except I notice that when I enable the Call Park service, the CPU usage goes from hovering at around 4% to jumping all over the place (anything from 11% to 30%). I've not seen this behaviour on any other Lync platform I have deployed.
    All machines are virtual, running 2 sockets with 8 cores each. Each server has 25GB of RAM. Any perfmon report I run only confirms that the Call Park service is using a lot of CPU resource (compared with other services).
    Any help greatly appreciated.
    Thanks
    Mike

    Please check whether the CPU usage drops when you stop the Call Park service.
    Please check that the Call Park configuration is correct using the article at
    http://technet.microsoft.com/en-us/library/gg399014.aspx
    Lisa Zheng
    TechNet Community Support

  • ORA-19566: exceeded limit of 999 corrupt blocks for file

    Hi All,
    I am new to Oracle RMAN & RAC Administration. Looking for your support to solve the below issue.
    We have 2 disk groups - +ETDATA and +ETFLASH - in our 3-node RAC environment, in which RMAN is configured on node 2 to take backups. We do not have an RMAN catalog, and RMAN fetches its information from the control file.
    Recently, the backup failed with the error ORA-19566: exceeded limit of 999 corrupt blocks for file +ETFLASH/datafile/users.6187.802328091.
    We found that datafiles are present in both disk groups, and from the control file info we learned that the datafiles in +ETDATA are currently in use while +ETFLASH holds old datafiles.
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name LABWRKT are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ETFLASH/CONTROLFILE/snapcf_LABWRKT.f';
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ETFLASH/controlfile/snapcf_labwrkt.f';
    The configuration above shows that the SNAPSHOT CONTROLFILE points to +ETFLASH, so I changed the configuration so that the SNAPSHOT CONTROLFILE points to '+ETDATA/controlfile/snapcf_labwrkt.f'. At the end of the backup the snapshot file was created in +ETDATA, and I was expecting it to be a copy of the control file in use, which has its datafiles located in +ETDATA. But the backup was still pointing to the old datafiles in +ETFLASH. Since we don't have an RMAN catalog, a resync is not possible either.
    When I ran the backup manually, it completed successfully without any error and pointed to the existing datafiles:
    RMAN> backup database plus archivelog all;
    I hope the issue will be resolved if RMAN points only to the datafiles present in +ETDATA. If I am correct, please let me know how I can make that happen. Also, please explain why the newly created snapshot file does not reflect the existing control file info.

  • ORA-19566: exceeded limit of 0 corrupt blocks

    Hi All,
    We have been encountering some issues with the RMAN backup; it keeps erroring out with the same errors (max corrupt blocks). I ran dbverify for the affected files and found that index blocks are failing. When I tried to identify the indexes from the extent views, I was unable to find them. It looks like these blocks are in free space, and the V$BACKUP_CORRUPTION view shows the corruption as logical.
    Waiting for your suggestions.
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE 10.2.0.3.0 Production
    TNS for HPUX: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    RMAN LOG:
    channel a3: starting piece 1 at 14-DEC-09
    RMAN-03009: failure of backup command on a2 channel at 12/14/2009 05:43:42
    ORA-19566: exceeded limit of 0 corrupt blocks for file /ub834/oradata/TERP/applsysd142.dbf
    continuing other job steps, job failed will not be re-run
    channel a2: starting incremental level 0 datafile backupset
    channel a2: specifying datafile(s) in backupset
    including current control file in backupset
    channel a2: starting piece 1 at 14-DEC-09
    channel a1: finished piece 1 at 14-DEC-09
    piece handle=TERP_1769708180_level0_292_1_1_20091213065437.rmn tag=TAG20091213T065459 comment=API Version 2.0,MMS Version 5.0.0.0
    channel a1: backup set complete, elapsed time: 01:14:45
    channel a2: finished piece 1 at 14-DEC-09
    piece handle=TERP_1769708180_level0_296_1_1_20091213065437.rmn tag=TAG20091213T065459 comment=API Version 2.0,MMS Version 5.0.0.0
    channel a2: backup set complete, elapsed time: 00:24:54
    RMAN-03009: failure of backup command on a4 channel at 12/14/2009 06:14:33
    ORA-19566: exceeded limit of 0 corrupt blocks for file /ub834/oradata/TERP/applsysd143.dbf
    continuing other job steps, job failed will not be re-run
    released channel: a1
    released channel: a2
    released channel: a3
    released channel: a4
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on a3 channel at 12/14/2009 06:41:00
    ORA-19566: exceeded limit of 0 corrupt blocks for file /ub806/oradata/TERP/icxd01.dbf
    Recovery Manager complete.
    Thanks,
    Vimlendu
    Edited by: Vimlendu on Dec 20, 2009 10:27 AM

    dbv file=/ora/oradata/binadb/RAT_TRANS_IDX01.dbf blocksize=8192
    The result:
    DBVERIFY: Release 10.2.0.3.0 - Production on Thu Nov 20 11:14:01 2003
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE =
    /ora/oradata/binadb/RAT_TRANS_IDX01.dbf
    Block Checking: DBA = 75520968, Block Type = KTB-managed data block
    **** row 80: key out of order
    ---- end index block validation
    Page 23496 failed with check code 6401
    DBVERIFY - Verification complete
    Total Pages Examined : 34560
    Total Pages Processed (Data) : 1
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 31084
    Total Pages Failing (Index): 1
    Total Pages Processed (Other): 191
    Total Pages Empty : 3284
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    Seems like I have 1 page failing. I tried to run this script:
    select segment_type, segment_name, owner
    from sys.dba_extents
    where file_id = 18 and 23496 between block_id
    and block_id + blocks - 1;
    No rows returned.
    Then, I try to run this script:
    Select tablespace_name, file_id, block_id, bytes
    from dba_free_space
    where file_id = 18
    and 23496 between block_id and block_id + blocks - 1
    Resulting 1 row.
    Seems like the possibly corrupt block is in unused space.
    Edited by: Vimlendu on Dec 20, 2009 2:30 PM
    Edited by: Vimlendu on Dec 20, 2009 2:41 PM

  • How to remove the cpu usage limit?

    I have to run a C program in Terminal as fast as possible. However, there seems to be a CPU usage limit for the terminal: the program is supposed to run in around 15 seconds on a Linux machine with a similar configuration, where CPU usage is at 85-95%, but it runs for one minute on my MacBook Pro with CPU usage less than 15%. Finally, my question is: how do I utilize all of the 85% idle CPU for this program, or at least most of it?

    Per se, Mac OS X does not impose any CPU usage limits other than those from the processor scheduling priorities. Standard Unix scheduling priorities go from -20 to +20, with the default being 0. If you have administrator privileges, you can increase your process's priority (a more negative nice value) with the nice or renice commands. See their man pages. On a four-core 15" or 17" MBP, even setting the maximum -20 priority should not impact the rest of the system too much.
    You may also want to go over and discuss these things in the Unix forums:
    https://discussions.apple.com/community/mac_os/mac_os_x_technologies
