Issue with background job: taking more time

Hi,
We have a custom program which runs as a background job every 2 hours.
It's taking longer than expected on ECC6 SR2 and SR3 on Oracle 10.2.0.4. We found that it spends the time executing native SQL on DBA_EXTENTS. When we fetch fewer records from DBA_EXTENTS, it works fine, but we need the program to fetch all the records.
However, it works fine on ECC5 on 10.2.0.2 and 10.2.0.4.
Here is the SQL statement:
EXEC SQL PERFORMING SAP_GET_EXT_PERF.
  SELECT OWNER, SEGMENT_NAME, PARTITION_NAME,
         SEGMENT_TYPE, TABLESPACE_NAME,
         EXTENT_ID, FILE_ID, BLOCK_ID, BYTES
    FROM SYS.DBA_EXTENTS
   WHERE OWNER LIKE 'SAP%'
    INTO :EXTENTS_TBL-OWNER, :EXTENTS_TBL-SEGMENT_NAME,
         :EXTENTS_TBL-PARTITION_NAME,
         :EXTENTS_TBL-SEGMENT_TYPE, :EXTENTS_TBL-TABLESPACE_NAME,
         :EXTENTS_TBL-EXTENT_ID, :EXTENTS_TBL-FILE_ID,
         :EXTENTS_TBL-BLOCK_ID, :EXTENTS_TBL-BYTES
ENDEXEC.
Can somebody suggest what has to be done?
Has something changed in SAP 7.0 (w.r.t. background jobs etc.), or do we need to fine-tune the SQL statement?
Regards,
Vivdha

Hi,
There was an issue with LMTs, but that was fixed in 10.2.0.4; missing system statistics can also cause this.
But why do you collect this information every 2 hours? The DBA_EXTENTS view is based on heavily used system tables.
Normally you would run queries of this type against DBA_EXTENTS only occasionally, e.g. to identify corrupt blocks:
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = &AFN
   AND &BLOCKNO BETWEEN block_id AND block_id + blocks - 1;
Not sure what you want to achieve with it.
There are monitoring tools (OEM ?) around that may cover your needs.
Bye
yk
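If the DBA_EXTENTS query has to stay, two things may help (a sketch, not a definitive fix: it assumes the slowdown comes from missing fixed-table statistics, and that per-extent rows are not actually needed downstream): gather fixed-object statistics so the optimizer has statistics for the X$ tables underlying DBA_EXTENTS, or switch to the much cheaper DBA_SEGMENTS when per-segment totals are enough:

```sql
-- Gather optimizer statistics on the fixed (X$) tables that
-- DBA_EXTENTS is built on; run once as SYS under typical load.
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

-- Cheaper alternative when extent-level detail is not required:
-- DBA_SEGMENTS aggregates the same space information per segment.
SELECT owner, segment_name, partition_name,
       segment_type, tablespace_name, bytes, extents
  FROM sys.dba_segments
 WHERE owner LIKE 'SAP%';
```

Whether DBA_SEGMENTS is enough depends on what the custom program does with the EXTENT_ID/BLOCK_ID columns; if it only reports space usage per segment, the aggregate view avoids hammering the extent-level dictionary every 2 hours.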

Similar Messages

  • Background job taking more time.

    Hi All,
I have a background job for the standard program RBDAPP01 which is scheduled to run every 3 minutes.
The problem is that it takes a very long time at a particular time every day.
Normally it takes 1 or 2 seconds, but at 11:14 it takes approximately 1500 seconds to execute.
Can anybody help me understand what the reason for this may be?
    Regards,
    VIkas Maurya

Has it been successfully executed? If not, there may be an open loop in the background program. An open loop is sometimes put into a background program for debugging purposes; if it is not removed, the program gets stuck.
If it has executed successfully, then you have to check the performance. You can use ST05 or SE30 for that.

  • Report rdf with size 8mb taking more time to open

    Hello All,
I have an RDF (Reports 6i) report with a size of 8.5 MB that takes a long time to open, and accessing each field is also slow.
Please let me know how I can solve this issue.
    Thanks.

Thanks for the immediate response.
Please let me know how I can find this out.
Right now I have the below details from Report -> Help:
    Report Builder 6.0.8.11.3
    ORACLE Server Release 8.0.6.0.0
    Oracle Procedure Builder 6.0.8.11.0
    Oracle ORACLE PL/SQL V8.0.6.0.0 - Production
    Oracle CORE Version 4.0.6.0.0 - Production
    Oracle Tools Integration Services 6.0.8.10.2
    Oracle Tools Common Area 6.0.5.32.1
    Oracle Toolkit 2 for Windows 32-bit platforms 6.0.5.35.0
    Resource Object Store 6.0.5.0.1
    Oracle Help 6.0.5.35.0
    Oracle Sqlmgr 6.0.8.11.3
    Oracle Query Builder 6.0.7.0.0 - Production
    PL/SQL Editor (c) WinMain Software (www.winmain.com), v1.0 (Production)
    Oracle ZRC 6.0.8.11.3
    Oracle Express 6.0.8.3.5
Oracle XML Parser 1.0.2.1.0 Production
    Oracle Virtual Graphics System 6.0.5.35.0
    Oracle Image 6.0.5.34.0
    Oracle Multimedia Widget 6.0.5.34.0
    Oracle Tools GUI Utilities 6.0.5.35.0
    Thanks
    Edited by: Abdul Khan on Jan 26, 2010 11:54 PM

  • MRP Job taking more time

    Dear Folks,
We run MRP at MRP-area level with a total of 72 plants. Normally this job takes 3 to 4 hours to complete, but for the last two weeks it has suddenly been taking more than 9 hours. Because of this delay the business cannot send the schedules on time, and it has become a critical issue.
Does anybody have an idea how to find the root cause of this delay, and how to reduce the time? We are already running this job with parallel processing.
    Reasonable answer will get full points.
    Regards
    TAJUDDIN

Hi TAJUDDIN
Unfortunately, I do not have any documents related to parallel processing, but I can explain how to do it, so I hope the following explanation helps you.
1. First, check whether the current parallel MRP works well or not. To do this, check the MRP result in the spool. (On the last pages of the spool you can see the task usage of each work process. To open the last pages, go to SM37, find the MRP job, and press the spool button; you will then see the spool overview. Before opening the spool, change the page setting: from the menu, Goto => Display Requests => Settings, then select the last 10 pages. This lets you see the last 10 pages.)
2. At the bottom of the spool page you will see the task usage of MRP. For example, if you use 2 application servers and assign 3 work processes to each, you will see 6 tasks:
                 Number of calculated items   Runtime
   AP1 WP1       1000                         30 min
   AP1 WP2        500                          5 min
   AP1 WP3        200                          3 min
   AP2 WP1        200                          3 min
   AP2 WP2        200                          3 min
   AP2 WP3        100                          2 min
If you observe a situation like this in the log, it indicates unbalanced system use (this depends on your BOM structure), so there is a possibility that dispatching the work more equally will improve MRP performance.
To get an equal dispatch, you need to deactivate the MRP package logic for the bottleneck low-level code. (You can see the bottleneck items in the spool. If, for example, 10 items belong to low-level code 5, it is better to deactivate the package logic for low-level code 5. Then no package logic is applied for that low-level code, which gives a more equal distribution of task usage.)
The way to deactivate it is described in SAP Notes 568593 and 52033. (Up to 4.6C you need a modification (a manual change to the coding); from Enterprise onwards you can use a BAdI.)
Regarding the package logic, I recommend you read SAP Note 52033. (Depending on the runtime of the previous task, MRP combines several items into one package, so if task 1 finished its previous MRP in around 60 seconds, the next package for this task will contain more materials. But if the bottleneck items are all put together into one package, task 1 might take much longer than the others. MRP calculation proceeds low-level code by low-level code, so if task 1 has not finished its calculation while the other tasks have, the other tasks cannot start MRP for the next low-level code; they have to wait until task 1 finishes. Because of this you will see big differences in task usage and runtime in the spool.)
But this behavior depends on the BOM structure, so you may not see it in your spool. In that case there is no need to consider balancing.
I hope this helps you.
best regards
Keiji

  • Virsa CC 5.1: Issue with Background Job

    Dear All,
  I have almost finished the configuration of our new Compliance Calibrator dashboard (Java stack of NW '04), but unfortunately I now have an issue with SoD analysis.
  Using the SAP Best Practice recommendations, we uploaded all functions, business processes, and risks into the CC database, and then successfully generated the new rules (there are about 190 active ones in our Analysis Engine).
  I also configured JCo to the R/3 backend and was able to extract the full list of our users, roles, and profiles. But somehow the background analysis job fails.
  In the job history table I see the following message: "Error while executing the Job:null", and in the job log there is an entry saying "Daemon idle time longer than RFC time out, terminating daemon 0". This is quite strange, as we use the default values: RFC Time Out = 30 min, and daemons are invoked every 60 seconds.
  Please advise whether you have had similar issues in your SAP environment and, if yes, how you resolved them.
    Thanks,
    Laziz

    Hi Laziz
I am now just doing the first part of CC. May I know the details of how you configured JCo to the R/3 backend? Do you need to create an SM59 connection of type T in R/3? If so, I am lacking the details; could you help? Thank you.
    Regards
    Florence

  • BW Job Taking more time than normal execution time

    Hi,
The customer is trying to extract data from R/3 with the BW OCPA extractor.
The selections within the InfoPackage under the Data Selection tab are:
0FISCPER = 010.2000
ZVKORG = DE
The InfoPackage is then scheduled and the monitor is checked for the scheduled date and time. Gathering the information from R/3 now takes approximately 2 hours, where it normally used to take minutes.
This pulls data from R/3 and updates the PSA; the concern is the time taken to pull the data from R/3 into BW.
If any input is required please let me know; the earliest solution is appreciated.
    Thanks
    Vijay

    Hi Vijay,
If you think the data transfer is the problem (i.e. the extractor runs for a long time), try to locate the job on the R/3 side using SM37 (user ALEREMOTE), or look in SM50 to see if the extraction is still running.
You can also test the extraction in R/3 using tcode RSA3 with the same selection criteria.
If this goes fast (as expected), the problem must be on the BW side. Another thing you can check is whether a short dump occurred in either R/3 or BW (tcode ST22); this will often keep the traffic light on yellow.
    Hope this helps to solve your problem.
    Grtx,
    Marco

  • Dataloading taking more time from PSA to infocube through DTP

    Hi All,
When I load data to an InfoCube from the PSA via a DTP, it takes a long time; the load has already been running for 4 days. Irrespective of the data volume, the previous loads completed in 2-3 hours. This is the first time I am facing this issue.
Note: there is a start routine written in the transformations.
Please let me know how to identify whether the code is written in the global declaration section or locally, and if so, what the procedure to correct it is.
    Thanks,
    Jack

    Hi Jack,
    To improve the performance of the data load, you can do the below:
    1. Compress Old data
    2. Delete and Rebuild Indexes
    3. Read with binary search
4. Do not use a LOOP within a LOOP.
5. Check sy-subrc after READ, etc.
    Hope this helps you.
    -Vikram

  • XML Publisher(XDODTEXE) in EBS taking more time with the same SQL in TOAD

    HI
XML Publisher (XDODTEXE) in EBS is taking more time than the same SQL in TOAD.
The SQL has 5 UNION clauses.
It takes 20-30 minutes in TOAD, compared to around 4-5 hours when run through the concurrent program in XML Publisher in EBS.
The Scalable flag at report level is turned on, with the JVM options set to -Xmx1024m -Xmx1024m in the concurrent program definition.
Other configurations for the data template, such as XSLT, Scalable, and Optimization, are turned on, though I didn't bounce the OPP server for these to take effect, as I am not sure whether that is needed.
    Thanks in advance for your help.

But the question is: how come it works in TOAD and takes only 15-20 minutes?
With initialization of the session?
What about SQL*Plus?
Do I have to set up the temp directory for the XML Publisher report to make it faster?
Look at:
    R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues (Doc ID 1410160.1)
    BI Publisher - Troubleshooting Oracle Business Intelligence (XML) Publisher For The Oracle E-Business Suite (Doc ID 364547.1)

  • Post Goods Issue (VL06O) - taking more time approximate 30 to 45 minutes

    Dear Sir,
While doing post goods issue against a delivery document, the system takes a lot of time. This issue is very urgent; can anyone resolve it or provide a suitable solution?
We create approximately 160 sales orders/deliveries every day and post the goods issue against them using transaction code VL06O; the system takes a long time for PGI.
Kindly provide a suitable solution.
    Regards,
    Vijay Sanguri

    Hi
See Note 113048 - Collective note on delivery monitor, and search for notes related to performance.
Do a trace with tcode ST05 (ask a Basis consultant for help) and find the bottleneck. Look for possible sources of performance problems in user exits, enhancements, and so on.
    I hope this helps you
    Regards
    Eduardo

  • Taking More Time while inserting into the table (With foriegn key)

    Hi All,
I am facing a problem while inserting values into the master table.
The problem:
Table A -- user master table (RegNo, Name, etc.)
Table B -- transaction table (foreign key reference to Table A).
While inserting data into Table B, I also need to insert the RegNo into Table B, which is mandatory. I followed the logic mentioned in the SRDemo.
Before inserting, we need to query Table A first to have the values in TableABean.java:
final TableA tableA = (TableA) uow.executeQuery("findUser", TableA.class, regNo);
Then we need to create the instance for TableB:
TableB tableB = (TableB) uow.newInstance(TableB.class);
tableB.setID(bean.getID());
tableA.addTableB(tableB); // inserts the RegNo of TableA into TableB; this line executes the query "select * from TableB where RegNo = <tableA.getRegNo>".
This query takes too much time if there are many rows in TableB for that particular registration number, and because of this the insert into TableB is slow.
For example: RegNo 101, with few entries in TableB, inserts in less than 1 second; RegNo 102, with more entries in TableB, takes more than 2 seconds.
So there is a time delay for different users when they enter transactions in TableB. I need to avoid this, since it will only get worse (from 2 seconds to 10 seconds) as the volume of data increases.
Please help me resolve this issue; I am facing it in production now.
    Thanks & Regards
    VB

    Hello,
Looks like you have a 1:M relationship from TableA to TableB, with a 1:1 back pointer from TableB to TableA. If triggering the 1:M relationship is causing delays that you want to avoid, there are two quick ways I can see:
1) Don't map it. Leave the TableA->TableB 1:M unmapped, and instead just query for the relationship when you do need it. This means you do not need to call tableA.addTableB(tableB); you only need to call tableB.setTableA(tableA), so that the TableB->TableA relation gets set. It might not be the best option, depending on your application's usage, but it does allow you to page the TableB results or add other query performance options when you do need the data.
2) You are currently using lazy loading for the TableA->TableB relationship. If it is untriggered, don't bother calling tableA.addTableB(tableB); only call tableB.setTableA(tableA). This of course requires using the TopLink API to a) verify the collection is an IndirectCollection type, and b) check that it hasn't been triggered. If it has been triggered, you will still need to call tableA.addTableB(tableB), but it won't result in a query. Check out the oracle.toplink.indirection.IndirectContainer class and its isInstantiated() method. This can cause problems in highly concurrent environments, though, as other threads may have triggered the indirection before you commit your transaction, so that the A->B collection is not up to date; this might require refreshing TableA if so.
    Change tracking would probably be the best option to use here, and is described in the EclipseLink wiki:
    http://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Attribute_Change_Tracking_Policy
    Best Regards,
    Chris

  • CDP Performance Issue-- Taking more time fetch data

    Hi,
    I'm working on Stellent 7.5.1.
For one of the portlets in the portal it is taking a long time to fetch data. Can someone help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
public void getManager(final HashMap binderMap)
    throws VistaInvalidInputException, VistaDataNotFoundException,
           DataException, ServiceException, VistaTemplateException {
    String collectionID =
        getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
    long firstStartTime = System.currentTimeMillis();
    HashMap resultSetMap = null;
    String isNonRecursive =
        getStringLocal(VistaFolderConstants.ISNONRECURSIVE_KEY);
    if (isNonRecursive != null
            && isNonRecursive.equalsIgnoreCase(
                VistaContentFetchHelperConstants.STRING_TRUE)) {
        VistaLibraryContentFetchManager libraryContentFetchManager =
            new VistaLibraryContentFetchManager(binderMap);
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        resultSetMap = libraryContentFetchManager
            .getFolderContentItems(m_workspace);
        // Add the resultset to the binder.
        addResultSetToBinder(resultSetMap, true);
    } else {
        long startTime = System.currentTimeMillis();
        // isStandard is used to decide whether the call is for
        // Standard or Extended.
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        String isStandard = getTemplateInformation(binderMap);
        long endTimeTemplate = System.currentTimeMillis();
        binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
        long endTimebinderMap = System.currentTimeMillis();
        VistaContentFetchManager contentFetchManager =
            new VistaContentFetchManager(binderMap);
        long endTimeFetchManager = System.currentTimeMillis();
        resultSetMap = contentFetchManager
            .getAllFolderContentItems(m_workspace);
        long endTimeresultSetMap = System.currentTimeMillis();
        // Add the resultset and the total number of content items
        // to the binder.
        addResultSetToBinder(resultSetMap, false);
        long endTime = System.currentTimeMillis();
        if (perfLogEnable.equalsIgnoreCase("true")) {
            Log.info("Time taken to execute "
                + "getTemplateInformation=" + (endTimeTemplate - startTime)
                + "ms binderMap=" + (endTimebinderMap - startTime)
                + "ms contentFetchManager=" + (endTimeFetchManager - startTime)
                + "ms resultSetMap=" + (endTimeresultSetMap - startTime)
                + "ms getManager:getAllFolderContentItems=" + (endTime - startTime)
                + "ms overallTime=" + (endTime - firstStartTime)
                + "ms folderID=" + collectionID);
        }
    }
}
    Edited by: 838623 on Feb 22, 2011 1:43 AM

Hi.
The SELECT statement accessing the MSEG table is often slow.
To improve the performance of the access to MSEG:
1. Check for the relevant notes in the Service Marketplace if you are working with the CIN version.
2. Index the MSEG table.
3. Check and limit the columns in the SELECT statement.
A possible way (the DELETE conditions below assume the listed document numbers are material document numbers, i.e. MBLNR):
SELECT mblnr mjahr zeile bwart matnr werks lifnr menge meins
       ebeln ebelp lgort smbln bukrs gsber insmk xauto
  FROM mseg
  INTO CORRESPONDING FIELDS OF TABLE itab
  WHERE werks EQ p_werks AND
        mblnr IN s_mblnr AND
        bwart EQ '105'.
DELETE itab WHERE mblnr EQ '5002361303'.
DELETE itab WHERE mblnr EQ '5003501080'.
DELETE itab WHERE mblnr EQ '5002996300'.
DELETE itab WHERE mblnr EQ '5002996407'.
DELETE itab WHERE mblnr EQ '5003587026'.
DELETE itab WHERE mblnr EQ '5003493186'.
DELETE itab WHERE mblnr EQ '5002720583'.
DELETE itab WHERE mblnr EQ '5002928122'.
DELETE itab WHERE mblnr EQ '5002628263'.
Regards
Bala.M
    Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM

  • Transport released it taking more time

    Hi Every One,
As I want to transport a program with its package, Smartforms, etc., I created one transport request with a transport of copies, included the whole package in it, and released it.
It is taking a long time.
The status in SE01 shows the request is released.
The log shows:
    Checks at Operating System Level         23.04.2010 12:45:48    (0) Successfully Completed
    Pre-Export Methods                       23.04.2010 12:45:54    (0) Successfully Completed
    Checks at Operating System Level         23.04.2010 12:47:23    (0) Successfully Completed
    Pre-Export Methods                       23.04.2010 12:47:27    (4) Ended with Warning
    Export                                   23.04.2010 15:28:29 Not yet executed
The RDDIMPDP program is running in parallel in the background.
So what do I have to check? Any suggestions, please.
    Regards
    Shashi

Hi Niraj,
I scheduled the RDDIMPDP job every 2 minutes and it is running fine.
What I did was create another request and transport it; that went fine and the issue is solved.
But the original request still shows the same status as before, and now I want to clear it. How can I do this? Shall I delete the request directly, and is there any problem with that?
I am pasting the last few lines of the log report:
    WARNING:
    sapprd\sapmnt\trans\tmp\TRPKK900160.TRP is already in use (10), I'm waiting 4 sec (20100423124802). My name: pid 3788 on sapprd (APServiceTRP)
    WARNING:
    sapprd\sapmnt\trans\tmp\TRPKK900160.TRP is already in use (20), I'm waiting 1 sec (20100423124826). My name: pid 3788 on sapprd (APServiceTRP)
    WARNING:
    sapprd\sapmnt\trans\tmp\TRPKK900160.TRP is already in use (30), I'm waiting 4 sec (20100423124850). My name: pid 3788 on sapprd (APServiceTRP)
    START INFORM SAP-SYSTEM OF TRP Q      20100423155230              BIPLAB       sapprd 20100423155222     
    START tp_getprots          TRP Q      20100423155230              BIPLAB       sapprd 20100423155222     
    STOP  tp_getprots          TRP Q      20100423155240              BIPLAB       sapprd 20100423155222     
    STOP  INFORM SAP-SYSTEM OF TRP Q      20100423155240              BIPLAB       sapprd 20100423155222  
    Regards
    Shashi

  • Source System Extraction taking more time

    Hi All,
I have an issue; please help me out with this.
We have two source systems in our process. One is working fine; we don't have any issue with that. But the other source system has been taking longer than usual to extract the data. We have been facing this problem only for a few days, and the record count is normal. I am unable to find the root cause of this, so please help me with this issue.
It is a full update. We use a custom program to extract the data, and it runs daily.
    Thanks in Advance.
    Regards,
    Suresh
    Edited by: sureshd on Jul 14, 2011 6:04 PM
    Edited by: sureshd on Jul 14, 2011 6:05 PM

When a long-running job is active on the source system side, you can trace it with ST05. Simultaneously you can monitor SM50/SM66 to see which tables are being accessed during the job.
After the load has completed, deactivate the trace and go to the Display Trace option, where you will see the various SQL statements along with their tables. Check which particular statement is expensive by looking at the timestamps.
The next task is to find out why that table is accessed for so long and how to improve its access time. Check the fields used in the selection and their selectivity, and see whether a suitable index already exists on them; you can do this in SE11. If no index is available on those specific fields, ask your Basis administrator to create a secondary index on top of them, which may give better performance. Hope it helps.
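    To see which indexes already exist on the table your trace points to, the Oracle data dictionary can also be queried directly (a sketch; the SAPR3 owner and MSEG table below are placeholders for whatever schema and table your ST05 trace identifies):

```sql
-- List the indexes on a table and the columns each one covers,
-- in column order. Replace SAPR3 / MSEG with your schema / table.
SELECT i.index_name, c.column_position, c.column_name, i.uniqueness
  FROM dba_indexes i
  JOIN dba_ind_columns c
    ON c.index_owner = i.owner
   AND c.index_name  = i.index_name
 WHERE i.owner      = 'SAPR3'
   AND i.table_name = 'MSEG'
 ORDER BY i.index_name, c.column_position;
```

    Comparing this list against the WHERE clause from the trace shows quickly whether the expensive statement can use an existing index or really needs a new secondary one.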

  • Log applying service is taking more time in phy. Standby

    Hi Gurus,
    My Database version as follows
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
We also have a Data Guard setup. Huge archive logs are being generated on our primary database, and they are shipped to the standby with no delay, but applying them on our physical standby database is taking a long time. Can you please help me understand why it takes so long to apply the archive logs (sync) on the standby? What could be the possible reasons?
Note: the standby redo logs are the same size as the primary's online redo logs, and there is one more standby redo log group than there are online redo log groups on the primary.
I also confirmed with the network team that there is no network issue; they said the network is good.
Please let me know if any other information is required, since I need to report the cause of the delay in applying the archive logs to my higher level.
    Thanks

No, we don't have the DELAY option set in log_archive_dest.
Here is the alert log:
Media Recovery Waiting for thread 1 sequence 42017 (in transit)
    Thu Sep 19 09:00:09 2013
    Recovery of Online Redo Log: Thread 1 Group 6 Seq 42017 Reading mem 0
      Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0601.log
      Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0601.log
    Thu Sep 19 09:00:49 2013
    RFS[1]: Successfully opened standby log 5: '/xyz/u002/oradata/xyz/stb_redo/redo0501.log'
    Thu Sep 19 09:00:54 2013
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[2]: Successfully opened standby log 7: '/xyz/u002/oradata/xyz/stb_redo/redo0701.log'
    Thu Sep 19 09:00:58 2013
    Media Recovery Waiting for thread 1 sequence 42018 (in transit)
    Thu Sep 19 09:00:58 2013
    Recovery of Online Redo Log: Thread 1 Group 5 Seq 42018 Reading mem 0
      Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0501.log
      Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0501.log
    Media Recovery Waiting for thread 1 sequence 42019 (in transit)
    Thu Sep 19 09:01:08 2013
    Recovery of Online Redo Log: Thread 1 Group 7 Seq 42019 Reading mem 0
      Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0701.log
      Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0701.log
    Thu Sep 19 09:01:08 2013
    RFS[1]: Successfully opened standby log 5: '/xyz/u002/oradata/xyz/stb_redo/redo0501.log'
    Thu Sep 19 09:01:22 2013
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[2]: Successfully opened standby log 6: '/xyz/u002/oradata/xyz/stb_redo/redo0601.log'
    Thu Sep 19 09:01:26 2013
    RFS[1]: Successfully opened standby log 5: '/xyz/u002/oradata/xyz/stb_redo/redo0501.log'
    Thu Sep 19 09:01:26 2013
    Media Recovery Log /xyz/u002/oradata/xyz/arch/ARCH1_42020_821334023.LOG
    Media Recovery Waiting for thread 1 sequence 42021 (in transit)
    Thu Sep 19 09:01:30 2013
    Recovery of Online Redo Log: Thread 1 Group 5 Seq 42021 Reading mem 0
      Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0501.log
      Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0501.log
    Thu Sep 19 09:01:51 2013
    Media Recovery Waiting for thread 1 sequence 42022 (in transit)
    Thu Sep 19 09:01:51 2013
    Recovery of Online Redo Log: Thread 1 Group 6 Seq 42022 Reading mem 0
      Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0601.log
      Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0601.log
    Thu Sep 19 09:01:57 2013
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[2]: Successfully opened standby log 5: '/xyz/u002/oradata/xyz/stb_redo/redo0501.log'
    Thu Sep 19 09:02:01 2013
    Media Recovery Waiting for thread 1 sequence 42023 (in transit)
    Thu Sep 19 09:02:01 2013
    Recovery of Online Redo Log: Thread 1 Group 5 Seq 42023 Reading mem 0
      Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0501.log
      Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0501.log
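    When transport is keeping up but apply seems slow, the standby's own views can quantify the lag before digging further (a sketch; run on the standby, assuming a 10.2 physical standby):

```sql
-- Transport lag vs. apply lag as computed by the standby itself.
SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag', 'apply finish time');

-- What the managed-recovery (MRP0) and RFS processes are doing now.
SELECT process, status, thread#, sequence#, block#, blocks
  FROM v$managed_standby;
```

    If MRP0 mostly shows WAIT_FOR_LOG while archives are piling up unapplied, the bottleneck may be standby I/O or recovery parallelism rather than the network.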

  • Oracle 11g: Oracle insert/update operation is taking more time.

    Hello All,
In Oracle 11g (Windows 2008, 32-bit environment) we are facing the following issue:
1) We are inserting/updating data in some tables (4-5 tables) and firing queries at a very high rate.
2) After some time (say 15 days under the same load) we observe that the Oracle insert/update operations are taking longer.
Query 1: How can we find out where Oracle is actually spending the time in insert/update operations?
Query 2: How can we rectify the problem?
We have a multithreaded environment.
    Thanks
    With Regards
    Hemant.

    Liron Amitzi wrote:
    Hi Nicolas,
    Just a short explanation:
    If you have a table with 1 column (let's say a number). The table is empty and you have an index on the column.
    When you insert a row, the value of the column will be inserted to the index. To insert 1 value to an index with 10 values in it will be fast. It will take longer to insert 1 value to an index with 1 million values in it.
    My second example was if I take the same table and let's say I insert 10 rows and delete the previous 10 from the table. I always have 10 rows in the table so the index should be small. But this is not correct. If I insert values 1-10 and then delete 1-10 and insert 11-20, then delete 11-20 and insert 21-30 and so on, because the index is sorted, where 1-10 were stored I'll now have empty spots. Oracle will not fill them up. So the index will become larger and larger as I insert more rows (even though I delete the old ones).
The solution here is simply to rebuild the index once in a while.
    Hope it is clear.
    Liron Amitzi
    Senior DBA consultant
    [www.dbsnaps.com]
[www.orbiumsoftware.com]
Hmmm, index space not reused? Index rebuild once in a while? That is what I understood from your previous post, but nothing is less sure.
This is a misconception of how indexes work.
I would suggest reading the following interesting doc; it has a lot of nice examples (including index space reuse), and in conclusion:
http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf
"Index Rebuild Summary
• The vast majority of indexes do not require rebuilding
• 'Oracle B-tree indexes can become unbalanced and need to be rebuilt' is a myth
• 'Deleted space in an index is deadwood and over time requires the index to be rebuilt' is a myth
• 'If an index reaches x number of levels, it becomes inefficient and requires the index to be rebuilt' is a myth
• 'If an index has a poor clustering factor, the index needs to be rebuilt' is a myth
• 'To improve performance, indexes need to be regularly rebuilt' is a myth"
    Good reading,
    Nicolas.
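    If you still suspect deleted space in one specific index, its actual state can be measured before deciding anything (a sketch; my_schema.my_index is a placeholder). ANALYZE ... VALIDATE STRUCTURE populates the session-local INDEX_STATS view:

```sql
-- Populates INDEX_STATS for one index. Note: this takes a lock
-- on the underlying table, so run it off-peak.
ANALYZE INDEX my_schema.my_index VALIDATE STRUCTURE;

-- Share of deleted leaf rows; per the document linked above, even
-- a high value is normally reused and rarely justifies a rebuild.
SELECT name, height, lf_rows, del_lf_rows,
       ROUND(100 * del_lf_rows / NULLIF(lf_rows, 0), 1) AS pct_deleted
  FROM index_stats;
```

    Measuring first keeps the discussion factual: a sustained high deleted-row share with no reuse is the only case where a rebuild is even worth debating.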
