Redo log switches too frequent after migrating the DB to a different server

Dear Experts,
A couple of days back, we migrated our database (part of an E-Business Suite installation) to a different server to get a performance benefit. The database is 10.2.0.4 and was migrated from AIX 5.3 to Linux x86-64.
Users are happy with the performance, but I am getting the errors below in the alert log:
a) Thread 1 cannot allocate new log, sequence 498
b) Private strand flush not complete
c) ORACLE Instance PROD - Can not allocate log, archival required
Oracle Support says the issue is caused by too-frequent log switches. I am wondering how the log switches became so frequent on the new server: on the old server there was a switch about every 10 minutes, now it is as often as every minute.
Any idea what could be the reason behind this? Do you agree this issue is caused by frequent log switches?
Thanks
ARS
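A quick way to quantify the problem is to compare the online log sizes and the switch rate on both servers; a minimal sketch using the standard V$ views:
select group#, bytes/1024/1024 as size_mb, members, archived, status
from v$log;
select to_char(first_time,'YYYY-MM-DD HH24') as hr, count(*) as switches
from v$log_history
where first_time > sysdate - 1
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;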

Kanchana Devasurendra wrote:
Hi ARS,
Please check the following item in your new database.
1. log_archive_max_processes. Most probably the value set for this parameter is low (maybe it's set to 1). Please increase it (to 4).

I am curious to know what makes you think 4 is the right magic number for log_archive_max_processes - after all, he's only got one archive destination.
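Before changing anything, the current setting and the archiver processes actually running can be checked with the standard V$ views (a minimal sketch):
select name, value from v$parameter where name = 'log_archive_max_processes';
select process, status, log_sequence from v$archive_processes where status <> 'STOPPED';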
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
A general reminder about "Forum Etiquette / Reward Points": http://forums.oracle.com/forums/ann.jspa?annID=718
If you never mark your questions as answered people will eventually decide that it's not worth trying to answer you because they will never know whether or not their answer has been of any use, or whether you even bothered to read it.
It is also important to mark answers that you thought helpful - again it lets other people know that you appreciate their help, but it also acts as a pointer for other people when they are researching the same question, moreover it means that when you mark a bad or wrong answer as helpful someone may be prompted to tell you (and the rest of the forum) what's so bad or wrong about the answer you found helpful.

Similar Messages

  • Frequent redo log switches

    Oracle 9.2.0.1 on a W2k3 server. The redo log is switching every minute, even without any discernible database activity. It's in archivelog mode and the redo logs are 100 MB in size, so the archive logs are filling up my hard drive. I'm having a hard time figuring out why the redo logs are switching so often. There are 3 redo log groups. Thanks for any help you can give me.

    If the redo logs are defined as 100 MB and the archived redo logs are 100 MB in size, then the online redo logs are being filled. As suggested, Log Miner is one way to determine what is happening.
    Are the redo logs switching all the time, or only during periods of peak activity? If the rapid log switches happen only during certain time periods, like 9:30 - 10:30, or at times that correspond to the running of certain batch jobs, then you should probably increase the size of your online redo logs.
    If the archived redo logs are small, then obviously something is forcing log switches before the logs fill. I would check the spfile settings for log_checkpoint_interval, log_checkpoint_timeout, and fast_start_mttr_target, to be sure no one made a mistake changing one of those values, before running Log Miner.
    HTH -- Mark D Powell --
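    If it helps, a rough sketch of those checks (assuming archivelog mode and the standard V$ views); archived logs much smaller than the online log size point at forced switches:
    select round(blocks * block_size / 1024 / 1024) as archived_mb, count(*) as logs
    from v$archived_log
    where first_time > sysdate - 1
    group by round(blocks * block_size / 1024 / 1024);
    select name, value
    from v$parameter
    where name in ('log_checkpoint_interval','log_checkpoint_timeout','fast_start_mttr_target');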

  • Online redo log switching

    Hello.
    I have a question about online redo logs.
    Assume that I have two small redo log groups in my database. I am processing a big batch load consisting of many inserts, but no checkpoint. The first redo log group gets full, and a switch is made to the second one. The second one gets full too, and it needs to switch back to the first. There has been no commit up to this point, so (I guess) the data has not been saved to the datafiles.
    Is that possible? Will the database hang?
    If not, the first redo log group gets overwritten, doesn't it? What if, after the commit, the database crashes? I would not be able to restore the operations from the first redo group...?

    Depends,
    If you are running in archivelog mode, then when you fill the first redo log group it is copied to the archivelog destination; then, when the second is filled, it is copied and we switch back to the first. If the archiver hasn't completed, you will get a short 'hang' whilst the archiver completes, and then the transaction will proceed.
    If in noarchivelog mode, you will just switch back and forth between the two log files.
    You may be confused because you think that the data blocks in the buffer cache and datafiles are not updated until the transaction is completed; this is not true. The assumption is that most transactions will be completed and committed. For a useful discussion of the concepts see:
    Re: basic concepts
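    One way to watch this during the batch load is to poll the online log states; in archivelog mode a group with ARCHIVED = NO or STATUS = ACTIVE cannot be reused yet (a minimal sketch):
    select group#, sequence#, archived, status
    from v$log
    order by group#;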

  • 10G redo log switches with no activity

    We have been testing 8i to 10g migrations, and two things I have noticed that are different between the two databases are:
    1) Redo log switching occurring even though the database has no users and is just sitting there. This is not happening in our 8i databases.
    2) Trace files of format instance_m001_ and instance_m000_ in ../bdump. Our 8i bdumps normally do not have trc files unless a problem occurs.
    Is it normal for 10g to automatically switch redo logs with no activity, and are these .trc files also normal? In general I get nervous with trc files. -quinn

    Still haven't figured out the redo log switches; they seem to be slowing down, but the excess trace files are apparently caused by a bug and can be fixed with Patch 3432729.

  • Redo log switch frequency

    Hi all experts,
    I have a 10.2.0.4 database. I want to find out how often redo log switches happen in the database. Is there a query on some SYS/SYSTEM view that could show me that?
    Thanks,

    Hi,
    You can use the query below, which will give you an hourly breakdown of log switches in your database.
    select to_char(first_time,'YYYY-MON-DD') day,
    to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') "00",
    to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
    to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
    to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
    to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
    to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
    to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
    to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
    to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
    to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
    to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
    to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
    to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
    to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
    to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
    to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
    to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "16",
    to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
    to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "18",
    to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
    to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
    to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
    to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
    to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23"
    from v$log_history
    group by to_char(first_time,'YYYY-MON-DD')
    order by min(first_time);
    Hope this might help you.
    Thanks,
    Vishal Joshi
    Oracle Apps DBA

  • Issue in Custom Blog site definition based on SharePoint 2010 blog site definition after migrating the sites to SharePoint 2013 and site collection upgrade

    I created a custom blog site definition based on the SharePoint 2010 blog site definition, with Configuration ID 31 in onet.xml (a new value). This was working fine in SharePoint 2010.
    We created a new SharePoint 2013 farm and deployed all the custom solutions to the 14/15 folders. After migrating the sites to SharePoint 2013 using the content-database approach, sites created previously from my custom definition work fine.
    But after running the site collection upgrade, these sites stop working. When I post a comment, the comment is not listed on the post detail page. Comments are added to the Comments list, but the PostTitle column of the comment is not populated.
    Also, when we create a new site in SharePoint 2013 using my custom blog template, it is not provisioned correctly: default.aspx and the lookup between the Posts and Comments lists do not work.
    If anyone has faced such an issue, please share your findings and any solution to fix it.
    Thanks in advance :)

    Hi ,
    According to your description, my understanding is that a blog based on your custom blog site definition doesn't work correctly after migrating to SharePoint 2013.
    If you customized the Onet.xml file in a previous version's site definition, you should modify some sections in the file to make it work in the current version, such as <BaseTypes> and <ListTemplate>. For more information, please refer to the link below:
    http://msdn.microsoft.com/en-us/library/office/aa543837(v=office.14).aspx
    As for the PostTitle column of the comment not being populated, please try modifying the view, then compare the result.
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

  • I don't get log statements in the console after deploying the WAR file in WAS6

    I am new to the WebSphere 6.0 server. I don't get log statements in the console after deploying the WAR file into WAS.

    Albenn
    Welcome to the Apple user discussion forums
    I'll hit a few of your questions - others will have to handle the rest, since I'm not up on some of the issues you ask about
    4) is there an easy way to review/delete items (in 6, i run a basic slideshow and delete on the fly -- is this still workable in 7?)?
    Yes
    5) Are there significant functionality benefits of 7 over 6 (e.g., does the noise function or shadow/highlight really offer significant benefits? The Events seems obvious, but I'm more interested in "under the hood" experience
    Yes
    Loseless editing - web galleries - events
    a) Is import any faster? As my library is growing in size, the import is getting overly slow
    It is fine for me (only you can compare to your current) - I have 20542 items using 26.3 GB in 271 events
    7) Is there a full-screen edit mode?
    yes
    8) Can you configure events so that it's not just by day?
    yes
    9) Is 6's "roll" structure as mirrored in the file system still available in the Events approach, or is the database structure that once you've converted a library, there's no going back?
    Events replace rolls (if your rolls are named - NOT the default rollxxx name - they will come across as-is). To me, events are rolls with much more capability and a new name. Some people are VERY put off by the change (but most love it) for reasons that I simply do not comprehend
    10) What is the effective (or advised) maximum library size? I've probably got over 60GB of images in my libraries now, with 20,000 images or so. If 7 can readily digest them (and preserve all my albums, etc.), then I'd love to combine them into one, unless that will kill performance, make the database unstable, drive backups into interminable waits, etc.
    The advertised max is 250,000 photos - I have about the same number of photos as you in about 1/3 the space - there are a few users here with libraries your size. With iPhoto Library Manager it is easy to maintain multiple libraries if you choose
    LN
    Message was edited by: LarryHN

  • Reconfigure DPM after migrating Sharepoint to another SQL Server

    How can I Reconfigure DPM after migrating Sharepoint to another SQL Server?

    Hi,
    I have tried this.
    When I removed the SharePoint data source from the protection group, there was no option to keep or lose data.
    I ran ConfigureSharePoint -EnableSharePointProtection and ConfigureSharePoint -ResolveAllSQLAliases, then refreshed the view in the DPM console and added SharePoint back to the protection group.
    The modify protection group screen showed success. Looking in Monitoring, I see the consistency check failed with this error: "DPM cannot continue protection for SharePoint Farm Sharepoint Farm\sqpl2008srv1\SharePoint_Config on spwfe1.internal.ermanz.govt.nz.
    (ID 30184)". This was followed by the recovery point failing as well, with the same error in the description.
    I then removed the data source again and created a new protection group and added SharePoint, but got the same result. When creating the PG, it did not format space or do an initial copy, and consistency checks and recovery points failed as before.
    When the DBs were moved to the other SQL Server, SharePoint retained the same SQL name as before (it uses aliases)

  • Dump occurs after moving the request to the quality server

    Dear All,
    A dump occurs in ABAP code after moving it to the quality server.
    Kindly refer the attached screenshot.
    Kindly suggest.
    Thanks and Regards
    Jai

    See the code below.
    I changed l_age1 from TYPE bsid-dmbtr to TYPE p LENGTH 10 DECIMALS 2.
    *& Report  ZFIRDEBAGEING
    * 1. Program Name:ZFIRDEBAGEING             2.  Creation Dt:18/03/2013 *
    * 3. Module Name :FI                        4.  Modified Dt:2/04/2013  *
    * 5. Developer Name: Kallol Chakrabarty     6.  Modified By:           *
    * 7.Background / Online :Online             8. Trans Code : ZCAGE      *
    * 9. Frequency  : Regular    *
    * Request Number :            - Created                                *
    * Remarks : Customer Ageing Report                                     *
    REPORT zfirdebageing.
    TYPE-POOLS : slis.
    TABLES: bsid,bseg.
    TYPES : BEGIN OF tt_bsid,
              belnr TYPE belnr_d,
              gjahr TYPE gjahr,
              bukrs TYPE bukrs,
              dmbtr TYPE dmbtr,
              kunnr TYPE kunnr,
              budat TYPE budat,
              zfbdt TYPE dzfbdt,
              zterm TYPE dzterm,
              zbd1t TYPE dzbd1t,
              shkzg TYPE shkzg,
            END OF tt_bsid,
            BEGIN OF tt_bseg,
              belnr TYPE belnr_d,
              gjahr TYPE gjahr,
              bukrs TYPE bukrs,
              werks TYPE werks_d,
              prctr TYPE prctr,
              segment TYPE fb_segment,
            END OF tt_bseg,
            BEGIN OF tt_faglseg,
              langu TYPE spras,
              segment TYPE fb_segment,
              name TYPE text50,
            END OF tt_faglseg,
            BEGIN OF tt_cepct,
              spras  TYPE  spras,
              prctr  TYPE  prctr,
              ltext  TYPE  ltext,
            END OF tt_cepct,
            BEGIN OF tt_final,
              kunnr TYPE kunnr,
              segment TYPE fb_segment,
              name TYPE text50,
              prctr  TYPE  prctr,
              ltext  TYPE  ltext,
              name1 TYPE name1_gp,
              ort01 TYPE ort01_gp,
              age1 TYPE dmbtr,
              age2 TYPE dmbtr,
              age3 TYPE dmbtr,
              age4 TYPE dmbtr,
              age5 TYPE dmbtr,
              age6 TYPE dmbtr,
              age7 TYPE dmbtr,
              total TYPE dmbtr,
              total1 TYPE dmbtr,
              total2 TYPE dmbtr,
              zfbdt TYPE dzfbdt,
              zterm TYPE dzterm,
              zbd1t TYPE dzbd1t,
              budat TYPE budat,
            END OF tt_final,
            BEGIN OF tt_kna1,
              kunnr TYPE kunnr,
              name1 TYPE name1_gp,
              ort01 TYPE ort01_gp,
            END OF tt_kna1.
    DATA : wa_bsid TYPE tt_bsid,
            it_bsid TYPE TABLE OF tt_bsid,
            wa_bseg TYPE tt_bseg,
            it_bseg TYPE TABLE OF tt_bseg,
    "       wa_tmp1 TYPE tt_bseg, "Commented by ++KC 18.03.2013 after extended check
            it_tmp1 TYPE TABLE OF tt_bseg,
            wa_faglseg TYPE tt_faglseg,
            it_faglseg TYPE TABLE OF tt_faglseg,
            wa_cepct TYPE  tt_cepct,
            it_cepct TYPE TABLE OF tt_cepct,
            it_tmp TYPE TABLE OF  tt_bsid,
            wa_final TYPE tt_final,
            it_final TYPE TABLE OF tt_final,
            wa_final1 TYPE tt_final,
            it_final1 TYPE TABLE OF tt_final,
            wa_kna1 TYPE tt_kna1,
            it_kna1 TYPE TABLE OF tt_kna1.
    *& ALV Data Declaration                                                *
    DATA: it_fieldcat TYPE slis_t_fieldcat_alv,
           wa_fieldcat TYPE slis_fieldcat_alv,
           is_layout   TYPE slis_layout_alv,
           wa_event    TYPE slis_alv_event,
           it_event    TYPE slis_t_event.
    SELECTION-SCREEN: BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
    SELECT-OPTIONS : s_kunnr FOR bsid-kunnr.
    PARAMETERS : p_bukrs TYPE bseg-bukrs OBLIGATORY,
                        p_dateon  TYPE bsid-budat DEFAULT sy-datum OBLIGATORY.
    SELECTION-SCREEN : END OF BLOCK b1.
    SELECTION-SCREEN: BEGIN OF BLOCK b2 WITH FRAME TITLE text-003.
    SELECT-OPTIONS : s_werks FOR bseg-werks,
                        s_sgmnt   FOR  bseg-segment,
                        s_prctr   FOR  bseg-prctr.
    SELECT-OPTIONS : s_umskz FOR bsid-umskz.
    SELECTION-SCREEN : END OF BLOCK b2.
    DATA: v_days TYPE string,
           v_date1 TYPE bsid-budat,
           v_date2 TYPE bsid-budat,
           v_date3 TYPE bsid-budat,
           v_date4 TYPE bsik-budat,
           v_date5 TYPE bsik-budat,
           v_date6 TYPE bsik-budat,
           v_date7 TYPE bsik-budat,
    *     l_age1 TYPE bsid-budat,
    *     l_age2 TYPE bsid-dmbtr,
    *     l_age3 TYPE bsid-dmbtr,
    *     l_age4 TYPE bsid-dmbtr,
    *     l_age5 TYPE bsid-dmbtr,
    *     l_age6 TYPE bsid-dmbtr,
    *     l_age7 TYPE bsid-dmbtr,
           l_age1 TYPE p LENGTH 10 DECIMALS 2,
           l_age2 TYPE p LENGTH 10 DECIMALS 2,
           l_age3 TYPE p LENGTH 10 DECIMALS 2,
           l_age4 TYPE p LENGTH 10 DECIMALS 2,
           l_age5 TYPE p LENGTH 10 DECIMALS 2,
           l_age6 TYPE p LENGTH 10 DECIMALS 2,
           l_age7 TYPE p LENGTH 10 DECIMALS 2.
    CONSTANTS : v_age1(4) TYPE c VALUE 15 ,
                 v_age2(4)    TYPE c VALUE  30,
                 v_age3(4)    TYPE c VALUE  45,
                 v_age4(4)    TYPE c VALUE  90,
                 v_age5(4)    TYPE c VALUE  180,
                 v_age6(4)    TYPE c VALUE  360,
                 v_age7(4)    TYPE c VALUE  360.
    INITIALIZATION.
       sy-title = 'CUSTOMER AGEING'.
    *--------------- S-T-A-R-T O-F S-E-L-E-C-T-I-O-N ----------------------*
    START-OF-SELECTION.
       PERFORM get_data.
       PERFORM process_data.
       PERFORM alv_display.
    *&      Form  GET_DATA
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM get_data .
       SELECT  belnr
               gjahr
               bukrs
               dmbtr
               kunnr
               budat
               zfbdt
               zterm
               zbd1t
               shkzg
               FROM bsid INTO TABLE it_bsid
               WHERE kunnr IN  s_kunnr
               AND bukrs = p_bukrs
    *          AND zfbdt <= p_dateon
                   AND budat <= p_dateon
               AND umskz IN s_umskz.
       SELECT      belnr
                   gjahr
                   bukrs
                   dmbtr
                   kunnr
                   budat
                   zfbdt
                   zterm
                   zbd1t
                   shkzg
                   FROM bsad APPENDING CORRESPONDING FIELDS OF TABLE it_bsid
                   WHERE kunnr IN  s_kunnr
                   AND bukrs = p_bukrs
                   AND augdt > p_dateon
                   AND umskz IN s_umskz.
       IF it_bsid[] IS NOT INITIAL.
         it_tmp[] = it_bsid[].
         DELETE ADJACENT DUPLICATES FROM it_tmp COMPARING kunnr.
         SELECT kunnr
                name1
                ort01 FROM kna1 INTO TABLE it_kna1
                                      FOR ALL ENTRIES IN it_tmp
                                      WHERE kunnr = it_tmp-kunnr.
         SELECT belnr
                gjahr
                bukrs
                werks
                prctr
                segment
                FROM bseg INTO TABLE it_bseg
                          FOR ALL ENTRIES IN it_bsid
                          WHERE belnr = it_bsid-belnr
                          AND   gjahr = it_bsid-gjahr
                          AND   werks IN s_werks
                          AND   prctr IN s_prctr
                          AND   segment IN s_sgmnt
                          AND umskz IN s_umskz.
         IF it_bseg[] IS NOT INITIAL.
           it_tmp1[] = it_bseg[].
           SORT it_tmp1 BY segment.
           DELETE ADJACENT DUPLICATES FROM it_tmp1 COMPARING segment.
           SELECT langu
                  segment
                  name
                  FROM fagl_segmt INTO TABLE it_faglseg
                                  FOR ALL ENTRIES IN it_tmp1
                                  WHERE langu = 'EN'
                                  AND segment = it_tmp1-segment.
           REFRESH it_tmp1.
           it_tmp1[] = it_bseg[].
           SORT it_tmp1 BY prctr.
           DELETE ADJACENT DUPLICATES FROM it_tmp1 COMPARING prctr.
           SELECT  spras
                   prctr
                   ltext
                   FROM cepct INTO TABLE it_cepct
                              FOR ALL ENTRIES IN it_tmp1
                              WHERE spras = 'EN'
                              AND   prctr = it_tmp1-prctr.
         ENDIF.
       ENDIF.
    ENDFORM. " GET_DATA
    *&      Form  PROCESS_DATA
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM process_data .
       v_date1 = p_dateon - v_age1.                              " 15 days  " Changes made by Jaiprakash
       v_date2 = p_dateon - v_age2.                              " 30 days  " Changes made by Jaiprakash
       v_date3 = p_dateon - v_age3.                              " 45 days  " Changes made by Jaiprakash
       v_date4 = p_dateon - v_age4.                              " 90 days  " Changes made by Jaiprakash
       v_date5 = p_dateon - v_age5.                              " 180 days " Changes made by Jaiprakash
       v_date6 = p_dateon - v_age6.                              " 360 days " Changes made by Jaiprakash
       v_date7 = p_dateon - v_age7.                              " 360 days " Changes made by Jaiprakash
       IF NOT it_bseg IS INITIAL.
    *    DELETE it_bseg FROM wa_bseg WHERE segment = ''.
         DATA: lv_add TYPE i.
         DATA: lv_date TYPE sy-datum.
         LOOP AT it_bsid INTO wa_bsid.
           wa_final-zterm = wa_bsid-zterm.
           wa_final-zbd1t = wa_bsid-zbd1t.
           CLEAR wa_bseg.
           READ TABLE it_bseg INTO wa_bseg WITH KEY belnr = wa_bsid-belnr.
           IF sy-subrc EQ 0.
             wa_final-segment = wa_bseg-segment.
             wa_final-prctr   = wa_bseg-prctr.
             IF wa_bsid-shkzg = 'H'.
               wa_bsid-dmbtr = -1 * wa_bsid-dmbtr.
             ENDIF.
             CLEAR wa_faglseg.
             READ TABLE it_faglseg INTO wa_faglseg WITH KEY segment = wa_final-segment.
             IF sy-subrc EQ 0.
               wa_final-name = wa_faglseg-name.
             ENDIF.
             CLEAR wa_cepct.
             READ TABLE it_cepct INTO wa_cepct WITH KEY prctr = wa_final-prctr.
             IF sy-subrc EQ 0.
               wa_final-ltext = wa_cepct-ltext.
             ENDIF.
    * Calculation for the age buckets of <15, <30, <45, <90, <180 , <360 and >360 days
    *        IF     wa_bsid-zfbdt <= p_dateon AND wa_bsid-zfbdt > v_date1.
    *          l_age1 = wa_bsid-dmbtr + l_age1.
    *        ELSEIF wa_bsid-zfbdt <= v_date1 AND wa_bsid-zfbdt > v_date2.
    *          l_age2 = wa_bsid-dmbtr + l_age2.
    *        ELSEIF wa_bsid-zfbdt <= v_date2 AND wa_bsid-zfbdt > v_date3.
    *          l_age3 = wa_bsid-dmbtr + l_age3.
    *        ELSEIF wa_bsid-zfbdt <= v_date3 AND wa_bsid-zfbdt > v_date4.
    *          l_age4 = wa_bsid-dmbtr + l_age4.
    *        ELSEIF wa_bsid-zfbdt <= v_date4 AND wa_bsid-zfbdt > v_date5.
    *          l_age5 = wa_bsid-dmbtr + l_age5.
    *        ELSEIF wa_bsid-zfbdt <= v_date5 AND wa_bsid-zfbdt > v_date6.
    *          l_age6 = wa_bsid-dmbtr + l_age6.
    *        ELSEIF wa_bsid-zfbdt <= v_date7.
    *          l_age7 = wa_bsid-dmbtr + l_age7.
    *        ENDIF.
             IF     wa_bsid-zfbdt <= p_dateon AND wa_bsid-zfbdt > v_date1.
               l_age1 = wa_bsid-dmbtr + l_age1.
             ELSEIF wa_bsid-zfbdt <= v_date1 AND wa_bsid-zfbdt > v_date2.
               l_age2 = wa_bsid-dmbtr + l_age2.
             ELSEIF wa_bsid-zfbdt <= v_date2 AND wa_bsid-zfbdt > v_date3.
               l_age3 = wa_bsid-dmbtr + l_age3.
             ELSEIF wa_bsid-zfbdt <= v_date3 AND wa_bsid-zfbdt > v_date4.
               l_age4 = wa_bsid-dmbtr + l_age4.
             ELSEIF wa_bsid-zfbdt <= v_date4 AND wa_bsid-zfbdt > v_date5.
               l_age5 = wa_bsid-dmbtr + l_age5.
             ELSEIF wa_bsid-zfbdt <= v_date5 AND wa_bsid-zfbdt > v_date6.
               l_age6 = wa_bsid-dmbtr + l_age6.
             ELSEIF wa_bsid-zfbdt <= v_date7.
               l_age7 = wa_bsid-dmbtr + l_age7.
             ENDIF.
             wa_final-kunnr = wa_bsid-kunnr.
             CLEAR wa_kna1.
             READ TABLE it_kna1 INTO wa_kna1 WITH KEY kunnr = wa_bsid-kunnr.
             IF sy-subrc EQ 0.
               wa_final-name1 = wa_kna1-name1.
               wa_final-ort01 = wa_kna1-ort01.
             ENDIF.
             lv_add = wa_final-zbd1t.
    *     lv_add = wa_final-zbd1t.
             CLEAR:lv_date.
             CALL FUNCTION 'FKK_ADD_WORKINGDAY'
               EXPORTING
                 i_date      = wa_final-budat
                 i_days      = lv_add
    *           I_CALENDAR1 =
    *           I_CALENDAR2 =
               IMPORTING
                 e_date      = lv_date.  " period added: CALL FUNCTION must end with a statement terminator
    *           E_RETURN    =
             wa_final-age1  = l_age1.
             wa_final-age2  = l_age2.
             wa_final-age3  = l_age3.
             wa_final-age4  = l_age4.
             wa_final-age5  = l_age5.
             wa_final-age6  = l_age6.
             wa_final-age7  = l_age7.
             wa_final-total = wa_final-age1 + wa_final-age2 + wa_final-age3 + wa_final-age4 + wa_final-age5 + wa_final-age6 + wa_final-age7.
    *          wa_final-age1  = l_age1.
    *          wa_final-age2  = l_age2.
    *          wa_final-age3  = l_age3.
    *          wa_final-age4  = l_age4.
    *          wa_final-age5  = l_age5.
    *          wa_final-age6  = l_age6.
    *          wa_final-age7  = l_age7.
    *         wa_final-total1 = wa_final-age1 + wa_final-age2 + wa_final-age3 + wa_final-age4 + wa_final-age5 + wa_final-age6 + wa_final-age7.
    *        wa_final-age1  = l_age1.
    *         wa_final-age2  = l_age2.
    *         wa_final-age3  = l_age3.
    *         wa_final-age4  = l_age4.
    *         wa_final-age5  = l_age5.
    *         wa_final-age6  = l_age6.
    *         wa_final-age7  = l_age7.
    *         wa_final-total1 = wa_final-age1 + wa_final-age2 + wa_final-age3 + wa_final-age4 + wa_final-age5 + wa_final-age6 + wa_final-age7.
             IF p_dateon GT lv_date.
               wa_final-age1  = l_age1.
               wa_final-age2  = l_age2.
               wa_final-age3  = l_age3.
               wa_final-age4  = l_age4.
               wa_final-age5  = l_age5.
               wa_final-age6  = l_age6.
               wa_final-age7  = l_age7.
               wa_final-total1 = wa_final-age1 + wa_final-age2 + wa_final-age3 + wa_final-age4 + wa_final-age5 + wa_final-age6 + wa_final-age7.
             ELSE.
               wa_final-age1  = l_age1.
               wa_final-age2  = l_age2.
               wa_final-age3  = l_age3.
               wa_final-age4  = l_age4.
               wa_final-age5  = l_age5.
               wa_final-age6  = l_age6.
               wa_final-age7  = l_age7.
               wa_final-total2 = wa_final-age1 + wa_final-age2 + wa_final-age3 + wa_final-age4 + wa_final-age5 + wa_final-age6 + wa_final-age7.
             ENDIF.
             APPEND wa_final TO it_final.
             CLEAR: wa_final,l_age1,l_age2,l_age3,l_age4,l_age5,l_age6,l_age7.
           ENDIF.
         ENDLOOP.
       ENDIF.
       IF it_final IS NOT INITIAL.
         SORT it_final BY kunnr segment.
         LOOP AT it_final INTO wa_final.
           MOVE wa_final TO wa_final1.
           wa_final1-prctr = ''.
           wa_final1-ltext = ''.
           wa_final1-segment = ''.
           wa_final1-name = ''.
           wa_final1-zterm = ''.
           AT END OF name1.
             SUM.
             wa_final1-total = wa_final-total.
             wa_final1-total1 = wa_final-total1.
             wa_final1-total2 = wa_final-total2.
             wa_final1-age1 = wa_final-age1.
             wa_final1-age2 = wa_final-age2.
             wa_final1-age3 = wa_final-age3.
             wa_final1-age4 = wa_final-age4.
             wa_final1-age5 = wa_final-age5.
             wa_final1-age6 = wa_final-age6.
             wa_final1-age7 = wa_final-age7.
             COLLECT wa_final1 INTO it_final1.
             CLEAR wa_final1.
           ENDAT.
         ENDLOOP.
       ENDIF.
    ENDFORM. " PROCESS_DATA
    **&      Form  ALV_DISPLAY
    **       text
    **  -->  p1        text
    **  <--  p2        text
    FORM alv_display .
       DATA : v_col TYPE i VALUE 1.
       CLEAR wa_fieldcat.
       v_col = v_col + 1.
       wa_fieldcat-col_pos   = v_col.
       wa_fieldcat-seltext_m = 'Customer Code'.
       wa_fieldcat-fieldname = 'KUNNR'.
       wa_fieldcat-tabname   = text-002.
       wa_fieldcat-key       = 'X'.
       wa_fieldcat-outputlen = 14.
       APPEND wa_fieldcat TO it_fieldcat.
       CLEAR wa_fieldcat.
       wa_fieldcat-col_pos   = v_col.
       wa_fieldcat-seltext_m = 'Customer Name'.
       wa_fieldcat-fieldname = 'NAME1'.
       wa_fieldcat-tabname   = text-002.
       wa_fieldcat-key       = 'X'.
       wa_fieldcat-outputlen = 14.
       APPEND wa_fieldcat TO it_fieldcat.
       CLEAR wa_fieldcat.
       v_col = v_col + 1.
       wa_fieldcat-col_pos   = v_col.
       wa_fieldcat-seltext_m = 'City'.
       wa_fieldcat-fieldname = 'ORT01'.
       wa_fieldcat-tabname   = text-002.
       wa_fieldcat-key       = 'X'.
       wa_fieldcat-outputlen = 35.
       APPEND wa_fieldcat TO it_fieldcat.
    ***   CLEAR wa_fieldcat.
    ***   v_col = v_col + 1.
    ***   wa_fieldcat-col_pos   = v_col.
    ***   wa_fieldcat-seltext_m = 'Payment Term'.
    ***   wa_fieldcat-fieldname = 'ZTERM'.
    ***   wa_fieldcat-tabname   = text-002.
    ***   wa_fieldcat-key       = 'X'.
    ***   wa_fieldcat-outputlen = 14.
    ***   APPEND wa_fieldcat TO it_fieldcat.
    *  CLEAR wa_fieldcat.
    *  v_col = v_col + 1.
    *  wa_fieldcat-col_pos   = v_col.
    *  wa_fieldcat-seltext_m = 'No. Of Days'.
    *  wa_fieldcat-fieldname = 'ZBD1T'.
    *  wa_fieldcat-tabname   = text-002.
    *  wa_fieldcat-key       = 'X'.
    *  wa_fieldcat-outputlen = 14.
    *  APPEND wa_fieldcat TO it_fieldcat.
    * CLEAR wa_fieldcat.
    * v_col = v_col + 1.
    * wa_fieldcat-col_pos   = v_col.
    * wa_fieldcat-seltext_m = 'Baseline Date'.
    * wa_fieldcat-fieldname = 'ZFBDT'.
    * wa_fieldcat-tabname   = text-002.
    * wa_fieldcat-key       = '

  • Redo Log Switch results...

    Environment: 8.1.7.3.0 (noarchivelog mode)
    log_checkpoint_timeout = 0
    log_checkpoint_interval = 999999999
    redo log size = 200M
    With these settings, checkpoints seem to be possible only at log switch. Perhaps because transaction volume is low, a log switch happens only about every 30 hours.
    What I did: while the fourth log below was CURRENT, I issued ALTER SYSTEM CHECKPOINT, waited a little, then ran ALTER SYSTEM SWITCH LOGFILE, so the first log became CURRENT.
    As of 14:00 on 3/16, the old log is still ACTIVE...
    1. Is something wrong??? Please help...
    2. In noarchivelog mode, does shortening the switch interval help with recovery...?
    ===========================================
    STATUS  , FIRST_CHANGE#,FIRST_TIME
    CURRENT , 8846777646687,2007-03-15 16:57:55
    INACTIVE, 8846777587798,2007-03-14 10:34:40
    INACTIVE, 8846777609448,2007-03-14 17:17:38
    ACTIVE  , 8846777643690,2007-03-15 16:01:22

    Are you hoping for normal recovery in noarchivelog mode?
    I think that is a flawed policy.
    In noarchivelog mode, if the smallest FIRST_CHANGE# in V$LOG is greater than or equal to the CHANGE# in V$RECOVER_FILE, recovery is impossible.
    If even a batch job pushes the log switches through one full cycle, you can no longer recover from the previous backup. Switch to archivelog mode right away.
    LOG_CHECKPOINT_TIMEOUT specifies a timeout value for checkpoints.
    LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks.
    As you know, a checkpoint synchronizes the SCN across the datafiles, redo logs, and control files; the main work is the DBWR process writing to the datafiles.
    Checkpoints are of course related to instance recovery. With a reasonable checkpoint timeout, instance recovery finishes faster and the database opens sooner. With your current settings, if the database is shut down with ABORT and reopened, instance recovery will take much longer.
    Besides, if transactions fill a log in less than 30 hours, the timeout would make no difference anyway: when a redo log fills up, a log switch happens automatically, and a checkpoint is triggered before the switch.
    However, checkpoints and physical/logical recovery are different concepts. Checkpoints relate to instance recovery as described above; in physical/logical recovery, recoverability depends on whether the archived log files exist and whether the current redo log is available.
    As for ACTIVE status: by the documented definition, in archivelog mode it may mean the log is still being archived, and it means the log contains information needed when redo is applied during complete recovery.
    Basing a recovery policy on noarchivelog mode seems like a dangerous idea. Of course, some DSS systems deliberately run in noarchivelog mode and take offline backups every weekend. But a DSS system can see more than 300 log switches a day, so however good the backup, complete recovery is impossible; you can only recover up to the time of the offline backup.
    V$LOG
    This view contains log file information from the control files.
    Column    | Datatype     | Description
    GROUP#    | NUMBER       | Log group number
    THREAD#   | NUMBER       | Log thread number
    SEQUENCE# | NUMBER       | Log sequence number
    BYTES     | NUMBER       | Size of the log (in bytes)
    MEMBERS   | NUMBER       | Number of members in the log group
    ARCHIVED  | VARCHAR2(3)  | Archive status (YES|NO)
    STATUS    | VARCHAR2(16) | Log status:
    UNUSED - Online redo log has never been written to. This is the state of a redo log that was just added, or just after a RESETLOGS, when it is not the current redo log.
    CURRENT - Current redo log. This implies that the redo log is active. The redo log could be open or closed.
    ACTIVE - Log is active but is not the current log. It is needed for crash recovery. It may be in use for block recovery. It might or might not be archived.
    CLEARING - Log is being re-created as an empty log after an ALTER DATABASE CLEAR LOGFILE statement. After the log is cleared, the status changes to UNUSED.
    CLEARING_CURRENT - Current log is being cleared of a closed thread. The log can stay in this status if there is some failure in the switch such as an I/O error writing the new log header.
    INACTIVE - Log is no longer needed for instance recovery. It may be in use for media recovery. It might or might not be archived.
    There are ways to recover even in noarchivelog mode; for example, the documentation covers recovering when the current redo log is corrupted, and so on. But you will have a hard time finding a procedure for restoring a backup and recovering in noarchivelog mode. As mentioned above, in noarchivelog mode a datafile is recoverable if V$RECOVER_FILE.CHANGE# > the minimum FIRST_CHANGE# in V$LOG; if CHANGE# <= the minimum FIRST_CHANGE#, recovery is impossible. That is why documents on restoring a backup and recovering in this mode are so hard to find; only notes on advanced methods mention things like using adjust_scn.
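    A minimal sketch of that check (assuming SYSDBA access; V$RECOVER_FILE only lists files that need recovery):
    select r.file#,
           r.change#,
           (select min(first_change#) from v$log) as min_first_change#,
           case when r.change# > (select min(first_change#) from v$log)
                then 'recoverable from online redo'
                else 'unrecoverable - redo overwritten'
           end as verdict
    from v$recover_file r;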
    Edited by:
    민천사(민연홍)
    I must have been half asleep when I wrote this;; INTERVAL and TIMEOUT are of course different things; I don't know why I mixed up timeout and interval..;;
    LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks.
    LOG_CHECKPOINT_TIMEOUT specifies (in seconds) the amount of time that has passed since the incremental checkpoint at the position where the last write to the redo log (sometimes called the tail of the log) occurred. This parameter also signifies that no buffer will remain dirty (in the cache) for more than integer seconds.

  • FileVault doesn't log me in automatically after Migration Assistant

    I've got a new MacBook Pro with Retina display running Mavericks, which I initially just set up without transferring information. Then I enabled FileVault. After restarting, the EFI asks for my password to unlock the drive and then automatically logs me in. All is well.
    Until I use Migration Assistant, that is. It restored a Time Machine backup from my early-2011 MacBook Pro without the 'computer settings', and now, after the EFI password, it dumps me at the Mac OS login screen. According to http://support.apple.com/kb/HT5989 I can choose to disable automatic login, but this key is not present on my system, so for all I know automatic login should work.
    In console, I am seeing these suspicious-looking lines:
    14-02-04 14:18:34,277 SecurityAgent[237]: Could not get the user record from OpenDirectory.
    14-02-04 14:18:34,277 SecurityAgent[237]: Will sleep 3 seconds and try again (retryCount = 5)
    This retryCount starts at 9, and would go a long way toward explaining why I'm staring at the grey Apple for so long at boot time. I should point out that I haven't set up anything OpenDirectory-related: my account should be simply a local machine account, nothing more.
    Has anyone encountered this before, or have any ideas how to solve this?

    Settings/General/Restrictions - Require Password=Immediately

  • Redo log switching

    11gR2
    Found that the log switched every few minutes (2-3 minutes) at peak periods. I will recommend larger redo logs so that switches happen every 20 to 30 minutes.
    Wanted to know: what would be the negative side of large redo logs?

    >
    11gR2
    Found that the log switched every few minutes (2-3 minutes) at peak periods. I will recommend larger redo logs so that switches happen every 20 to 30 minutes.
    Wanted to know: what would be the negative side of large redo logs?
    >
    Patience, grasshopper!
    If it ain't broke, don't fix it. So first make sure it is broken, or about to break.
    Unless you have an emergency on your hands you don't want to implement a change like that without careful examination of your current log file usage and history.
    You need to provide more information such as typical size of your log files, number of log groups, number of members in each group, log archive policy, etc.
    1. How often do these 'peak periods' occur? Do they occur fewer than 5 or 6 times a day? Or dozens of times?
    2. How long do they last? A few minutes? Or a few hours?
    3. What is the typical, non-peak rate of switches? This is really your base-line that you need to compare things to.
    4. What has the switch pattern been over the last few weeks or months?
    5. What has the growth in DB activity been over the last few weeks or months? What do you expect over the next few months?
    6. What is your goal in reducing the frequency of log switches?
    Basic negative sides include longer time to archive each log file (the fewer logs in each group the more impact here) and the length of time to recover in the event you need to. With large log files there is more for Oracle to wade through to find the relevant data for restoring the DB to a given point in time.
    Your suggestion of every 20 - 30 minutes means 2 to 3 times per hour. If you currently switch 10 or 12 times per hour you are making a very big change.
    Although you don't want to 'tweak' the logs unnecessarily you also don't want to make such large changes.
    Everything in moderation. If your current switch rate is 10 or 12 times per hour, you may want to first cut this to maybe 1/2 to 1/3: that is, to 4 or 5 times per hour. It all depends on the answers to questions like those I asked above. If you post the answers, it will be helpful to anyone trying to advise you.
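    To gather that baseline, a rough sketch using the standard V$ views (adjust the 60-day window as needed):
    select l.group#, l.bytes/1024/1024 as size_mb, count(f.member) as members
    from v$log l join v$logfile f on f.group# = l.group#
    group by l.group#, l.bytes
    order by l.group#;
    select dest_name, status, destination
    from v$archive_dest
    where status = 'VALID';
    select trunc(first_time) as day, count(*) as switches
    from v$log_history
    where first_time > sysdate - 60
    group by trunc(first_time)
    order by day;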

  • Missing Discoverer reports after migrating the EUL to new Database.

    Our Environment is Discoverer Relational- OLTP
    We recently migrated the source database from Oracle 9.2.0.6 to 10.2.0.0 RAC, which required migrating Discoverer to the new database in production.
    Below are the database objects we migrated to the new database:
    1. EUL schema
    2. Schemas referred to by the objects in the EUL (schemas containing functions, tables, views, etc.)
    3. Based on a trigger we created to identify active Discoverer database accounts, we migrated only the active database users to the new database.
    Note: we had almost 4000 Discoverer users dating from 2005; many of them have left the company or stopped using Discoverer, and some used Discoverer only once a year or once in 6 months.
    In the end we came up with 300 active Discoverer users to migrate to the new database.
    Issues after migration:
    Some of the migrated users did not get the reports shared with them by other users [since we didn't migrate those owners' schemas, as they were not active].
    We queried the EUL5_DOCUMENTS table in the new database for a particular non-migrated inactive user's account; it shows no reports owned by him.
    Since we migrated the EUL schema and the active users to the new database, does this affect the EUL [does information related to non-active users get deleted from the EUL in the new database]?
    Some users who are not in the active migrated list are requesting access to Discoverer on the new database. Can you please suggest the correct way to migrate these accounts?
    Will exporting the accounts work?
    Thanks in advance.
    Sunil


  • Changes not reflected on the page after migrating the page to server

    Hi all,
    I migrated a page to the server after adding a messageTextInput field, but the changes are not reflected.
    I followed the following steps to migrate the page:
    1. Copied the page.xml from local machine to server.
    2. Ran the below import command :
    adjava oracle.jrad.tools.xml.importer.XMLImporter $JAVA_TOP/oracle/apps/aear/Refunds/webui/RefundCreatePG.xml -rootdir $JAVA_TOP -username apps -password apps -dbconnection "(DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=aebsw1d.aetna.com)(PORT=1571)) (CONNECT_DATA= (SID=AEBSDEV2)))" -rootPackage $JAVA_TOP
    It said
    Import completed.
    3. Bounced the apache server.
    4. Ran the below script to see the page definition:
    DECLARE
    BEGIN
    jdr_utils.printDocument('/aebsu03/app/AEBSDEV2/apps/apps_st/comn/java/classes/oracle/apps/aear/Refunds/webui/RefundCreatePG',1000);
    EXCEPTION
    WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(SQLERRM);
    END;
    It has my messageTextInput field info, but I can't see it on the front end. However, when I run the page from my JDeveloper, I am able to see the field.
    Please guide me if I am missing anything.
    Thanks in advance

    Hi,
    Go to 'About this Page' from the front end, check the path for this PG.xml, and compare that path with the one you are importing to.
    As per my understanding, if this is a custom page then the path you are using is not correct.
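    A quick way to double-check what MDS actually holds (a sketch, assuming SQL*Plus as APPS, and that the intended package is /oracle/apps/aear/Refunds/webui):
    set serveroutput on
    exec jdr_utils.listDocuments('/oracle/apps/aear/Refunds/webui', true);
    If the page only shows up under a path that repeats the filesystem root, the -rootPackage value used during import is the likely culprit.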
    Let me know if you need more clarification.
    --Sushant
    [email protected]

  • After release, the release version behaves differently than the debug version

    Hello guys,
    I am having a HUGE issue with the release build of my application. When I run the application through Eclipse it runs perfectly; however, after making a release build and running it, the application behaves differently than the version run from Eclipse. Has anyone had this problem before, and if so, what can you do to prevent this behavior? Thank you for any advice on this issue.

    Hello,
    Certain Firefox problems can be solved by performing a ''Clean reinstall''. This means you remove Firefox program files and then reinstall Firefox. Please follow these steps:
    '''Note:''' You might want to print these steps or view them in another browser.
    #Download the latest Desktop version of Firefox from http://www.mozilla.org and save the setup file to your computer.
    #After the download finishes, close all Firefox windows (click Exit from the Firefox or File menu).
    #Delete the Firefox installation folder, which is located in one of these locations, by default:
    #*'''Windows:'''
    #**C:\Program Files\Mozilla Firefox
    #**C:\Program Files (x86)\Mozilla Firefox
    #*'''Mac:''' Delete Firefox from the Applications folder.
    #*'''Linux:''' If you installed Firefox with the distro-based package manager, you should use the same way to uninstall it - see [[Installing Firefox on Linux]]. If you downloaded and installed the binary package from the [http://www.mozilla.org/firefox#desktop Firefox download page], simply remove the folder ''firefox'' in your home directory.
    #Now, go ahead and reinstall Firefox:
    ##Double-click the downloaded installation file and go through the steps of the installation wizard.
    ##Once the wizard is finished, choose to directly open Firefox after clicking the Finish button.
    More information about reinstalling Firefox can be found [https://support.mozilla.org/en-US/kb/troubleshoot-and-diagnose-firefox-problems?esab=a&s=troubleshooting&r=3&as=s#w_5-reinstall-firefox here].
    <b>WARNING:</b> Do not run Firefox's uninstaller or use a third party remover as part of this process, because that could permanently delete your Firefox data, including but not limited to, extensions, cache, cookies, bookmarks, personal settings and saved passwords. <u>These cannot be recovered unless they have been backed up to an external device!</u>
    Please report back to see if this helped you!
    Thank you.
