BOUNDED RECOVERY: CHECKPOINT: in ggserr.log and report file

Hi all, I recently set up GoldenGate version 11.1.1.1 bidirectionally, and I have 2 Extract, 2 Replicat, and 2 data pump parameter files. In the error log and report file I see:
" +BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p3321868_extr: st+
+art=SeqNo: 230, RBA: 58836496, SCN: 0.8528654 (8528654), Timestamp: 2011-10-17 20:02:44.000000, end=SeqNo: 230, RBA: 58836992, SCN: 0.8528654 (8528654), Time+
+stamp: 2011-10-17 20:02:44.000000+."
Replication stops after this message appears.
I stopped the Extract process and restarted it with "start <extract> BRRESET"; it runs fine for a while, and then the same message shows up in ggserr.log again.
Can anyone please help me with this?
Thanks,
VKR

No other error messages? Nothing in the replicat report showing an error?
Given that you've used BRRESET, you've either dug into the Reference guide or found this note on Oracle Support:
Extract Abends With Bounded Recovery Errors In The Report File (Doc ID 1293772.1)
Extract may be stuck on a zero-length record.
If it keeps happening, I'd create an SR with support.

Similar Messages

  • Announcing new activity logging and reporting capabilities for Office 365

    Announcing new activity logging and reporting capabilities for Office 365. We are pleased to announce the rollout of new activity logging and reporting capabilities for Office 365, including the Office 365 activity report, comprehensive logging capability, PowerShell cmdlets, and a preview of the Office 365 Management Activity API. This new capability provides you increased transparency, allowing you to monitor and investigate actions taken on your data, and comply with laws and regulations. Office 365 activity report: the Office 365 activity report enables you to investigate a user’s activity by searching for a user, file or other resource across SharePoint Online, OneDrive for Business, Exchange Online and Azure Active Directory, and then download the activities to a CSV (comma-separated values) file. You can filter by date range, ...

    Hi,
    Do you have any specific questions? We'll certainly try to help you, but we won't do your homework for you. That wouldn't help you learn at all.
    I recommend looking over the learning materials here; they're quite good for getting started with PowerShell:
    http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with three diskgroups.
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all of five years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance, with unnecessary dual-write overhead?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are sequential write, which benefit from a lower RAID overhead (RAID-10, 2 writes per IOP vs RAID-5, 4 writes per IOP). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks sharing data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low volume database would probably not experience any noticeable degraded performance.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K
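
    For reference, here is a minimal sketch of how that extra multiplexing might look in SQL (the +ONLINE name comes from the question; the group numbers, database name, and control file names are assumptions):
    -- add a second online redo log member per group in the +ONLINE diskgroup
    ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 2;
    ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 3;
    -- register an extra control file copy in +ONLINE; the copy itself still
    -- has to be created (e.g. with RMAN) before the next startup
    ALTER SYSTEM SET control_files = '+DATA/orcl/control01.ctl',
                                     '+ONLINE/orcl/control02.ctl' SCOPE=SPFILE;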

  • EXPDP generates new dmp file and reports "file already exists" error

    Hello everyone,
    Hope you all had a wonderful holiday. I have run into a problem with Data Pump expdp on 10.2.0.4, and I would appreciate any advice. Thanks in advance.
    I newly created a 10.2.0.4 database. The database starts up and can be connected to via Toad without problems, and I can also use impdp to import data into it. But when I try to use expdp to export a schema from the database, I get the following errors:
    expdp parfile=expdp_scott_mfp1.par
    Export: Release 10.2.0.4.0 - 64bit Production on Monday, 26 December, 2011 22:10:49
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/u02/exports/mfp1/expdp_scott_mfp1_12262011.dmp"
    ORA-27038: created file already exists
    Additional information: 1
    Every time I run expdp, it creates the dmp file (expdp_scott_mfp1_12262011.dmp) specified in the parfile under the EXPORT directory and then reports the "file already exists" error.
    Your advice is highly appreciated.
    Thanks.
    Edited by: 904668 on Dec 27, 2011 8:47 AM

    I think I have found the problem: I used the same file name for both the dump file and the log file. How stupid of me. Sorry for the bother. Thanks, and happy new year!

  • How to get Log and Output File Names for a concurrent request

    Hi,
    I am submitting a concurrent frm OAF with the following code in AM
    try {
        OADBTransaction tx = getOADBTransaction();
        Connection conn = tx.getJdbcConnection();
        ConcurrentRequest cr = new ConcurrentRequest(conn);
        Vector parameters = new Vector();
        parameters.addElement("10");
        nRequestID = cr.submitRequest("CIE", "DTFEMP", "", "", false, parameters);
        tx.commit();
    } catch (RequestSubmissionException e) {
        // handle or log the submission failure
    }
    How do I get a handle to the log and output files for the above concurrent request?
    One more thing: is there a way to evaluate the environment variables, as in the example below, once I get the request ID?
    logfile = $APPLCSF/$APPLOUT/"l"+requestID+".log"
    and
    outputfile=$APPLCSF/$APPLOUT/"o"+requestID+".out"
    Is there a way I can get the values of $APPLCSF and $APPLOUT from the OS?
    Thanks
    Tom...

    You can query the Fnd_Concurrent_Requests table using Request_ID, which has the log & out file directory details.
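    For example, a minimal sketch (verify the exact column names against your EBS release):
    SELECT logfile_name, outfile_name
      FROM fnd_concurrent_requests
     WHERE request_id = :request_id;
    These columns typically hold the fully resolved paths, so you do not have to expand $APPLCSF or $APPLOUT yourself.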
    Hth
    Srini

  • Timestamp in Java log and trace files.

    Hi SAP'ies
    Running PI 7.11 on AIX 6.3, we face an issue with the content of the log and trace files from Java.
    Eg.
    The file DefaultTrace_00.0.trc is timestamped 18-05-10 11:16:15. (The same time as the time of the OS/AIX)
    Looking inside the file the last statement is timestamped 2010 05 18 09:16:15.
    How can we ensure that the content of these Java files is timestamped with the OS time?
    Looking into ABAP files like dev_w0 the timestamp of the file and the content are equal.
    Best regards,
    Teddy Løv Andersen

    Hello All
    The best way to convert the default trace timestamp is to visit this site:
    http://www.csgnetwork.com/epochtime.html
    There, enter the timestamp. For example, if you have the following line in the default trace:
    #1.#00265510DE7300120000000F000022030004C77AD7309DF5#1345230317066#com.sap.portal.fpn.rdl
    1345230317066 is the timestamp; enter it on the above site to get the time:
    Fri Aug 17 2012 21:05:17 GMT+0200
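    Alternatively, the same epoch-millisecond value can be converted directly in Oracle SQL; a minimal sketch using the value from the example above:
    SELECT TIMESTAMP '1970-01-01 00:00:00 +00:00'
           + NUMTODSINTERVAL(1345230317066 / 1000, 'SECOND') AS trace_time
      FROM dual;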
    Regards

  • Useful logs and trace files

    Hello experts, for our Netweaver AS administration, I am in charge of periodically checking logs and trace files. I would like to know which are the most useful logs and trace files and the information each one will hold. I am familiar with "DefaultTrace.trc", and as of today it is the only one I have used, but I believe I should also be looking at other logs and trace files.
    Any suggestions?

    Hi Pedro,
    If you are talking about a Java-only system, the default trace is the best log/trace to look at. There are other log files, like the application log, but maybe the best way to check your logs is using NWA (NetWeaver Administrator) at the following URL on your Java system:
    http://<hostname>:<port>/nwa
    From there you need to go to Monitoring -> Logs and Traces and then Predefined View/SAP logs.
    My other recommendation is to change the severity level to ERROR for all your Java components within the Visual Administrator -> Server Node -> Services -> Log Configurator -> Locations; otherwise you may see a lot of garbage in the default traces. You can still change the severity level per component, on demand, to investigate any specific problem.
    The work directory is also very important; check the file "dev_serverX" there, which will give you information about out-of-memory conditions and garbage collection activity if you have these values set for the server node using the config tool:
    -verbose:gc
    -XX:+PrintGCDetails
    -XX:+PrintGCTimeStamps
    You can find more information on here:
    http://help.sap.com/saphelp_nw70/helpdata/en/ac/e9d8a51c732e42bd0e7de54b9ff4e2/content.htm
    Hopefully this helps; let me know if you need more information.
    Zareh

  • Win 2008  WL 10.3.3 stdout appearing in .log and .out files

    Recently noticed a ballooning [ServerName].out file in the logs directory. In weblogic management console I do have it configured to redirect stdout and stderr to weblogic logging (.log file). Both the .log and .out file contain the same stdout/stderr information. I would like to eliminate the .out file if possible (since WL only rotates the .log), but cannot find where it is configured. The managed servers are NOT windows services (no -log option).
    Did not find any logging parameters in JAVA_OPTIONS or parameters in the startManagedSvc.cmd file.
    Is this something needing to be corrected at the application level? (log4j)

    opie wrote:
    Recently noticed a ballooning [ServerName].out file in the logs directory. In weblogic management console I do have it configured to redirect stdout and stderr to weblogic logging (.log file). Both the .log and .out file contain the same stdout/stderr information. I would like to eliminate the .out file if possible (since WL only rotates the .log), but cannot find where it is configured. The managed servers are NOT windows services (no -log option).
    Did not find any logging parameters in JAVA_OPTIONS or parameters in the startManagedSvc.cmd file.
    Is this something needing to be corrected at the application level? (log4j)
    Depends on what you are actually seeing in those files. Are you outputting log4j messages to a log file AND the console?
    Here is a snippet of the log4j configuration file that denotes writing to the console:
        <appender name="ConsoleAppender" class="org.apache.log4j.ConsoleAppender">
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d{yyyy-MM-dd hh:mm:ss} %-5p [%t] - %C{1}.%M -> %m%n" />
            </layout>
          </appender>
        <root>
            <level value="ALL" />
            <appender-ref ref="ConsoleAppender" />
        </root>
    Edited by: ForumKid2 on Dec 29, 2010 11:36 AM

  • Basic question on log and temp files

    Hi,
    I need to create a new database in this way:
    - 2 GB for the data file
    - 1 GB for log and temp files
    This is the command I'm issuing to create the db:
    create tablespace xyz datafile '<path>\xyz.dbf' size 2000M reuse;
    How can I specify the sizes of the log and temp files?
    Thank you
    Nicola
    PS - I'm on a Windows platform.

    user575754 wrote:
    - 1 GB for log and temp files
    What's the link between log file size and tempfile?
    This is the command I'm issuing to create the db:
    create tablespace xyz datafile '<path>\xyz.dbf' size 2000M reuse;
    This command does not actually create a database, but a tablespace. If you ran it, that means your db already exists.
    So, not sure to follow you on this thread.
    Nicolas.
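
    To illustrate the distinction: redo log and temp file sizes are set by their own statements, not by the tablespace command quoted above. A minimal sketch (paths, names, and sizes are placeholders):
    -- online redo log: size is set per log group
    alter database add logfile group 4 ('<path>\redo04.log') size 500M;
    -- temp space: size is set on the tempfile of a temporary tablespace
    create temporary tablespace temp2
      tempfile '<path>\temp02.dbf' size 1000M;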

  • Location of Redo log and control files?

    Dear all,
    I am checking the location of the redo log and control files, but found that the redo log files (like log02a.dbf ...) are in the same directory as the data files. However, I couldn't find any control files in the data file directories.
    What could be the location of control files?
    Amy

    select name
      from v$controlfile;
    or
    show parameter control_files
    Khurram
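
    The redo log locations asked about can be found the same way; a minimal sketch:
    select group#, member
      from v$logfile;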

  • External portal capturing internal portal URL in Log and trace file

    Hi,
    We are facing one issue in portal like we have two portals for internal (Intranet) and external (Internet) users.
    Once users have logged in to the application and try to get the information about mylink from the external portal link (Internet), they should not get any information about the internal portal.
    But in the log and trace files we can see the external portal link capturing the internal portal URL.
    We need to find out where the system is capturing the internal portal URL from.
    Thanks.

    The tkproffed trace file is in seconds.
    "set timing" is in hh:mi:ss.uu format. So 00:00:01.01 is 1.01 seconds.
    You have to remember that most of these measurements are rounded. While your trace file says it contains one second of trace data, you know it's more.
    One excellent resource for trace files is "Optimizing Oracle Performance" by Cary Millsap & Jeff Holt. (http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/059600527X ) I thought I knew trace files before, but this book brings your knowledge to a whole new level.
    There is also an excellent WP by Cary Millsap ( http://method-r.com/downloads/doc_details/10-for-developers-making-friends-with-the-oracle-database-cary-millsap ) that gives you some insight.
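    If you still need to generate such a trace file in the first place, extended SQL trace can be switched on for the session; a minimal sketch (the tracefile identifier is an arbitrary example):
    alter session set tracefile_identifier = 'my_trace';
    alter session set events '10046 trace name context forever, level 12';
    -- run the statements you want traced, then:
    alter session set events '10046 trace name context off';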

  • How to configure logs and trace files

    Hello people,
    We have just implemented ESS/MSS; we have around 25,000 people using this service, and every 2 days the logs and trace files on the server fill up and the portal goes down.
    Please suggest how to solve this problem. How can I reduce the trace and log files? Is there any configuration or setting for this? Please suggest and explain how it can be done.
    Biren

    Hi,
    You can control which messages get logged depending on the severity.
    This can be configured using the Log Configurator; see this guide on how to set the severity for different locations:
    Netweaver Portal Log Configuration & Viewing (Part 1)
    Regards,
    Praveen Gudapati

  • How do you extract SQL from Oracle Forms and Reports files?

    I am developing an "as is" data model for a government client for a 14 year old system that has three databases, 20 schemas, over 1500 tables, and over 23,000 columns. Needless to say, I do not plan to perform a manual mapping of data to screens and reports.
    Most of the system has been developed in Oracle Forms and Reports. I am trying to map the live tables and columns to forms and reports.
    The process here has been to save the forms and reports files as .fmb and .rdf files. The client does not have an available copy of Oracle Designer, which I understand could be used to extract the SQL.
    Is there a utility somewhere that can parse the .fmb and .rdf files to extract the SQL?
    Thanks,
    Jim Gearing

    Jim,
    I don't know of any utility that will do this. You can convert and save each fmb as an fmt so you can view/search the contents, but I don't recommend that approach.
    On the other hand, you can download a copy of Oracle Designer:
    http://www.oracle.com/technology/software/index.html
    It usually is included with Oracle Forms and Reports.

  • How to overwrite a log and bad file in external table in oracle 10g

    Hi,
    I have used an external table in Oracle 10g. Whenever I run a select query against the external table, Oracle internally creates a log file in the specified directory, but this log file keeps growing. How can I overwrite the log file (replace the old one with a new one)? I need to overwrite the log and bad files of an external table.
    Kindly suggest a solution.
    By
    Siva

    I don't believe that is possible with the LOGFILE clause, but it may be with the BADFILE clause. Here is an excerpt from the documentation :
    The LOGFILE clause names the file that contains messages generated by the external tables utility while it was accessing data in the datafile. If a log file already exists by the same name, the access driver reopens that log file and appends new log information to the end. This is different from bad files and discard files, which overwrite any existing file. NOLOGFILE is used to prevent creation of a log file.
    If you specify LOGFILE, you must specify a filename or you will receive an error.
    If neither LOGFILE nor NOLOGFILE is specified, the default is to create a log file. The name of the file will be the table name followed by _%p and it will have an extension of .log.
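    Putting that together, one option is to suppress the log file entirely and let the bad file be overwritten on each access; a minimal sketch (the table, directory, column, and file names are made up for illustration):
    -- assumes a directory object ext_dir and a data file emp.csv already exist
    CREATE TABLE ext_emp (
      empno NUMBER,
      ename VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        NOLOGFILE
        BADFILE ext_dir:'ext_emp.bad'
        FIELDS TERMINATED BY ','
      )
      LOCATION ('emp.csv')
    );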

  • Server.log and access file previous record are overwrite

    Hi,
    I am having a problem where my server.log and access files in all instances have been overwritten by the latest records. Supposedly, all System.out.print output is appended to server.log. However, my problem is that entries written to server.log earlier in the day (maybe morning through afternoon) have been replaced; from server.log I am only able to view the log starting from 11 pm. The same thing happens to the access file as well. This incident does not happen every day, but only sometimes.
    I am wondering what is happening and how I can solve the problem.
    Any help/guidance is highly appreciated.
    Thanks.

    Hi,
    Does anyone know the solution for this issue?
    Thanks.
