Confused about the log files

I have written an application that has a Primary and a Secondary database. The application creates tens of thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the Secondary, as that element does not change) or create new records.
The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
The input data I am testing with is originally 2MB as a CSV file, and with a fresh database it creates almost 20MB of data. This is about right for the way the application splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, the old copies in the earlier logs will become redundant, and the cleaner thread will clean them up. I am explicitly cleaning as per the examples. The issue is that on each run the data just doubles in size! Logically it is fine; physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%), where I would expect most to be empty or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
Generally, the processing I am doing on the primary is: look up the key; if it is present, update the entry; if not, create one. I have been using a cursor to do this, with the putCurrent() method for existing updates and put() for new records. I have even tried using Database.delete() and a full put() in place of putCurrent(), but it makes no difference (except that it is slower). The pattern is essentially the sketch below.
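For reference, a minimal sketch of that update-or-create pattern against the JE cursor API (the class and variable names are illustrative, not from my actual code, and it assumes non-transactional access):

import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;

final class PrimaryUpsert {
    // Update the record in place if the key exists, otherwise insert it.
    static void upsert(Database primary, DatabaseEntry key, DatabaseEntry data)
            throws DatabaseException {
        Cursor cursor = primary.openCursor(null, null);
        try {
            DatabaseEntry existing = new DatabaseEntry();
            if (cursor.getSearchKey(key, existing, LockMode.RMW)
                    == OperationStatus.SUCCESS) {
                cursor.putCurrent(data); // overwrite at the cursor position
            } else {
                cursor.put(key, data);   // brand-new record
            }
        } finally {
            cursor.close();
        }
    }
}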
Please help - it is driving me nuts!

Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
a. is the application doing anything that prohibits log cleaning? (in your case, no)
b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
1) Ran DbDump with and without -r. I am expecting the data to stay consistent. So, after the first run it creates the data and leaves 20MB in place, 3 log files near 100% used. After the second run it should update the records (which it does, from the application's point of view), but I now have 40MB across 5 log files, all near 100% usage.

I think it's accurate to say that neither of us is surprised that the second run (which updates data but does not change the number of records) creates a second 20MB of log, for a total of 40MB. What we do expect, though, is that the utilization reported by DbSpace should fall to closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
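For reference, a batch cleanLog pass of the sort meant above might look like this (a minimal sketch; the class name is illustrative, and the checkpoint is there so cleaned files can actually be deleted):

import com.sleepycat.je.CheckpointConfig;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;

final class LogCleanUtil {
    // Clean as much of the log as possible, then checkpoint so the
    // cleaned files become eligible for deletion. Returns the number
    // of files the cleaner processed.
    static int batchClean(Environment env) throws DatabaseException {
        int total = 0;
        int cleaned;
        while ((cleaned = env.cleanLog()) > 0) {
            total += cleaned;
        }
        CheckpointConfig force = new CheckpointConfig();
        force.setForce(true);
        env.checkpoint(force);
        return total;
    }
}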
So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (with -r the dump is not in key order; without -r it is slower, but dumps in key order). Keys and data show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of the data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
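For example (only the -h and -r flags are taken from this thread; check DbDump's usage message for the remaining options):
java -jar je.jar DbDump -h <envhome> -r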
I think I asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, by 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
java -jar je.jar DbPrintLog -h <envhome> -S
and you'll see some output that talks about different types of log entries and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large the header is depends on whether your data is transactional or not: non-transactional data records have a header of about 35 bytes, whereas transactional data records have about 60 bytes added to them. If your data is small, that can be quite a large percentage -- for example, a 40-byte transactional record carries roughly 60 bytes of header, so more than half of each log entry is overhead. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record-level locking, and partly because we store a number of internal fields as 64-bit rather than 16- or 32-bit values.
The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem really centers on the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduces the size of your logs, that reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
So, in summary, let's try these steps:
- use DbDump and DbPrintLog to double-check the amount and size of your application data
- make a table of runs that shows the log size in bytes, the number of log files, and the utilization level reported by DbSpace
- run a je.cleaner.forceCleanFiles cleanLog loop on one of the log files that seems to have a high utilization level, and see how much it shrinks and what the resulting utilization level is (a sketch follows below)
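As a sketch of that last experiment (the value format for je.cleaner.forceCleanFiles -- assumed here to be a log file number such as 0 for 00000000.jdb -- is worth double-checking against the JE javadoc):

import java.io.File;
import com.sleepycat.je.CheckpointConfig;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class ForceCleanExperiment {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig config = new EnvironmentConfig();
        // Bypass the utilization trigger for the named file(s).
        config.setConfigParam("je.cleaner.forceCleanFiles", "0");
        Environment env = new Environment(new File(args[0]), config);
        while (env.cleanLog() > 0) {
            // repeat until the cleaner finds nothing more to do
        }
        CheckpointConfig force = new CheckpointConfig();
        force.setForce(true);
        env.checkpoint(force); // checkpoint so cleaned files can be deleted
        env.close();
    }
}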
If it all points to JE, we'll probably take it offline, and ask for your test case.
Regards,
Linda

Similar Messages

  • I'm a bit confused about standby log files

    Hi all,
    I'm a bit confused about something and wondering if someone can explain.
    I have a Primary database that ships logs to a Logical Standby database.
    Everything appears to be working properly. If I check the v$archived_log table in the Primary and compare it to the dba_logstdby_log view in the Logical Standby, I'm seeing that logs are being applied.
    On the logical standby, I have the following configured for log_archive_dest_n parameters:
    *.log_archive_dest_1='LOCATION=/u01/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_2='LOCATION=/u02/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_3='LOCATION=/u03/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_4='SERVICE=PNX8A_WDC ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PNX8A_WDC'
    *.log_archive_dest_5='LOCATION=/u01/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_6='LOCATION=/u02/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_7='LOCATION=/u03/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    Here is my confusion now. Before converting from a Physical standby database to a Logical Standby database, I was under the impression that I needed the standby logs (i.e. log_archive_dest_5, 6 and 7 above) because a Physical Standby database would receive the redo from the primary and write it into the standby logs before applying the redo in the standby logs to the Physical standby database.
    I've now converted to a Logical Standby database. What's happening is that the standby logs are accumulating in the directory pointed to by log_archive_dest_6 above (/u02/oracle/standbylogs/ORADB1). They do not appear to be getting cleaned up by the database.
    In the Logical Standby database I do have STANDBY_FILE_MANAGEMENT parameter set to AUTO. Can anyone explain to me why standby log files would continue to accumulate and how I can get the Logical Standby database to remove them after they are no longer needed on the LSB db?
    Thanks in advance.
    John S

    JSebastian wrote:
I assume you mean to ask why, on the standby database, I am using three standby log locations (i.e. log_archive_dest_5, 6, and 7)?
If that is your question, my answer is that I just figured more than one location would be safer, but I could be wrong about this. Can you tell me if only one location should be sufficient for the standby logs? The more I think about it, that is probably correct, because I assume that Log Transport Services will re-request the log from the Primary database if there is some kind of error at the standby location with the standby log. Is this correct?

A configuration as simple as the one below is enough. Why use multiple destinations for the standby?
Check the note "Step by Step Guide on How to Create Logical Standby" [ID 738643.1].
    >
    LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston'
    LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
    LOG_ARCHIVE_DEST_3='LOCATION=/arch2/boston/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=boston'
    The following table describes the archival processing defined by the initialization parameters shown in Example 4-2.
LOG_ARCHIVE_DEST_1
  When boston runs in the primary role: Directs archival of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/boston/.
  When boston runs in the logical standby role: Directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/boston/.
LOG_ARCHIVE_DEST_2
  When boston runs in the primary role: Directs transmission of redo data to the remote logical standby database chicago.
  When boston runs in the logical standby role: Is ignored; LOG_ARCHIVE_DEST_2 is valid only when boston is running in the primary role.
LOG_ARCHIVE_DEST_3
  When boston runs in the primary role: Is ignored; LOG_ARCHIVE_DEST_3 is valid only when boston is running in the standby role.
  When boston runs in the logical standby role: Directs archival of redo data received from the primary database to the local archived redo log files in /arch2/boston/.
    >
    Source:-
    http://docs.oracle.com/cd/B19306_01/server.102/b14239/create_ls.htm

  • Confused about archived log file backup

    From a book, I see
    "we cannot combine archived redo log files and datafiles into a single backup",
    but I do have a command:
    "backup...........plus archivelog"
    They seem to contradict each other.
    Why is that?

    They do not conflict with each other:
    "we cannot combine archived redo log files and datafiles into a single backup" refers to backup pieces. Oracle cannot combine archived logs and, for example, a tablespace backup in a single backup piece.
    The following command just tells RMAN to perform a backup of a tablespace plus archived logs, but as a result it will create at least two backup pieces: one for the tablespace and a second for the archived redo logs.
    RMAN> backup tablespace users plus archivelog delete input skip inaccessible format "C:\%U.bkf";
    Starting backup at 29-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=128 RECID=142 STAMP=690573687
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\0SKIOKQ3_1_1.BKF tag=TAG20090629T004553 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:02:45
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00128_0686744258.001 RECID=142 STAMP=690573687
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00129_0686744258.001 RECID=143 STAMP=690588250
    Finished backup at 29-JUN-09
    Starting backup at 29-JUN-09
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00004 name=C:\APP\MOB\ORADATA\ORCL\USERS01.DBF
    channel ORA_DISK_1: starting piece 1 at 29-JUN-09
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\APP\MOB\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2009_06_29\O1_MF_NNNDF_TAG20090629T004911_54HWVKFO_.BKP tag=TAG20090629T004911 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
    Finished backup at 29-JUN-09
    Starting backup at 29-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=148 RECID=162 STAMP=690770984
    channel ORA_DISK_1: starting piece 1 at 29-JUN-09
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\0UKIOL1B_1_1.BKF tag=TAG20090629T004946 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00148_0686744258.001 RECID=162 STAMP=690770984
    Finished backup at 29-JUN-09
    Starting Control File and SPFILE Autobackup at 29-JUN-09
    piece handle=C:\APP\MOB\PRODUCT\11.1.0\DB_1\DATABASE\C-1213135877-20090629-00 comment=NONE
    Finished Control File and SPFILE Autobackup at 29-JUN-09

    With kind regards
    Krystian Zieja

  • Log4j problem for backing up the log file

    This is my log4j.properties. It doesn't seem to back up the log file and create a new log file when it reaches the maximum size. Can anybody look at it?
    Thanks..
    log4j.rootCategory=debug, stdout, R
    # Print only messages of priority WARN or higher for your category
    log4j.category.your.category.name=WARN
    # Specifically inherit the priority level
    #log4j.category.your.category.name=INHERITED
    #### First appender writes to console
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    # Pattern to output the caller's file name and line number.
    log4j.appender.stdout.layout.ConversionPattern=[%d] [%c] %-5p - %m%n
    #### Second appender writes to a file
    log4j.appender.R=org.apache.log4j.RollingFileAppender
    log4j.appender.R.File=/opt/mc/logs/AMSWAS.log
    # Control the maximum log file size
    log4j.appender.R.MaxFileSize=200000KB
    log4j.appender.R.MaxBackupIndex=1
    log4j.appender.R.layout=org.apache.log4j.PatternLayout
    log4j.appender.R.layout.ConversionPattern=[%d] [%c] %-5p - %m%n
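    A quick way to exercise the rolling behavior in isolation is a small driver like this (a sketch assuming log4j 1.x with this properties file in the working directory; the class name is illustrative):

    import org.apache.log4j.Logger;
    import org.apache.log4j.PropertyConfigurator;

    public class RolloverTest {
        public static void main(String[] args) {
            PropertyConfigurator.configure("log4j.properties");
            Logger log = Logger.getLogger(RolloverTest.class);
            // Write well past MaxFileSize (200000KB) to force a rollover;
            // AMSWAS.log.1 should appear once the size limit is crossed.
            for (int i = 0; i < 10000000; i++) {
                log.debug("rollover test line " + i);
            }
        }
    }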

    Hello again Tom and thanks for your help!
    No, I didn't optimize any media, though I've rendered proxies for all media. I have about 14TB of R3D footage on five external hard drives; it's a feature. The proxies and sound files which I copied to the library make the library file that big. I can't consolidate all the media, as far as I know, since I don't have a drive even close to the size of all the R3D footage.
    In terms of copying I have only tried the good old "copy/paste" method. I have never used any of those programs you mentioned. Can those programs be used to copy certain files, or will they copy an entire drive?
    Will the automatic back-ups FCPX does every 15 minutes save my timelines if something goes wrong and the library file disappears? I don't fully understand how that back-up process works. I could always render new proxies, though it would take time, but re-editing all those timelines is a whole other thing. Important to note here is that I'm used to Premiere Pro and the "old" FCP, which is why all of this is so confusing.
    Thank you again!

  • Bursting Program Errors Out with nothing in the Log file

    Hi All
    I have an RDF which calls a bursting program in the After Report trigger. The problem I'm facing is that the bursting program completes successfully for a set of parameters, but when the program is run again for the same set of parameters, the bursting program ends in error. I have checked the log file but there is nothing in it.
    Below is my log file:
    +---------------------------------------------------------------------------+
    XML Publisher: Version : 12.0.0
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    XDOBURSTREP module: XML Publisher Report Bursting Program
    +---------------------------------------------------------------------------+
    Current system time is 30-SEP-2013 15:11:15
    +---------------------------------------------------------------------------+
    XML/BI Publisher Version : 5.6.3
    Request ID: 574533
    All Parameters: Dummy for Data Security=N:ReportRequestID=574532:DebugFlag=Y
    Report Req ID: 574532
    Debug Flag: Y
    Updating request description
    Updated description
    Retrieving XML request information
    Node Name:T1DEVEBSAPP1
    Preparing parameters
    null output =/u01/oracle/CRP2/inst/apps/CRP2_t1devebsapp1/logs/appl/conc/out/o574533.out
    inputfilename =/u01/oracle/CRP2/inst/apps/CRP2_t1devebsapp1/logs/appl/conc/out/o574532.out
    Data XML File:/u01/oracle/CRP2/inst/apps/CRP2_t1devebsapp1/logs/appl/conc/out/o574532.out
    Set Bursting parameters..
    Temp. Directory:/tmp
    [093013_031118656][][STATEMENT] Oracle XML Parser version ::: Oracle XML Developers Kit 10.1.3.130 - Production
    [093013_031118657][][STATEMENT] setOAProperties called..
    Bursting propertes.....
    {user-variable:cp:territory=US, user-variable:cp:ReportRequestID=574532, user-variable:cp:language=en, user-variable:cp:responsibility=20678, user-variable.OA_MEDIA=http://t1devebsapp1.travelzoo.com:8001/OA_MEDIA, burstng-source=EBS, user-variable:cp:DebugFlag=Y, user-variable:cp:parent_request_id=574532, user-variable:cp:locale=en-US, user-variable:cp:user=SETUPUSER, user-variable:cp:application_short_name=XDO, user-variable:cp:request_id=574533, user-variable:cp:org_id=81, user-variable:cp:reportdescription=Travelzoo Invoice Print Selected Invoices-Child, user-variable:cp:Dummy for Data Security=N}
    Start bursting process..
    Bursting process complete..
    Generating Bursting Status Report..
    +---------------------------------------------------------------------------+
    Start of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    +---------------------------------------------------------------------------+
    End of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    +---------------------------------------------------------------------------+
    Executing request completion options...
    Output file size:
    602
    Finished executing request completion options.
    +---------------------------------------------------------------------------+
    Concurrent request completed
    Current system time is 30-SEP-2013 15:11:41
    +---------------------------------------------------------------------------+
    and the output says:
    "Error!! Could not deliver the output for Delivery channel:null "
    Wondering what might be the issue; any help on this is greatly appreciated.
    -Ragul

    Can you find any details about the error from the "View Detail" button (the same window where you check the log and output files)?
    I found the Workflow logs. I am not sure what I am looking for, but I am not seeing any errors reported. The event viewer is supposed to send an email, so do you see anything in the logs that could be related?
    Thanks,
    Hussein

  • (Cisco Historical Reporting / HRC ) All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054

    Hi All,
    I am getting the error message "All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054" when trying to log into HRC (this user has reporting capabilities). I checked the log files and this is what I found:
    The log file stated that there were ongoing connections from HRC to the CCX (I am sure there isn't any active login to HRC).
    || When you tried to log in, the following error was displayed because the maximum number of connections was reached for the server. We can see that a total of 5 connections have been configured. ||
    1: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Current number of connections (5) from historical Clients/Scheduler to 'CRA_DATABASE' database exceeded the maximum number of possible connections (5).Check with your administrator about changing this limit on server (wfengine.properties), however this might impact server performance.
    || Below we can see all 5 connections being used up . ||
    2: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:[DB Connections From Clients (count=5)]|[(#1) 'username'='uccxhrc','hostname'='3SK5FS1.ucsfmedicalcenter.org']|[(#2) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#3) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#4) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#5) 'username'='uccxhrc','hostname'='47BMMM1.ucsfmedicalcenter.org']
    || Once the maximum number of connection was reached it threw an error . ||
    3: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Number of max connection to 'CRA_DATABASE' database was reached! Connection could not be established.
    4: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Database connection to 'CRA_DATABASE' failed due to (All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054.)
    Current exact UCCX Version 9.0.2.11001-24
    Current CUCM Version 8.6.2.23900-10
    Business impact  Not Critical
    Exact error message  All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054
    What is the OS version of the PC you are running, and is it a physical machine or a virtual machine that is running the HRC client?
    OS Version: Windows 7 Home Premium 64-bit, and it's a physical machine.
    The Max DB Connections for Report Client Sessions is set to 5 for each server (there are two servers). The number of HR Sessions is set to 10.
    I wanted to know if there is a way to find the HRC sessions active now and terminate one or more or all of those sessions from the server end?

    We have had this "PRX5" problem with Exchange 2013 since the RTM version. We recently applied CU3, and it did not correct the problem. We have seen this problem on every Exchange 2013 we manage. They are all installations where all roles are installed on the same Windows server, and in our case, they are all Windows virtual machines using Windows 2012 Hyper-V.
    We have tried all the "this fixed it for me" solutions regarding DNS, network cards, host file entries and so forth. None of those "solutions" made any difference whatsoever. The occurrence of the temporary error PRX5 seems totally random. About 2 out of 20 incoming mail tests by Microsoft Connectivity Analyzer fail with this PRX5 error.
    Most people don't ever notice the issue because remote mail servers retry the connection later. However, telephone voice mail systems that forward voice message files to email, or other such applications such as your scanner, often don't retry and simply fail. Our phone system actually disables all further attempts to send voice mail to a particular user if the PRX5 error is returned when the email is sent by the phone system.
    Is Microsoft totally oblivious to this problem?
    PRX5 is a serious issue that needs an Exchange team resolution, or at least an acknowledgement that the problem actually does exist and has negative consequences for proper mail flow.
    JSB

  • Is there a way to automatically collect the log files from a AirPort or TimeCapsule base station?

    Hi there,
    the headline basically says it all: is there a way to collect the log files from a Time Capsule / AirPort from time to time? They are overwritten quite soon, but I want a complete log of all activity on my access point.
    The AirPort Utility says something about SNMP. Is this the way to go? Some kind of daemon on my Mac that retrieves and saves the logs, say, every two days?
    I want to copy the access logs, I am not interested in Time Machine backup logs.
    Thanks.

    There is a way to do this via Syslog. On the Logging & Statistics panel (within the AirPort Utility), you can point the AirPort's system logs to a Syslog "server."
    This would require a "dedicated" network client to receive the logs.
    Unfortunately, setting up a Syslog server is a bit intensive initially, but is simple to operate and maintain.
    Please check out this Apple Support Communities thread: Directing Syslog message to a file
    ref: Enable an Apple Mac OS machine as a syslog server

  • Where to see the log files of runtime

    For my installation, the locations mentioned in the documentation here http://download.oracle.com/docs/cd/E13154_01/bpm/docs65/admin_guide/index.html?t=modules/logging/c_Head_Logging.html do not contain any log files. Where can I find more information about finding log files, and, more importantly, how can I log from the BPM process? Is there an out-of-the-box log activity available in BPM? I could not find anything in the documentation.

    Go to this link: Re: log Message in Studio
    Paragraph number 4 shows how to find the actual log file.
    Dan

  • XMLForm Service message keep showing in the log file

    Hello,
    The message below keeps showing in the log file when the form is opened in Workspace ES. It repeats something like a hundred times in the log file while the form is launching.
    It only happens with some forms, not all. I tried to see if there is anything strange in the form but could not find anything wrong in it, nor any explanation of this error on the Adobe site or the Internet. Note that I am still using LiveCycle ES 8.2.1 with SP3.
    =============
    [10/18/11 13:46:22:644 CDT] 0000003e XMLFormServic W com.adobe.service.ProcessResource$ManagerImpl log ALC-XTG-102-001: [1712] Bad value: 'designer__defaultHyphenation.para.hyphenation' of the 'use' attribute of 'hyphenation' element ''. Default will be used instead.
    ============
    Can any one please advise.
    Thanks in advance,
    Han

    Yes,
    Upgrade it to SP1 with the hot-fix. Sun has a very big bug in the 4.16 SP1 software. You have to apply that hot-fix; otherwise you will hit a big bug in the userpassword attribute and some other security issues.

  • I'm confused about the apple ID transition from my aol screen name. Does it continue to use my aol email as my apple ID, converting it somehow or do I need to provide a new email address or just create a new username for the apple ID?

    I'm confused about the transition to an apple ID that doesn't use my aol email to sign in. The instructions are vague and ambiguous. Any help would be appreciated.

    OK, so if your current Apple ID is your AOL Username (like johndoe), then you need to log onto Manage your Apple ID and EDIT that AOL Username to a valid email address: Apple - My Apple ID
    If you have an AOL email address (like [email protected]), and you are not using that as another Apple ID, you can change the AOL Username Apple ID to that. Otherwise, you can change it to any valid email address (which you will have to verify when you change to it).
    Hope that clears it up. Post back if it doesn't!
    Cheers,
    GB

  • Info about the log traces in Activity Data Collector

    Hi,
    I have configured the activity data collector by setting the following properties in ADC and restarted the service
    Activate Data Collection :true
    Additional File Formats: (not set; left blank)
    Base File Name: Portal Activity
    Directory Name: portalActivityTraces
    File Encoding : UTF-8
    Hour in the day to close all files, in GMT : 0
    Main File Format : %Orfo.t(dd-MM-yyyy HH:mm:ss,GMT+5.5)%%Stab%%Orfo.ct%%Stab%%Orfo.in%%Stab%%Orfo.un%%Stab%%Orfo.bt%%Stab%%Orfo.pu%%Stab%%Orfo.rh(referer)%%Snl%
    Max Buffer size :500KB
    Max File Size : 10240 KB
    The files are created in the folder called "Portal Activity Traces", but the issue is with the names of the log files being created.
    Since I have not set any additional file formats, the log files use the main file format.
    The file names are like this:
    portalActivity_29893750_1254305061537.txt.open
    What does the timestamp "1254305061537" refer to? Please explain.
    Some files are of type text document and some are of type "open" -- what does this mean?
    If I set "Hour in the day to close all files" to 0, when does it write the log files? Is it at 12? After that, will it create a new file?
    And in the main file format I have set the time to GMT+5.5 (since IST is GMT+5.5), but I am not getting the proper time format.
    Please help me out.
    Thanks in advance.
    Regards,
    Sowmya
    Edited by: Sowmya B on Sep 30, 2009 1:43 PM

    Hi Prasanna,
    Thanks for the reply.
    Actually, there are about 3 to 4 files which are of type .open, and they were created long back. Are those files still getting populated? If we set the "hour to close all files" to 0, it should close the open files and create new (fresh) files for the next day, right?
    According to the documentation in the help portal, "Files may be closed before reaching this limit (Max File Size), as all files are closed at the hour specified in the Hour in the day to close all files property."
    Then how come some old files are still getting populated?
    Midnight means what time, in particular? Please explain.
    About the timestamp: is it the time the file was created, in some format?
    If anyone knows, please explain the format of the timestamp.
    Edited by: Sowmya B on Sep 30, 2009 2:13 PM
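    The thread never answers the timestamp question; a plausible guess (an assumption, not confirmed anywhere above) is that the numeric suffix is a Java epoch-milliseconds value, which is easy to check:

    import java.util.Date;

    public class TimestampCheck {
        public static void main(String[] args) {
            // Hypothetical: treat the file-name suffix as epoch milliseconds.
            long suffix = 1254305061537L;
            System.out.println(new Date(suffix));
            // Prints a date on 30 Sep 2009 (in the local time zone),
            // which matches the date this thread was posted.
        }
    }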

  • SQL Server 2012 Reorg Index Job Blew up the Log File

    We have a maintenance plan that nightly (1) runs dbcc checkdb on all databases, (2) reorgs indexes on all databases, compacting large objects, and (3) updates statistics, etc. There are three user databases: one large, one medium, one small. Usually it uses a little more than 80% of the medium database's log, which is set to 6,700 MB. Last night the reorg index step caused the log to grow to almost 14,000 MB and then blew up because the maximum file size was set to 14,000 MB; one of the alter index commands failed because it ran out of log space. (The dbcc checkdb step ran successfully.) Anyone have any idea what might cause this? There is one update process on this database; it runs at 3 AM. The maintenance plan runs at 9 PM and completes by 1 AM. The medium database has a 21,000 MB data file; reserved space is at about 10 GB. This is SQL 2012 Standard SP2 running on Windows Server 2012 Standard.

    I personally like to shrink the log files once the indexes have been rebuilt and before switching back to full recovery, because as I'm going to take a full backup afterwards, having a small log file reduces the size of the backup.
    Do you grow them afterwards, or do you let the application waste time on that during peak hours?
    I have not checked, but I see no reason why the backup size would depend on the size of the log file - it's the data in the data file you back up, not the log file.
    I would say this is highly dubious.
    Erland Sommarskog, SQL Server MVP, [email protected]
    Yeah, I let the application allegedly "waste" a few milliseconds a day autogrowing the log file. Come on, how long do you think it takes for a log file to grow a few GB on most storage systems nowadays? As long as you set an appropriate autogrow interval so your log file doesn't get too fragmented (full of VLFs), you'll be perfectly fine in most situations.
    Let's say you have a logical disk dedicated to log file storage, but it is shared across multiple databases within the instance. Having pre-allocated space for the log files means there will not be much free space left on the disk in case ANY database needs more space than the others due to a peak in transactional workload, even though other databases have unused space that could have been used.
    What if this same disk, for some reason, is also used to store the tempdb log file? Then all applications will become unstable.
    These are the main reasons I don't recommend people blindly crucify keeping log files small when possible. I know there are many people who disagree, and I'm aware of their reasons. Maybe we just had different experiences with this subject. Maybe people just haven't been through the nightmare of having a corrupted system database or a crashed instance because of insufficient log space in the middle of the day.
    And you are right about the size of the backup; I didn't put it correctly. It isn't the size of the backup that gets smaller (although the backup operation will run faster, having tested this myself), but the benefit of backing up a database with a small log file is that you won't need the extra space to restore it in a different environment such as a BI or DEV server, where recoverability doesn't matter and the database will be in simple recovery mode.
    Restoring the database will also be faster.
    Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this is an undocumented behavior and should not be relied upon.

  • Robocopy Log File - Skipped files - Interpreting the Log file

    Hey all,
    I am migrating our main file server, which contains approximately 8TB of data, a few large folders at a time. The folder below is about 1.2TB. Looking at the log file (which is over 330MB) I can see it skipped a large number of files; however, I haven't found text in the file that specifies what was skipped. Any idea what I should search for?
    I used the following Robocopy command to transfer the data:
    robocopy E:\DATA Z:\DATA /MIR /SEC /W:5 /R:3 /LOG:"Z:\Log\data\log.txt"
    The final log output is:
                    Total    Copied   Skipped  Mismatch    FAILED    Extras
         Dirs :    141093    134629      6464         0         0         0
        Files :   1498053   1310982    160208         0     26863       231
    Bytes : 2024.244 g  1894.768 g  117.468 g        0   12.007 g  505.38 m
        Times :   0:00:00  18:15:41                       0:01:00 -18:-16:-41
        Speed :            30946657 Bytes/sec.
        Speed :            1770.781 MegaBytes/min.
        Ended : Thu Jul 03 04:05:33 2014
    I assume some are files that are in use, but others may be permissions issues. Does the log file detail why a file was not copied?
    TIA
    Carl

    Hi.
    Files that are skipped are files that already exist. Files that are open, have permission problems, etc. will be listed under FAILED. As Noah said, use /V to see which files were skipped. From robocopy /?:
    :: Logging Options :
    /V :: produce Verbose output, showing skipped files.
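    For example, the original command rerun with verbose logging (same flags as before, plus /V):
    robocopy E:\DATA Z:\DATA /MIR /SEC /W:5 /R:3 /V /LOG:"Z:\Log\data\log.txt"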
    Oscar Virot

  • What are the log files created while running a mapping?

    Hi Sasi, I have a doubt about who actually generates the "Log Events". Is it the Log Manager or the Application Services?

    Hi Anitha,
    The Integration Service will generate two logs when the mapping runs: 1) Session log -- has the details of the task, session errors and load statistics. 2) Workflow log -- has the details of the workflow processing and workflow errors. The workflow log is generated when the workflow starts, and the session log is generated once the session initiates. For more detail please refer to the Informatica help docs. Normally the services generate their own logs; e.g., the IS and RS will log their activity. The following happens when a workflow is initiated [copied from the Informatica help docs]:
    1. The Integration Service writes binary log files on the node. It sends information about the sessions and workflows to the Log Manager.
    2. The Log Manager stores information about workflow and session logs in the domain configuration database. The domain configuration database stores information such as the path to the log file location, the node that contains the log, and the Integration Service that created the log.
    3. When you view a session or workflow in the Log Events window, the Log Manager retrieves the information from the domain configuration database to determine the location of the session or workflow logs.
    4. The Log Manager dispatches a Log Agent to retrieve the log events on each node to display in the Log Events window.
    Thanks,
    Sasiramesh

  • The LOG file \work\dev_jcontrol is not present

    The LOG file \work\dev_jcontrol is not present, even though I have restarted the server:
    stopsap
    startsap <j2ee_instanse>
    Any idea?

    Hi,
    The cluster ID is just a combination of the parameters below.
    In our case, my source system (ABC) was refreshed from another system (XYZ) recently, so while installing the target system (DEF), I changed the source system details from ABC to XYZ in the file below and retried the SAPinst screen. The system copy completed successfully.
    Open the file <installation directory>/jmt/cluster_id_switch.properties and edit the lines
    src.ci.sid=
    src.ci.instance.number=
    src.ci.instance.name=
    src.ci.host=
    If in your case the source system was not refreshed recently, you may try the functional host name or OS host name etc. for the above parameters.
    If this does not work, check the details of SAP Note 966752, "Java system copy problems with the Java Migration Toolkit", which says almost the same thing, but I could not follow it, as the statements related to the box number are a bit confusing and contradictory.
    Cheers !!!
    Ashish
