Log files not being removed.

Hello,
I've upgraded an application from Berkeley DB 5.1.25 to 5.3.21, and since then log files are no longer automatically removed. This is the only change to the application, which is written in C.
The application's environment is created with the DB_LOG_AUTO_REMOVE flag:
dbenv->log_set_config(dbenv, DB_LOG_AUTO_REMOVE, TRUE);
The application has a thread that periodically checkpoints the data:
dbenv->txn_checkpoint(dbenv, 0, 0, 0);
So far, so good: with version 5.1.25 this was enough to remove unused log files (I don't need to be able to do catastrophic recovery). But it no longer works with version 5.3.21.
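For reference, here is roughly how the environment is set up and checkpointed, as a minimal sketch (error handling omitted; the flag set, path handling, and interval are illustrative, not the exact application code):

#include <db.h>
#include <unistd.h>

static void open_env_and_checkpoint(const char *env_home)
{
    DB_ENV *dbenv;

    db_env_create(&dbenv, 0);
    /* Ask the library to remove log files as soon as they are unneeded. */
    dbenv->log_set_config(dbenv, DB_LOG_AUTO_REMOVE, 1);
    /* Illustrative flags for a standalone transactional environment. */
    dbenv->open(dbenv, env_home,
                DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
                DB_INIT_MPOOL | DB_INIT_TXN | DB_RECOVER | DB_THREAD, 0);

    /* Body of the periodic checkpoint thread. */
    for (;;) {
        dbenv->txn_checkpoint(dbenv, 0, 0, 0);
        sleep(60); /* hypothetical interval */
    }
}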
If I run db_archive (no options), it shows nothing, suggesting that all log files are still needed. But if I run db_hotbackup on the database, all but the last log file are removed (in the backup), as desired.
Note: normally I don't want to run db_archive or any external tool to remove unused log files. I expect what is inside the application to be enough to remove them.
Is this a known issue, did something change, or can you suggest something to look for?
Thanks for your help
José-Marcio

Thank you for giving us a test program; it helped tremendously in fully understanding what you are doing. In 5.3 we fixed a bug in the way log files are archived in an HA (replicated) environment, and what you are running into is a consequence of that fix. Your test program uses DB_INIT_REP, which is the flag that tells the library you want an HA environment. With HA there is a master and some number of read-only clients, and by default we treat the initiating environment as the master; that is what is happening in your case. In an HA environment, we cannot archive log files until we can be assured that the clients have applied the contents of those files. Our belief is that you are not really running in an HA environment and do not need the DB_INIT_REP flag. In our initial testing, where we said it worked for us, we did not use the DB_INIT_REP flag, as there was no mention of replication being needed in the post.
Recommendation: please remove the use of the DB_INIT_REP flag, or properly set up an HA environment (details in our docs).
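For illustration, a standalone (non-replicated) open would look roughly like this; the exact flag set is an assumption about your application:

#include <db.h>

/* No DB_INIT_REP here, so log auto-removal is not held back waiting
 * for replication clients to apply the log contents. */
static int open_standalone(DB_ENV *dbenv, const char *env_home)
{
    u_int32_t flags = DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
                      DB_INIT_MPOOL | DB_INIT_TXN | DB_THREAD;
    return dbenv->open(dbenv, env_home, flags, 0);
}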
thanks
mike

Similar Messages

  • Log4j log file not being created

    Using WebSphere for a web app. At first I was getting the error "log4j:WARN No appenders could be found for logger...", so I created the property file and, I assume, correctly referenced it. The error went away and my logging messages show up in the WebSphere console, but the .log file specified in my log4j.properties is not being written to; everything goes only to SystemOut.log.
    If I remove the ROOT.File line it still does not create the file (I've done a search on the IBM directory).
    #Default log level to ERROR. Other levels are INFO and DEBUG.
    log4j.rootLogger=INFO,ROOT
    log4j.appender.ROOT=org.apache.log4j.RollingFileAppender
    # Use forward slashes (or escaped backslashes): "\m" is treated as an escape in properties files.
    log4j.appender.ROOT.File=c:/myapplication.log
    log4j.appender.ROOT.MaxFileSize=1000KB
    #Keep 5 old files around.
    log4j.appender.ROOT.MaxBackupIndex=5
    log4j.appender.ROOT.layout=org.apache.log4j.PatternLayout
    #Format almost same as WebSphere's common log format.
    log4j.appender.ROOT.layout.ConversionPattern=[%d] %t %c %-5p - %m%n
    #Optionally override log level of individual packages or classes
    log4j.logger.com.webage.ejbs=INFO       
    private static final Logger logger = Logger.getLogger(LoginAction.class);

    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        initializeLogger();
        // ...
    }

    private void initializeLogger() {
        // Trying BasicConfigurator just to get it to work, because by default
        // log4j should find WEB-INF/classes/log4j.properties on the classpath.
        org.apache.log4j.BasicConfigurator.configure();
        /*try {
            String log4jUrl = servlet.getServletContext().getInitParameter(
                    "LOG4J_XML");
            if (!(log4jUrl == null || log4jUrl.equals("")))
                DOMConfigurator.configure(servlet.getServletContext()
                        .getResource(log4jUrl));
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (FactoryConfigurationError e) {
            e.printStackTrace();
        }*/
    }
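    One thing I may try next, sketched here under the assumption that the default classpath lookup is what is failing (log4j 1.x API; the path is a guess at where the file lives):

    import javax.servlet.ServletContext;
    import org.apache.log4j.PropertyConfigurator;

    public final class Log4jInit {
        // Hypothetical helper: load log4j.properties from an explicit
        // location instead of relying on the default classpath lookup.
        static void configure(ServletContext ctx) {
            String path = ctx.getRealPath("/WEB-INF/classes/log4j.properties");
            PropertyConfigurator.configure(path);
        }
    }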

    OK, changed to an XML file and found a few things out.
    Now when I debug, the logger that was created has an empty level; but if I look at the parent logger, it is correctly pulling the root logger from my XML (if I change the priority attribute, it changes when debugging the code).
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true">
        <!-- this appender would be the same as having a System.out -->
        <appender name="console" class="org.apache.log4j.ConsoleAppender">
            <param name="Target" value="System.out"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%-5p %c{1} - %m%n"/>
            </layout>
        </appender>
        <appender name="rollingFileAppender" class="org.apache.log4j.RollingFileAppender">
            <!-- name and location of the file to log to -->
            <param name="File" value="c:/appLog.log"/>
            <!-- the maximum size the file will be before it rolls the file -->
            <param name="MaxFileSize" value="1000kb"/>
            <!-- the number of backups you want to maintain -->
            <param name="MaxBackupIndex" value="5"/>
            <!--
                This is the layout of your messages; you can do a lot with this.
                See the javadocs for the PatternLayout class for an explanation
                of the different values you can have.
            -->
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%t %-5p %c{2} - %m%n"/>
            </layout>
        </appender>
        <root>
            <priority value="error"/>
            <appender-ref ref="rollingFileAppender"/>
            <appender-ref ref="console"/>
        </root>
    </log4j:configuration>
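    If the XML also has to be loaded explicitly, the same idea applies (log4j 1.x; the path is again a guess):

    import javax.servlet.ServletContext;
    import org.apache.log4j.xml.DOMConfigurator;

    public final class Log4jXmlInit {
        // Hypothetical helper: explicitly load the XML configuration above.
        static void configure(ServletContext ctx) {
            DOMConfigurator.configure(ctx.getRealPath("/WEB-INF/classes/log4j.xml"));
        }
    }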

  • Cannot publish get error message - log file not being created

    When trying to publish a FlashHelp project, I get an error message window that says "Publishing has been cancelled. Failed to create file: (project name).log". When I click OK in the message window, the publishing process stops. However, if I look in the SSL folder, I see the log file. It is a text file.
    I had this problem in January 09, but it seemed to be an issue with the password and path in the FTP command window. I fixed it and it worked fine. However, I haven't published since the end of January. Now, when I try to publish, I get the same error message. I checked and reviewed the FTP window fields and they are fine. But I'm still getting the error message and can't publish. Why?
    I need to get this problem fixed ASAP and ensure that it doesn't occur again. What's strange is that I've got 3 other projects and this is the only one that gets this error message.

    Yes, the generation worked. I checked the log file from the time it worked before, and it seems to be the same as the log file that is generated when I get the error message.
    I created a new FlashHelp layout and got the same error message. What's really weird is that there is a log file in the SSL folder, but when you click OK in the error message, it stops the publish function.
    Last time I had to blow away the .cpd file, as if this was a corrupt project. But that gets to be painful: I use templates to put change dates in the footers of topics, and templates get lost when you blow away the .cpd.
    Any other thoughts?

  • Files not being removed from hard drive

    I've deleted photos from the Lightroom (5) catalog, but the files haven't been deleted from the folder on the (external) hard drive. What have I done wrong?

    I'll throw out a guess
    The folders containing these photos do not have WRITE permission, or are marked as read-only

  • Empty/underutilized log files not removed

    I have an application that runs the cleaner and the checkpointer explicitly (instead of relying on the database to do it).
    Here are the relevant environment settings: je.env.runCheckpointer=false, je.env.runCleaner=false, je.cleaner.minUtilization=5, je.cleaner.expunge=true.
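    For reference, the maintenance pass looks roughly like this (a minimal sketch; the helper shape is an assumption, but cleanLog and checkpoint are the JE calls used):

    import com.sleepycat.je.CheckpointConfig;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;

    public final class ManualCleaning {
        // One maintenance pass: clean until no more files qualify, then
        // force a checkpoint so cleaned files become deletable
        // (je.cleaner.expunge=true deletes them outright).
        static void maintain(Environment env) throws DatabaseException {
            while (env.cleanLog() > 0) {
                // cleanLog() returns the number of files cleaned this call.
            }
            CheckpointConfig force = new CheckpointConfig();
            force.setForce(true);
            env.checkpoint(force);
        }
    }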
    When running the application, I noticed that a few dozen log files were removed early on, but later (even though the cleaner was executed at regular intervals) no more log files were removed.
    I have run the DbSpace utility on the environment and found the following result:
    File      Size (KB)  % Used
    --------  ---------  ------
    00000033      97656       0
    00000034      97655       0
    00000035      97656       0
    00000036      97656       0
    00000037      97656       0
    00000038      97655       2
    00000039      97656       0
    0000003a      97656       0
    0000003b      97655       0
    0000003c      97655       0
    0000003d      97655       0
    0000003e      97655       0
    0000003f      97656       0
    00000040      97655       0
    00000041      97656       0
    00000042      97656       0
    00000043      97656       0
    00000044      97655       0
    00000045      97655       0
    00000046      97656       0
    This goes on for a long time. I had the database tracing enabled at CONFIG level. Here are the last lines of the log just before the last log file (0x32) is removed:
    2009-05-06 08:41:51:111:CDT INFO CleanerRun 49 on file 0x30 begins backlog=2
    2009-05-06 08:41:52:181:CDT SEVERE CleanerRun 49 on file 0x30 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206347 nINsObsolete=6365 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199971 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:52:182:CDT INFO CleanerRun 50 on file 0x31 begins backlog=1
    2009-05-06 08:41:53:223:CDT SEVERE CleanerRun 50 on file 0x31 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205475 nINsObsolete=6319 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199144 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:53:224:CDT INFO CleanerRun 51 on file 0x32 begins backlog=0
    2009-05-06 08:41:54:292:CDT SEVERE CleanerRun 51 on file 0x32 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205197 nINsObsolete=6292 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198893 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:24:300:CDT INFO CleanerRun 52 on file 0x33 begins backlog=1
    2009-05-06 08:42:24:546:CDT CONFIG Checkpoint 963: source=api success=true nFullINFlushThisRun=13 nDeltaINFlushThisRun=0
    2009-05-06 08:42:24:931:CDT SEVERE Cleaner deleted file 0x32
    2009-05-06 08:42:24:938:CDT SEVERE Cleaner deleted file 0x31
    2009-05-06 08:42:24:946:CDT SEVERE Cleaner deleted file 0x30
    Here are a few log lines right after the last log message with cleaner deletion (until the next checkpoint):
    2009-05-06 08:42:25:339:CDT SEVERE CleanerRun 52 on file 0x33 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204164 nINsObsolete=6277 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197865 nLNsCleaned=11 nLNsDead=0 nLNsMigrated=0 nLNsMarked=11 nLNQueueHits=9 nLNsLocked=0
    2009-05-06 08:42:25:340:CDT INFO CleanerRun 53 on file 0x34 begins backlog=0
    2009-05-06 08:42:26:284:CDT SEVERE CleanerRun 53 on file 0x34 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=203386 nINsObsolete=6281 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197091 nLNsCleaned=2 nLNsDead=2 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:56:290:CDT INFO CleanerRun 54 on file 0x35 begins backlog=4
    2009-05-06 08:42:57:252:CDT SEVERE CleanerRun 54 on file 0x35 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205497 nINsObsolete=6312 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199164 nLNsCleaned=10 nLNsDead=3 nLNsMigrated=0 nLNsMarked=7 nLNQueueHits=6 nLNsLocked=0
    2009-05-06 08:42:57:253:CDT INFO CleanerRun 55 on file 0x39 begins backlog=4
    2009-05-06 08:42:58:097:CDT SEVERE CleanerRun 55 on file 0x39 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204553 nINsObsolete=6301 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198238 nLNsCleaned=2 nLNsDead=0 nLNsMigrated=0 nLNsMarked=2 nLNQueueHits=1 nLNsLocked=0
    2009-05-06 08:42:58:098:CDT INFO CleanerRun 56 on file 0x3a begins backlog=3
    2009-05-06 08:42:59:261:CDT SEVERE CleanerRun 56 on file 0x3a invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204867 nINsObsolete=6270 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198586 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:59:262:CDT INFO CleanerRun 57 on file 0x36 begins backlog=2
    2009-05-06 08:43:02:185:CDT SEVERE CleanerRun 57 on file 0x36 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206158 nINsObsolete=6359 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199786 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:02:186:CDT INFO CleanerRun 58 on file 0x37 begins backlog=2
    2009-05-06 08:43:03:243:CDT SEVERE CleanerRun 58 on file 0x37 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206160 nINsObsolete=6331 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199817 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:03:244:CDT INFO CleanerRun 59 on file 0x3b begins backlog=1
    2009-05-06 08:43:04:000:CDT SEVERE CleanerRun 59 on file 0x3b invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206576 nINsObsolete=6385 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200179 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:04:001:CDT INFO CleanerRun 60 on file 0x38 begins backlog=0
    2009-05-06 08:43:08:180:CDT SEVERE CleanerRun 60 on file 0x38 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205460 nINsObsolete=6324 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=194125 nLNsCleaned=4999 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=4999
    2009-05-06 08:43:08:224:CDT INFO CleanerRun 61 on file 0x3c begins backlog=0
    2009-05-06 08:43:09:099:CDT SEVERE CleanerRun 61 on file 0x3c invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206589 nINsObsolete=6343 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200235 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:24:548:CDT CONFIG Checkpoint 964: source=api success=true nFullINFlushThisRun=12 nDeltaINFlushThisRun=0
    I could not see anything fundamentally different between the log messages from when log files were removed and when they were not. The DbSpace utility confirms that there are plenty of log files under the minimum utilization, so I can't quite explain why log file removal stopped all of a sudden.
    Any help would be appreciated (JE version: 3.3.75).

    Hi Bertold,
    My first guess is that one or more transactions have accidentally not been ended (committed or aborted), or cursors not closed.
    A clue is the nLNsLocked=4999 in the second set of trace messages. This means that 4999 records were locked by your application and were unable to be migrated by the cleaner. The cleaner will wait until these record locks are released before deleting any log files. Records locks are held by transactions and cursors.
    If this doesn't ring a bell and you need to look further, one thing you can do is print the EnvironmentStats periodically (System.out.println(Environment.getStats(null))). Take a look at the nPendingLNsProcessed and nPendingLNsLocked. The former is the number of records the cleaner attempts to migrate because they were locked earlier. The latter is the number that are still locked and cannot be migrated.
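    For example, a minimal sketch of that periodic diagnostic (passing null to getStats uses the default StatsConfig; the printed stats include the two counters above):

    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentStats;

    public final class CleanerDiagnostics {
        // Dump all environment stats; look for nPendingLNsProcessed and
        // nPendingLNsLocked in the output.
        static void dump(Environment env) throws DatabaseException {
            EnvironmentStats stats = env.getStats(null);
            System.out.println(stats);
        }
    }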
    --mark

  • NImax.exe has generated errors. You must restart the program, a log file is being created.

    NImax.exe has generated errors. You must restart the program, a log file is being created.
    This error occurs just after bootup, with no programs started except Windows. Running Windows 2000 (latest version) on a brand new computer (Pentium 4, 3.0 GHz, 500 MB RAM).
    Can anyone help me with this problem? It happened after installing LabVIEW 7.0 Express. I installed twice and the same problem occurred with both installs. LabVIEW 7.0 is the only thing installed right now. I also selected the "Visual Basic Support" install from the driver CDs.
    One other thing I just noticed: System Info reports 2 processors and I think I only have one.
    TIA

    Happy update.
    After ignoring this problem for a while and installing some NI DAQ boards, the problem is not happening anymore.
    I would urge NI to investigate further though. It messed me up for a couple of days, and I still don't know why it happened.
    Much thanks to those who helped.
    JJ
    "JJF" wrote in message
    news:qB6Hb.48702$VB2.90660@attbi_s51...
    > One other point, I found out this is a message generated by drwtsn32. Can
    > get rid of the error by unchecking "show visual feedback on errors" box,
    but
    > don't like that fix. Still need help.
    >
    > Thanks,
    >
    > JJ
    >
    > "JJF" wrote in message
    > news:%JRGb.138921$8y1.419649@attbi_s52...
    > > Hi Nirmal and all,
    > >
    > > Happy Holidays to you too, and thanks for the reply.
    > >
    > > Making progress. Did all the uninstalling and registry editing as you
    > > suggested. Also, updated Win2K at Microsoft.com as suggested.
    > > Re-installed with default settings of LV express 7.0. Now I only get
    > the
    > > error when I exit NImax.
    > >
    > > Also, my p2p home network is not that hot. Have it set up as a
    workgroup.
    > > Sometimes when I boot up I have file access between the two computers
    and
    > > sometimes windows explorer can't find the workgroup network path
    (between
    > > the two computers). Both computers can always access the internet
    though.
    > > Using a 4 port Linksys cable/dsl router on a cable modem. Using XP
    home
    > > edition on base computer (the one set up for the isp) and Win2K on the
    > other
    > > computer.
    > >
    > > Also, any idea where this log file is going? Thought it was part of the
    > > event viewer but the errors in the event viewer don't seem to correspond
    > to
    > > the NImax.exe logging event now. There is a file in
    > > "winnt\sytem32\config\software.log" that seems to change at about the
    same
    > > time but I can't access the file because it is in use by the system.
    > >
    > > Using NImax ver 3.02.3005. Do you know if I can download "NI
    measurement
    > > and automation explorer" and/or the driver cd's from NI.com? Maybe I
    have
    > a
    > > flaky disk or something. I have a feeling it may also have to do with
    the
    > > "on again/off again" network neighborhood connection problem.
    > >
    > >
    > > Thanks again,
    > >
    > > JJ
    > >
    > >
    > >
    > > "Nirmal Sharma" wrote in message
    > > news:506500000005000000AE470100-1068850981000@exch​ange.ni.com...
    > > > Hi,
    > > > Happy cristmas & new year for you & ur brand new pc...
    > > >
    > > > How are you uninstalling & then installing LV in your pc ?
    > > >
    > > > I suggest to remove complete LV (Remove All option) from your
    > > > computer. Once uninstallation is completed, go to windows registry -
    > > > by windows start -> run -> regedit -> Enter
    > > >
    > > > Goto HKEY_LOCAL_MACHINE-> SOFTWARE -> National Instruments - Delete
    > > > this folder (National Instruments folder)
    > > >
    > > > Remove any other foloder/file related to NI's software. Be very
    > > > cautious while deleting files from windows registry bcoz wrong file
    > > > deletion may hang your whole system.
    > > >
    > > > Restart your computer..hope it should bootup without any errors.
    > > >
    > > > Now as answered by Alexander, update windows.
    > > >
    > > > After updating, bootup your system. If it boots up without any error
    > > > message, install LV with the typical (default installation).
    > > >
    > > > Hope this helps. Your feedbacks are welcome.
    > > >
    > > > Best Regards,
    > > > Nirmal Sharma
    > > > India
    > >
    > >
    >
    >

  • PDF files not being displayed correctly, instead I get a blank screen with some sort of small pinned icon in the centre.  It was working fine until today HELP!


    What is your operating system?  Reader version?  Are these local or online PDFs?  If online, in what browser?
    Can you post a screenshot: https://forums.adobe.com/thread/1070933

  • BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED

    Hello,
    To generate the log report, the /VIRSA/ZVFATBAK program is scheduled on an hourly basis, but sometimes the report doesn't get generated, even though the background job shows as successfully finished.
    If we manually view the log report for the FFID, the error message below is displayed:
    " BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED"
    Can anyone guide me to solve the issue.
    Thanks in advance.
    Best Regards,
    Prashant Dubey

    Hi,
    First check the status of the job by selecting it and checking the job status (Ctrl+Shift+F12).
    Since it is a periodically scheduled job, there will be a RELEASED job after every active job.
    Try to copy it into another job using the copy option and give it a new name that you will remember. The moment you copy it, you will find the copied job in SCHEDULED status. From there, try to run it again on an hourly basis.
    After copying the job, unschedule the old released job; otherwise two will run at a time.
    Regards,

  • Anybody know what's up with the menu.xml.files not being available for DW after installing on a new comp?


    Usually that error can be cleared up by renaming the personal config folder. Turn on your OS's hidden files and then go to...
    C: > Users > your username > AppData > Roaming > Dreamweaver (version) > your language > configuration
    Rename the configuration folder configuration-old and start DW up. That should create an entirely new configuration folder and correct the menu.xml file.

  • I ran into the issue of the 32-bit file not being recognized in LR 5.6

    I ran into the issue of a 32-bit file not being recognized in LR 5.6 after it has been saved in PS. I tried every type of file I could save a file as in CC, but none of them would be recognized, or it said the file was corrupt. I had exported 3 files to Merge to HDR Pro in Photoshop as 32-bit files. I then saved in PS and returned to LR 5.6, but it said the bit depth was not supported. Is this a known issue with LR 5.6 / PS CC?

    Hi,
    this might help: Video Tutorial – 32-bit HDR TIFF files in LR 4.1 « Julieanne Kost's Blog (Only 32 bit TIFF is supported).

  • 3?'s: Message today warning lack of memory when using Word (files in Documents) something about "idisc not working" 2. Message week ago "Files not being backed up to Time Capsule"; 3. When using Mac Mail I'm prompted for password but none work TKS - J

    3 questions:
    1. Message today warning of a lack of memory when using Word (files in Documents), something about "idisc not working".
    2. Message a week ago: "Files not being backed up to Time Capsule".
    3. When using Mac Mail I'm prompted for a password, but none work.
    Thanks - J

    Thanks Allan for your quick response to my amateur questions.
    Allan: I'm running Mac OS X version 10.6.8. The processor is a 2.4 GHz Intel Core i5, with 4 GB of 1067 MHz DDR3 memory.
    I just "Updated Software" as prompted.
    Thanks for helping me!    - John Garrett
    PS.
    Hardware Overview:
      Model Name:          MacBook Pro
      Model Identifier:          MacBookPro6,2
      Processor Name:          Intel Core i5
      Processor Speed:          2.4 GHz
      Number Of Processors:          1
      Total Number Of Cores:          2
      L2 Cache (per core):          256 KB
      L3 Cache:          3 MB
      Memory:          4 GB
      Processor Interconnect Speed:          4.8 GT/s
      Boot ROM Version:          MBP61.0057.B0C
      SMC Version (system):          1.58f17
      Serial Number (system):          W8*****AGU
      Hardware UUID:          *****
      Sudden Motion Sensor:
      State:          Enabled
    <Edited By Host>

  • GIF files not being cached for toolbar (WEB Forms)

    I realized that the GIF files we are using in our toolbars are not being cached by the browser. I analyzed the XLF.LOG file, and the GIF files always have access code "200" (not cached). The REGISTRY.DAT file has the same problem; only the JAR file has access code "304" (cached). Although these GIF files are small (~900 bytes), it takes many seconds to open each form when someone runs our application over a low-speed line (many of our forms have toolbars with more than 30 icons). Is there any special configuration I have to do, or will this be corrected in future versions?
    XLF.LOG (first and second execution):
    #Version: 1.0
    #Software: Oracle WRB Log Server
    #Fields: clf
    1.0.5.57 - - [24/Jan/2000:07:45:48 -0300] "GET /htm/sales HTTP/1.0" 200 1440
    1.0.5.57 - - [24/Jan/2000:07:45:51 -0300] "GET /htm/ HTTP/1.0" 200 760
    1.0.5.57 - - [24/Jan/2000:07:45:51 -0300] "GET /forms_code/f60all.jar HTTP/1.0" 304 0
    1.0.5.57 - - [24/Jan/2000:07:45:53 -0300] "GET /forms_code/javax/swing/JInternalFrame.class HTTP/1.0" 404 99
    1.0.5.57 - - [24/Jan/2000:07:45:55 -0300] "GET /forms_code/oracle/forms/registry/Registry.dat HTTP/1.0" 200 4122
    1.0.5.57 - - [24/Jan/2000:07:46:06 -0300] "GET /img/PASSWORD.gif HTTP/1.0" 200 853
    1.0.5.57 - - [24/Jan/2000:07:46:06 -0300] "GET /img/HELP.gif HTTP/1.0" 200 898
    1.0.5.57 - - [24/Jan/2000:07:46:06 -0300] "GET /img/KEYS.gif HTTP/1.0" 200 864
    1.0.5.57 - - [24/Jan/2000:07:46:06 -0300] "GET /img/EXIT.gif HTTP/1.0" 200 888
    1.0.5.57 - - [24/Jan/2000:07:47:28 -0300] "GET /htm/sales HTTP/1.0" 304 0
    1.0.5.57 - - [24/Jan/2000:07:47:30 -0300] "GET /htm/ HTTP/1.0" 304 0
    1.0.5.57 - - [24/Jan/2000:07:47:30 -0300] "GET /forms_code/f60all.jar HTTP/1.0" 304 0
    1.0.5.57 - - [24/Jan/2000:07:47:32 -0300] "GET /forms_code/javax/swing/JInternalFrame.class HTTP/1.0" 404 99
    1.0.5.57 - - [24/Jan/2000:07:47:32 -0300] "GET /forms_code/oracle/forms/registry/Registry.dat HTTP/1.0" 200 4122
    1.0.5.57 - - [24/Jan/2000:07:47:45 -0300] "GET /img/PASSWORD.gif HTTP/1.0" 200 853
    1.0.5.57 - - [24/Jan/2000:07:47:45 -0300] "GET /img/HELP.gif HTTP/1.0" 200 898
    1.0.5.57 - - [24/Jan/2000:07:47:45 -0300] "GET /img/KEYS.gif HTTP/1.0" 200 864
    1.0.5.57 - - [24/Jan/2000:07:47:45 -0300] "GET /img/EXIT.gif HTTP/1.0" 200 888

    Have you tried turning on the "File Caching" setting under the "Server" branch for your Listener under OAS?

  • SWF File not being generated by compiler

    I am using Flash Builder 4.5.
    One of the modules that compiles without error is still not being bound into the runtime image (in this case, debug).
    There are no compiler errors (auto-build is switched ON), but a SWF file is not being generated.
    I have tried removing the reference to the module in the Project -> Properties -> Modules window and then adding it back in, to no avail.
    This was working up until I made a change to the module in question and then saved it.

    I would check all imported images in the library. If you cannot see the image in your library preview; there is a good chance that there is an issue and it will need to be reimported.

  • Large files not being picked by FTP adapter

    Hello,
    In our integration scenario, files from the AS400 (legacy system) are placed at an FTP location, from where XI picks up the file for processing. We use an FTP communication channel.
    The file, which has about 8000 records, is in binary format. When the file was placed, it was not picked up within the configured polling period of 60 secs. With a smaller file of about 300 manually inserted records, the file is picked up by XI and processed without error. With the 8000 records, the file size was 2.5 MB.
    Thank you

    Please provide a J2EE trace at logging severity "Debug" for the location
    "com.sap.aii.adapter.file". Please do the following:
    - increase logging severity to "Debug"
    - set log levels to ERROR for all components
    - ONLY set log level to DEBUG for "com.sap.aii.adapter.file"
    - reproduce the problem and gather the defaultTrace.trc immediately
      after you reproduce
    - remember to reset the logging severity after you have reproduced as
      leaving it on "Debug" can cause performance problems in your system.
    Also the ftp server logs will be needed in this case.

  • Archived log file not displaying

    While navigating around the "home" page for OCS as an administrator...I was trying to run a report under Reports>Conferences>Diagnostics.
    The link says:
    Click the link below to view comprehensive conference diagnostics. To see the log file correctly, use Internet Explorer 6.0 or higher.
    I am using IE 6 and the page shows up as being done...but it is blank. Any idea what is wrong? The URL reads:
    https://mywebserver/imtapp/logs/imtLogs.jsp?fileName=D:/ocs_onebox/mtier/imeeting/logs/sessions/12.20.2004/10000-clbsvr_OCS_home_mid.mywebserver.imt-collab.0-06_34_01.xml
    The file is there on the filesystem.
    TIA.

    "Stages" means transformations in the data flow...
    Transformation names are not displayed correctly in the log file.
    For example, if I give a Table Comparison transformation the name "TC_table_name", only "Table Comparison" is displayed in the log file.
