Problems with transaction-logs on cache engines

Good Day All,
I have a Cache Engine 550 here, and its transaction log file, working.log, got quite large.
I was not able to export it to my FTP server, so I logged into the Cache Engine via FTP and downloaded the file to a PC.
I then deleted the working.log file on the Cache Engine and rebooted it.
The working.log file was not re-created as I had hoped it might be.
I have created a file called working.log in the correct directory, but this file does not seem to get updated, so this must not be right either.
Any suggestions?
regards,
amanda

Hi Zach,
Thank you so much for writing back. I am running an archaic version of the software... I can check tomorrow. As to the logging: I had not enabled transaction logging in the first place, so it was a silly config error...
:) amanda

Similar Messages

  • Performance problem with transaction log

    We are having a performance problem in an SAP BW 3.5 system running on MS SQL Server 2000. The system is sized at 63,574 MB. The transaction log fills up after loading data into a transactional cube or after doing a selective deletion. The size of the transaction log is currently 7,587 MB.
    The Basis team feels that when performing either a load or a selective deletion, SQL Server views it as a single transaction and doesn't commit until every record is written. As a result, the transaction log fills up, ultimately bringing the system down.
    The system log shows a DBIF error while the transaction log fills up, as follows:
    Database error 9002 at COM
    > [9002] the log file for database 'BWP' is full. Back up the
    > Transaction log for the database to free up some log space.
    Function COMMIT on connection R/3 failed
    Perform rollback
    Can we configure the database to commit more frequently? Are there any parameters we could change to reduce the packet size? Is there some setting to be changed in SQL Server?
    Any help will be appreciated.

    If you have disk space available, you can allocate more space to the transaction log.
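
    For reference, a minimal JDBC sketch of how one might check log usage, back up the log, and then grow it if disk space allows. This is only a sketch: the JDBC URL, credentials, backup path, target size, and the logical log file name BWP_log are placeholders rather than values from this system, and the same T-SQL can just as well be run directly from Query Analyzer on SQL Server 2000.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch only: inspect and enlarge the BWP transaction log.
    // URL, credentials, paths, sizes and the logical file name "BWP_log" are placeholders.
    public class BwpLogMaintenance {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost:1433;databaseName=BWP", "user", "password");
                 Statement st = con.createStatement()) {

                // Show the size and the percentage used of every database's transaction log.
                try (ResultSet rs = st.executeQuery("DBCC SQLPERF(LOGSPACE)")) {
                    while (rs.next()) {
                        System.out.printf("%s: %s MB, %s%% used%n",
                                rs.getString(1), rs.getString(2), rs.getString(3));
                    }
                }

                // Back up the log (freeing its inactive portion), then allocate more space.
                st.execute("BACKUP LOG BWP TO DISK = 'D:\\backup\\BWP_log.trn'");
                st.execute("ALTER DATABASE BWP MODIFY FILE (NAME = BWP_log, SIZE = 15000MB)");
            }
        }
    }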

  • OAV-9016 - Audit Vault 12.1.1 error creating audit trail with TRANSACTION LOG

    Hey guys,
    I bumped into this problem when trying to start an audit trail with TRANSACTION LOG.
    Oracle Audit Vault and Database Firewall 12.1.1.1
    Oracle 11gR2 RAC two nodes, OEL x64.
    Connection String:
    jdbc:oracle:thin:@//192.168.1.139:1521/orcl
    I have already run the SQL setup for a REDO_COLL user.
    Any ideas?
    I have created a secure target for each node.
    (host01)(oracle@orcl1):log> pwd
    /u01/app/oracle/agent/av/log
    (host01)(oracle@orcl1):log> cat av.collfwk-8311-0.log
    [2013-12-12T17:16:49.855-02:00] [collfwk] [ERROR] [] [] [tid: 22] [ecid: 192.168.1.109:27132:1386867392018:0,0] OAV-9016: Target database global_name is not correct. global_name must include the domain for transaction log collection. Please configure the target database with the correct global_name.CollectionFactory : createCollection : Exception while creating collection. [[
    Target database global_name is not correct. global_name must include the domain for transaction log collection. Please configure the target database with the correct global_name.
            at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.checkDBName(RedoCollector.java:1480)
            at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.verifySource(RedoCollector.java:1278)
            at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.startCollector(RedoCollector.java:215)
            at oracle.av.platform.agent.collfwk.impl.redo.RedoCollectorManager.startTrail(RedoCollectorManager.java:199)
            at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:504)
            at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:354)
            at oracle.av.platform.agent.StartTrailCommandHandler.processMessage(StartTrailCommandHandler.java:63)
            at oracle.av.platform.agent.AgentController.processMessage(AgentController.java:325)
            at oracle.av.platform.agent.AgentController$MessageListenerThread.run(AgentController.java:1859)
            at java.lang.Thread.run(Thread.java:722)
    (host01)(grid@+ASM1):~> lsnrctl status
    LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 12-DEC-2013 17:27:34
    Copyright (c) 1991, 2011, Oracle.  All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    STATUS of the LISTENER
    Alias                     LISTENER
    Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
    Start Date                12-DEC-2013 16:58:03
    Uptime                    0 days 0 hr. 29 min. 31 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
    Listener Log File         /u01/app/grid/diag/tnslsnr/host01/listener/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.109)(PORT=1521)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.139)(PORT=1521)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
      Instance "+ASM1", status READY, has 1 handler(s) for this service...
    Service "orcl" has 1 instance(s).
      Instance "orcl1", status READY, has 1 handler(s) for this service...
    Service "orclXDB" has 1 instance(s).
      Instance "orcl1", status READY, has 1 handler(s) for this service...
    The command completed successfully
    (host01)(grid@+ASM1):~>
    (host01)(grid@+ASM1):~> cat /u01/app/11.2.0/grid/network/admin/listener.ora
    LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
    LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))                # line added by Agent
    LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))                # line added by Agent
    LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))                # line added by Agent
    ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON                # line added by Agent
    ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON                # line added by Agent
    ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON                # line added by Agent
    ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
    (host01)(grid@+ASM1):~>

    Hi,
    Just run the script $AV_AGENT/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql USER_NAME REDO_COLL.
    This will grant the user some privileges and roles, such as DBA and CREATE DATABASE LINK.
    I hope this answers your question.
    Thanks,
    Ahmed Moustafa
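
    For what it's worth, the OAV-9016 text itself points at the target database's GLOBAL_NAME rather than at missing grants: it must contain a domain part. Below is a minimal sketch of checking (and, if necessary, renaming) it, reusing the connection string from the post; the SYSTEM credentials and the example.com domain are placeholders, not values from this environment.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch only: OAV-9016 requires GLOBAL_NAME to include a domain suffix.
    // Credentials and the "example.com" domain below are placeholders.
    public class GlobalNameCheck {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//192.168.1.139:1521/orcl", "system", "password");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT global_name FROM global_name")) {

                rs.next();
                String name = rs.getString(1);
                System.out.println("GLOBAL_NAME = " + name);

                if (!name.contains(".")) {
                    // No domain part: rename so the transaction log (REDO) collector accepts the target.
                    st.execute("ALTER DATABASE RENAME GLOBAL_NAME TO " + name + ".example.com");
                }
            }
        }
    }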

  • Audit Vault 12.1.1 error creating audit trail with TRANSACTION LOG

    Hi,
    I installed AV 12.1.1; the DB target is set up with Data Guard.
    When I run the script oracle_user_setup in REDO_COLL mode, the final message says it was successful, but when I go to the AV console and try to create an audit trail with TRANSACTION LOG, the AV console shows me an error and the log shows me this:
    [2013-10-16T03:37:18.593-05:00] [collfwk] [ERROR] [] [] [tid: 10] [ecid: 192.168.56.8:78800:1381912639433:0,0] RedoCollector : runSourceScript : Error while running script on source for REDO collector.
    [2013-10-16T03:37:19.528-05:00] [collfwk] [ERROR] [] [] [tid: 10] [ecid: 192.168.56.8:78800:1381912639433:0,0] OAV-8004: Failed to start collector {0}:{1}CollectionFactory : createCollection : Exception while creating collection. [[
    Failed to start collector {0}:{1}
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.runSourceScript(RedoCollector.java:816)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.sourceSetup(RedoCollector.java:579)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.setup(RedoCollector.java:454)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.startCollector(RedoCollector.java:216)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollectorManager.startTrail(RedoCollectorManager.java:199)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:504)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:354)
                    at oracle.av.platform.agent.StartTrailCommandHandler.processMessage(StartTrailCommandHandler.java:63)
                    at oracle.av.platform.agent.AgentController.processMessage(AgentController.java:325)
                    at oracle.av.platform.agent.AgentController$MessageListenerThread.run(AgentController.java:1859)
                    at java.lang.Thread.run(Thread.java:679)
    Nested Exception:
    java.sql.SQLSyntaxErrorException: ORA-01031: insufficient privileges
    ORA-06512: at line 1
                    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:445)
                    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
                    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:879)
                    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:450)
                    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
                    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
                    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
                    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1044)
                    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1329)
                    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3584)
                    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3685)
                    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1376)
                    at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
                    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                    at java.lang.reflect.Method.invoke(Method.java:616)
                    at oracle.ucp.jdbc.proxy.StatementProxyFactory.invoke(StatementProxyFactory.java:230)
                    at oracle.ucp.jdbc.proxy.PreparedStatementProxyFactory.invoke(PreparedStatementProxyFactory.java:124)
                    at $Proxy2.execute(Unknown Source)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.runSourceScript(RedoCollector.java:747)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.sourceSetup(RedoCollector.java:579)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.setup(RedoCollector.java:454)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.startCollector(RedoCollector.java:216)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollectorManager.startTrail(RedoCollectorManager.java:199)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:504)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:354)
                    at oracle.av.platform.agent.StartTrailCommandHandler.processMessage(StartTrailCommandHandler.java:63)
                    at oracle.av.platform.agent.AgentController.processMessage(AgentController.java:325)
                    at oracle.av.platform.agent.AgentController$MessageListenerThread.run(AgentController.java:1859)
                    at java.lang.Thread.run(Thread.java:679)
    I don't understand why this happens, because the user has the privileges given by the script, and I also tried granting them as SYSDBA, but without any result.
    I don't understand what privileges the collector needs.
    Any idea?
    Thanks for any help.

    Hi,
    Just run the script $AV_AGENT/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql USER_NAME REDO_COLL.
    This will grant the user some privileges and roles, such as DBA and CREATE DATABASE LINK.
    I hope this answers your question.
    Thanks,
    Ahmed Moustafa
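
    If it helps to verify what the setup script actually granted, here is a small sketch that lists the roles and system privileges held by the REDO_COLL user so they can be compared against what oracle_user_setup.sql is supposed to grant. Run it with a DBA-level account; the JDBC URL and credentials are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Sketch only: list roles and system privileges granted to REDO_COLL.
    // The JDBC URL and credentials are placeholders.
    public class RedoCollPrivileges {
        public static void main(String[] args) throws Exception {
            String[] queries = {
                "SELECT granted_role FROM dba_role_privs WHERE grantee = ?",
                "SELECT privilege FROM dba_sys_privs WHERE grantee = ?"
            };
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "password")) {
                for (String sql : queries) {
                    try (PreparedStatement ps = con.prepareStatement(sql)) {
                        ps.setString(1, "REDO_COLL");
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                System.out.println(rs.getString(1));
                            }
                        }
                    }
                }
            }
        }
    }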

  • Problems with transaction CJ20N

    Hello Experts,
    I have created a tooling asset with transaction AS01, and I have some problems with transaction CJ20N.
    I would like to know how we can link the asset already created to the WBS element via transaction CJ20N.
    Thanks
    Ferdaws

    Hi Ferdaws,
    In the asset master (AS02) you can maintain the WBS element in the time-dependent data.
    To maintain the WBS element in the asset master, you first have to make the WBS element an optional entry in the screen layout for asset master data.
    Kindly check this.
    Best Regards,
    Vasu.

  • Problem with Transaction-Duration Dialog

    Hello All,
    I wonder if somebody could shed some light on a problem that we're having
    with a transaction-duration dialog service object. The problem is this:
    when we call the service object and its SQL statement fails (for example,
    because of a constraint violation), Forte raises an AbortException. It also
    clears the error stack, so we have no way to know what caused it; the
    original error ("ORA-... Constraint Violation") disappears. All we
    get is this:
    USER ERROR: Access to a load balanced router member (which is a service
    object) failed for the reasons below.
    Class: qqsp_AbortTransaction with ReasonCode: SP_ER_USERABORT
    Error #: [601, 162]
    Detected at: qqdo_LbRouter::Route at 2
    Last TOOL statement: method tester.test, line 3
    Error Time: Thu Jul 2 15:56:40
    Distributed method called: SO_UmbrellaProxy.testAbortExc (object
    name
    site/sosa_sampleservice_cl0/th_testabort_cl0-bmso0x15d:0x1) from
    partition
    "TH_TestAbort_cl0_Client", (partitionId =
    DC5B2DC0-0EA9-11D2-AFD0-5F72194BAA77:0x15d:0x2, taskId =
    [DC5B2DC0-0EA9-11D2-AFD0-5F72194BAA77:0x15d.4]) in application
    "Forte_cl0", pid 74 on node AANANIEV in environment archenv
    Originator: SP_AO_XACTMGR
    Exception occurred (locally) on partition "Forte_cl0_Client",
    (partitionId
    = DC5B2DC0-0EA9-11D2-AFD0-5F72194BAA77:0x15d:0x1, taskId =
    [DC5B2DC0-0EA9-11D2-AFD0-5F72194BAA77:0x15d:0x1.16]) in
    application
    "Forte_cl0", pid 74 on node AANANIEV in environment archenv.
    USER ERROR: Your transaction was aborted
    Class: qqsp_AbortTransaction with ReasonCode: SP_ER_USERABORT
    Error #: [402, 45]
    Detected at: XactMgr.TXAbort at 1
    Error Time: Thu Jul 2 15:56:40
    Originator: SP_AO_XACTMGR
    Exception occurred (remotely) on partition
    "TH_TestAbort_cl0_Part1-router", (partitionId =
    DC5B2DC0-0EA9-11D2-AFD0-5F72194BAA77:0x15b:0x9, taskId =
    [DC5B2DC0-0EA9-11D2-AFD0-5F72194BAA77:0x15d:0x2.17]) in
    application
    "TH_TestAbort_cl0", pid 18196 on node hp9000_1 in environment
    archenv.
    Has anyone experienced this problem with a "transaction-duration" service
    object? Is there any way to find out the cause of the AbortException? It
    seems impossible to catch the AbortException on the server partition; it
    looks like Forte exits right away.
    Everything works fine with a "message-duration" dialog; unfortunately, it
    is not an option for us because of the way our batch programs were designed.
    Any help is greatly appreciated.
    Alexander Ananiev
    Claremont Technology Group
    916-558-4127

    Thanks Peter for trying to help.
    OK, I've narrowed down the problem. It seems that when I try to save a file onto one partition of one of my external drives, it won't save to my last save directory, but if I save to another partition or drive it works fine. Also, if I rename the 'dodgy' partition it seems to work again, so somehow the name of the drive has become corrupted.
    Has anybody ever heard of this, or how I could fix it? It's quite annoying because a lot of my audio files are referenced on this partition, so iTunes and Cubase (an audio sequencer) can't find my files now.

  • Problem With Seeing Message in Adapter Engine

    Hello Everyone,
    We are facing a strange problem while running an Integration Process (BPM) in PI. In the process, we are trying to read a .CSV file and convert it into an .XML file via FCC. The .CSV file is placed on the XI box itself, under a certain directory. We have configured the process such that whenever the .CSV file is read, its attributes are set to 'read only'. After configuring the process, the required XML file did not show up. So we checked whether the .CSV file was read, and it did show the 'read only' attribute. We cross-checked:
        - First by going to SXMB_MONI; here it said 'No messages available for selection'.
        - Then by examining channel monitoring in the AE, where we found both the sender and receiver channels showing green. The sender channel first shows 'Processing started' and then 'Processing Completed Successfully'. The receiver channel does not show any messages.
        - We checked for the XML messages that are in the adapter in IDX5; it shows 'no messages selected'.
        - We anticipated that it might be a cache problem and cleared all the caches; but when we tried to clear the Adapter Framework cache, it popped up an authorization error and the problem persisted.
    We are using NW2004s with SP6 on the ABAP and BASIS stack and SP9 on the Java stack. Also, all the required connections made in SM59 are in order.
    Now, what can be the problem? Is it the difference in SP levels that is causing it? Or is it something else? What have we missed? Please help us solve this problem.
    Need help. Please reply. Points will be awarded.
    Thanking in anticipation.
    Amitabha

    Hi Shabarish,
    1. Document name is indeed the name of the message type in our case.
    2. We did not use delete mode; but once the file was read, we manually changed its attributes from 'RA' to 'A', and it became 'RA' again after 2 seconds, which is our polling interval.
    3. And we have discovered that the file is being read properly, because we can see its XML payload and it contains the correct data. But it still shows up neither in SXMB_MONI nor in IDX5.
    Can you please tell us what the necessary roles are for the user PIAFUSER? We suspect we are getting an authorization error somewhere, because the AF cache refresh attempt shows "Forbidden" and we are getting an error in transaction SXI_SHOW_MESSAGE: AE_DETAILS_GET_ERROR: no_adapter_engine_found: Unable to find Adapter Engine. The stack trace shows: Error while reading access data (URL, user, password) for the Adapter Engine.

  • Problem with SAP PI7.0 J2ee Engine starting

    Hi,
    We have a problem with an SAP PI 7.0 server.
    I am using AIX 6.1 on a p570.
    The J2EE engine is not starting, but I am able to log in to the ABAP engine.
    When I try to log in through Internet Explorer, it is not working.
    How do I check whether the J2EE engine has started or not?
    If it is not started, how do I start the J2EE engine?
    If there is a J2EE engine problem, how do I solve the issue and where do I check to solve it?

    Hi,
    How to check whether the J2EE engine has started or not:
    Check the URL http://ipaddress:5xx00/index.html
    If this URL shows you the system information page, then your J2EE engine is up.
    If it is not started, how to start the J2EE engine:
    Check your server0.log file.
    If there is a J2EE engine problem, how to solve the issue and where to check:
    Check for the problems in the log file and search in SDN or in the Marketplace; you will find plenty of information. Read it and try to resolve the errors.
    Regards,
    Vamshi.
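
    As a small illustration of the first check, here is a sketch that probes the J2EE engine's HTTP port from Java. The host name is a placeholder, and port 50000 assumes instance number 00 (the pattern behind the 5xx00 URL above is 5<instance-number>00).

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch only: request the J2EE index page to see whether the engine answers.
    // Host name is a placeholder; 50000 assumes instance number 00 (5<nr>00).
    public class J2eeHttpProbe {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://pihost:50000/index.html");
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setConnectTimeout(5000);
            con.setReadTimeout(5000);
            int code = con.getResponseCode();   // throws an exception if nothing is listening
            System.out.println("HTTP " + code + (code == 200 ? " - J2EE engine is up" : ""));
            con.disconnect();
        }
    }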

  • Problem with transacted JMS connection factory and transaction timeouts

    We encountered an interesting problem using transacted JMS connection
    factories. An EJB starts a container-managed transaction and tries to
    validate a credit card before writing some information to a database for
    the user; on success an SMS is sent to the user via the transacted JMS
    queue. If the credit card authorization takes about as long as the
    transaction timeout (in this case the default 30 seconds), sometimes the
    database insert is committed but the JMS send is rolled back. How can
    this be?
    If the authorization takes much longer than 30 seconds everything works
    fine (both the database and JMS operations are rolled back), and the same
    is true if a rollback is ensured by calling EJBContext.setRollbackOnly().
    The problem thus occurs only if the duration is approximately the same as
    the transaction timeout; it appears that the database insert is not timed
    out but the JMS send is. How can this be if they are both participating
    in the same transaction?
    The JMS connection factory used is a connection factory with XA enabled.
    The result is the same with the default "javax.jms.QueueConnectionFactory"
    and if we configure our own factory with user transactions enabled.
    Any help appreciated!

    Tomas Granö wrote:
    > We encountered an interesting problem using transacted JMS connection
    > factories. An EJB starts a container-managed transaction and tries to
    > validate a credit card before writing some information to a database for
    > the user; on success an SMS is sent to the user via the transacted JMS
    > queue. If the credit card authorization takes about as long as the
    > transaction timeout (in this case the default 30 seconds), sometimes the
    > database insert is committed but the JMS send is rolled back. How can
    > this be?

    It should not be.

    > If the authorization takes much longer than 30 seconds everything works
    > fine (both the database and JMS operations are rolled back), and the same
    > is true if a rollback is ensured by calling EJBContext.setRollbackOnly().
    > The problem thus occurs only if the duration is approximately the same as
    > the transaction timeout; it appears that the database insert is not timed
    > out but the JMS send is. How can this be if they are both participating
    > in the same transaction?
    >
    > The JMS connection factory used is a connection factory with XA enabled.
    > The result is the same with the default "javax.jms.QueueConnectionFactory"
    > and if we configure our own factory with user transactions enabled.
    >
    > Any help appreciated!

    Make sure that your session is not "transacted". In other words, the first
    parameter to createSession() must be false. There is an unfortunate name
    re-use here: if a session is "transacted", it maintains an "inner
    transaction" independent of the outer transaction. From the above
    description, it seems unlikely that your application has this wrong, as
    you say that "setRollbackOnly" works - but please check anyway.
    Make sure that you are using a true XA-capable driver and database
    (XA "emulation" may not suffice).
    Beyond the above, I do not see what can be going wrong. You may want to try
    posting to the transactions and JDBC newsgroups. Note that JMS appears to
    be exhibiting the correct behavior, but the JDBC operation is not. The JDBC
    operation appears to have its timeout independent of the transaction
    monitor's timeout.
    Tom
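
    To make the point above concrete, here is a minimal sketch of sending the SMS from inside the container-managed transaction with a non-transacted session, i.e. the first argument to createSession() is false, so the send enlists in the surrounding XA transaction rather than in an inner session transaction. The JNDI names jms/XAConnectionFactory and jms/SmsQueue are placeholders, not names from the original post.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    // Sketch only: send a message from inside a CMT bean method without an
    // "inner" session transaction. JNDI names are placeholders.
    public class SmsSender {
        public void sendSms(String text) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/XAConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/SmsQueue");

            Connection con = cf.createConnection();
            try {
                // transacted = false; the acknowledge mode is ignored inside a global transaction
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage(text));
            } finally {
                con.close();
            }
        }
    }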
              

  • Problem with call logs/contacts

    Hi friends,
    I am facing a weird problem with my Z10. When I get a call or a missed call, the number shows up under a different contact name; however, the original contacts remain unchanged, as it only happens on the call log page.
    Can anyone please help me with this?

    Hello chiragmandavia and welcome to the BlackBerry Support Community Forums.
    Sorry to hear you're experiencing an issue with your Call Logs.  
    Is the name in the log showing as a different contact entirely, or is the name not how you manually entered it in your phone?
    Does this happen with all your contacts in the logs or just specific ones? 
    Do you have multiple link sources for your Contacts in the Contacts app? 
    Thanks!
    -HMthePirate
    Come follow your BlackBerry Technical Team on twitter! @BlackBerryHelp
    Be sure to click Kudos! for those who have helped you. Click Solution? for posts that have solved your issue(s)!

  • Problem with call log and email signature...

    Hi, I have two little problems with my BB.
    First, the day won't appear in the call logs. It only displays the month, like 08/dd.
    Pic 1
    Second, when I send emails, my signature gets all mixed up because of my French punctuation. Any idea why?
    Pic 2

    Wow, that's REALLY low. That number means there's only 6MB free. It's recommended to have AT LEAST 15MB free. Check under Options | Status, hit the menu button, then choose Database Sizes. What number is listed for Total Size at the top?
    If the Content Store is really big then that means all of the media files (pictures, videos, etc) are being saved in the device memory rather than on a media card. Does your boss have a media card to store the media on?
    Did you also change the Keep Appointments setting to something lower like you did with the Keep Messages setting?
    If someone has been helpful please consider giving them kudos by clicking the star to the left of their post.
    Remember to resolve your thread by clicking Accepted Solution.

  • Problems with java logging utility

    Hi,
    I am having a few problems with Java's logging framework.
    When I create a logger using
    Logger log = Logger.getLogger("my.com");
    and log my messages using this "log" object, the messages are logged to System.err, which is my console. But when I add a FileHandler to this Logger object, I get log messages both in the file and on standard error.
    The problem is that I want to suppress logging to standard error but do not know how to do this. I tried a few things:
    1) If I close standard error (System.err.close()), things work fine, but I can't afford to do this since other parts of my application might use System.err for some purpose.
    2) I thought that, by default, a ConsoleHandler object is added to the Logger object. So I thought I might get around the problem if I close() this ConsoleHandler object. But, to my surprise, log.getHandlers() returns an empty array by default. So there is no ConsoleHandler attached to it.
    Any info regarding this will be very much appreciated.
    Thanks,
    Arch.

    log.setUseParentHandlers(false)
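
    Spelled out, a minimal sketch (the file name my.log is a placeholder): the records were reaching the console through the root logger's handlers, which is also why getHandlers() on the "my.com" logger itself came back empty.

    import java.util.logging.FileHandler;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    // Sketch only: log to a file and stop records from propagating to the root
    // logger's ConsoleHandler. The file name "my.log" is a placeholder.
    public class FileOnlyLogging {
        public static void main(String[] args) throws Exception {
            Logger log = Logger.getLogger("my.com");

            FileHandler fh = new FileHandler("my.log", true); // append mode
            fh.setFormatter(new SimpleFormatter());
            log.addHandler(fh);

            // Do not forward records to the parent (root) logger and its ConsoleHandler.
            log.setUseParentHandlers(false);

            log.info("Written to my.log but not to System.err");
        }
    }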

  • Problem with transactions in session beans

    Hello.
    I created a session bean which calls entity beans.
    When I execute an example the first time, it works correctly.
    When I execute it a second time, the method doesn't return.
    I think it is a problem with the transaction.
    Thank you

    Hi,
    It is impossible to answer your question with the given information. Whenever you ask a question, please include the relevant information, such as code and deployment descriptors. So please post the required information.
    Ashok

  • RollingFileAppender/Log4J question (problem with reading logging)

    Hi everyone,
    I've got a question about log4j.
    I'm working with an application that generates a lot of logging. When I'm trying to figure out problems in the application it's hard to read the log files because they get updated/overwritten all the time.
    The mechanism of the RollingFileAppender in log4j is that when the maximum file size is reached, a new log file is created as <filename>.1 and the number of every file that already existed is increased by 1.
    So when I try to read my logfiles they get overwritten while I'm reading them, because the application sometimes generates a new logfile 4 times per minute. That's very annoying, so what I would like is that every time a new logfile is created its number is increased by 1 up to a maximum number of files, and then the numbering starts over.
    I've been trying to find this mechanism somewhere but can't find it. The DailyRollingFileAppender kind of does what I want, but you can't set a maximum number of files, and I can't have that because of the space available on the server.
    Does anyone know how to solve this?

    > I've got a question about log4j.
    > I'm working with an application that generates a lot of logging. When I'm
    > trying to figure out problems in the application it's hard to read the log
    > files because they get updated/overwritten all the time.
    > The mechanism of the RollingFileAppender in log4j is that when the maximum
    > file size is reached, a new log file is created as <filename>.1 and the
    > number of every file that already existed is increased by 1.
    > So when I try to read my logfiles they get overwritten while I'm reading
    > them, because the application sometimes generates a new logfile 4 times
    > per minute. That's very annoying, so what I would like is that every time
    > a new logfile is created its number is increased by 1 up to a maximum
    > number of files, and then the numbering starts over.

    That's not Log4J's problem. That's your problem for treating log files like production data. Maybe you need to use something else, like writing this supposedly persistent data to a database.

    > I've been trying to find this mechanism somewhere but can't find it. The
    > DailyRollingFileAppender kind of does what I want, but you can't set a
    > maximum number of files, and I can't have that because of the space
    > available on the server.

    What stops you from introducing another periodic process to purge old log files? That's what most would do.
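
    If bounded disk use is the underlying constraint, log4j 1.x's RollingFileAppender can cap the number of rolled files via maxBackupIndex; it deletes the oldest file rather than restarting the numbering, but the total space stays limited. A sketch, with the file name, size and count as placeholders:

    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;
    import org.apache.log4j.RollingFileAppender;

    // Sketch only: keep at most 20 rolled files of 10 MB each.
    // File name, size and backup count are placeholders.
    public class BoundedLogging {
        public static void main(String[] args) throws Exception {
            RollingFileAppender appender = new RollingFileAppender(
                    new PatternLayout("%d %-5p %c - %m%n"), "app.log", true);
            appender.setMaxFileSize("10MB");   // roll when the file reaches 10 MB
            appender.setMaxBackupIndex(20);    // keep at most app.log.1 .. app.log.20

            Logger root = Logger.getRootLogger();
            root.addAppender(appender);
            root.info("Logging with a bounded set of rolled files");
        }
    }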

  • Problems with transaction MD21

    Hi experts, I have a problem with this transaction: every time I open it and try to add the material text via the Modify Layout button, no text appears in the field when I hit OK. What can I do? Thank you in advance.
    kind regards

    Hi
    1) Create a screen variant in SHD0 with Additional Material Data ticked for transaction MD21 and assign it to specific users.
    2) After entering MD21, on the Display Planning File Entries screen click on Change Layout, select the Material Description column, and save it as the default.
    From next time onwards the user will get the Material Description in the layout for the selection criteria.
    Regards
    Brahmaji
