Rolling Log Files

Hi,
This is more of an application-management issue, and it has nothing to do with WebLogic as such. My problem is that the log file created by redirecting the application server's standard out is growing bigger day by day, and the system is running out of space. Some really important statements are being logged, and they are monitored by a real-time alarming system. I would like the log files to roll over after reaching a particular size so that the old files can be archived. I know this is quite easy with log4j, but the statements being output do not come from log4j at different levels.
I am running WebLogic 8.1 on Sun Solaris.
Any help on this would be highly appreciated.
Regards
Nitin

I do not have the details yet about the structure of the logs, but they will be similar to web logs. IVR systems produce log records about each call and each "jump" a caller makes through the system (like a clickthrough in a web log). These logs are written to constantly by the IVR application, and it decides when a log has "filled up". When it has, it closes the active log and opens a new one. I suspect that each log file has a timestamp appended to its name to indicate when it was created. I also suspect that a lock is placed on the file while it is active and then released when it is closed.
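If it helps to picture the mechanism, here is a minimal, hypothetical Java sketch of that close-and-reopen pattern (all names are illustrative; this is not WebLogic or IVR vendor code):

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    // Sketch: close the active log and open a new, timestamped one once a
    // size threshold is crossed, so the closed segments can be archived.
    public class SizeRollingWriter {
        private static final long MAX_BYTES = 10L * 1024 * 1024; // roll at ~10 MB
        private final String baseName;
        private PrintWriter out;
        private long written;

        public SizeRollingWriter(String baseName) throws IOException {
            this.baseName = baseName;
            open();
        }

        private void open() throws IOException {
            // The timestamp in the name records when this segment was created.
            String stamp = new SimpleDateFormat("yyyyMMdd-HHmmss").format(new Date());
            out = new PrintWriter(new FileWriter(baseName + "." + stamp + ".log"), true);
            written = 0;
        }

        public synchronized void log(String line) throws IOException {
            if (written + line.length() + 1 > MAX_BYTES) {
                out.close(); // release the active file so the archiver can pick it up
                open();      // start a fresh segment
            }
            out.println(line);
            written += line.length() + 1;
        }
    }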

Similar Messages

  • Regarding Log4j.xml to add timestamp in log file


    Dear Sir,
    Could you guide me on how to append a timestamp to the log file name generated from Log4j.xml? This is my Log4j.xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <log4j:configuration
      xmlns:log4j="http://jakarta.apache.org/log4j/">
      <!-- Order of child elements is appender*, logger*, root?. -->
      <!-- Appenders control how logging is output. -->
      <appender name="CM" class="org.apache.log4j.FileAppender">
         <param name="File" value="customer_master.log"/>
         <param name="Threshold" value="DEBUG"/>
         <param name="Append" value="true"/>
         <param name="MaxFileSize" value="1MB"/>
         <param name="MaxBackupIndex" value="1"/>
        <layout class="org.apache.log4j.PatternLayout">
          <!-- {fully-qualified-class-name}:{method-name}:{line-number}
                - {message}{newline} -->
          <param name="ConversionPattern" value="%C:%M:%L - %m%n"/>
        </layout>     
      </appender>
      <appender name="stdout" class="org.apache.log4j.ConsoleAppender">
        <param name="Threshold" value="INFO"/>
        <layout class="org.apache.log4j.PatternLayout">
          <param name="ConversionPattern" value="%C:%M:%L - %m%n"/>
        </layout>
      </appender>
      <!-- Logger hierarchy example:
           root - com - com.ociweb - com.ociweb.demo - com.ociweb.demo.LogJDemo
      -->
      <!-- Setting additivity to false prevents ancestor categories
           for being used in addition to this one. -->
      <logger name="com.tf" additivity="true">
        <priority value="DEBUG"/>
        <appender-ref ref="CM"/>
      </logger>
      <!-- Levels from lowest to highest are
           trace, debug, info, warn, error, fatal & off. -->
      <!-- The root category is used for all loggers
           unless a more specific logger matches. -->
      <root>
        <appender-ref ref="stdout"/>
      </root>
    </log4j:configuration>
    It would be great if you could give the solution for this. There is no problem getting the timestamp from the following Log4j.properties file:
    # Configure the logger to output info level messages into a rolling log file.
    log4j.rootLogger=DEBUG, R
    log4j.appender.R=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.R.DatePattern='.'yyyy-MM-dd
    # Edit the next line to point to your logs directory.
    # The last part of the name is the log file name.
    log4j.appender.R.File=c:/temp/log/${log.file}
    log4j.appender.R.layout=org.apache.log4j.PatternLayout
    # Print the date in ISO 8601 format
    log4j.appender.R.layout.ConversionPattern=%d %-5p %c %L - %m%n
    Thanks in advance, Mani
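    For reference, the XML equivalent of that properties setup is to declare a DailyRollingFileAppender with a DatePattern param. A minimal sketch (untested here; the File path is illustrative):
    <appender name="R" class="org.apache.log4j.DailyRollingFileAppender">
      <param name="File" value="c:/temp/log/Master.log"/>
      <param name="DatePattern" value="'.'yyyy-MM-dd"/>
      <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p %c %L - %m%n"/>
      </layout>
    </appender>
    Reference it from <root> or your <logger> with <appender-ref ref="R"/>, just like the CM appender above.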

  • Log file isn't named per my expression

    Hi, we run Standard 2008 R2.
    I think I'm in a catch-22, but maybe the community can help.
    I configured SSIS logging using a flat file connection manager I named ssislog. Its connection string property is set by an expression as follows:
    @[User::logDir] + (DT_WSTR, 50) (YEAR( @[User::Date] ) ) + RIGHT("0" +  (DT_WSTR, 50) (MONTH( @[User::Date] ) ),2) + RIGHT("0" +  (DT_WSTR, 50) (DAY( @[User::Date] ) ),2) + ".txt"
    The variable User::Date is the target of a result-set mapping in an Execute SQL task, and that task is the first component in the package.
    My logs keep appending to the same datestamped log file, based on the initial value stored in the variable User::Date at development time. This is probably because SSIS is put in a quandary, needing to bind the log file name before the first component runs.
    What are my options? I'm going to look for a file rename feature in SSIS and post back here.

    Yes, it resolved the log file name as the first step and used it all the way.
    If I understood correctly that you need a rolling log file feature, there is nothing you can do.
    Arthur
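    If the goal is simply a per-run datestamped name, one workaround sometimes used is to base the expression on GETDATE() instead of a variable populated by an upstream task, so the name no longer depends on a value that is bound before the first task runs (a sketch reusing User::logDir from the post):
    @[User::logDir] + (DT_WSTR, 4) YEAR( GETDATE() )
      + RIGHT("0" + (DT_WSTR, 2) MONTH( GETDATE() ), 2)
      + RIGHT("0" + (DT_WSTR, 2) DAY( GETDATE() ), 2) + ".txt"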

  • Problem rolling to a new log file only when it exceeds max size (log4net library)

    Hello,
    I am using the log4net library to create log files.
    My requirement is to roll to a new log file, with a timestamp appended to its name, only when the file size exceeds the max size (file name, e.g., log_2014_12_11_12:34:45).
    My config is as follows:
     <appender name="LogFileAppender"
                          type="log4net.Appender.RollingFileAppender" >
            <param name="File" value="logging\log.txt" />
            <param name="AppendToFile" value="true" />
            <rollingStyle value="Size" />
            <maxSizeRollBackups value="2" />
            <maximumFileSize value="2MB" />
            <staticLogFileName value="true" />
            <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
            <layout type="log4net.Layout.PatternLayout">
              <param name="ConversionPattern"
                   value="%-5p%d{yyyy-MM-dd hh:mm:ss} – %m%n" />
              <conversionPattern
                   value="%newline%newline%date %newline%logger 
                           [%property{NDC}] %newline>> %message%newline" />
            </layout>
          </appender>
    The issue is that the date-time is not appended to the file name.
    But if I set the rolling style to Date or Composite, the file name gets appended with a timestamp, but a new file gets created before reaching the max file size (because a file gets created whenever the date changes, which I don't want).
    Please help me solve this issue.
    Thanks

    Hello,
    I'd ask the log4net people: http://logging.apache.org/log4net/
    Or search on CodeProject - there may be some tutorials that would help you.
    http://www.codeproject.com/Articles/140911/log-net-Tutorial
    http://www.codeproject.com/Articles/14819/How-to-use-log-net
    Karl
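    For what it's worth, log4net's stock RollingFileAppender only stamps the file name via datePattern, and Composite rolling rolls on both date and size; there is no built-in mode that renames with a timestamp only on a size roll-over (that usually requires a custom appender subclass). A hedged sketch of the closest built-in behaviour:
    <appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="logging\log" />
      <appendToFile value="true" />
      <rollingStyle value="Composite" />
      <datePattern value="'_'yyyy_MM_dd'.txt'" />
      <staticLogFileName value="false" />
      <maxSizeRollBackups value="2" />
      <maximumFileSize value="2MB" />
      <layout type="log4net.Layout.PatternLayout">
        <param name="ConversionPattern" value="%-5p %d{yyyy-MM-dd HH:mm:ss} - %m%n" />
      </layout>
    </appender>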

  • Any way to roll over to a different log file when the current log file gets big

    How do I roll a log file over to a different log file when it reaches its maximum size?
    Any ways of doing this?

    More info from the new owners (Oracle):
    http://www.oracle.com/technology/pub/articles/hunter_logging.html
    And here is more on building a configuration file with a FileHandler properly set to a specified size:
    http://www.linuxtopia.org/online_books/programming_books/thinking_in_java/TIJ317_021.htm
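    The second link covers java.util.logging, whose FileHandler rotates by size out of the box. A minimal sketch (file pattern, limit, and count are illustrative):
    import java.util.logging.FileHandler;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    public class RollingLogDemo {
        public static void main(String[] args) throws Exception {
            // Rotate across 5 files of at most 1 MB each: app0.log ... app4.log
            FileHandler handler = new FileHandler("app%g.log", 1024 * 1024, 5, true);
            handler.setFormatter(new SimpleFormatter());
            Logger logger = Logger.getLogger("demo");
            logger.addHandler(handler);
            logger.info("Hello, rolling logs");
        }
    }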

  • Failed to roll HTTP log file Error at Startup WL 8.1

    Hi people, I'm having the following error on the WebLogic 8.1 console at startup. The applications work fine, but I'm always seeing this:
    ####<Oct 19, 2004 6:44:12 PM CDT> <Error> <HTTP> <computerName> <portalServer> <ExecuteThread: '11' for queue: 'default'> <<WLS Kernel>> <> <BEA-101242> <Failed to roll HTTP log file for the Web server: portalServer.
    java.io.IOException: Failed to rename log file on attempt to rotate logs
         at weblogic.servlet.logging.LogManagerHttp.rotateLog(LogManagerHttp.java:200)
         at weblogic.servlet.logging.LogManagerHttp.keepStatsAndRollIfNecessary(LogManagerHttp.java:349)
         at weblogic.servlet.logging.LogManagerHttp.log(LogManagerHttp.java:391)
         at weblogic.servlet.internal.HttpServer.log(HttpServer.java:1137)
         at weblogic.servlet.internal.ServletResponseImpl.send(ServletResponseImpl.java:1192)
         at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2590)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    >
    Does anyone know what I need to do in order to solve this problem?
    Thanks a lot
    Guillermo De La Rosa

    I'm having the same error message. I also noticed that the server is running a bit slower and that beasvc.exe is using more memory than usual. Has anyone logged a technical support case?
    Thanks.
    ####<Sep 20, 2005 10:15:28 AM EDT> <Info> <HTTP> <ddca2401> <myserver> <ExecuteThread: '44' for queue: 'weblogic.kernel.Default'> <<anonymous>> <> <BEA-101047> <[ServletContext(id=17209502,name=apps,context-path=/apps)] /*: Using standard I/O>
    ####<Sep 20, 2005 10:15:28 AM EDT> <Error> <HTTP> <ddca2401> <myserver> <ExecuteThread: '44' for queue: 'weblogic.kernel.Default'> <<WLS Kernel>> <> <BEA-101242> <Failed to roll HTTP log file for the Web server: myserver.
    java.io.IOException: Failed to rename log file on attempt to rotate logs
         at weblogic.servlet.logging.LogManagerHttp.rotateLog(LogManagerHttp.java:200)
         at weblogic.servlet.logging.LogManagerHttp.keepStatsAndRollIfNecessary(LogManagerHttp.java:349)
         at weblogic.servlet.logging.LogManagerHttp.log(LogManagerHttp.java:388)
         at weblogic.servlet.internal.HttpServer.log(HttpServer.java:1153)
         at weblogic.servlet.internal.ServletResponseImpl.send(ServletResponseImpl.java:1197)
         at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2574)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)

  • Rolling weblogic.stderr and weblogic.stdout log files

    Is there a way to make the -Dweblogic.stderr and -Dweblogic.stdout log files rotate by size or time? We are running into 100 MB+ files because we can't find any documentation about how to rotate these files.
    Thanks,
    Rajesh

    The stdout and stderr output options in WebLogic apply to the JVM process using the standard Unix stdin/stdout redirection; therefore, BEA has not included a native method to rotate these files.
    However, in WebLogic 10, if you place these files in the same directory as the WebLogic output log (not to be confused with -D=/path/to/stderr.log and -D=/path/to/stdout.log), they will be rotated every time you restart the server.
    The other way to get fine-grained log rotation on these files is to use a standard log rotation mechanism such as logrotate: http://linuxcommand.org/man_pages/logrotate8.html
    The other option is to address logging with an application framework such as log4j, from a development standpoint.
    As of right now, these are your only options, unless someone puts in a feature request to BEA.
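    For example, a minimal logrotate sketch for a redirected stdout log might look like this (the path and thresholds are assumptions; copytruncate matters because the JVM keeps the redirected file open):
    /path/to/domain/logs/stdout.log {
        size 100M
        rotate 7
        compress
        missingok
        copytruncate
    }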

  • Apple Mobile Device Support fails to install. Log file provided.

    I have uninstalled the older version of iTunes and all associated files in the recommended order. In particular, I have verified that all Apple Mobile Device files, services, and registry entries have been removed. I then downloaded iTunes version 11.0.3 to my PC and installed the product. Other than AMDS, the product loaded onto my PC and appears to work OK, including being able to recognise my iPod. Unfortunately, iTunes does not recognise my iPhone.
    However, the install produced two error messages.
    First Error:
    Program Files\itunes\ipodUpdaterExt.dll failed to register.
    HRESULT-1073741819
    Contact your support personnel
    Second Error:
    Service 'Apple Mobile Device' failed to start. Verify you have sufficient privileges to start system services
    During the install, the program 'rolled back' the portion related to AMDS. To further analyse the problem I extracted the AppleMobileDeviceSupport64.msi from the itunes64Setup.exe file. I ran the .msi file as the System Administrator and produced a log file. The log file partially contained the following:
    DIFXAPP: INFO:   ENTER:  DriverPackageInstallW
    DIFXAPP: INFO:   Installing INF file 'C:\Program Files\Common Files\Apple\Mobile Device Support\NetDrivers\netaapl64.inf' (Plug and Play).
    DIFXAPP: INFO:   Could not open file C:\Windows\System32\DriverStore\FileRepository\netaapl64.inf_amd64_neutral_bf785db627c6d127\netaapl64.inf. (Error code 0x3: The system cannot find the path specified.)
    DIFXAPP: ERROR:  PnP Install failed. (Error code 0x3EE: The volume for a file has been externally altered so that the opened file is no longer valid.)
    DIFXAPP: INFO:   Attempting to rollback ...
    DIFXAPP: INFO:   No devices to rollback
    DIFXAPP: INFO:   Successfully removed '{4241D803-8012-4EA8-9DF1-63C9C886FCED}' from reference list of driver store entry 'C:\Windows\System32\DriverStore\FileRepository\netaapl64.inf_amd64_neutral_bf785db627c6d127\netaapl64.inf'
    DIFXAPP: INFO:   RETURN: DriverPackageInstallW  (0x3EE)
    DIFXAPP: ERROR: encountered while installing driver package 'C:\Program Files\Common Files\Apple\Mobile Device Support\NetDrivers\netaapl64.inf'
    DIFXAPP: ERROR: InstallDriverPackages failed with error 0x3EE
    DIFXAPP: RETURN: InstallDriverPackages() 1006 (0x3EE)
    CustomAction MsiInstallDrivers returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
    Action ended 18:53:33: InstallFinalize. Return value 3.
    Action 18:53:33: Rollback. Rolling back action:
    Rollback: MsiInstallDrivers
    Rollback: MsiRollbackInstall
    I am not sure why the program is looking for the netaapl64.inf file in the
    C:\Windows\System32\DriverStore\FileRepository\netaapl64.inf_amd64_neutral_bf785db627c6d127\ folder.
    This file currently exists in the
    C:\Windows\System32\DriverStore\FileRepository\netaapl64.inf_amd64_neutral_dc2cbd989eec1514\ folder.
    I would very much appreciate your help

    I just came across this:
    https://discussions.apple.com/thread/3960640?start=0&tstart=0
    See if this helps.

  • Can I modify a WLI system bean's transaction attribute? Turning on the archiver results in endless exceptions in the log file

    hi, everyone,
    One difficult question, need help.
    Environment: WLS 8.1 SP2 + WLI 8.1 SP2 + Oracle9i + Solaris 9
    When I started the archiver manually, just for a while the WLI system generated about 40,000 JMS messages in wli.internal.worklist.timer.queue and consumed the great mass of the database server's system resources. I had to stop these archive processes immediately to keep the other applications using the same database running normally. I did so by following these steps:
    (1) in the WLI console, delete wli.internal.worklist.timer.queue;
    (2) in the WLI console, reconstruct wli.internal.worklist.timer.queue;
    (3) restart the WLI server.
    After the server was restarted, the WLI server output endless, repeated exceptions to the log file. The typical exception was:
    ####<May 8, 2005 3:08:26 PM CST> <Info> <EJB> <app01> <jcwliserver> <ExecuteThread: '54' for queue: 'weblogic.kernel.Default'> <<anonymous>> <BEA1-54B26B551CC1A8856F80> <BEA-010049> <EJB Exception in method: remove:
    java.sql.SQLException: Transaction rolled back: Unknown reason.
    java.sql.SQLException: Transaction rolled back: Unknown reason
         at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1299)
         at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1250)
         at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:385)
         at weblogic.jdbc.jta.DataSource.connect(DataSource.java:343)
         at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:305)
         at weblogic.ejb20.cmp.rdbms.RDBMSPersistenceManager.getConnection(RDBMSPersistenceManager.java:2247)
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_loadGroup0(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:1055)
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_setTaskBean_listeners(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:596)
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_setTaskBean_listeners(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:584)
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.ejbRemove(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:2423)
         at weblogic.ejb20.manager.DBManager.remove(DBManager.java:1318)
         at weblogic.ejb20.internal.EntityEJBLocalHome.remove(EntityEJBLocalHome.java:214)
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14_LocalHomeImpl.remove(ListenerBean_1nsp14_LocalHomeImpl.java:131)
         at com.bea.wli.worklist.beans.session.RemoteWorklistManagerBean.removeTaskListeners(RemoteWorklistManagerBean.java:3001)
         at com.bea.wli.worklist.beans.session.RemoteWorklistManagerBean_us8t1c_EOImpl.removeTaskListeners(RemoteWorklistManagerBean_us8t1c_EOImpl.java:698)
         at com.bea.wli.worklist.timer.WorklistTimerMDB.processListenerToRemove(WorklistTimerMDB.java:102)
         at com.bea.wli.worklist.timer.WorklistTimerMDB.onMessage(WorklistTimerMDB.java:61)
         at weblogic.ejb20.internal.MDListener.execute(MDListener.java:382)
         at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
         at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    >
    ####<May 8, 2005 3:08:26 PM CST> <Info> <EJB> <app01> <jcwliserver> <ExecuteThread: '96' for queue: 'weblogic.kernel.Default'> <<anonymous>> <BEA1-54B96B551CC1A8856F80> <BEA-010049> <EJB Exception in method: remove:
    javax.ejb.NoSuchEntityException: [EJB:010140]Bean with primary key: '153.22.52.28-17343c7.10243c3c6ec.a51' not found..
    javax.ejb.NoSuchEntityException: [EJB:010140]Bean with primary key: '153.22.52.28-17343c7.10243c3c6ec.a51' not found.
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_loadGroup0(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:1165)
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_setTaskBean_listeners(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:596)
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_setTaskBean_listeners(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:584)
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.ejbRemove(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:2423)
         at weblogic.ejb20.manager.DBManager.remove(DBManager.java:1318)
         at weblogic.ejb20.internal.EntityEJBLocalHome.remove(EntityEJBLocalHome.java:214)
         at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14_LocalHomeImpl.remove(ListenerBean_1nsp14_LocalHomeImpl.java:131)
         at com.bea.wli.worklist.beans.session.RemoteWorklistManagerBean.removeTaskListeners(RemoteWorklistManagerBean.java:3001)
         at com.bea.wli.worklist.beans.session.RemoteWorklistManagerBean_us8t1c_EOImpl.removeTaskListeners(RemoteWorklistManagerBean_us8t1c_EOImpl.java:698)
         at com.bea.wli.worklist.timer.WorklistTimerMDB.processListenerToRemove(WorklistTimerMDB.java:102)
         at com.bea.wli.worklist.timer.WorklistTimerMDB.onMessage(WorklistTimerMDB.java:61)
         at weblogic.ejb20.internal.MDListener.execute(MDListener.java:382)
         at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
         at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    >
    The WLI server generated the log file very quickly: it could output 1 MB of log per second, and all the logged information was similar to the <BEA-010049> exception mentioned above. A BEA support engineer suggested I totally stop the archiver. I did so, but the server still output the log file like crazy, as before, and the normal log information was completely drowned out by the <BEA-010049> exception.
    I checked the EntityEJBs in the WLI console (Mywlidomain > Applications > WLI System EJBs > WLI Worklist Persistence) and found the following in the statistics table:
    ListenerBean: pool miss ratio = 99.67%, transaction rollback ratio = 99.90%, destroy bean ratio = 99.48% (see attachment)
    WorklistTimerMDB: transaction rollback ratio = 99.97%
    It seems ListenerBean worked incorrectly. I searched support.bea.com and found one case, also about a server outputting an endless log file, where the author solved the problem by changing a bean's transaction attribute from 'Required' to 'RequiresNew', though he didn't know why it worked. I tried this method by changing ListenerBean's transaction attribute from 'Required' to 'RequiresNew' in $weblogic_home/integration/lib/wli-ejbs.ear/ejb-jar-generic.xml:
    <container-transaction>
      <method>
        <ejb-name>CommentBean</ejb-name>
        <method-name>*</method-name>
      </method>
      <trans-attribute>Required</trans-attribute>
    </container-transaction>
    <container-transaction>
      <method>
        <ejb-name>ListenerBean</ejb-name>
        <method-name>*</method-name>
      </method>
      <!-- the default value is Required; I modified it to RequiresNew -->
      <trans-attribute>RequiresNew</trans-attribute>
    </container-transaction>
    It really works; the log file output returned to normal. But there are still some problems:
    (1) this exception still exists:
    javax.ejb.NoSuchEntityException: [EJB:010140]Bean with primary key: '153.22.52.28-17343c7.10243c3c6ec.a51' not found.
    (2) is this method safe? (Does modifying ListenerBean's transaction attribute impact other parts of the WLI system?)
    (3) after changing the transaction attribute, if I turn on the archiver again, the server outputs endless exceptions:
    ####<Jun 1, 2005 5:14:58 PM CST> <Info> <EJB> <app01> <jcwliserver> <ExecuteThread: '63' for queue: 'weblogic.kernel.Default'> <<anonymous>> <BEA1-2F43890B86B0A8856F80> <BEA-010036> <Exception from ejbStore:
    java.sql.SQLException: XA error: XAER_RMERR : A resource manager error has occured in the transaction branch start() failed on resource 'weblogic.jdbc.jta.DataSource': XAER_RMERR : A resource manager error has occured in the transaction branch
    oracle.jdbc.xa.OracleXAException
         at oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:1160)
         at oracle.jdbc.xa.client.OracleXAResource.start(OracleXAResource.java:311)
         at weblogic.jdbc.wrapper.VendorXAResource.start(VendorXAResource.java:50)
         at weblogic.jdbc.jta.DataSource.start(DataSource.java:617)
         at weblogic.transaction.internal.XAServerResourceInfo.start(XAServerResourceInfo.java:1075)
         at weblogic.transaction.internal.XAServerResourceInfo.xaStart(XAServerResourceInfo.java:1007)
         at weblogic.transaction.internal.XAServerResourceInfo.enlist(XAServerResourceInfo.java:218)
         at weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:419)
         at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1287)
         at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1250)
         at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:385)
         at weblogic.jdbc.jta.DataSource.connect(DataSource.java:343)
         at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:305)
         at weblogic.ejb20.cmp.rdbms.RDBMSPersistenceManager.getConnection(RDBMSPersistenceManager.java:2247)
         at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.__WL_store(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3636)
         at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.ejbStore(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3548)
         at weblogic.ejb20.manager.DBManager.beforeCompletion(DBManager.java:927)
         at weblogic.ejb20.internal.TxManager$TxListener.beforeCompletion(TxManager.java:745)
         at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1010)
         at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:115)
         at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1142)
         at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:1868)
         at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:250)
         at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:221)
         at weblogic.ejb20.internal.MDListener.execute(MDListener.java:412)
         at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
         at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    java.sql.SQLException: XA error: XAER_RMERR : A resource manager error has occured in the transaction branch start() failed on resource 'weblogic.jdbc.jta.DataSource': XAER_RMERR : A resource manager error has occured in the transaction branch
    oracle.jdbc.xa.OracleXAException
         at oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:1160)
         at oracle.jdbc.xa.client.OracleXAResource.start(OracleXAResource.java:311)
         at weblogic.jdbc.wrapper.VendorXAResource.start(VendorXAResource.java:50)
         at weblogic.jdbc.jta.DataSource.start(DataSource.java:617)
         at weblogic.transaction.internal.XAServerResourceInfo.start(XAServerResourceInfo.java:1075)
         at weblogic.transaction.internal.XAServerResourceInfo.xaStart(XAServerResourceInfo.java:1007)
         at weblogic.transaction.internal.XAServerResourceInfo.enlist(XAServerResourceInfo.java:218)
         at weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:419)
         at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1287)
         at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1250)
         at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:385)
         at weblogic.jdbc.jta.DataSource.connect(DataSource.java:343)
         at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:305)
         at weblogic.ejb20.cmp.rdbms.RDBMSPersistenceManager.getConnection(RDBMSPersistenceManager.java:2247)
         at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.__WL_store(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3636)
         at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.ejbStore(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3548)
         at weblogic.ejb20.manager.DBManager.beforeCompletion(DBManager.java:927)
         at weblogic.ejb20.internal.TxManager$TxListener.beforeCompletion(TxManager.java:745)
         at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1010)
         at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:115)
         at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1142)
         at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:1868)
         at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:250)
         at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:221)
         at weblogic.ejb20.internal.MDListener.execute(MDListener.java:412)
         at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
         at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
         at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1292)
         at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1250)
         at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:385)
         at weblogic.jdbc.jta.DataSource.connect(DataSource.java:343)
         at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:305)
         at weblogic.ejb20.cmp.rdbms.RDBMSPersistenceManager.getConnection(RDBMSPersistenceManager.java:2247)
         at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.__WL_store(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3636)
         at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.ejbStore(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3548)
         at weblogic.ejb20.manager.DBManager.beforeCompletion(DBManager.java:927)
         at weblogic.ejb20.internal.TxManager$TxListener.beforeCompletion(TxManager.java:745)
         at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1010)
         at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:115)
         at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1142)
         at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:1868)
         at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:250)
         at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:221)
         at weblogic.ejb20.internal.MDListener.execute(MDListener.java:412)
         at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
         at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    >
    How can I solve these problems? Any suggestion is warmly welcomed.
    Thanks in advance.
    Great Lou

    Back up all data to at least two different storage devices, if you haven't already done so. The backups can be made with Time Machine or with a mirroring tool such as Carbon Copy Cloner. Preferably both.
    Boot into Recovery (command-R at startup), launch Disk Utility, and erase the startup volume with the default options. This operation will destroy all data on the volume, so you had better be sure of your backups. Quit Disk Utility and install OS X. When you reboot, you'll be prompted to go through the initial setup process. That's when you transfer the data from one of your backups. For details of how this works, see here:
    Using Setup Assistant
    Transfer only "Users" and "Settings" – not "Applications" or "Other files." Don't transfer the Guest account, if it was enabled on the old system. Test. If the problem is still there, you have a hardware fault. Take the machine to an Apple Store for diagnosis.
    If the problem is resolved, reinstall your third-party software cautiously. Self-contained applications that install into the Applications folder by drag-and-drop or download from the App Store are safe. Anything that comes packaged as an installer or that prompts for an administrator password is suspect, and you must test thoroughly after reinstalling each such item to make sure you haven't restored the problem.
    Note: You need an always-on Ethernet or Wi-Fi connection to the Internet to use Recovery. It won’t work with USB or PPPoE modems, or with proxy servers, or with networks that require a certificate for authentication.

  • Node.js loss of permission to write/create log files

    We have been operating Node.js as a worker role cloud service. To track server activity, we write log files (via log4js) to C:\logs.
    Originally, the logging was configured with size-based roll-over, e.g. a new file every 20 MB. I noticed that on some servers the sequencing was uneven:
    socket.log <-- current active file
    socket.log.1
    socket.log.3
    socket.log.5
    socket.log.7
    It should be:
    socket.log.1
    socket.log.2
    socket.log.3
    socket.log.4
    Whenever there was an uneven sequence, I realised the beginning of each file revealed that the Node process had been restarted. The Windows Azure event log further indicated that the worker role hosting mechanism had found node.exe to have terminated abruptly.
    With no other information to clue me in on what exactly was happening, I thought there was some fault in the log4js roll-over implementation (updating to the latest version did not help). I subsequently switched to date-based roll-over mode, saw that roll-over happened every midnight, and was happy with it.
    However, some weeks later I realised the roll-over was (not always, but pretty predictably) only happening every alternate midnight:
    socket.log-2014-06-05
    socket.log-2014-06-07
    socket.log-2014-06-09
    And each file again revealed that on the midnight the roll-over did not happen, node.exe had crashed. Additional logging of uncaughtException and exit events showed nothing, which seems to suggest node.exe was killed by external influence (e.g. a process kill), but it is unfathomable that anything in the OS would want to kill node.exe.
    Additionally, having two instances in the cloud service, we observed both node.exe processes crashing within minutes of each other. Always. However, if we had two server instances brought up on different days, then the "schedule" for crashing would be offset by the difference in the instance launch dates.
    Unable to trap more details about what's going on, we tried a different logging library - winston. winston has the additional feature of logging uncaughtExceptions, so it was not necessary to log those manually. Since winston does not have date-based roll-over, we went back to size-based roll-over, which obviously meant no more midnight crashes.
    Eventually, I spotted a random midday crash today. It did not coincide with a size-based roll-over event, but winston was able to log an interesting uncaughtException:
    "date": "Wed Jun 18 2014 06:26:12 GMT+0000 (Coordinated Universal Time)",
    "process": {
    "pid": 476,
    "uid": null,
    "gid": null,
    "cwd": "E:
    approot",
    "execPath": "E:\\approot
    node.exe",
    "version": "v0.8.26",
    "argv": ["E:\\approot\\node.exe", "E:\\approot\\server.js"],
    "memoryUsage":
    { "rss": 80433152, "heapTotal": 37682920, "heapUsed": 31468888 }
    "os":
    { "loadavg": [0, 0, 0], "uptime": 163780.9854492 }
    "trace": [],
    "stack": ["Error: EPERM, open 'c:\\logs\\socket1.log'"],
    "level": "error",
    "message": "uncaughtException: EPERM, open 'c:\\logs\\socket1.log'",
    "timestamp": "2014-06-18T06:26:12.572Z"
    The interesting question: the Node process _was_ writing to socket1.log all along; why would there suddenly be an EPERM error? On restart it could resume writing to the same log file. In previous cases it seemed more like a lack of permission to create a new log file.
    Any clues on what could possibly cause this, on a "scheduled" basis per server? Given that it happens so frequently, and in sync with sister instances in the cloud service, something is happening behind the scenes that I cannot put a finger on.
    thanks
    The melody of logic will always play out the truth. ~ Narumi Ayumu, Spiral

    Hi,
    That is strange. From your description, how many instances does your worker role have? Do you store the log files on your VM's local disk? To avoid this issue, the best choice would be to store your log files in Azure blob storage; then all log files would be kept in blob storage. For how to use Azure blob storage, please see this doc:
    http://azure.microsoft.com/en-us/documentation/articles/storage-introduction/
    Please try it.
    If I misunderstood, please let me know.
    Regards,
    Will

  • What is stored in a transaction log file?

    What does the transaction log file store? Is it the blocks of transactions to be executed, is it a snapshot of the records before the execution of a transaction begins, or is it just the statements found in a transaction block? Please advise.
    mayooran99

    Yes, it stores all the values, before and after modification. You first have to understand the need for the transaction log; then it will start to become apparent what is stored in it.
    Before a transaction can be committed, SQL Server makes sure that all the information is hardened in the transaction log, so if a crash happens, it can still recover/restore the data.
    When you update some data, the data is fetched into memory and updated; the transaction log makes a note of it (before and after values, etc.). At this point the changes are done but not yet physically present in the data page; they are present only in memory. So if a crash happens (before a checkpoint or the lazy writer could flush them), you would lose that data. This is where the transaction log comes in handy, because all this information is stored in the physical transaction log file. So when your server comes back up, if the transaction was committed, the transaction log rolls this information forward.
    When a checkpoint or lazy writer runs, in simple recovery the transaction log for that txn is cleared out, if there are no other, older active txns.
    In full recovery, you take log backups to clear that txn from the transaction log.
    Writing to the transaction log is generally fast because it is written sequentially; it tracks the data page numbers, LSNs, and other details that were modified and makes a note of them.
    Similar to the data cache, there is also a transaction log cache that makes this process faster. Before a transaction is committed, it waits until everything related to the txn has been written to the transaction log on disk.
    I advise you to pick up Kalen Delaney's SQL Server internals book and read the logging and recovery chapter for a better understanding.
    Hope it helps!
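    If you want to see this for yourself on a test database, the undocumented fn_dblog function exposes individual log records (a sketch; fine for poking around, not something to rely on in production):
    SELECT TOP (10) [Current LSN], Operation, Context, [Transaction ID]
    FROM sys.fn_dblog(NULL, NULL);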

  • Log file

    Dears,
    While running the finance module using DAC, this task failed.
    Log file for your reference:
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [ORA_R1213] for session parameter:[$DBConnection_OLTP].
    DIRECTOR> VAR_27028 Use override value [ORA_R1213.DATAWAREHOUSE.SDE_ORAR1213_Adaptor.SDE_ORA_CodeDimension_Bank_Cat.log] for session parameter:[$PMSessionLogFile].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[MPLT_ADI_CODES.$$CATEGORY].
    DIRECTOR> VAR_27028 Use override value [1000] for mapping parameter:[MPLT_SA_ORA_CODES.$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[MPLT_SA_ORA_CODES.$$TENANT_ID].
    DIRECTOR> TM_6014 Initializing session [SDE_ORA_CodeDimension_Bank_Cat] at [Sat Apr 13 20:10:18 2013].
    DIRECTOR> TM_6683 Repository Name: [Oracle_BI_DW_Base]
    DIRECTOR> TM_6684 Server Name: [Oracle_BI_DW_Base_Integration_Service]
    DIRECTOR> TM_6686 Folder: [SDE_ORAR1213_Adaptor]
    DIRECTOR> TM_6685 Workflow: [SDE_ORA_CodeDimension_Bank_Cat] Run Instance Name: [] Run Id: [2791]
    DIRECTOR> TM_6101 Mapping name: SDE_ORA_CodeDimension_Ap_Lookup [version 1].
    DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
    DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
    DIRECTOR> TM_6827 [E:\Informatica\9.0.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_CodeDimension_Bank_Cat].
    DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6708 Using configuration property [DisableDB2BulkMode,Yes]
    DIRECTOR> TM_6708 Using configuration property [ServerPort,4006]
    DIRECTOR> TM_6708 Using configuration property [SiebelUnicodeDB,apps@R12PLY baw@orcl]
    DIRECTOR> TM_6708 Using configuration property [overrideMptlVarWithMapVar,Yes]
    DIRECTOR> TM_6703 Session [SDE_ORA_CodeDimension_Bank_Cat] is run by 64-bit Integration Service [node01_OBIEETESTAPP], version [9.0.1 HotFix2], build [1111].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [ASCII]
    MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6151 The session sort order is [Binary].
    MAPPING> TM_6155 Using HIGH precision processing.
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6187 Session target-based commit interval is [10000].
    MAPPING> TM_6307 DTM error log disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> DBG_21075 Connecting to database [orcl], user [baw]
    MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_CodeDimension_Bank_Cat]
    DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
    MANAGER> PETL_24004 Starting pre-session tasks. : (Sat Apr 13 20:10:18 2013)
    MANAGER> PETL_24027 Pre-session task completed successfully. : (Sat Apr 13 20:10:18 2013)
    DIRECTOR> PETL_24006 Starting data movement.
    MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 128000 bytes.
    LKPDP_1> DBG_21097 Lookup Transformation [mplt_ADI_Codes.Lkp_Master_Map]: Default sql to create lookup cache: SELECT MASTER_CODE,DATASOURCE_NUM_ID,SOURCE_CODE,CATEGORY,LANGUAGE_CODE FROM W_MASTER_MAP_D ORDER BY DATASOURCE_NUM_ID,SOURCE_CODE,CATEGORY,LANGUAGE_CODE,MASTER_CODE
    LKPDP_3> DBG_21312 Lookup Transformation [mplt_ADI_Codes.Lkp_W_CODE_D]: Lookup override sql to create cache: SELECT W_CODE_D.SOURCE_NAME_1 AS SOURCE_NAME_1, W_CODE_D.SOURCE_NAME_2 AS SOURCE_NAME_2, W_CODE_D.MASTER_DATASOURCE_NUM_ID AS MASTER_DATASOURCE_NUM_ID, W_CODE_D.MASTER_CODE AS MASTER_CODE, W_CODE_D.MASTER_VALUE AS MASTER_VALUE, W_CODE_D.W_INSERT_DT AS W_INSERT_DT, W_CODE_D.TENANT_ID AS TENANT_ID, W_CODE_D.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, W_CODE_D.SOURCE_CODE AS SOURCE_CODE, W_CODE_D.CATEGORY AS CATEGORY, W_CODE_D.LANGUAGE_CODE AS LANGUAGE_CODE FROM W_CODE_D
    WHERE
    W_CODE_D.CATEGORY IN () ORDER BY DATASOURCE_NUM_ID,SOURCE_CODE,CATEGORY,LANGUAGE_CODE,SOURCE_NAME_1,SOURCE_NAME_2,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,TENANT_ID
    LKPDP_2> DBG_21097 Lookup Transformation [mplt_ADI_Codes.Lkp_Master_Code]: Default sql to create lookup cache: SELECT MASTER_VALUE,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,CATEGORY,LANGUAGE_CODE FROM W_MASTER_CODE_D ORDER BY MASTER_DATASOURCE_NUM_ID,MASTER_CODE,CATEGORY,LANGUAGE_CODE,MASTER_VALUE
    LKPDP_1> TE_7212 Increasing [Index Cache] size for transformation [mplt_ADI_Codes.Lkp_Master_Map] from [1000000] to [2611200].
    LKPDP_1> TE_7212 Increasing [Data Cache] size for transformation [mplt_ADI_Codes.Lkp_Master_Map] from [2000000] to [2007040].
    LKPDP_3> TE_7212 Increasing [Index Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [1000000] to [2611200].
    LKPDP_3> TE_7212 Increasing [Data Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [2000000] to [2007040].
    LKPDP_2> TE_7212 Increasing [Index Cache] size for transformation [mplt_ADI_Codes.Lkp_Master_Code] from [1000000] to [2611200].
    LKPDP_2> TE_7212 Increasing [Data Cache] size for transformation [mplt_ADI_Codes.Lkp_Master_Code] from [2000000] to [2007040].
    READER_1_1_1> DBG_21438 Reader: Source is [R12PLY], user [apps]
    READER_1_1_1> BLKR_16003 Initialization completed successfully.
    WRITER_1_*_1> WRT_8147 Writer: Target is database [orcl], user [baw], bulk mode [OFF]
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL INSERT statement:
    INSERT INTO W_CODE_D(DATASOURCE_NUM_ID,SOURCE_CODE,SOURCE_CODE_1,SOURCE_CODE_2,SOURCE_CODE_3,SOURCE_NAME_1,SOURCE_NAME_2,CATEGORY,LANGUAGE_CODE,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,W_UPDATE_DT,TENANT_ID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL UPDATE statement:
    UPDATE W_CODE_D SET SOURCE_CODE_1 = ?, SOURCE_CODE_2 = ?, SOURCE_CODE_3 = ?, SOURCE_NAME_1 = ?, SOURCE_NAME_2 = ?, MASTER_DATASOURCE_NUM_ID = ?, MASTER_CODE = ?, MASTER_VALUE = ?, W_INSERT_DT = ?, W_UPDATE_DT = ?, TENANT_ID = ? WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL DELETE statement:
    DELETE FROM W_CODE_D WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
    WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_CODE_D]
    WRITER_1_*_1> WRT_8003 Writer initialization complete.
    READER_1_1_1> BLKR_16007 Reader run started.
    READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_Codes_Ap_Lookup.Sq_Ap_Lookup_Codes] User specified SQL Query [SELECT AP_LOOKUP_CODES.LOOKUP_CODE, AP_LOOKUP_CODES.LOOKUP_TYPE, AP_LOOKUP_CODES.DESCRIPTION
    FROM
    AP_LOOKUP_CODES
    WHERE
    LOOKUP_TYPE =   'ACCOUNT TYPE']
    READER_1_1_1> RR_4049 SQL Query issued to database : (Sat Apr 13 20:10:19 2013)
    WRITER_1_*_1> WRT_8005 Writer run started.
    WRITER_1_*_1> WRT_8158
    *****START LOAD SESSION*****
    Load Start Time: Sat Apr 13 20:10:19 2013
    Target tables:
    W_CODE_D
    READER_1_1_1> RR_4050 First row returned from database to reader : (Sat Apr 13 20:10:19 2013)
    READER_1_1_1> BLKR_16019 Read [4] rows, read [0] error rows for source table [AP_LOOKUP_CODES] instance name [mplt_BC_ORA_Codes_Ap_Lookup.AP_LOOKUP_CODES]
    READER_1_1_1> BLKR_16008 Reader run completed.
    LKPDP_3> TM_6660 Total Buffer Pool size is 609824 bytes and Block size is 65536 bytes.
    LKPDP_3:READER_1_1> DBG_21438 Reader: Source is [orcl], user [baw]
    LKPDP_3:READER_1_1> BLKR_16003 Initialization completed successfully.
    LKPDP_3:READER_1_1> BLKR_16007 Reader run started.
    LKPDP_3:READER_1_1> RR_4049 SQL Query issued to database : (Sat Apr 13 20:10:19 2013)
    LKPDP_3:READER_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    LKPDP_3:READER_1_1> RR_4035 SQL Error [
    ORA-00936: missing expression
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT W_CODE_D.SOURCE_NAME_1 AS SOURCE_NAME_1, W_CODE_D.SOURCE_NAME_2 AS SOURCE_NAME_2, W_CODE_D.MASTER_DATASOURCE_NUM_ID AS MASTER_DATASOURCE_NUM_ID, W_CODE_D.MASTER_CODE AS MASTER_CODE, W_CODE_D.MASTER_VALUE AS MASTER_VALUE, W_CODE_D.W_INSERT_DT AS W_INSERT_DT, W_CODE_D.TENANT_ID AS TENANT_ID, W_CODE_D.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, W_CODE_D.SOURCE_CODE AS SOURCE_CODE, W_CODE_D.CATEGORY AS CATEGORY, W_CODE_D.LANGUAGE_CODE AS LANGUAGE_CODE FROM W_CODE_D
    WHERE
    W_CODE_D.CATEGORY IN () ORDER BY DATASOURCE_NUM_ID,SOURCE_CODE,CATEGORY,LANGUAGE_CODE,SOURCE_NAME_1,SOURCE_NAME_2,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,TENANT_ID
    Oracle Fatal Error
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT W_CODE_D.SOURCE_NAME_1 AS SOURCE_NAME_1, W_CODE_D.SOURCE_NAME_2 AS SOURCE_NAME_2, W_CODE_D.MASTER_DATASOURCE_NUM_ID AS MASTER_DATASOURCE_NUM_ID, W_CODE_D.MASTER_CODE AS MASTER_CODE, W_CODE_D.MASTER_VALUE AS MASTER_VALUE, W_CODE_D.W_INSERT_DT AS W_INSERT_DT, W_CODE_D.TENANT_ID AS TENANT_ID, W_CODE_D.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, W_CODE_D.SOURCE_CODE AS SOURCE_CODE, W_CODE_D.CATEGORY AS CATEGORY, W_CODE_D.LANGUAGE_CODE AS LANGUAGE_CODE FROM W_CODE_D
    WHERE
    W_CODE_D.CATEGORY IN () ORDER BY DATASOURCE_NUM_ID,SOURCE_CODE,CATEGORY,LANGUAGE_CODE,SOURCE_NAME_1,SOURCE_NAME_2,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,TENANT_ID
    Oracle Fatal Error].
    LKPDP_3:READER_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    LKPDP_3:READER_1_1> BLKR_16004 ERROR: Prepare failed.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_ADI_Codes.Lkp_W_CODE_D], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_ADI_Codes.Exp_Master_Code_Lookup], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_ADI_Codes.Exp_Master_Code_Lookup], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_ADI_Codes.Exp_Master_Map_Lookup], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_ADI_Codes.Exp_Master_Map_Lookup], and the session is terminating.
    WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_CODE_D] at end of load
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_SA_ORA_Codes.Fil_Code_Valid], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    WRITER_1_*_1> WRT_8035 Load complete time: Sat Apr 13 20:10:19 2013
    LOAD SUMMARY
    ============
    WRT_8036 Target: W_CODE_D (Instance Name: [W_CODE_D])
    WRT_8044 No data loaded for this target
    WRITER_1_*_1> WRT_8043 ****END LOAD SESSION*****
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_SA_ORA_Codes.Fil_Code_Valid], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_SA_ORA_Codes.Exp_Code_Cleanse], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_SA_ORA_Codes.Exp_Code_Cleanse], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    MANAGER> PETL_24007 Received request to stop session run. Attempting to stop worker threads.
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_SA_ORA_Codes.Exp_Code_Name_Resolution], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_SA_ORA_Codes.Exp_Code_Name_Resolution], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_BC_ORA_Codes_Ap_Lookup.Exp_Ap_Lookup_Codes], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_BC_ORA_Codes_Ap_Lookup.Exp_Ap_Lookup_Codes], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_BC_ORA_Codes_Ap_Lookup.Sq_Ap_Lookup_Codes], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_BC_ORA_Codes_Ap_Lookup.Sq_Ap_Lookup_Codes], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Sat Apr 13 20:10:19 2013]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [mplt_BC_ORA_Codes_Ap_Lookup.Sq_Ap_Lookup_Codes], and the session is terminating.
    TRANSF_1_1_1> DBG_21511 TE: Fatal Transformation Error.
    MANAGER> PETL_24031
    ***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
    Thread [READER_1_1_1] created for [the read stage] of partition point [mplt_BC_ORA_Codes_Ap_Lookup.Sq_Ap_Lookup_Codes] has completed. The total run time was insufficient for any meaningful statistics.
    Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [mplt_BC_ORA_Codes_Ap_Lookup.Sq_Ap_Lookup_Codes] has completed. The total run time was insufficient for any meaningful statistics.
    Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_CODE_D] has completed. The total run time was insufficient for any meaningful statistics.
    MAPPING> CMN_1793 The index cache size that would hold [0] rows in the lookup table for [mplt_ADI_Codes.Lkp_W_CODE_D], in memory, is [0] bytes
    MAPPING> CMN_1792 The data cache size that would hold [0] rows in the lookup table for [mplt_ADI_Codes.Lkp_W_CODE_D], in memory, is [0] bytes
    MANAGER> PETL_24005 Starting post-session tasks. : (Sat Apr 13 20:10:19 2013)
    MANAGER> PETL_24007 Received request to stop session run. Attempting to stop worker threads.
    MANAGER> PETL_24007 Received request to stop session run. Attempting to stop worker threads.
    MANAGER> PETL_24029 Post-session task completed successfully. : (Sat Apr 13 20:10:19 2013)
    MAPPING> TE_7216 Deleting cache files [PMLKUP13765_131084_0_2791W32] for transformation [mplt_ADI_Codes.Lkp_Master_Map].
    MAPPING> TE_7216 Deleting cache files [PMLKUP13765_131081_0_2791W32] for transformation [mplt_ADI_Codes.Lkp_W_CODE_D].
    MAPPING> TE_7216 Deleting cache files [PMLKUP13765_131083_0_2791W32] for transformation [mplt_ADI_Codes.Lkp_Master_Code].
    MAPPING> TM_6018 The session completed with [0] row transformation errors.
    MANAGER> PETL_24002 Parallel Pipeline Engine finished.
    DIRECTOR> PETL_24013 Session run completed with failure.
    DIRECTOR> TM_6022
    SESSION LOAD SUMMARY
    ================================================
    DIRECTOR> TM_6252 Source Load Summary.
    DIRECTOR> CMN_1740 Table: [Sq_Ap_Lookup_Codes] (Instance Name: [mplt_BC_ORA_Codes_Ap_Lookup.Sq_Ap_Lookup_Codes])
         Output Rows [4], Affected Rows [4], Applied Rows [4], Rejected Rows [0]
    DIRECTOR> TM_6253 Target Load Summary.
    DIRECTOR> CMN_1740 Table: [W_CODE_D] (Instance Name: [W_CODE_D])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6023
    ===================================================
    DIRECTOR> TM_6020 Session [SDE_ORA_CodeDimension_Bank_Cat] completed at [Sat Apr 13 20:10:20 2013].

    Hi 966148,
    All your code combination tasks use Category as the parameter that DAC passes in.
    If you did not follow the instructions I gave in
    SDE_ORAR1213_Adaptor.SDE_ORA_GLBalanceFact_Full
    then all of your code tasks will fail with a database driver error.
    If you did follow them, please close the thread by confirming it fixed your issue, and mark the reply as helpful or correct.
    Regards,
    Veeresh Rayan

  • How to find out when a log file was shrunk in MSSQL

    hello,
    In my SAP system someone shrank the log file from 100 GB to 5 GB. How can we check when this was done?
    Regards,
    ARNS.

    hi,
    Did you check the log file in the SAP directory? There should be an entry recording who changed the size and when.
    Also:
    Go to the screen where you usually change the log file size, put the cursor on that field, press F1, and go to the technical settings screen to get the program name, table name, and field name.
    Then use SE11 to open the table and check whether a changed-by value is recorded for that table.
    You can also open the program and debug the change-log-file processing block to see which table the change is written to.
    One point of caution here:
    The size of the application server's system log is determined by the SAP profile parameters below. Once the current system log reaches the maximum file size, it is moved to the old_file and a new system log file is created. How many days of messages the system log holds depends on the volume of system log activity and the maximum file size. Once messages roll off the current and old files, they are no longer retrievable.
    rslg/local/file              /usr/sap/<SID>/D*/log/SLOG<SYSNO>
    rslg/local/old_file          /usr/sap/<SID>/D*/log/SLOGO<SYSNO>
    rslg/max_diskspace/local     1000000
    rslg/central/file            /usr/sap/<SID>/SYS/global/SLOGJ
    rslg/central/old_file        /usr/sap/<SID>/SYS/global/SLOGJO
    rslg/max_diskspace/central   4000000

  • Routing logs to individual log files in a multi rules_file MaxL import

    Hi Gurus,
    I have been away from this forum for a long time. I have a situation here and am trying to find the best approach for operational benefits.
    We have an ASO cube (historical) that keeps 24 months of snapshot data and is refreshed monthly on a rolling 24-month basis. The cube size is around 18.5 GB and the input-level data is around 13 GB. For the monthly refresh, the current process rebuilds the cube from scratch, deleting the 1/24 snapshot before adding last month's snapshot. The entire process takes 13 hours of processing time because the server does not have enough CPUs to support parallel operations.
    Since we recently moved to 11.1.2.3 and now have ample CPUs (8) and RAM (16 GB), I'd like to take advantage of parallelism and go for an incremental load. Since the outline build is EPMA-driven, I'd first rebuild only the dimensions with all data (which restructures the DB after the metadata refresh) so that I keep my history intact, and then load only the last month's data after clearing out the first snapshot.
    My MaxL script looks like this:
    /* Set up logs */
    set timestamp on;
    spool on to $(mxlLog).log;
    /* Connect to Essbase */
    login $key $essUser $key $essPwd on $essServer;
    alter application "$essApp" load database "$essDB";
    /* Disable User Access to DB */
    alter application "$essApp" disable connects;
    /* Unlock all objects */
    alter database "$essApp"."$essDB" unlock all objects;
    /* Clear all data for previous month*/
    alter database "$essApp"."$essDB" clear data in region 'CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})' physical;
    /* Load SQL Data */
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using multiple rules_file 'LOADDATA','LOADJNLS','LOADFX','LOAD_J1','LOAD_J2','LOAD_J3','LOADDELQ' to load_buffer_block starting with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
    /* Selects and build an aggregation that permits the database to grow by no more than 300% */
    execute aggregate process on database "$essApp"."$essDB" stopping when total_size exceeds 4 enable alternate_rollups;
    /* build query tracking views */
    execute aggregate build on database "$essApp"."$essDB" using view_file 'gw';
    /* Enable Query Tracking */
    alter database "$essApp"."$essDB" enable query_tracking;
    /* Enable User Access to DB */
    alter application "$essApp" enable connects;
    logout;
    exit;
    I am able to achieve some performance gain, but it is not satisfactory. So I have a couple of questions:
    1. Can the highlighted statements (the partial clear and the multi-rules import) be tuned further? My main problem is clearing only one month's snapshot, where I need to clear one scenario for the designated first month.
    2. With the multiple rules_file statement, how do I write the log of each load rule to a separate log file instead of one? My previous process wrote an error log per load rule to a separate file and consolidated them at the end of the batch run into a single file for the whole batch execution (see the sketch below this post).
    Appreciate any help in this regard.
    Thanks,
    DD
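
    One way to get a separate error file per load rule (an untested sketch reusing the variables from the script above; the buffer ids, resource_usage values, and error-file names are illustrative) is to split the single multi-rules import into one import per rules file, each writing to its own load buffer and error file, and then commit all buffers in a single pass:
    /* one load buffer per rules file; the resource_usage values of all
       buffers open at the same time must not sum to more than 1.0 */
    alter database "$essApp"."$essDB" initialize load_buffer with buffer_id 1 resource_usage 0.1;
    alter database "$essApp"."$essDB" initialize load_buffer with buffer_id 2 resource_usage 0.1;
    /* one import per rules file, each with its own error log */
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using server rules_file 'LOADDATA' to load_buffer with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using server rules_file 'LOADJNLS' to load_buffer with buffer_id 2 on error write to "$(mxlLog)_LOADJNLS.err";
    /* ...repeat for the remaining rules files with buffer_ids 3-7... */
    /* commit the contents of all buffers into the cube in one pass,
       listing every buffer_id used above */
    import database "$essApp"."$essDB" data from load_buffer_array with buffer_id 1, 2;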

    Thanks Celvin. I'd rather route the MaxL log into one log file and consolidate it into the batch logs instead of using multiple log files.
    Regarding the partial clear:
    My worry is that I first tried the partial clear with 'logical', and that too took a considerable amount of time; the difference between the logical and physical clears was only 15-20 minutes. FYI, I have 31 dimensions in this cube, and the Scenario->ACTUAL and Period->&CLEAR_PERIOD (a substitution variable) members used in the MDX clear script belong to dynamic-hierarchy dimensions.
    Is there a way to rewrite the clear-data MDX so that it clears faster than this:
    <<CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})>>
    Does this clear MDX have any effect on the dynamic/stored hierarchy nature of the dimension, and if not, what would be an optimized way to write it?
    Thanks,
    DD
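
    For what it's worth, one thing to try (a hedged sketch, not a guaranteed speedup) is expressing the region as a single symmetric tuple set rather than a CrossJoin of two singleton sets. Both forms select exactly the same region, so any difference comes down to how Essbase evaluates the expression against the outline:
    /* same region as the CrossJoin above, written as one tuple; test on a copy first */
    alter database "$essApp"."$essDB" clear data in region '{([ACTUAL],[&CLEAR_PERIOD])}' physical;
    Since the logical and physical clears differ by only 15-20 minutes here, most of the time is probably going into evaluating the region across 31 dimensions rather than into the clear itself, so measuring both variants is the only way to know.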

  • Need to understand when redo log file content is written to the datafiles

    Hi all,
    I have a question about when the contents of the redo log files are written to the datafiles.
    Supposing that the database is in NOARCHIVELOG mode and all redo log files are filled, the official Oracle database documentation says that *a filled redo log file is available
    after the changes recorded in it have been written to the datafiles*, which seems to mean that we just need to have all the redo log files filled to "*commit*" changes to the database.
    Thanks for help
    Edited by: rachid on Sep 26, 2012 5:05 PM

    rachid wrote:
    the official oracle database documentation says that: a filled redo log file is available after the changes recorded in it have been written to the datafiles
    It helps if you include a URL to the page where you found this quote (if you were using the online html manuals).
    The wording is poor and should be modified to something like:
    "a filled online redo log file is available for re-use after all the data blocks that have been changed by change vectors recorded in the log file have been written to the data files"
    Remember if a data block that is NOT an undo block has been changed by a transaction, then an UNDO block has been changed at the same time, and both change vectors will be in the redo log file. The redo log file cannot, therefore, be re-used until the data block and the associated UNDO block have been written to disc. The change to the data block can thus be rolled back (uncommitted changes can be written to data files) because the UNDO is also available on disc if needed.
    If you find the manuals too fragmented to follow, you may find that my book, Oracle Core, offers a narrative description that is easier to comprehend.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core
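
    To watch this mechanism on a live instance, one quick check (a hedged sketch; requires a privileged account) is the STATUS column of v$log: a filled group stays ACTIVE while its changed data and undo blocks are still needed for instance recovery, and flips to INACTIVE once DBWR has written them to the datafiles, at which point the group can be reused:
    -- CURRENT  = the group LGWR is writing now
    -- ACTIVE   = filled, but still required for instance recovery
    -- INACTIVE = filled and no longer required; available for reuse
    SELECT group#, sequence#, status, first_change#
    FROM   v$log
    ORDER  BY sequence#;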
