Batch log limit?

FCP is suddenly refusing to log more than 15 clips when I import a batch list. Can anyone explain why this is happening?
I am using FCP 5.1.4 on a G5 Dual 2.7GHz, with OS X 10.4.11. My batch lists are created in Excel 2004 and everything logs OK up to the 15th clip in any batch. The 16th clip onwards does not log, so I have been making multiple batch lists.
I have never experienced this before and almost always have many more than 15 clips in any batch.
The error message says 'problem reading text file', yet it has clearly read the beginning of the text file with no problem!
Most frustrated here, can anyone help?
Thanks very much in advance.
Dominic

Following an email from Michael, I now have this resolved.
On the original batch file I had only included one comment. On Michael's suggestion I put a comment against every clip and the file then loaded exactly as it should. Very odd bug that one!
Thanks very much Michael. Is there a way I can credit you as having resolved my issue? I gave you a 'helpful' for your initial reply, but you should have a green star too.
Best
Dominic

Similar Messages

  • Batch logging -- what do you use?

    I have a new 10g database. We want to write some long-running overnight batches, and I wish to introduce a logging process. If we switch this on (i.e. set logging = yes), I want each process to write error and information messages to a new batch log file.
    In other databases we have written some in-house batch logger stuff, which is a pain to support. Surely every major installation must do this, and I don't want to write yet another piece of logging software.
    Q) Is there a standard logger you all use? Does Oracle provide one?
    Note I have already looked at Log4PLSQL, but this seems to rely on DBMS_OUTPUT, which I am not happy with as you can easily break the buffer limit.

    > No, I don't think so. I would have thought that all major batch processing
    > systems have this requirement in general, to write error/info/debug
    > information to a file or table.
    True. Different applications may well have different requirements, however. Generic frameworks are great for logging unstructured information. Custom frameworks tend to be more useful when you need to log more structured information (e.g. writing exceptions to a table, logging application-specific status messages for monitoring, etc.).
    > I am aware of this approach but it does have a few limitations, mainly:
    > a) if the tablespace is full then you are unlikely to write to the table
    True. However, tablespaces filling up generally trigger pages to DBAs, so it's relatively unlikely that anything your application logs in this situation will be beneficial.
    > b) writing to the table has to be handled independently of the commit
    > strategy of the main program
    Not if you use autonomous transactions; see the sketch at the end of this reply.
    > c) we have a requirement that this information is written to a file
    It seems somewhat silly to me to log to a flat file from a database -- having error information in tables lets you query it much more efficiently than you can search a bunch of flat files. File I/O also tends to be quite expensive relative to table writes, which can really hurt batch processes. Log4PLSQL, however, does permit you to log to the alert log or to a database trace file.
    Justin
    Looks like APC beat me to it and covered the same points I did. Glad to see we generally agree.
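    For what it's worth, a minimal autonomous-transaction logger might look like the sketch below; the table and procedure names are hypothetical:
    -- Hypothetical log table:
    -- CREATE TABLE batch_log
    -- (
    --     log_time  TIMESTAMP DEFAULT SYSTIMESTAMP,
    --     severity  VARCHAR2(10),
    --     message   VARCHAR2(4000)
    -- );
    CREATE OR REPLACE PROCEDURE write_log(
        p_severity IN VARCHAR2,
        p_message  IN VARCHAR2)
    AS
        -- Runs in its own transaction, so the log row survives
        -- even if the calling batch later rolls back.
        PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
        INSERT INTO batch_log (severity, message)
        VALUES (p_severity, p_message);
        COMMIT;  -- commits only this autonomous transaction
    END write_log;
    /
    Because the procedure commits independently of the caller, the main program's commit strategy is unaffected, which addresses point b) above.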

  • Unable to give the batch log name more than 6 char

    EPMA Batch Log file
    I am unable to give the batch log name more than 6 characters; if I give more than 6 characters, it doesn't log anything to the log file. It was working fine in 11.1.2, but we are facing this issue in 11.1.2.1.
    Please advise.

    We had the same issue. We downloaded and installed the patch for 11.1.2.1 EPMA (Patch: 11804477) and that corrected the issue. Hope this helps you.
    --Matt

  • Error Message: Enter Plant, ME 083 in BAPI_PR_CREATE in Batch Log of batch

    Hi Experts,
    I am creating Purchase Requisitions (EBAN table) using BAPI_PR_CREATE from my "Z" program.
    They are created fine, but in the job log (when scheduled as a batch job) I get the error
    "ENTER PLANT" (message ID ME, number 083).
    I tried all combinations, like populating a value into the procuring plant and the supplying plant, in the following structures of the BAPI (passing the following tables):
             pritem    = it_bapimereqitemimp
             pritemx   = it_bapimereqitemx
             pritemexp = it_bapimereqitem
             PRITEMSOURCE = it_BAPIMEREQSOURCE
    Still getting the same error!
    I debugged, but could not figure it out.
    Why is this so? How can it be fixed? Is there an SAP Note for it?
    Thank you

    Hi,
    I have the same issue.
    Did you ever figure this out?

  • Batch file limit

    What is the maximum number of files you can open when using batch-processing?
    I need to process more than 100,000 files, one by one. The files are stored in a folder structure. I tried Automate -> Batch -> Source: Folder, but it will not start with the first image.
    Are there other solutions (Scripting, Bridge, more memory maybe)?

    I read about the command-line limit, but I expected Photoshop to load each file separately and not all at once.
    Then I will have to let a piece of JavaScript load each file and process it. I will use the script instead of the batch processing.

  • Oracle JDBC batch size limit

    Hi,
    Does anyone know of any limitation on the size of the JDBC batch?
    We are facing a problem while storing large feeds to DB:
    - the feed file contains 77K records in it
    - the save to DB action puts all of them in one BATCH statement and tries to save
    - unfortunately, only a small number of records is saved in the DB, and the JDBC driver goes silent about the rest
    - the number of records saved out of the total count of 77K varies from machine to machine: on my machine it was something like 11K, on a testing machine it was something like 9K
    - we also know that on some machines even 40K records were saved
    The code fix for this is simple: just save in batches of 1K (or some other small amount) until all records are saved.
    But I would rather find out whether there is a JVM/JDBC configuration option to increase the number of records one batch statement can save at a time.
    If not, why the difference between machines? Could it be the amount of RAM available to the JVM on the different machines?
    So, does anyone have any idea?
    Thanks,
    Viorel

    Hello,
    This is a forum for TopLink/JPA, so you might have better luck posting your question in the JDBC forum here:
    Java Database Connectivity (JDBC)

  • History log limit

    In CS4, I could have as many undos as I liked, while there only seems to be a default of 20 in CS5 and CS6, or maybe I just can't find how to increase this limit.

    Hello, go to Edit (PC) / Photoshop (Mac) > Preferences > Performance. You can increase the number of history states up to 1000, if you have enough memory and scratch disk space.

  • How to write log information into SM37 batch job log

    Hi,
    I have a report running in batch mode, and I would like to log the start time and end time for some parts of the code (different function modules). I need to write this log information into the batch job log, so that I can check the time frame of my FMs.
    After searching SDN, I can only find information on how to write a log to the application log displayed in SLG1, but that's not what I want. I want to write batch log information and check it in SM37.
    If you have a solution or code to share, please do. Thanks a lot.
    Best Regards,
    Ben

    Hi Nitin
    Thanks for the reply. Could you explain it with some code?
    I tried to use the WRITE statement, but it did not work. I could not see the result in SM37:
    write: 'start of the FM1 processing'.
    FM1 code
    write: 'end of the FM1 processing'.
    Those two statements did not show up in SM37.
    1) How do I use an information message?
    2) How do I use the NEW-PAGE PRINT ON and PRINT OFF commands?
    I would appreciate it if you could write some code that I can use directly.
    Thanks a lot.
    Best Regards,
    Ben

  • Is this possible with LOG ERRORS?

    I have a procedure which does a bulk insert/update operation using MERGE (running on Oracle 10gR2). We want to silently log any failed inserts/updates but not roll back the entire batch, no matter how many inserts fail. So I am logging the exceptions via LOG ERRORS REJECT LIMIT UNLIMITED. This actually works fine.
    The one other aspect is that the procedure is called from Java, and although we want any and all good data to be committed, regardless of how many rows have bad data, we still want to notify the Java front end that not all records were inserted properly -- even something such as '150 rows were not processed.' So I am wondering if there is any way to still run the entire batch and log the errors, but still raise an error from the stored procedure.
    Here is the working code:
    CREATE TABLE merge_table
    (
        t_id     NUMBER(9,0),
        t_desc   VARCHAR2(100) NOT NULL
    );
    CREATE OR REPLACE TYPE merge_type IS OBJECT
    (
        type_id     NUMBER(9,0),
        type_desc   VARCHAR2(100)
    );
    /
    CREATE OR REPLACE TYPE merge_list IS TABLE OF merge_type;
    /
    -- Create Error Log.
    BEGIN
        DBMS_ERRLOG.CREATE_ERROR_LOG(
            dml_table_name      => 'MERGE_TABLE',
            err_log_table_name  => 'MERGE_TABLE_ERROR_LOG');
    END;
    /
    CREATE OR REPLACE PROCEDURE my_merge_proc_bulk(p_records IN merge_list)
    AS
    BEGIN
        MERGE INTO merge_table MT
        USING
        (
            SELECT
                type_id,
                type_desc
            FROM TABLE(p_records)
        ) R
        ON
        (
            MT.t_id = R.type_id
        )
        WHEN MATCHED THEN UPDATE
        SET
            MT.t_desc = R.type_desc
        WHEN NOT MATCHED THEN INSERT
        (
            MT.t_id,
            MT.t_desc
        )
        VALUES
        (
            R.type_id,
            R.type_desc
        )
        LOG ERRORS INTO MERGE_TABLE_ERROR_LOG ('MERGE') REJECT LIMIT UNLIMITED;
        COMMIT;
    END;
    /
    -- test script to execute procedure
    DECLARE
        l_list       merge_list := merge_list();
        l_size       NUMBER;
        l_start_time NUMBER;
        l_end_time   NUMBER;
    BEGIN
        l_size := 10000;
        DBMS_OUTPUT.PUT_LINE('Row size: ' || l_size || CHR(10));
        l_list.EXTEND(l_size);
        -- Create some test data.
        FOR i IN 1 .. l_size
        LOOP
            l_list(i) := merge_type(i,'desc ' || TO_CHAR(i));
        END LOOP;
        EXECUTE IMMEDIATE 'TRUNCATE TABLE MERGE_TABLE';
        EXECUTE IMMEDIATE 'TRUNCATE TABLE MERGE_TABLE_ERROR_LOG';
        -- Modify some records to simulate bad data/nulls not allowed for desc field  
        l_list(10).type_desc := NULL;
        l_list(11).type_desc := NULL;
        l_list(12).type_desc := NULL;
        l_list(13).type_desc := NULL;
        l_list(14).type_desc := NULL;
        l_start_time := DBMS_UTILITY.GET_TIME;   
        my_merge_proc_bulk(p_records => l_list);
        l_end_time := DBMS_UTILITY.GET_TIME;
        DBMS_OUTPUT.PUT_LINE('Bulk time: ' || TO_CHAR((l_end_time - l_start_time)/100) || ' sec. ' || CHR(10));
    END;
    /
    I tried this at the end of the procedure, but it does not work, probably because I am not using SAVE EXCEPTIONS:
        IF (SQL%BULK_EXCEPTIONS.COUNT > 0) THEN
            RAISE_APPLICATION_ERROR(-20105, SQL%BULK_EXCEPTIONS.COUNT || ' rows failed for the batch.');
        END IF;
    Also, the one thing we would like to have is the datetime logged for each failure in the ERROR_LOG table. We may be running several different batches overnight. Is it possible to alter the table to add this?
    Name                              Null?    Type
    --------------------------------- -------- --------------
    ORA_ERR_NUMBER$                            NUMBER
    ORA_ERR_MESG$                              VARCHAR2(2000)
    ORA_ERR_ROWID$                             ROWID
    ORA_ERR_OPTYP$                             VARCHAR2(2)
    ORA_ERR_TAG$                               VARCHAR2(2000)
    CHANNEL_ID                                 VARCHAR2(4000)
    CHANNEL_DESC                               VARCHAR2(4000)
    CHANNEL_CLASS                              VARCHAR2(4000)

    Ah yes, I remember. The guy needing the TABLE(p_records).
    Re: Merge possible from nested table?
    >
    > I tried this at the end of the procedure, but it does not work, probably because I am not using SAVE EXCEPTIONS:
    > IF (SQL%BULK_EXCEPTIONS.COUNT > 0) THEN
    >     RAISE_APPLICATION_ERROR(-20105, SQL%BULK_EXCEPTIONS.COUNT || ' rows failed for the batch.');
    > END IF;
    >
    Correct - you need to use SAVE EXCEPTIONS.
    > I know there is the FORALL command, but I figured there was a way to do this with MERGE, since the procedure does an update if a match is found instead.
    But you can use MERGE with FORALL and add SAVE EXCEPTIONS to handle your problem; see the sketch below.
    I still have a question as to the source of the PL/SQL table provided by the parameter. Is this table being prepared in another PL/SQL procedure, in Java, or how? Are you confident that the number of rows in the table will be small enough to avoid a memory issue?
    If it is prepared in PL/SQL, you could pass a ref cursor and then, in this proc, use a LOOP with a BULK COLLECT INTO and a LIMIT clause to do the processing.
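    To make "MERGE with FORALL" concrete, here is a minimal sketch (procedure name hypothetical). It assumes the object-type attributes are first unpacked into scalar collections, since FORALL on 10g cannot reference attributes of collection elements directly:
    CREATE OR REPLACE PROCEDURE my_merge_proc_forall(p_records IN merge_list)
    AS
        dml_errors EXCEPTION;
        PRAGMA EXCEPTION_INIT(dml_errors, -24381);  -- ORA-24381: error(s) in array DML
        TYPE t_id_tab   IS TABLE OF merge_table.t_id%TYPE   INDEX BY PLS_INTEGER;
        TYPE t_desc_tab IS TABLE OF merge_table.t_desc%TYPE INDEX BY PLS_INTEGER;
        l_ids   t_id_tab;
        l_descs t_desc_tab;
    BEGIN
        -- Unpack the object attributes into scalar collections for FORALL.
        FOR i IN 1 .. p_records.COUNT LOOP
            l_ids(i)   := p_records(i).type_id;
            l_descs(i) := p_records(i).type_desc;
        END LOOP;
        FORALL i IN 1 .. l_ids.COUNT SAVE EXCEPTIONS
            MERGE INTO merge_table MT
            USING (SELECT l_ids(i) AS type_id, l_descs(i) AS type_desc FROM dual) R
            ON (MT.t_id = R.type_id)
            WHEN MATCHED THEN UPDATE SET MT.t_desc = R.type_desc
            WHEN NOT MATCHED THEN INSERT (MT.t_id, MT.t_desc)
            VALUES (R.type_id, R.type_desc);
        COMMIT;
    EXCEPTION
        WHEN dml_errors THEN
            -- The rows that succeeded are still pending: commit them,
            -- then tell the Java caller how many rows failed.
            COMMIT;
            RAISE_APPLICATION_ERROR(-20105,
                SQL%BULK_EXCEPTIONS.COUNT || ' rows failed for the batch.');
    END;
    /
    As for the datetime question: the error log table created by DBMS_ERRLOG is an ordinary table, so as far as I'm aware you can simply add a defaulted column to it, e.g. ALTER TABLE merge_table_error_log ADD (logged_at TIMESTAMP DEFAULT SYSTIMESTAMP); each error row logged afterwards should then pick up the insert time automatically.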

  • Timed Interval D1-IMD batch is not working in MDM

    Hi,
    This issue is regarding the CM-IMDD (duplicate of D1-IMD) Timed Interval batch in MDM, for converting PENDING IMDs to EXCEPTION or FINALIZED.
    When I run this batch with a 90-second timed interval, it creates the next batch number and the status goes to complete, but none of the records get processed; in the log, the threadId is null. But when I run the same batch non-timed-interval, the records are processed and the threadId in the log is not null.
    In threadpoolworker.log see below:
    - 2011-12-06 13:54:24,340 [DistributedCache:EventDispatcher] INFO (support.batch.DistributedJobExecuter) Creating initial run for batch control CM-IMDD
    - 2011-12-06 13:54:24,356 [DistributedCache:EventDispatcher] INFO (support.cluster.BatchClusterCache) updateEntryWithRunInfo key: 773891993 runId: BatchRun_Id(batchControlId: [CM-IMDD], batchNumber: 99, batchRerunNumber: 0) threadId: null
    - 2011-12-06 13:54:24,418 [DEFAULTWorker:3] INFO (batch.logging.BatchLogWriter) Sending standard output to E:\MDM\ouaf\sploutput\DEV9\CM-IMDD.20111206135655.5.THD1.stdout
    - 2011-12-06 13:54:24,418 [DEFAULTWorker:3] INFO (batch.logging.BatchLogWriter) Sending error output to E:\MDM\ouaf\sploutput\DEV9\CM-IMDD.20111206135655.5.THD1.stderr
    - 2011-12-06 13:54:24,497 [DEFAULTWorker:3] INFO (api.batch.AbstractThreadWorker) Batch thread started with holdable session for primary QueryIterator. Cursor will be refreshed every 15 minute(s).
    - 2011-12-06 13:54:24,747 [DEFAULTWorker:3] INFO (support.batch.BatchWorkInSessionExecutable) Preparing to remove work.
    - 2011-12-06 13:54:24,747 [DEFAULTWorker:3] INFO (support.batch.BatchWorkInSessionExecutable) Configuring the end of batch tread execution
    - 2011-12-06 13:54:24,747 [DEFAULTWorker:3] INFO (support.batch.BatchWorkInSessionExecutable) Closing log writer
    - 2011-12-06 13:54:34,278 [GeneralBatchWorkManagerWorker:1] INFO (api.batch.AbstractBatchNotifier) Using mail server properties from mail sender context definition
    - 2011-12-06 13:54:34,293 [GeneralBatchWorkManagerWorker:1] INFO (api.batch.AbstractBatchNotifier) Will use default email sender CM-EMAIL to send email notifications
    - 2011-12-06 13:54:34,293 [GeneralBatchWorkManagerWorker:1] INFO (support.batch.BatchNotificationHandler) Notification message of end of batch run for CM-IMDD has been sent to [email protected]
    Please help in this regard as soon as possible.
    Thanks in advance

    Hi Shaymal & VRMP,
    I have checked the system. The only problem I am facing is that the system does not show the next inspection date dialog box at the time of UD, where I could see the next inspection date being set by the system.
    Say I use QA07 (triggered manually) on 19.Jan.08. The next inspection date was 26.Jan.08 and Initial Run in Days was 10 (days); the system created the inspection lot on 19.Jan.08 successfully. At the time of UD it automatically set 29.Jan.08 as the next inspection date (i.e. 19.Jan.08 + 10 days) instead of 06.Feb.08 (i.e. 26.Jan.08 + 10 days), without showing the dialog box / suggesting the next inspection date.
    Regards
    Mobashir

  • Routing logs to individual log file in multi rules_file MaxL

    Hi Gurus,
    I have returned to this forum after a long time. I have a situation here, and I am trying to find the best approach for operational benefit.
    We have an ASO cube (historical) that keeps 24 months of snapshot data and is refreshed monthly on a rolling 24-month basis. The cube size is around 18.5 GB; the input-level data size is around 13 GB. For the monthly refresh, the current process rebuilds the cube from scratch and deletes the 1/24 snapshot as it adds last month's snapshot. The entire process takes 13 hours of processing time because the server doesn't have the number of CPUs to support parallel operations.
    Since we recently moved to 11.1.2.3, and have an ample number of CPUs (8) and RAM (16 GB), I'd like to take advantage of parallelism and will go for an incremental load. Since the outline build is EPMA-driven, I'd first like to rebuild the dimensions with all data (essentially restructuring the DB with data after the metadata refresh) so that I can keep my history intact, and only then load the last month's data after clearing out the first snapshot.
    My MaxL script looks like below:
    /* Set up logs */
    set timestamp on;
    spool on to $(mxlLog).log;
    /* Connect to Essbase */
    login $key $essUser $key $essPwd on $essServer;
    alter application "$essApp" load database "$essDB";
    /* Disable User Access to DB */
    alter application "$essApp" disable connects;
    /* Unlock all objects */
    alter database "$essApp"."$essDB" unlock all objects;
    /* Clear all data for previous month*/
    alter database "$essApp"."$essDB" clear data in region 'CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})' physical;
    /* Load SQL Data */
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using multiple rules_file 'LOADDATA','LOADJNLS','LOADFX','LOAD_J1','LOAD_J2','LOAD_J3','LOADDELQ' to load_buffer_block starting with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
    /* Selects and build an aggregation that permits the database to grow by no more than 300% */
    execute aggregate process on database "$essApp"."$essDB" stopping when total_size exceeds 4 enable alternate_rollups;
    /* build query tracking views */
    execute aggregate build on database "$essApp"."$essDB" using view_file 'gw';
    /* Enable Query Tracking */
    alter database "$essApp"."$essDB" enable query_tracking;
    /* Enable User Access to DB */
    alter application "$essApp" enable connects;
    logout;
    exit;
    I am able to achieve some performance gain, but it is not satisfactory, so I have a couple of questions.
    1. Can the statements above (in particular the physical clear) be tuned further? My main problem is clearing only one month's snapshot, where I need to clear one scenario and the designated first month.
    2. With the multiple rules_file statement, how do I write the log of each load rule to a separate log file instead of one? My previous process wrote an error log for each load rule to a separate file and consolidated them at the end of the batch run into a single file for the whole batch execution.
    Appreciate any help in this regard.
    Thanks,
    DD

    Thanks Celvin. I'd rather route the MaxL logs into one log file and consolidate that into the batch logs, instead of using multiple log files.
    Regarding the partial clear:
    My worry is that I first tried the partial clear with 'logical', and that too took a considerable amount of time; the difference between the logical and physical clear is only 15-20 minutes. FYI, I have 31 dimensions in this cube, and the MDX clear script that uses Scenario->ACTUAL and Period->&CLEAR_PERIOD (SubVar) is of dynamic hierarchy type.
    Is there a way I can rewrite the clear data MDX script in a better way, so that it clears faster than this:
    <<CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})>>
    Does this clear MDX have any effect on the dynamic/stored hierarchy nature of the dimension? If not, what would be the optimized way to write this MDX?
    Thanks,
    DD

  • Retail Datawarehouse batch INVALID ITEM/STORE REPORT

    Hi,
    After running DataWarehouse batch, in the DWI server sometimes appears the following two files in the arch directory (/app/retail/data/arch/done.20110726): itemfile, vatfile
    when opening itemfile, the following appears:
            INVALID ITEM/STORE REPORT
    This report indicates items that were sold at stores not stocking the items listed.
    All items listed were processed but one of the following actions should be taken:
    1) items should be set up in Retek at the stores listed
    2) items should be physically removed from the stores listed
           STORE                           ITEM
      0000000012                      100385251
            INVALID ITEM/STORE REPORT
    This report indicates items that were sold at stores not stocking the items listed.
    All items listed were processed but one of the following actions should be taken:
    1) items should be set up in Retek at the stores listed
    2) items should be physically removed from the stores listed
           STORE                           ITEM
      0000000013                      100385251
    when opening vatfile, the following appears:
    VAT WARNING AND ERROR MESSAGES FOR STORE 0000000012 ON TRANSACTION DATE: 20110714
    APPLICATION ERROR: Record#=0000006420: Table vat_item has no entry for item=100385251 *region=0001* related to store=0000000012
    VAT WARNING AND ERROR MESSAGES FOR STORE 0000000013 ON TRANSACTION DATE: 20110714
    APPLICATION ERROR: Record#=0000001720: Table vat_item has no entry for item=100385251 *region=0001* related to store=0000000013
    Since there explicitly seems to be a problem with the record for item=100385251 in table VAT_ITEM, I ran the following query in RMS:
    select *
    from vat_item
    where item = 100385251
    and it gave me these results:
    ITEM;*VAT_REGION*;ACTIVE_DATE;VAT_TYPE;VAT_CODE;VAT_RATE;CREATE_DATE;CREATE_ID;CREATE_DATETIME;LAST_UPDATE_DATETIME;LAST_UPDATE_ID
    *100385251;1*;20/07/2011;B;0;0;19/07/2011;LBARBOSA;19/07/2011 05:15:19 p.m.;19/07/2011 05:15:19 p.m.;LBARBOSA
    *100385251;1*;19/07/2011;B;1;12;19/07/2011;LBARBOSA;19/07/2011 04:35:37 p.m.;19/07/2011 04:35:37 p.m.;LBARBOSA
    100385251;2;19/07/2011;B;0;0;19/07/2011;LBARBOSA;19/07/2011 04:35:37 p.m.;19/07/2011 04:35:37 p.m.;LBARBOSA
    I also looked for related data in the RDW database, and some of it appears. Now I have no idea what causes the problem, nor its implications. I am writing to ask whether something similar has happened to any of you, and whether you have any idea about it.

    luisurea: I'm not sure that the itemfile and vatfile records you indicate are being produced by RETL DWI batch code such as slsildmex.ksh. I assume you're running version 13 of DWI/RDW? What matters to RDW is that every row from the incoming RDWT file which slsildmex.ksh consumes is output to the slsildmdm.txt file, and that every row from the slsildmdm.txt file is consumed and properly loaded to the RDW target table sls_item_lm_dm. DWI modules like slsildmex.ksh do have reject processing, for instance if an RMS.ITEM_LOC_SOH/ITEM_LOC record is not found for data in the RDWT file. In that case, the batch log and error (rfx/log and rfx/error) output of the run of slsildmex.ksh should have noted such a reject. I'm wondering if your itemfile and vatfile are artifacts of an RMS batch process such as posupld. If you're doing proper data validation from the RDWT file, to the slsildmdm.txt file, to the SLS_ITEM_LM_DM table in RDW, and the F_SLS_AMT (i.e. retail value) facts match how you have RMS/ReSA configured, then your sales integration to RDW is fine... and those files might be more related to RMS sales uploading. Hope that helps,
    Dan

  • Logging on NW04

    Hi,
    I'm trying to implement logging in a NetWeaver application.
    The application must be backwards compatible with SP2. The SP2 logging works fine, but I can't get the NetWeaver logging to work.
    Following are the steps I took (from help.sap.com):
    http://help.sap.com/saphelp_nw04/helpdata/en/e2/f410409f088f5ce10000000a155106/frameset.htm
    NetWeaver version:
    Stack 10
    Configuration of the Portal through visual admin:
    =================================================
    1. Navigate to Server->Services->LogConfigurator
    2. Click the 'To advanced mode' tab
    3. Click the 'Destinations' tab and clicked 'new' for a new Destination
      entered the following data:
      Name: FrameworkLog
      Type: FileLog
      Pattern: ./log/framework.log
      Limit: 500000
      Count: 5
      Severity: All
      Formatter: Anonymous[ListFormatter]
    4. Created a Location by choosing the "Location" tab
      Name: framework.logger
      Min: all
      Max: all
      Severity: all
    5. Assigned the Destination created in (3) to the new Location
    Configuration in the logger.xml file:
    =====================================
    added the following to the logger.xml file:
    the filename parameter is kept for backwards compatibility reasons.
    <Server>
      <Logger name="framework.logger"
              loggerInterface="com.sapportals.portal.prt.logger.ILogger"
              isActive="true"
              locationName = "framework.logger"
              >
       <LoggerClass className="com.sapportals.portal.prt.logger.SimpleFileLogger" level="ALL">
          <param filename="logs/framework.logger.log" append="false"/>
       </LoggerClass>
      </Logger>
    </Server>
    Usage in source code:
    =====================
    static Location location = Location.getLocation("framework.logger");
    location.debugT(xxx);
    The problem is:
    Even though I save everything in the Visual Administrator (after step 5), the assigned
    destination disappears if you, say, browse the log viewer and return to the Location.
    All log messages are written to the defaultTrace.
    I tried using a Category. Created a new Category in the Visual Admin and assigned the
    Destination created in step 3.
    The usage in the source code was
    static Category category = Category.getCategory("framework.logger");
    // then using the location attempted to write
    category.info(location, "xxxxx");
    Once again everything was written to defaultTrace. The one difference, however,
    was that an (empty) logfile was created and the Destination assigned to
    the Category did not 'get lost'.
    any hints as to what i'm doing wrong would be greatly appreciated.
    cheers
    michael

    Hi Michael,
    probably you didn't deactivate ForceSingleTraceFile, see https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/events/webinars/using logging and tracing on the sap web as java.pdf pages 12-17
    Also see http://help.sap.com/saphelp_nw04/helpdata/en/e2/75a74046033913e10000000a155106/frameset.htm for portal specific statements.
    Hope it helps
    Detlev

  • DNS LOG writting Issue in win 2k8 server.

    We have Windows Server 2008 Standard and we are experiencing a DNS log issue on this server. We set the
    DNS log limit to 200 MB and the logs should be overwritten, but instead the logs are automatically deleted once they reach the limit (200 MB) and fresh logs start being written. Please assist us with this issue as early as possible.

    text below from this link: http://technet.microsoft.com/en-us/library/bb726966.aspx
    Which one have you set? Determine what happens when the maximum log size is reached. The options available are:
    - Overwrite Events As Needed: events in the log are overwritten when the maximum file size is reached. Generally, this is the best option on a low-priority system.
    - Overwrite Events Older Than ... Days: when the maximum file size is reached, events in the log are overwritten only if they are older than the setting you select. If the maximum size is reached and the events can't be overwritten, the system generates error messages telling you the event log is full.
    - Do Not Overwrite Events (Clear Log Manually): when the maximum file size is reached, the system generates error messages telling you the event log is full.
    or check out this thread, you can opt to save event logs:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/728bb896-b9c4-4043-8aed-7fd4d53713f6/how-do-dns-logs-overwrite?forum=winserverNIS
    Every second counts..make use of it. Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.
    IT Stuff Quick Bytes

  • SM35 logs explanation

    In my scenario I executed a report in the background; BDC is used in it. When I check in SM35, it seems that in one batch log some of the transactions errored out while others executed successfully. How do I identify the reason for those particular errors?

    Hi,
    Either run your batch input session in the foreground (step by step) to see what's wrong, or select the BI session and press Log. Select the log and press Analyse Session or Display. See what's wrong.

Maybe you are looking for

  • How do I transfer one playlist from my itunes library to my ipod?

    How do I transfer one playlist from my itunes library to my ipod. I do not want to sync my whole library music. I tried to set to *manually managing music* and drag the playlist. Doesn't work. If I try to sync *selected playlists* it wants to delete

  • Upgrading to latest iTunes solved CD jewel case songlist printing problems

    I might save someone lots of time here by just stating the fact that when I upgraded to 11.0.3.42 the problems I had with printing jewel case inserts ended. I put off upgrading for quite a while, and tried to get help from iTunes apple to no avail. M

  • Safari will not download silverlight

    Tried to download silverlight but keep getting a message 'safari can not download' Is there anything I can do

  • Primitive data type casting

    Hi.. when I run the following prg, It is throwing exception at line 6.But it is compiling well with line 5. what make difference here.Why cant we do same with long. public class Tester {      public static void main(String[] args) { final int i=10; f

  • Userexit to update purchase docs like sales orders

    Hi, For sales documents, there is a user exit MV45AFZZ which has lot of form routines, that get invoked before saving the sales document. Similarly for purchase orders which is the user exit include which will have such form routines. I want to modif