Create new log file for every 1000 points

Is it possible to have a new file/folder for every 1000 data points?
How do you check the number of points written to a file?

Need A LOT more context here.  How are you logging the data?  Where is the data coming from?  In what format are you saving the data?
Any code you can supply that would give us context would help us help you.
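In the meantime, here is a minimal sketch of the usual pattern (shown in Java with hypothetical names, since we don't know your logging setup yet): keep a counter next to the writer and roll to a new file when it reaches 1000. The counter also answers your second question; you track points as you write them rather than inspecting the file afterwards.

import java.io.*;

/** Sketch: write one data point per line, starting a new file every 1000 points. */
public class RollingPointLogger implements Closeable {
    private static final int POINTS_PER_FILE = 1000;
    private final String baseName;
    private PrintWriter out;
    private int pointsInFile;
    private int fileIndex;

    public RollingPointLogger(String baseName) throws IOException {
        this.baseName = baseName;
        openNextFile();
    }

    private void openNextFile() throws IOException {
        if (out != null) out.close();                 // finish the previous file
        out = new PrintWriter(new FileWriter(baseName + "_" + fileIndex++ + ".log"));
        pointsInFile = 0;
    }

    public void log(double point) throws IOException {
        if (pointsInFile == POINTS_PER_FILE) openNextFile();  // 1000 points reached: roll over
        out.println(point);
        pointsInFile++;
    }

    @Override
    public void close() {
        if (out != null) out.close();
    }
}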

Similar Messages

  • How to create different log files for each of web applications deployed in OC4J

    Hi All,
    I am using OC4J (from Oracle) v1.0.2.2 and Windows 2000. Now I want to know:
    1. How do I create different log files for each of my deployed web applications?
    2. What are the advantages of running multiple instances of OC4J, and in what cases should we run multiple instances?
    3. How do I run OC4J as a Windows 2000 service rather than a Windows 2000 application?
    Thanks and Regards,
    Kumar.

    Hi Avi,
    First of all, I have given log4j a first reading, and I think there may be easier ways of logging debug messages than log4j. (If you could provide a detailed explanation of a servlet, JSP, and Java bean that use log4j, and of how to use log4j, it would be very helpful for me.) The other, easier ways (if I am not using log4j) to solve my problem, i.e. creating different log files for each web application deployed in OC4J, are:
    I have created multiple instances of OC4J configured to run on different ports, and on each instance I have deployed a single web application. I then started the two OC4J instances, redirecting their error/log messages to a file. And the other way is:
    I have downloaded from the Jakarta site a package called servhelper. This servhelper is a thread that is started in a startup servlet and stopped in that servlet's destroy method. The thread automatically captures all the System.out.println's and prints them to a file. I believe this thread program is synchronized. So with this method I need not run multiple instances of OC4J; instead, each web application deployed on a single OC4J instance uses the same thread program (of course, a copy of the thread program is put in each deployed web application's directory) to log messages to different log files.
    Can you comment on my above two approaches to logging debug messages? A comparative explanation of log4j, and of how to use log4j with a simple servlet and a simple JSP, would be appreciated.
    Thanks and Regards,
    Ravi.

  • How to create different log files for different users in log4j

    I want to create different logs for different users, using a different appender for each user so that each user's messages are written to his own file only.
    Confusion: how do I direct them to different files in my logger class?

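    One hedged sketch of the usual approach (log4j 1.x API; the class and file names here are hypothetical): give each user a child logger with its own FileAppender, and switch additivity off so messages land only in that user's file.

    import java.io.IOException;
    import org.apache.log4j.FileAppender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;

    public final class UserLoggers {
        private UserLoggers() { }

        /** Returns a logger whose output goes only to logs/<user>.log. */
        public static Logger forUser(String user) throws IOException {
            Logger logger = Logger.getLogger("user." + user);
            if (logger.getAppender(user) == null) {       // attach the appender only once
                FileAppender appender = new FileAppender(
                        new PatternLayout("%d %-5p %c - %m%n"),
                        "logs/" + user + ".log",
                        true);                            // append to the user's file
                appender.setName(user);
                logger.addAppender(appender);
                logger.setAdditivity(false);  // keep these messages out of the root appenders
            }
            return logger;
        }
    }

    Setting additivity to false is the piece that usually causes the confusion: without it, every message would also flow up to the root logger's appenders.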

  • How to create a log file for bapi return structure

    Hi ppl,
    I am using BAPI_PO_CHANGE to mark the delivery of POs as complete, after many validations, through a classic report. Now my concern is that I have been asked to create a log file detailing the errors in the POs, which are returned in the BAPI return structure.
    I don't know how to do this; can anyone help at the earliest?
    Regards,
    Bharathy.

    hi
    pls see this thread...
    it may help you...
    /people/kamalkumar.ramakrishnan/blog/2007/01/10/a-primer-on-using-and-creating-sap-application-log
    thx
    pavan
    *pls mark for helpful answers

  • How to create different log files for different levels

    Hi,
    Can some one please help me with my problem I have here?
    I want to send log data to two different files, depending on the logging level (such as DEBUG and WARN), using the same logger instance.
    How can I configure this in log4j.properties?
    Please post sample code for log4j.properties to achieve this.
    Thanks in advance.
    Anurag Singh

    Hi,
    I have tried your code. The issue is that the other log file, the one for errors, is blank; nothing is written to it.
    Following is the code of my log4j.properties.
    Please read the code first:
    # Set root logger level to DEBUG and its only appender to A1.
    log4j.rootCategory=DEBUG,A1
    #Appender and its layout for A1
    log4j.appender.A1=org.apache.log4j.FileAppender
    log4j.appender.A1.layout=org.apache.log4j.PatternLayout
    log4j.appender.A1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
    log4j.appender.A1.Threshold=DEBUG
    log4j.appender.A1.File=./LogonApplication_Debug.log
    log4j.appender.A1.Append=true
    #Appender and its layout for A2
    log4j.appender.A2=org.apache.log4j.FileAppender
    log4j.appender.A2.layout=org.apache.log4j.PatternLayout
    log4j.appender.A2.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
    log4j.appender.A2.Threshold=ERROR
    log4j.appender.A2.File=./LogonApplication_Error.log
    log4j.appender.A2.Append=true
    In my logger class I have written something like this:
    public static Logger myLogger = Logger.getLogger(LoggerTest.class.getName());

    public static void main(String[] args) {
        myLogger.debug("I have logged a debug message");
        myLogger.error("I have logged an error message");
    }
    Issue:
    Now the problem is that it creates the two files specified in the configuration, but it logs both messages, debug and error, only to the ./LogonApplication_Debug.log file.
    Though it creates the ./LogonApplication_Error.log file, that file stays blank.
    Can you please trace what's missing/wrong?
    Thanks a lot.
    Shanu
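    A likely cause, for what it's worth: A2 is defined in the properties but never attached to any logger, and an appender's Threshold only filters events that actually reach that appender. Attaching A2 to the root logger (a one-line sketch using the appender names above) should route the ERROR messages to the second file:

    # attach both appenders; A2's Threshold=ERROR keeps DEBUG messages out of the error file
    log4j.rootCategory=DEBUG,A1,A2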

  • AME Bug: Creates new CFA file for each clip queued!

    Here is my workflow:  I have a 60 minute sequence of a dance performance of about 20 dances.  I queued in AME 20 exports to H.264 files by moving the workspace to the segment I want to export, adding it to the queue with a file name, and repeating for the next section (dance).
    For each queued export I make a copy so I have a low and high resolution export (40 in total).  Finally, the last queued export is a MPEG DVD file for Encore.
    The issue is that when I start the queue, the first thing AME does is conform the ENTIRE SEQUENCE'S AUDIO (not just the clip).  Then it starts the export.  On my computer (Core i7-2600, 24 GB RAM), it takes over 15 minutes to conform a 60-minute sequence (the CFA file ends up being close to 2 GB).  While it's conforming, the CPU load never goes above 10%.  The export of the first clip then takes less than 90 seconds, with all 8 cores pegged at 100% and the GPU over 50%.
    Then AME starts the conforming AGAIN FOR THE ENTIRE SEQUENCE instead of re-using the CFA file it just conformed!
    Why doesn't it re-use the CFA file?  Or, why does it conform the entire sequence if all I'm exporting is a portion?
    And why isn't conforming a multi-threaded process?
    At this rate, it will take all night and consume close to 50 GB of disk space in CFA files.
    Here is what my queue looks like:

    Actually, I think it's working as designed when it conforms upon export.  See the online help HERE.
    It says, in part:
    Premiere Pro does conform audio in uncompressed clips when you use them in sequences with non-matching sample rates. However no conforming is done until you export the sequences or create audio preview files.
    The audio in my Canon MXF files is uncompressed 16-bit Linear PCM, so when I bring in the MXF files all PPr does is create PEK files.  This is really nice since I don't have to wait for conforming to begin editing.  Creating PEK files is really fast compared to conforming.  There is a caveat, though...
    The max file size from my camera is about 2 GB.  Anything longer will span multiple files.  If I copy the contents of the CF card from the camera to my computer, then browse to the files using PPr, it will find all the spanned clips and interpret them as one longer video file.  No problem, and that's how it's supposed to work.  BUT, since PPr is treating all these spanned files as one longer video clip, it WILL conform the audio.
    So to speed things up, I just bring in all the spanned files separately, then sort them and insert them into a sequence in order.  It works fine, and no conforming.
    My problem with AME is that when it DOES finally conform, it's not re-using the CFA file.  In my particular case, I'm taking one long sequence and exporting different portions of it as separate exports.  I would think that AME would conform the whole sequence once, then re-use it.  But no, it sees each as a separate export, so each export gets its own identical (but differently named) CFA file.
    So now my current workflow is:
    Import in all the video as separate MXF files.
    Import audio from digital recorder (house audio).
    Mark the clapper sync points on the video and audio files.
    Add video clips to a sequence in ascending order
    Add to the sequence the audio track and align the sync markers
    Balance the audio tracks with the mixer
    Create a new sequence
    Add to the new sequence the first sequence
    Render and replace audio so I can get a waveform (it helps find edit points) and conformed audio
    Add titles as needed
    Edit the timeline, transitions, add Encore chapters, etc.
    Export clips to AME queue
    Save early and save often!

  • How to create separate log files for each deployed web application in oc4j

    Hi All,
    I am using Windows 2000 and Oracle9iAS (OC4J). Say I have deployed 3 web applications onto this OC4J server. How can I create 3 different log files, so that I can see the log messages (System.out.println's) of each of these web applications in a different log file?
    Thanks and Regards,
    Ravi.

    Where do the messages printed via ServletContext.log() go? Is this configurable separately by web application? If so, you could at least replace your System.out.println() with sc.log() statements. For exceptions, you could trap them and log them since the log() method takes a throwable as well as a String.
    John H.
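    For illustration, a minimal sketch of that suggestion (Servlet API only; the servlet name and messages are hypothetical). ServletContext.log() writes to wherever the container routes that web application's log; whether that ends up in a separate file per application is container-specific, which is exactly the open question above:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class OrdersServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // replaces System.out.println(); goes to this web app's container log
            getServletContext().log("Handling " + req.getRequestURI());
            try {
                // ... application work ...
            } catch (RuntimeException e) {
                getServletContext().log("Request failed", e);  // log(String, Throwable)
                throw e;
            }
        }
    }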

  • Message split for every 1000 records

    Hi All,
    My scenario is Proxy to File. We have to process thousands of records from R/3. I need to create a separate XML file for every 1000 records in the receiver directory. Is there any solution to achieve this?
    Thanks in advance
    Kartikeya

    Here's the blog Krish was referring to:
    Night Mare-Processing huge files in SAP XI

  • How to make the Java Logging to create the log file

    Hey,
    I need some help. Below is the code that creates a log file for my application; however, no log file is being created when I perform MarsLogger.log.info("test") in my application code.
    * LogManager.java
    * Created on April 10, 2006, 11:42 AM
    package comp1.mars.beans.ejb.util;
    import java.util.logging.*;
    import java.io.*;
    public class LogManager {

        /** Creates a new instance of LogManager */
        public LogManager() {
        }

        protected static Logger getLogManager() {
            //String dir = "C:/Sun03/studio5u1_se/appserver7/domains/domain1/server1/logs/an/my.log";
            String dir = "my.log";
            Logger logger = Logger.getLogger("compl1.mars.bean.ejb");
            try {
                // Create an appending file handler and attach it to the logger
                boolean append = true;
                FileHandler handler = new FileHandler(dir, append);
                //FileHandler(String pattern, int limit, int count, boolean append)
                logger.addHandler(handler);
                logger.setLevel(Level.ALL);
            } catch (IOException e) {
                e.printStackTrace();
            } catch (SecurityException se) {
                se.printStackTrace();
            }
            return logger;
        } //end of method
    } //end of class
    * MarsLog.java
    * Created on April 10, 2006, 3:24 PM
    package comp1.mars.beans.ejb.util;
    import java.util.logging.*;
    /** @author Administrator */
    public class MarsLogger extends LogManager {

        /** Creates a new instance of MarsLogger */
        public MarsLogger() {
        }

        public static Logger log = getLogManager();
    }

    What is the level of the handler?
    http://java.sun.com/j2se/1.5.0/docs/api/java/util/logging/Handler.html#setLevel(java.util.logging.Level)
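    For a quick check along those lines (a sketch, assuming the logger name from the code above and running in the same JVM after the handler was attached), you can list each attached handler's level and raise it:

    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class HandlerLevelCheck {
        public static void main(String[] args) {
            Logger logger = Logger.getLogger("compl1.mars.bean.ejb");
            for (Handler h : logger.getHandlers()) {
                System.out.println(h + " level=" + h.getLevel());
                h.setLevel(Level.ALL);  // let the handler pass everything the logger allows
            }
        }
    }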

  • Resisting the creation of new log files when SQL SERVER is restarted

    Hi,
    I know that when SQL Server is restarted, new log files are created. But is it possible to prevent the creation of new log files and have log data written to the existing log files that were in use before restarting SQL Server?

    Hello,
    I guess Raghvendra answered your question. As for your previous post, it's not clear what you want to ask, and you did not revert. Again, if your issue is solved, I'd appreciate it if you could mark the answer and vote the posts helpful.
     Can I continue to log in the same file?
    What does this line mean exactly? Yes, SQL Server will continue to use the same transaction log file (LDF file) for writing information as it was using before shutdown. If you are talking about the errorlog file, a new errorlog file is created, which you can read using
    sp_readerrorlog
    Even if you stopped the SQL Server service mistakenly, it's not that the server is gone. Yes, when you stopped the server, all in-flight transactions were rolled back. When SQL Server comes back online, it undergoes crash recovery and brings all the databases online by reading the transaction log file and performing redo and undo. All committed transactions are rolled forward, and uncommitted ones are rolled back.
    Please mark this reply as the answer if it solved your issue, or vote it as helpful if it helped, so that other forum members can benefit from it.

  • LOG FILE for batch scripting in MAXL

    Hello,
    I just wanted to know how to create a LOG FILE for batch scripting.
    essmsh E:\Batch\Apps\TOG_DET\Scripts\unload_App.msh
    copy e:\batch\apps\tog_det\loadfile\gldetail.otl e:\hyperion\analyticservices\app\tog_det\gldetail /Y
    essmsh E:\Batch\Apps\TOG_DET\Scripts\Build_Hier_Data.msh
    REM
    ECHO OFF
    ECHO Loading GL actuals into WFS \ Combined......
    E:\HYPERION\common\Perl\5.8.3\bin\MSWin32-x86-multi-thread\PERL.EXE E:\Batch\Apps\WFS.COMBINED\AMLOAD\WFSUATAMLOAD.PLX
    E:\HYPERION\common\Perl\5.8.3\bin\MSWin32-x86-multi-thread\PERL.EXE E:\Batch\Apps\WFS.COMBINED\AMLOAD\WFSUATAMLOAD.PLX
    Drop object d:\NDM\Data\StampFiles\STAMPLOADBKUP.csv of type outline force;
    Alter object d:\NDM\Data\StampFiles\STAMPLOAD_cwoo.csv of type outline rename to d:\NDM\Data\StampFiles\STAMPLOADBKUP.CSV;
    SET LogFile=E:\Batch\Apps\TOG_DET\Logs.log
    This does not generate a log file. Can anyone tell me what the problem might be? Even though some of the steps above are not correct, it should at least generate a log file. I need the syntax, or whatever it takes, to generate the log file.
    Regards
    Soma

    I want a log file of the following batch script, regardless of whether the script runs successfully or not.
    essmsh E:\Batch\Apps\TOG_DET\Scripts\unload_App.msh
    copy e:\batch\apps\tog_det\loadfile\gldetail.otl e:\hyperion\analyticservices\app\tog_det\gldetail /Y
    essmsh E:\Batch\Apps\TOG_DET\Scripts\Build_Hier_Data.msh
    REM
    ECHO OFF
    ECHO Loading GL actuals into WFS \ Combined......
    E:\HYPERION\common\Perl\5.8.3\bin\MSWin32-x86-multi-thread\PERL.EXE E:\Batch\Apps\WFS.COMBINED\AMLOAD\WFSUATAMLOAD.PLX
    Drop object d:\NDM\Data\StampFiles\STAMPLOADBKUP.csv of type outline force;
    Alter object d:\NDM\Data\StampFiles\STAMPLOAD_cwoo.csv of type outline rename to d:\NDM\Data\StampFiles\STAMPLOADBKUP.CSV;
    What I really want is a log file of the above batch script, showing how the scripts are running. I do not care whether they give me positive results, but I need to know what is happening, in a log file. How will the log file be generated?
    Regards,
    Soma

  • SAX: How to create new XML file using SAX parser

    Hi,
    Can anybody please help me create an XML file using the packages in the Java 5.0 release? I have successfully created one by reading the tag names and values from a database using DOM, but can I do this using SAX?
    I can successfully read XML using SAX; now I want to create a new XML file, with some tags and their values, using SAX.
    How can I do this?
    Sachin Kulkarni

    SAX is a parser, not a generator. Well,
    you can use it to create an XML file too. And it will take care of proper encoding, thus being much superior to a normal text writer.
    See the following code snippet (out is an OutputStream):

    import java.io.PrintWriter;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.sax.SAXTransformerFactory;
    import javax.xml.transform.sax.TransformerHandler;
    import javax.xml.transform.stream.StreamResult;
    import org.xml.sax.helpers.AttributesImpl;

    PrintWriter pw = new PrintWriter(out);
    StreamResult streamResult = new StreamResult(pw);
    SAXTransformerFactory tf = (SAXTransformerFactory) TransformerFactory.newInstance();
    // TransformerHandler is a SAX 2.0 ContentHandler that serializes the events it receives
    TransformerHandler hd = tf.newTransformerHandler();
    Transformer serializer = hd.getTransformer();
    serializer.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
    serializer.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, "http://schema.inplus.de/pdf/1.0");
    serializer.setOutputProperty(OutputKeys.METHOD, "xml");
    serializer.setOutputProperty(OutputKeys.INDENT, "yes");
    hd.setResult(streamResult);
    hd.startDocument();
    // Emit a processing instruction
    hd.processingInstruction("xml-stylesheet", "type=\"text/xsl\" href=\"mystyle.xsl\"");
    AttributesImpl atts = new AttributesImpl();
    atts.addAttribute("", "", "someattribute", "CDATA", "test");
    atts.addAttribute("", "", "moreattributes", "CDATA", "test2");
    hd.startElement("", "", "MyTag", atts);
    String curTitle = "Something inside a tag";
    hd.characters(curTitle.toCharArray(), 0, curTitle.length());
    hd.endElement("", "", "MyTag");
    hd.endDocument();
    You are responsible for proper nesting. SAX takes care of encoding.
    Hth
    ;-) stw

  • Is it possible to create materialized view log file for force refresh

    Is it possible to create a materialized view log for force refresh with a join condition?
    Say for example:
    CREATE MATERIALIZED VIEW VU1
    REFRESH FORCE
    ON DEMAND
    AS
    SELECT e.employee_id, d.department_id FROM emp e, departments d
    WHERE e.department_id = d.department_id;
    How can we create a log file using 2 tables?
    Also, I am copying the M.View result to a new table. Is it possible to have the same values in the new table once the M.View gets refreshed?
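    For what it's worth, a materialized view log is created per base table rather than per query, so a join materialized view needs one log on each table. A hedged sketch against the two tables above (fast refresh of a join MV additionally requires WITH ROWID logs and the base-table rowids in the view's select list):

    CREATE MATERIALIZED VIEW LOG ON emp WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON departments WITH ROWID;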

    You cannot create a record as a materialized view within the Application Designer.
    But there is a workaround.
    Create the record as a table within the Application Designer. Don't build it.
    Inside your database, create the materialized view with the same name and columns as the record created previously.
    After that, you'll be able to work on that record as on any other within the PeopleSoft tools.
    But keep in mind: never build that object, as that would drop your materialized view and create a table instead.
    The same problem exists for partitioned tables, function-based indexes, and some other objects that are database-vendor dependent. The same workaround is used.
    Nicolas.

  • Creating 1 spool for every 1000 sapscripts

    Hi Friends,
    I have a program where I am printing W-2 forms (SAPscripts). There are a total of 3000 forms. I need to be able to split them into 3 spools of 1000 each instead of creating 3000 spools. Please advise as to what parameters need to be set at the beginning and when starting a new spool of 1000 forms.
    Thanks,
    Teresa

    You need to count the records, and for every 1000 you have to start a new spool.
    Like:
    LOOP AT itab.
      cnt = cnt + 1.
      itcpo-tdnewid = ' '.        " append to the current spool request
      IF cnt = 1.
        itcpo-tdnewid = 'X'.      " first form of a batch: open a new spool request
      ENDIF.
      CALL FUNCTION 'OPEN_FORM'
        EXPORTING
          options = itcpo.
      CALL FUNCTION 'WRITE_FORM'.
      CALL FUNCTION 'CLOSE_FORM'.
      IF cnt = 1000.
        CLEAR cnt.                " reset, so the next record starts a new spool
      ENDIF.
    ENDLOOP.
    Regards,
    Naimesh Patel

  • New Log Files Not 100% Utilization

    I just recently deployed my web app with new configurations requiring my BDB to have at least 65% disk utilization (up from the default 50%) and 25% minimum file utilization (up from the default 5%). On startup of my app, I temporarily coded an env.cleanLog() call to force a clean of the logs to bring the DB up to these utilization parameters.
    I've deployed the app, and it's been chugging away at the clean process now for about 2.5 hours. It is generating a new 100 MB log file approximately every minute as it attempts to compact the DB. However, at 2.5 hours, I just ran DbSpace and the DB utilization is only up to 51%. Perhaps most surprising are the following two facts:
    1) The newly created log files from the cleaner are nowhere near 100% utilization
    2) Some of the newly created log files have already now been cleaned (deleted) themselves.
    There are 0 updates happening to the BDB while this process is running. My understanding was that when the cleaner removed old files and created new files, only good/valid data is written to the new files. However, this doesn't seem to be the case. Can someone enlighten me? I'm happy to read docs if someone can point me in the right direction.
    Thanks.
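    (For reference, the usual JE batch-clean idiom; a sketch under the assumption that the temporary env.cleanLog() code mentioned above looks roughly like this. Loop until cleanLog() reports no more files to clean, then force a checkpoint so the cleaned files actually become eligible for deletion.)

    import com.sleepycat.je.CheckpointConfig;
    import com.sleepycat.je.Environment;

    public class BatchClean {
        /** Clean the log until no files qualify, then checkpoint so they can be deleted. */
        static void cleanFully(Environment env) {
            boolean anyCleaned = false;
            while (env.cleanLog() > 0) {   // returns the number of files cleaned in this pass
                anyCleaned = true;
            }
            if (anyCleaned) {
                CheckpointConfig force = new CheckpointConfig();
                force.setForce(true);
                env.checkpoint(force);     // after the checkpoint, cleaned files can be deleted
            }
        }
    }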

    Sorry, somehow I missed that you weren't doing any updates, even though you said so. I apologize.
    However, your cache size is much too small for your data set. Running DbCacheSize with a rough approximation of your data set size gives:
    ~/je.cvs$ java -jar build/lib/je.jar DbCacheSize -records 50000000 -key 30 -data 300
    Inputs: records=50000000 keySize=30 dataSize=300 nodeMax=128 binMax=128 density=80% overhead=10%
    === Cache Sizing Summary ===
        Cache Size        Btree Size    Description
     3,543,168,702     3,188,851,832    Minimum, internal nodes only
     3,963,943,955     3,567,549,560    Maximum, internal nodes only
    22,209,835,368    19,988,851,832    Minimum, internal nodes and leaf nodes
    22,630,610,622    20,367,549,560    Maximum, internal nodes and leaf nodes
    === Memory Usage by Btree Level ===
     Minimum Bytes    Maximum Bytes      Nodes    Level
     3,157,713,227    3,532,713,035    488,281        1
        30,834,656       34,496,480      4,768        2
           297,482          332,810         46        3
             6,467            7,235          1        4
    Apparently when you started, the cleaner was "behind" -- utilization was low. Without enough cache to hold the internal nodes -- 3.5 GB according to DbCacheSize -- the cleaner will either take a very long time or never be able to clean up to 50% utilization.
    Do you really only have that much memory available, or is this just a dev environment issue?
    --mark
