Split the log file based on size

Hi All,
I have created a log file in an Oracle directory using UTL_FILE. Now I need to start a new file once the old one reaches a certain size. How can I create a new file based on the size? Please provide your suggestions.
Thanks & Regards
Sami

declare
  filehandle   UTL_FILE.FILE_TYPE;
  filehandle2  UTL_FILE.FILE_TYPE;
  v_line       VARCHAR2(32767);
  v_cnt        NUMBER := 0;
begin
  filehandle := UTL_FILE.FOPEN('DIR_TEMP', 'oops1.txt', 'r');
  -- open the new file up front; lines past the 1000-line threshold go here
  -- (to split on byte size instead of line count, UTL_FILE.FGETATTR can
  -- report the existing file's length)
  filehandle2 := UTL_FILE.FOPEN('DIR_TEMP', 'oops2.txt', 'w');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(filehandle, v_line);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;  -- end of the source file
    END;
    v_cnt := v_cnt + 1;
    IF v_cnt >= 1000 THEN
      UTL_FILE.PUT_LINE(filehandle2, v_line);
    END IF;
  END LOOP;
  UTL_FILE.FCLOSE(filehandle);
  UTL_FILE.FCLOSE(filehandle2);
END;

Similar Messages

  • Datalogging with options to retrieve subset of log file based on date/time

    I would like to thank this forum for useful advice so far in completing my LabVIEW software.
    I have a data logging challenge. I am supposed to log about 30 parameters every 5 seconds. Some of these parameters are digital (ON/OFF), some are values of speed (rpm) and others, an expression of a percentage (%). It should be possible in future to do a histogram or bar chart plot of some of the parameters, for a specific period range (say the last 5 minutes of a certain day). So in effect, do an extraction of a segment of the total log file.
My challenge is this: if I use a text file, like the one in the attached VI, can it provide the functionality of retrieving data (while the VI is running) from the log file based on a certain time range (i.e. retrieve a section of the log file based on a certain date/time range, on demand)?
The format in the text file is close to what I require, since it lists the time in one column and the other parameters in other columns to enable future histogram generation.
    Thanks a lot, friends.
Attachments:
writer.vi (19 KB)
time.txt (1 KB)
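For what it's worth, the time-range extraction being asked about is straightforward to sketch; here is a minimal version in Java (a sketch only, assuming a comma-separated log whose first column is a "yyyy-MM-dd HH:mm:ss" timestamp; LabVIEW itself would use its own file VIs):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

class LogRangeFilter {
    // Return the lines whose first column (a timestamp) falls in [from, to].
    static List<String> extract(String path, LocalDateTime from, LocalDateTime to)
            throws IOException {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        List<String> rows = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = r.readLine()) != null) {
                String stamp = line.split(",", 2)[0];        // timestamp in column 1
                LocalDateTime t = LocalDateTime.parse(stamp, fmt);
                if (!t.isBefore(from) && !t.isAfter(to)) {
                    rows.add(line);                           // inside the range
                }
            }
        }
        return rows;
    }
}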

    Hey maxidivine,
I've been playing around with your code and found that the search you require could be quite demanding on system resources when scaled to the size of your application. I shall try to find a way to perform the search using .txt files, but there are some other options available. I recommend the use of TDMS files, as the file format is a very efficient, manageable method of data logging. The TDMS file format is designed to write and read measured data at very high speed, while maintaining a hierarchical system of descriptive information.
    Traditionally, TDMS was a National Instruments only file format – you could only read it using our products – LabVIEW/CVI/DIAdem. However, thanks to the popularity of the format, a bolt-on is now available for Excel, which allows you to directly open the .tdms files with Excel (see link).
    National Instruments Technical Data Management Overview
    http://zone.ni.com/devzone/cda/tut/p/id/3676
Introduction to LabVIEW TDM Streaming VIs
    http://zone.ni.com/devzone/cda/tut/p/id/3539
    VI-Based API for Writing TDMS Files
    http://zone.ni.com/devzone/cda/tut/p/id/6471
    TDM Excel Add-In Tool for Microsoft Excel User Guide
    http://zone.ni.com/devzone/cda/tut/p/id/4906
    TDM Excel Add-In for Microsoft Excel Download
    http://zone.ni.com/devzone/cda/epd/p/id/2944
    Troubleshooting the TDM Excel Add-In for Microsoft Excel 2000-2003
    http://zone.ni.com/devzone/cda/tut/p/id/5874
Examples of the use of the TDMS API ship with LabVIEW. You will find them in Help > Find Examples > Fundamentals > File Input and Output. For your application, I would recommend the "Cont Acq&Graph Voltage - Write Data to File (TDMS).vi".
    Furthermore, if you require some help with DIAdem, I would recommend clicking "getting started" from the DIAdem splash screen. This opens a manual which discusses everything from data analysis to report generation. Also, if you have DIAdem 11 or above, there are tutorial videos which install with DIAdem. These are useful little tutorials, which discuss all the DIAdem fundamentals. You can access these by selecting a particular palette tab (eg. report, view, analysis...etc) and then clicking the tutorial button (shown as a film strip with a question mark) at the top of the group view.
    Here are some more helpful DIAdem related resources for future reference.
    Report Gen in DIAdem...
    http://zone.ni.com/devzone/cda/tut/p/id/7379
    DataPlugins: Supported Data Formats (ni.com/dataplugins)
    http://zone.ni.com/devzone/cda/tut/p/id/4065
    Hope this is helpful
Philip
    Applications Engineer
    National Instruments
    UK Branch
    ===If this fixes your problem, mark as solution!===

  • Getting the log files from client using java program

    hi
This is Lalita. I am doing a project in networking and I am new to socket programming. I have established a socket connection between the client and server, with the help of this site's members. Now I have to get the log files of the client system from the server, via the created socket. I need it by tomorrow, i.e. April 12th, as I have to show it to my guide.
I just need a core Java program that will get the log information of the client from the server.
Can anybody please help me in this regard? It would be of great help to me and my group.
Anxiously awaiting your replies.
Thanking you, and regards,
Lalita.

Simple.
The server listens on a specific port for connections from clients.
Connect the client to the server on the above-mentioned port.
Open streams on both sides of the connection and run them in a separate thread.
Define a protocol for communication between client and server.
For example: after the client connects, the server sends a text command to the client ("send log"); the client first sends the log file name and size to the server, then sends the file itself; the server saves the file.
Then disconnect the client, or define further commands for fetching another file or performing other tasks. A minimal sketch of this exchange follows.
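For illustration only, here is roughly what that exchange could look like in core Java. The port number, the "send log" command string, and the file naming are assumptions, not requirements:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Server side: ask for the log, then read "<name>, <size>, <size bytes>".
public class LogServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000);
             Socket client = server.accept();
             DataOutputStream out = new DataOutputStream(client.getOutputStream());
             DataInputStream in = new DataInputStream(client.getInputStream())) {
            out.writeUTF("send log");                    // protocol command
            String name = in.readUTF();                  // file name first
            long size = in.readLong();                   // then the declared size
            try (FileOutputStream fos = new FileOutputStream("received_" + name)) {
                byte[] buf = new byte[8192];
                long remaining = size;
                while (remaining > 0) {                  // then exactly `size` bytes
                    int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                    if (n < 0) break;
                    fos.write(buf, 0, n);
                    remaining -= n;
                }
            }
        }
    }
}

// Client side: wait for the command, then send name, size, and contents.
class LogClient {
    public static void main(String[] args) throws IOException {
        File log = new File(args[0]);                    // path to the client's log file
        try (Socket socket = new Socket("localhost", 5000);
             DataInputStream in = new DataInputStream(socket.getInputStream());
             DataOutputStream out = new DataOutputStream(socket.getOutputStream());
             FileInputStream fis = new FileInputStream(log)) {
            if ("send log".equals(in.readUTF())) {
                out.writeUTF(log.getName());
                out.writeLong(log.length());
                byte[] buf = new byte[8192];
                int n;
                while ((n = fis.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            }
        }
    }
}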

  • Can we reduce the size of the disk having the Log files for a Dag Database

    There is an issue with disk space filling up for 4 databases part of the same DAG, each having 1 non lagged passive copy.
    The Disks containing the log files are from the VSphere Storage. The Disk size was temporarily expanded to avoid any outages.
    There is a full backup running currently, which is expected to clear the transaction logs on completion and that should be reducing the disk space utilized.
The storage guys want to know whether they can reclaim the temporarily expanded disk size, i.e. reduce the disk space from the storage containing the log files, without affecting anything.
I couldn't find any documentation on this specific requirement, and want to confirm.

I don't see why not. Once the logs are cleared, Exchange doesn't care.
Please note: my posts are provided "AS IS" without warranty of any kind, either expressed or implied.

  • Confused about the log files

    I have written an application that has a Primary and Secondary database. The application creates tens-of-thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the secondary as that element does not change) or it will create new records.
    The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
The input data I am testing with is originally a 2 MB CSV file, and with a fresh database it creates almost 20 MB of data. This is about right for the way it splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, the old ones in the earlier logs will become redundant, and the cleaner thread will clean them up. I am explicitly cleaning as per the examples. The issue is that on each run the data just doubles in size! Logically it is fine; physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%), where I would expect most to be empty or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
Generally, the processing I am doing on the primary is looking up the key; if it is there, updating the entry, and if not, creating one. I have been using a cursor to do this, with the putCurrent() method for existing updates and put() for new records (roughly the pattern sketched below). I have even tried using Database.delete() and a full put() in place of putCurrent(), but no difference (except that it is slower).
    Please help - it is driving me nuts!
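For concreteness, the lookup-then-update pattern described in the question might look roughly like this against the BDB JE API (a sketch; the method and variable names are illustrative, not the poster's actual code):

import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;

class UpsertSketch {
    // Position a cursor on the key; update in place if found, else insert.
    static void upsert(Database db, Cursor cursor, byte[] keyBytes, byte[] valueBytes) {
        DatabaseEntry key = new DatabaseEntry(keyBytes);
        DatabaseEntry found = new DatabaseEntry();
        DatabaseEntry value = new DatabaseEntry(valueBytes);
        if (cursor.getSearchKey(key, found, LockMode.RMW) == OperationStatus.SUCCESS) {
            cursor.putCurrent(value);   // existing record: overwrite the data in place
        } else {
            db.put(null, key, value);   // new record: plain put (null txn here)
        }
    }
}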

    Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
    a. is the application doing anything that prohibits log cleaning? (in your case, no)
    b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
    c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
You wrote:
"1) Ran DbDump with and without -r. I am expecting the data to stay consistent. So, after the first run it creates the data, and leaves 20 MB in place, 3 log files near 100% used. After the second run it should update the records (which it does from the application's point of view) but I now have 40 MB across 5 log files, all near 100% usage."
I think that it's accurate to say that both of us are not surprised that the second run (which updates data but does not change the number of records) creates a second 20MB of log, for a total of 40MB. What we do expect, though, is that the utilization reported by DbSpace should fall closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
    Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
    run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
    run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
    run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
    So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (though not in key order; without -r the dump is slower, but comes out in key order). Keys and data show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
I asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
    java -jar je.jar DbPrintLog -h <envhome> -S
    and you'll see some output that talks about different types of log entries, and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large that is depends on whether your data is transactional or not. Non-transactional data records have a header of about 35 bytes, whereas transactional data records have 60 bytes added to them. If your data is small, that can be quite a large percentage. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record level locking, and partly because we store a number of internal fields as 64 bit rather than 16 or 32 bit values.
    The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
    A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
    In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem is really around the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduced the size of your logs, it reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
    So in summary, let's try these steps
    - use DbDump and DbPrintLog to double check the amount and size of your application data
    - make a table of runs, that shows the log size in bytes, number of log files, and the utilization level reported by DbSpace
- run a je.cleaner.forceCleanFiles cleanLog loop on one of the log files that seems to have a high utilization level, and see how much it reduces to, and what the resulting utilization level is (a sketch of such a loop follows this list)
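For reference, a batch-cleaning loop along those lines might look like the sketch below. This is against the BDB JE API; the environment path argument and the exact file-number format accepted by je.cleaner.forceCleanFiles are assumptions to check against your JE version's documentation.

import java.io.File;
import com.sleepycat.je.CheckpointConfig;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

class ForceCleanExample {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig config = new EnvironmentConfig();
        // File number(s) to clean regardless of utilization; verify the exact
        // value format for your JE release.
        config.setConfigParam("je.cleaner.forceCleanFiles", "0");
        Environment env = new Environment(new File(args[0]), config);
        int cleaned;
        // cleanLog() returns the number of files cleaned; loop until done.
        while ((cleaned = env.cleanLog()) > 0) {
            System.out.println("cleaned " + cleaned + " file(s)");
        }
        // A forced checkpoint allows JE to actually delete the cleaned files.
        CheckpointConfig ckpt = new CheckpointConfig();
        ckpt.setForce(true);
        env.checkpoint(ckpt);
        env.close();
    }
}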
    If it all points to JE, we'll probably take it offline, and ask for your test case.
    Regards,
    Linda

  • Bursting Program Ends in Error with nothing in the Log file

    Hi All
I have an RDF that calls a bursting program in the After Report trigger. The problem I'm facing is that the bursting program completes successfully for a set of parameters, but when the program is run again for the same set of parameters it ends in error. I have checked the log file, but there is nothing in it.
Below is my log file:
    +---------------------------------------------------------------------------+
    XML Publisher: Version : 12.0.0
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    XDOBURSTREP module: XML Publisher Report Bursting Program
    +---------------------------------------------------------------------------+
    Current system time is 30-SEP-2013 15:11:15
    +---------------------------------------------------------------------------+
    XML/BI Publisher Version : 5.6.3
    Request ID: 574533
    All Parameters: Dummy for Data Security=N:ReportRequestID=574532:DebugFlag=Y
    Report Req ID: 574532
    Debug Flag: Y
    Updating request description
    Updated description
    Retrieving XML request information
    Node Name:T1DEVEBSAPP1
    Preparing parameters
    null output =/u01/oracle/CRP2/inst/apps/CRP2_t1devebsapp1/logs/appl/conc/out/o574533.out
    inputfilename =/u01/oracle/CRP2/inst/apps/CRP2_t1devebsapp1/logs/appl/conc/out/o574532.out
    Data XML File:/u01/oracle/CRP2/inst/apps/CRP2_t1devebsapp1/logs/appl/conc/out/o574532.out
    Set Bursting parameters..
    Temp. Directory:/tmp
    [093013_031118656][][STATEMENT] Oracle XML Parser version ::: Oracle XML Developers Kit 10.1.3.130 - Production
    [093013_031118657][][STATEMENT] setOAProperties called..
    Bursting propertes.....
    {user-variable:cp:territory=US, user-variable:cp:ReportRequestID=574532, user-variable:cp:language=en, user-variable:cp:responsibility=20678, user-variable.OA_MEDIA=http://t1devebsapp1.travelzoo.com:8001/OA_MEDIA, burstng-source=EBS, user-variable:cp:DebugFlag=Y, user-variable:cp:parent_request_id=574532, user-variable:cp:locale=en-US, user-variable:cp:user=SETUPUSER, user-variable:cp:application_short_name=XDO, user-variable:cp:request_id=574533, user-variable:cp:org_id=81, user-variable:cp:reportdescription=Travelzoo Invoice Print Selected Invoices-Child, user-variable:cp:Dummy for Data Security=N}
    Start bursting process..
    Bursting process complete..
    Generating Bursting Status Report..
    +---------------------------------------------------------------------------+
    Start of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    +---------------------------------------------------------------------------+
    End of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    +---------------------------------------------------------------------------+
    Executing request completion options...
    Output file size:
    602
    Finished executing request completion options.
    +---------------------------------------------------------------------------+
    Concurrent request completed
    Current system time is 30-SEP-2013 15:11:41
    +---------------------------------------------------------------------------+
    and the output says..
    "Error!! Could not deliver the output for Delivery channel:null "
    Wondering What might be the issue, any help on this is greatly appreciated.
    -Ragul

    Ram,
    Is this a custom or standard concurrent program?
    Was this working properly before? If yes, what changes have been done recently?
    Did you try to relink the concurrent program executable file and see if this helps? Also, you could enable trace/debug and submit the request again and see if more details are collected in the logs -- See (Note: 296559.1 - FAQ: Common Tracing Techniques within the Oracle Applications 11i/R12).
    Regards,
    Hussein

  • Bursting Program Errors Out with nothing in the Log file

    Hi All
I have an RDF that calls a bursting program in the After Report trigger. As in the previous thread, the bursting program completes successfully for a set of parameters, but ends in error when run again with the same parameters, with nothing in the log file. The log output is identical to the one posted above, and again ends with:
"Error!! Could not deliver the output for Delivery channel:null "
Wondering what might be the issue; any help on this is greatly appreciated.
-Ragul

    Can you find any details about the error from the "View Detail" button (the same window where you check the log and output files)?
"I found the Workflow logs, I am not sure what I am looking for, but I am not seeing any errors reported." The event viewer is supposed to send an email, so do you see anything in the logs that could be related?
    Thanks,
    Hussein

  • How to add a date suffix to the log file name

In Windows, I want to run certain commands and save the output to a log file every day. How do I add a suffix to the log file name so I can distinguish which log file belongs to which day?
e.g. cmd >> logfile.date

AZ wrote:
"In Windows, I want to run certain commands and save the output to a log file every day. How do I add a suffix to the log file name so I can distinguish which log file belongs to which day? e.g. cmd >> logfile.date"
My best friend's name is "Google"; refer to this URL: http://stackoverflow.com/questions/203090/how-to-get-current-datetime-on-windows-command-line-in-a-suitable-format-for-usi
This is what I did:
1) Created a dummy file in the C drive.
2) Copy-pasted the lines below; you can play around more with the format:
set _my_datetime=%date%_%time%
set _my_datetime=%_my_datetime: =_%
set _my_datetime=%_my_datetime::=%
set _my_datetime=%_my_datetime:/=_%
set _my_datetime=%_my_datetime:.=_%
3) Rename the file from DOS:
ren some.txt dummy_file_%_my_datetime%.txt
4) Here goes the output:
C:\>dir
dummy_file_Mon_09_20_2010_161347_21.txt
Most of the code I copied from the above URL; you can tweak it a bit based on your requirement and format.
Regards
Learner

  • SQL Server 2012 Reorg Index Job Blew up the Log File

We have a maintenance plan that nightly (1) runs DBCC CHECKDB on all databases, (2) reorganizes indexes on all databases, compacting large objects, and (3) updates statistics, etc. There are three user databases: one large, one medium, one small. Usually it uses a little more than 80% of the medium database's log, which is set to 6,700 MB. Last night the reorg index step caused the log to grow to almost 14,000 MB and then blew up: because the maximum file size was set to 14,000 MB, one of the ALTER INDEX commands failed when it ran out of log space. (The DBCC CHECKDB step ran successfully.) Anyone have any idea what might cause this? There is one update process on this database; it runs at 3 AM. The maintenance plan runs at 9 PM and completes by 1 AM. The medium database has a 21,000 MB data file, with reserved space at about 10 GB. This is SQL Server 2012 Standard SP2 running on Windows Server 2012 Standard.

"I personally like to shrink the log files once the indexes have been rebuilt and before switching back to full recovery, because as I'm going to take a full backup afterwards, having a small log file reduces the size of the backup."
Do you grow them afterwards, or do you let the application waste time on that during peak hours?
I have not checked, but I see no reason why the backup size would depend on the size of the log file; it's the data in the data file you back up, not the log file. I would say this is highly dubious.
Erland Sommarskog, SQL Server MVP, [email protected]
Yeah, I let the application allegedly "waste" a few milliseconds a day autogrowing the log file. Come on, how long do you think it takes for a log file to grow a few GB on most storage systems nowadays? As long as you set an appropriate autogrow interval so your log file doesn't get too fragmented (full of VLFs), you'll be perfectly fine in most situations.
Let's say you have a logical disk dedicated to log file storage, but it is shared across multiple databases within the instance. Having preallocated space for the log files means there will not be much free space left on the disk in case ANY database needs more space than the others due to a peak in transactional workload, even though other databases have unused space that could have been used.
What if this same disk, for some reason, is also used to store the tempdb log file? Then all applications will become unstable.
These are the main reasons I don't recommend that people blindly insist on keeping log files small whenever possible. I know there are many people who disagree, and I'm aware of their reasons. Maybe we just had different experiences with this subject. Maybe people just haven't been through the nightmare of having a corrupted system database or a crashed instance because of insufficient log space in the middle of the day.
And you are right about the size of the backup; I didn't put it correctly. It isn't the size of the backup that gets smaller (although the backup operation will run faster, having tested this myself), but the benefit of backing up a database with a small log file is that you won't need the extra space to restore it in a different environment such as a BI or DEV server, where recoverability doesn't matter and the database will be in simple recovery mode. Restoring the database will also be faster.
Just because there are clouds in the sky doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this is an undocumented behavior and should not be relied upon.

• Where can I find the log file generated by RMAN using EM 10g?

    Hi
I am trying to find the log file that is generated by RMAN when invoked from EM.
I can only see the file using Internet Explorer with the URL:
em/console/database/rec/bkpMgmt?skey=257&type=oracle_database&target=isatprod.dla_dns.com&event=showJobDe
But I need to find where the log files are located in the filesystem, because on another server I will not have EM with OC4J.
    Thanks.
    Juan.

When I use OEM for 10g and choose the option Maintenance/Backup Reports,
I can see information on all my backups, which includes:
Backup Name - Start Time - Time Taken - Status - Type - Output Devices - Input Size .....
When I click on the Status field I can see the log file of that backup.
(When I click on Status, a URL is invoked, something like the one below.)
http://10.5.0.86:1158/em/console/database/rec/bkpMgmt?skey=259&type=oracle_database&target=isatprod.dla_dns.com&event=showJobDet&objType=jobDtl
So the log file must exist somewhere for every backup made; the problem is that I cannot find it.
The log has approximately 500 lines; if you want, I can send it to you by email.
Currently I don't have a recovery catalog; I use the control file as the repository.
I don't think 500 lines of log are included in any dynamic performance view.
Thanks
Juan

  • Excessive disk usage when I drag the log file viewer window (why)?

    When I drag the Log File Viewer window in Gnome, I get huge amounts of hard disk usage and the hard drive makes a loud rumbling noise. This happens only while dragging the Log File Viewer window and no other windows (that I've noticed so far).
    Why is this happening?

    Elements11DRC
    What version of Premiere Elements are you working with and on what computer operating system is it running?
    Can we assume by your selected ID, that the program is Premiere Elements 11?
    Pending further details, I will assume that you are working with Premiere Elements 11 on Windows 7, 8, or 8.1 64 bit.
    Where is this "My Videos" Folder - on a DVD disc being used as a DataDisc for video storage purposes?
    If so, Add Media/DVD Camera or Computer Drive/Video Importer and from there automatically into the project in Project Assets as well as on the Timeline.
    If your "My Videos" Folder is a folder on the computer hard drive, then Add Media/Files and Folders to get the video into Project Assets from where you drag the video to the Timeline.
Now, for the video that you are trying to import: what are its properties?
    Video Compression
    Audio Compression
    Frame Size
    Frame Rate
    Interlaced or Progressive
    File Extension
    Pixel Aspect Ratio
    Probably answered the easiest by knowing the brand/model/settings of the camera that recorded the video.
    Prime interest, that video compression. It could be MotionJPEG which can be problematic for Premiere Elements. It could be AVCHD.avi which cannot be imported.
    We can go into greater detail on your project details once we rule in or out any of the factors mentioned above.
    By the way, what is the destination for this project....burn to disc DVD or Blu-ray...export to file saved to the computer hard drive...other?
    More later.
    Thanks.
    ATR

  • Log4j problem for backing up the log file

This is my log4j.properties. It doesn't seem to back up the log file and create a new one when it reaches the max size. Can anybody take a look?
    Thanks..
    log4j.rootCategory=debug, stdout, R
    # Print only messages of priority WARN or higher for your category
    log4j.category.your.category.name=WARN
    # Specifically inherit the priority level
    #log4j.category.your.category.name=INHERITED
    #### First appender writes to console
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    # Pattern to output the caller's file name and line number.
    log4j.appender.stdout.layout.ConversionPattern=[%d] [%c] %-5p - %m%n
    #### Second appender writes to a file
    log4j.appender.R=org.apache.log4j.RollingFileAppender
    log4j.appender.R.File=/opt/mc/logs/AMSWAS.log
    # Control the maximum log file size
    log4j.appender.R.MaxFileSize=200000KB
    log4j.appender.R.MaxBackupIndex=1
    log4j.appender.R.layout=org.apache.log4j.PatternLayout
    log4j.appender.R.layout.ConversionPattern=[%d] [%c] %-5p - %m%n
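As an aside, one way to check the rollover behavior in isolation is a tiny log4j 1.x harness that applies the same appender settings with a much smaller MaxFileSize. The class name, test file path, and the 10KB limit below are assumptions for testing, not values from the configuration above.

import java.util.Properties;
import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class RollTest {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("log4j.rootCategory", "debug, R");
        p.setProperty("log4j.appender.R", "org.apache.log4j.RollingFileAppender");
        p.setProperty("log4j.appender.R.File", "rolltest.log");
        p.setProperty("log4j.appender.R.MaxFileSize", "10KB"); // small, so it rolls fast
        p.setProperty("log4j.appender.R.MaxBackupIndex", "1");
        p.setProperty("log4j.appender.R.layout", "org.apache.log4j.PatternLayout");
        p.setProperty("log4j.appender.R.layout.ConversionPattern", "[%d] [%c] %-5p - %m%n");
        PropertyConfigurator.configure(p);

        Logger log = Logger.getLogger(RollTest.class);
        for (int i = 0; i < 2000; i++) {
            log.debug("filler line " + i); // well past 10KB in total
        }
        // Expect rolltest.log plus rolltest.log.1 once rollover has occurred;
        // with MaxBackupIndex=1 only the most recent backup is kept.
    }
}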

    Hello again Tom and thanks for your help!
No, I didn't optimize any media, though I've rendered proxies for all media. I have about 14 TB of R3D footage on five external hard drives; it's a feature. The proxies and sound files that I copied to the library are what make the library file that big. I can't consolidate all media, as far as I know, since I don't have a drive even close to the size of all the R3D footage.
In terms of copying, I have only tried the good old copy/paste method. I have never used any of the programs you mentioned. Can those programs be used to copy certain files, or will they copy an entire drive?
Will the automatic back-ups FCPX does every 15 minutes save my timelines if something goes wrong and the library file disappears? I don't fully understand how that back-up process works. I could always render new proxies, though it would take time, but re-editing all those timelines is a whole other thing. Important to note here is that I'm used to Premiere Pro and the "old" FCP, which is why all of this is so confusing.
    Thank you again!

  • History report error: | An Exceptional Error occurred. Application exiting. Check the log file for error 5022

    Hi all
I've got an error message when trying to generate a report using the Cisco Historical Reporting tool:
Error | An Exceptional Error occurred. Application exiting.  Check the log file for error 5022
It only happens when choosing the report template ICD_Contact_Service_Queue_Activity_by_CSQ_en_us.
The user tried the same thing on another PC, and it worked fine;
the error appears only on the user's own PC and only with this report.
The user is running Windows 7 and does not have Crystal Reports installed.
I tried reinstalling the software; that didn't work.
I also tried this: (https://cisco-support.hosted.jivesoftware.com/thread/2041254) - didn't work.
Then I tried https://supportforums.cisco.com/docs/DOC-6209 - didn't work.
I've attached the log file.
thanks.

wenqianyu wrote: "From the log file, it looks like you get a login window, and the error message shows up after the username/password are entered. There is an error in the log: an error happened in comparing the UCCX version and the HRC version. You may need to do a clean uninstall, download the Historical Reporting client from the server, and install it again on the PC. Does this happen on only one PC, or on every PC with this application? Wenqian"
I have completely uninstalled the HRC, downloaded it from the server, and installed it again -- it still doesn't work, with exactly the same error.
This matter only happens on this PC; when the user tries the same thing on another PC, it works.
So I think it is not related to the server or the account.

  • (Cisco Historical Reporting / HRC ) All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054

    Hi All,
I am getting the error message "All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054" when trying to log into HRC (this user has reporting capabilities). I checked the log files, and this is what I found:
The log file stated that there were ongoing connections of HRC with the CCX (I am sure there isn't any active login to HRC).
|| When you tried to log in, the following error was displayed because the maximum number of connections was reached for the server. We can see that a total of 5 connections have been configured. ||
    1: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Current number of connections (5) from historical Clients/Scheduler to 'CRA_DATABASE' database exceeded the maximum number of possible connections (5).Check with your administrator about changing this limit on server (wfengine.properties), however this might impact server performance.
|| Below we can see all 5 connections being used up. ||
    2: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:[DB Connections From Clients (count=5)]|[(#1) 'username'='uccxhrc','hostname'='3SK5FS1.ucsfmedicalcenter.org']|[(#2) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#3) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#4) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#5) 'username'='uccxhrc','hostname'='47BMMM1.ucsfmedicalcenter.org']
|| Once the maximum number of connections was reached, it threw an error. ||
    3: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Number of max connection to 'CRA_DATABASE' database was reached! Connection could not be established.
    4: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Database connection to 'CRA_DATABASE' failed due to (All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054.)
    Current exact UCCX Version 9.0.2.11001-24
    Current CUCM Version 8.6.2.23900-10
    Business impact  Not Critical
    Exact error message  All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054
What is the OS version of the PC you are running, and is it a physical or virtual machine that is running the HRC client?
OS Version: Windows 7 Home Premium 64-bit, and it's a physical machine.
The Max DB Connections for Report Client Sessions is set to 5 for each server (there are two servers). The number of HR Sessions is set to 10.
I wanted to know if there is a way to find the HRC sessions that are active now and terminate one or more (or all) of those sessions from the server end.

    We have had this "PRX5" problem with Exchange 2013 since the RTM version.  We recently applied CU3, and it did not correct the problem.  We have seen this problem on every Exchange 2013 we manage.  They are all installations where all roles
    are installed on the same Windows server, and in our case, they are all Windows virtual machines using Windows 2012 Hyper-V.
    We have tried all the "this fixed it for me" solutions regarding DNS, network cards, host file entries and so forth.  None of those "solutions" made any difference whatsoever.  The occurrence of the temporary error PRX5 seems totally random. 
    About 2 out of 20 incoming mail test by Microsoft Connectivity Analyzer fail with this PRX5 error.
    Most people don't ever notice the issue because remote mail servers retry the connection later.  However, telephone voice mail systems that forward voice message files to email, or other such applications such as your scanner, often don't retry and
    simply fail.  Our phone system actually disables all further attempts to send voice mail to a particular user if the PRX5 error is returned when the email is sent by the phone system.
    Is Microsoft totally oblivious to this problem?
    PRX5 is a serious issue that needs an Exchange team resolution, or at least an acknowledgement that the problem actually does exist and has negative consequences for proper mail flow.
    JSB

  • Is there a way to automatically collect the log files from a AirPort or TimeCapsule base station?

    Hi there,
the headline basically says it all: is there a way to collect the log files from a Time Capsule / AirPort from time to time? They are overwritten quite soon, but I want a complete log of all activity on my access point.
The AirPort Utility says something about SNMP. Is this the way to go? Some kind of daemon on my Mac that retrieves and saves the logs, say, every two days?
I want to copy the access logs; I am not interested in Time Machine backup logs.
    Thanks.

There is a way to do this via syslog. On the Logging & Statistics panel (within the AirPort Utility), you can point the AirPort's system logs to a syslog "server."
This requires a "dedicated" network client to receive the logs.
Setting up a syslog server is a bit involved initially, but it is simple to operate and maintain.
    Please check out this Apple Support Communities thread: Directing Syslog message to a file
    ref: Enable an Apple Mac OS machine as a syslog server
