Large file created in archive area.

I'm having this situation. There is a very large file being created in our archive area that starts with DATAFILES, as shown below. Can this file be deleted or purged from the archive area? If another file of this size is created in the area, it will surely occupy all the available disk space. How can I handle this situation, and why is this file being created? The first two files listed are other files being created in the same area.
-rw-r----- 1 oracle dba 10691584 Feb 16 19:00 ARCHIVES_..._20100216_mhl66926_s16081_p1
-rw-r----- 1 oracle dba 20578304 Feb 15 00:00 CONTROLFILE_c-1076815695-20100215-00
-rw-r----- 1 oracle dba 70718234624 Feb 8 00:39 DATAFILES_..._20100208_5el5f395_s15534_p1
Thanks in advance!

It looks like a backup, but we can't be sure. You'd have to identify the process / script / job that creates the file and why.
/sbin/fuser DATAFILES_..._20100208_5el5f395_s15534_p1
might help identify a process accessing / writing to that file.
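If fuser reports a PID, a small script can map it to the owning command. The sketch below is only illustrative: it assumes a Linux host with /sbin/fuser and a readable /proc filesystem, and the archive path shown is a placeholder, not your real file name.

    #!/usr/bin/env python3
    """Illustrative sketch: map the PIDs that fuser reports for a file
    to their command lines. Assumes Linux, /sbin/fuser and /proc; the
    path below is a placeholder for the real DATAFILES_... file."""
    import subprocess
    from pathlib import Path

    ARCHIVE_FILE = "/archives/DATAFILES_example"   # hypothetical path

    def pids_using(path):
        # fuser prints the file name to stderr and the PIDs (sometimes with
        # access-type letters appended) to stdout, so keep only the digits.
        out = subprocess.run(["/sbin/fuser", path],
                             capture_output=True, text=True).stdout
        return [int("".join(c for c in tok if c.isdigit()))
                for tok in out.split() if any(c.isdigit() for c in tok)]

    for pid in pids_using(ARCHIVE_FILE):
        cmd = Path(f"/proc/{pid}/cmdline").read_bytes().replace(b"\0", b" ")
        print(pid, cmd.decode(errors="replace").strip())

If the process turns out to be an RMAN backup job, the DATAFILES_... piece should be managed through RMAN (for example with a suitable retention policy and DELETE OBSOLETE) rather than removed directly at the OS level, so the backup records stay consistent.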
Hemant K Chitale
http://hemantoracledba.blogspot.com

Similar Messages

  • Reader can't read/open large files created in ArcGIS ??

    Hi there
    We have a problem with files (GeoPDFs) created primarily from ArcGIS (ESRI).
    Reader version XI or DC can't open/read them (blank screen, but all functions working). The files can be around 60 MB to 1.5 GB and above 300 DPI.
    The weird thing is that if we use Foxit, all of them open and display, so it can't be that the file is corrupt; it must be something in Reader, either a setting, a limitation, or something else.
    Anyone?

    Hi,
    We would like to take a look at the large files to investigate the cause of the problem.
    Please note that the Adobe forums do not accept email attachments.
    Would you share the links to the files?
    How to share a document
    If you are not comfortable sharing the links in this public user forum, please send me a private forum message with the links.
    Thank you.

  • Problem upload too large file in a Content Area ORA-1401

    Hi All
    I have a customer who is trying to upload a file whose name is too long into a content area.
    If the filename is 80 characters or fewer, it works fine.
    But if the filename is longer than 80 characters, the following error occurs:
    ORA-01401: inserted value too large for column
    DAD name: portal30
    PROCEDURE : PORTAL30.wwv_add_wizard.edititem
    URL : http://sbastida-us:80/pls/portal30/PORTAL30.wwv_add_wizard.edititem
    I checked the WWV_DOCUMENT table, and its FILENAME column is defined as VARCHAR2(350).
    If I run a query on this table, the FILENAME column stores the name of the uploaded file.
    Is this a new bug? May I file it?
    Do you have any idea about this issue?
    Thanks in advance,
    Catalina

    Catalina,
    The following restrictions apply to the names of documents that you are uploading into the repository:
    Filenames must be 80 characters or less.
    Filenames must not include any of these characters: \ / : * ? < > | " % # +
    Filenames can include spaces and any of these characters: ! @ ~ & . $ ^ ( ) - _ ` ' [ ] { } ; =
    Regards,
    Jerry
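    A minimal client-side check that mirrors the rules Jerry lists can catch bad names before the upload is attempted. The Python sketch below is only an illustration; the 80-character limit and the character set are taken from the reply above, not from current Portal documentation.

        # Hypothetical pre-upload filename check based on the rules quoted above.
        FORBIDDEN = set('\\/:*?<>|"%#+')   # characters Portal rejects
        MAX_LEN = 80                       # documented length limit

        def portal_filename_ok(name):
            """Return True if the filename satisfies the listed upload rules."""
            return len(name) <= MAX_LEN and not (set(name) & FORBIDDEN)

        print(portal_filename_ok("report_2010.pdf"))     # True
        print(portal_filename_ok("a" * 81 + ".pdf"))      # False: too long
        print(portal_filename_ok("bad#name.pdf"))         # False: forbidden character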

  • Large files created in TEMP folder

    Hello,
    one of our servers reported low disk space, and we discovered that it is caused by huge files created in the Temp folder.
    The file names look like C:\Windows\Temp\TMP00000A78405B314E573DA1A0.
    Process Monitor reported that they're created by FEP (MsMpEng.exe) and range from 1 MB to 1.7 GB in size.
    Is it normal behavior?
    Process Monitor excerpt:
    Time             Process  Operation               Detail
    2/8/2014 17:50   System   IRP_MJ_SET_INFORMATION  Type: SetEndOfFileInformationFile, EndOfFile: 1 048 576
    2/8/2014 17:52   System   IRP_MJ_SET_INFORMATION  Type: SetEndOfFileInformationFile, EndOfFile: 1 708 720 128
    Path (both events): C:\Windows\Temp\TMP00000A78405B314E573DA1A0
    Sincerely,
    Vladimir Vaněk

    Did you find a resolution for this? I have a server that is doing the same thing and would like to get it cleaned up.
    Thanks!
    Jeff

  • Linking image files created in Mac are broken on PC and Vice Versa

    Hello,
    A fellow designer and I are sharing Illustrator files in the office, but she is on a Mac and I am on a PC. Our linked image files are located in multiple folders on our company server, but whenever we open each other's files, all the links are broken and have to be relinked because of our different operating platforms. This can be very time-consuming, since the images are in different folders and there are many of them.
    Does anybody know how to fix this PC/Mac problem without dropping all the linked files into the same folder as the .ai file? I'm not the IT guy, so I don't think rebuilding our server structure is an option either; it would cause too much havoc around the office.
    Any help would be appreciated.

    Let me ask you something: does this sound like a viable way to work?
    The way to do it is, as you yourself suggest, to work locally with all the files in one folder, so you cannot break the links and you know where the art is at all times.
    If IT thinks this is a good way to work, then let them create a script to save all linked files to a common live folder for the project so that both computers can find them.
    But from what you describe, it is just looking for trouble, and that might be exactly what IT wants: trouble, so they can justify their job. In which case there is no solution.

  • The date and time of files created and saved are off

    I notice that under the Date Modified column, files that I have saved or updated are usually a day off. I checked the date and time in System Preferences and they are correct. Is something wrong with the battery?

    Hi, paulmont -
    Be sure the Time Zone setting is correct.
    If you have time displayed as 12-hour (rather than 24-hour), be sure the AM/PM setting is correct. Having that wrong will displace the mod/create date/time recorded for a file by 12 hours, which will (at times) shift it to another day.
    If you use Classic, make sure the same settings are in Classic as well as in OSX, particularly the Time Zone setting. This is especially true for machines which can be and are booted into OS 9. The typical symptom for this being the cause is an offset of one or a few hours.

  • XMP files created in LR5 are showing in Finder as "Microsoft Excel 97-2004 workbook" files--???

    I import NEF files as I've done for years...no problem on import.
    Now I've noticed that once I start making adjustments in LR5, the accompanying XMP files show in the Finder as "Microsoft Excel 97-2004 workbook" files instead of as "Adobe XMP" files. Huh?
    I'm able to work fine within LR, but since I noticed this I've been a) converting imported files to DNGs within LR, and b) importing new files as DNGs.
    But sometimes I want to import files as NEFs...any idea why this is happening? Is it a LR bug? Is DNG conversion the best (only?) solution?
    Thanks in advance to anyone who has a clue!

    Hmmm, my concern isn't with opening the XMP, it's that the file type seems to have changed and I don't know how that's happening.
    Here's a screenshot comparison from Finder windows:

  • Ok to store large files on desktop?

    Are there consequences (e.g., running slowly) to storing large or numerous files on the desktop?
    Sometimes I use the desktop to store photos until I complete a project. There are other files on the desktop that I could remove if they cause the computer to slow down. If not, I'll keep them there because it's easy and convenient.
    I have 4 GB of memory.

    Hi Junk,
    I can't think of any real consequences to storing your large files on your desktop. After all, from your system's point of view, your desktop is just another folder. There is some system cost to storing lots of files there, as it can decrease performance, but with 4 GB of memory I'm not sure how much of a difference you would see. And there's a big gap between "I'm storing 15 photos on my desktop" and "I can no longer see my hard disk icon in this mess." If the idea of even potential system slowdowns bothers you, create a work folder, leave it on your desktop, and drop all of those photos in it. If you're not getting too insane with the amount, though, I'm sure you'll be fine.
    Hope that helps!
    —Hazy

  • How does Time Machine handle large files?

    I'm relatively new to the whole Time Capsule / Time Machine process and have learned that large files (e.g. an Aperture library) are backed up each time there is a change, which can lead to the TC filling up quicker than normal.
    How does this work with daily and weekly backups?
    For example, say my Aperture library is 1 GB and I import a load of photos from my camera so it grows to 2 GB. I've learned that I should disable Time Machine while I'm in Aperture (or at least before 10.6; not sure now). So say I've done that and imported the files into Aperture, but I want to edit them later and ultimately move them into iPhoto to keep the Aperture library small.
    When I turn Time Machine back on, the next hourly backup will know the library has changed and will back it up, and this will go on until a daily backup has been taken. Does this delete the 24 hourly backups, or does it merge them?
    If I then do the editing the following week, export the photos, and the library is back to 1 GB again, backed up hourly/daily/weekly, what am I left with?
    Do I have the original, the 2 GB version, and the new 1 GB version, i.e. 4 GB? Is there a cunning way I can work so that, if I change the files within a week, only one of the changes ends up in the backup?

    Orpheus999 wrote:
    When I turn Time Machine back on, the next hourly backup will know the library has changed and will back it up, and this will go on until a daily backup has been taken. Does this delete the 24 hourly backups, or does it merge them?
    The Time Machine panel of System Preferences says this:
    Time Machine keeps
    - Hourly backups for the past 24 hours
    - Daily backups for the past month
    - Weekly backups until your backup disk is full
    Each time Time Machine runs it creates what appears to be an entirely new backup set, although it does this in a way that doesn't require it to copy files that have already been copied. So merging isn't necessary. Another effect of how it operates is that each unique version of a file (as opposed to packages of files) only exists on the backup volume once.
    According to the contents of my Time Machine backup file, hourly backups are literally kept for 24 hours, not until the next "daily" backup. For a "daily" backup, it seems to keep the oldest "hourly" backup for a day.
    If I then do the editing the following week, export the photos, and the library is back to 1 GB again, backed up hourly/daily/weekly, what am I left with?
    Do I have the original, the 2 GB version, and the new 1 GB version, i.e. 4 GB? Is there a cunning way I can work so that, if I change the files within a week, only one of the changes ends up in the backup?
    You might be able to exclude those files from being backed up at certain times, but I can't be sure this would result in older copies of those files being retained.

  • DNG files created with LR4 do not show thumbnails

    Hi, 
    When using LR3 to create DNG files from my RAW files, those files would show thumbnails of my images in Windows Explorer. I am running Windows 7 (64-bit) and have installed a codec from "Fast Picture Viewer" that allows thumbnails from RAW and DNG files to be shown. It seems, however, that the thumbnails do not show for DNGs created with LR4.
    To remedy this I uninstalled the codec, downloaded the most recent version, and installed it, but this did not fix the problem.
    I then went to the user forum for the codec product and searched for my problem. I found a thread whose solution is to turn off "Embed Fast Load Data" when exporting to DNG, and it fixes the problem. I tried this and indeed it did fix it. The responder went on to say "...When this option is enabled the files created are no longer DNGs (just an undocumented private format of Adobe that no one else can read to this date)". This statement surprised me, as it is counter to what I understand Adobe created DNG to be. Can I get some input on this comment? If it is true, it is very troubling.
    My second question is that I see where to turn off "Embed Fast Load Data" in the LR Export module, but where do I do the same thing in the Import module when I'm selecting import mode "Copy to DNG"?
    And my third question is this: if the DNG files created by LR4 are indeed proper DNG files and this codec is just flawed in some way, does anyone have a better way to show image thumbnails in Windows Explorer?
    Thanks -- Dan

    Hi David,
    Your testing does not coincide with mine. I have consistently kept "Use lossy compression" turned off. With lossy turned off, it seems that using "Embed Fast Load Data" prevents the thumbnails from displaying, whereas turning off "Embed Fast Load Data" allows the thumbnails to be shown.
    Below is a comment from Adobe confirming that they have yet to release the spec containing "Fast Load Data".
    From: Ian Lyons [email protected]
    Sent: Tuesday, June 05, 2012 12:29 AM
    To: Califdan
    Subject: Re: DNG files created with LR4 do not show thumbnails
    Created by Ian Lyons in Photoshop Lightroom.

  • Re: Transfer Large Files between SAP Systems

    Dear All,
    Can someone please advise a quick way of transferring large files from one SAP system to another? Can we use ALE? Can ALE handle large files? If there are other ways, please advise at your earliest convenience. Thanks a lot.

    Yes, you can use ALE between two SAP R/3 systems.
    It is an efficient way to transfer data from one SAP R/3 system to another.

  • Windows Explorer misreads large-file .zip archives

       I just spent about 90 minutes trying to report this problem through
    the normal support channels with no useful result, so, in desperation,
    I'm trying here, in the hope that someone can direct this report to some
    useful place.
       There appears to be a bug in the .zip archive reader used by Windows
    Explorer in Windows 7 (and up, most likely).
       An Info-ZIP Zip user recently reported a problem with an archive
    created using our Zip program.  The archive was valid, but it contained
    a file which was larger than 4GiB.  The complaint was that Windows
    Explorer displayed (and, apparently believed) an absurdly large size
    value for this large-file archive member.  We have since reproduced the
    problem.
       The original .zip archive format includes uncompressed and compressed
    sizes for archive members (files), and these sizes were stored in 32-bit
    fields.  This caused problems for files which are larger than 4GiB (or,
    on some system types, where signed size values were used, 2GiB).  The
    solution to this fundamental limitation was to extend the .zip archive
    format to allow storage of 64-bit member sizes, when necessary.  (PKWARE
    identifies this format extension as "Zip64".)
       The .zip archive format includes a mechanism, the "Extra Field", for
    storing various kinds of metadata which had no place in the normal
    archive file headers.  Examples include OS-specific file-attribute data,
    such as Finder info and extended attributes for Apple Macintosh; record
    format, record size, and record type data for VMS/OpenVMS; universal
    file times and/or UID/GID for UNIX(-like) systems; and so on.  The Extra
    Field is where the 64-bit member sizes are stored, when the fixed 32-bit
    size fields are too small.
       An Extra Field has a structure which allows multiple types of extra
    data to be included.  It comprises one or more "Extra Blocks", each of
    which has the following structure:
           Size (bytes) | Description
          --------------+------------
                2       | Type code
                2       | Number of data bytes to follow
            (variable)  | Extra block data
       The problem with the .zip archive reader used by Windows Explorer is
    that it appears to expect the Extra Block which includes the 64-bit
    member sizes (type code = 0x0001) to be the first (or only) Extra Block
    in the Extra Field.  If some other Extra Block appears at the start of
    the Extra Field, then its (non-size) data are being incorrectly
    interpreted as the 64-bit sizes, while the actual 64-bit size data,
    further along in the Extra Field, are ignored.
       Perhaps the .zip archive _writer_ used by Windows Explorer always
    places the Extra Block with the 64-bit sizes in this special location,
    but the .zip specification does not demand any particular order or
    placement of Extra Blocks in the Extra Field, and other programs
    (Info-ZIP Zip, for example) should not be expected to abide by this
    artificial restriction.  For details, see section "4.5 Extensible data
    fields" in the PKWARE APPNOTE:
          http://www.pkware.com/documents/casestudies/APPNOTE.TXT
       A .zip archive reader is expected to consider the Extra Block type
    codes, and interpret accordingly the data which follow.  In particular,
    it's not sufficient to trust that any particular Extra Block will be the
    first one in the Extra Field.  It's generally safe to ignore any Extra
    Block whose type code is not recognized, but it's crucial to scan the
    Extra Field, identify each Extra Block, and handle it according to its
    type.
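       For illustration only, here is a Python sketch of the kind of scanning
    a correct reader has to do (it is not the Windows Explorer code, which is
    not public): it walks the Extra Field block by block and picks out the
    Zip64 (0x0001) data wherever it appears, rather than assuming it comes
    first.

        import struct

        def parse_extra_field(extra):
            """Split a .zip Extra Field into {type_code: data} blocks.

            Follows APPNOTE section 4.5: each Extra Block is a 2-byte
            little-endian type code, a 2-byte data length, then the data."""
            blocks, pos = {}, 0
            while pos + 4 <= len(extra):
                type_code, size = struct.unpack_from("<HH", extra, pos)
                blocks[type_code] = extra[pos + 4 : pos + 4 + size]
                pos += 4 + size
            return blocks

        def zip64_sizes(extra):
            """Return (uncompressed, compressed) sizes from the Zip64 block
            (type 0x0001), wherever it sits in the Extra Field.

            Assumes both 32-bit size fields in the file header were
            0xFFFFFFFF, so the block begins with the two 64-bit sizes."""
            data = parse_extra_field(extra).get(0x0001)
            if data is None or len(data) < 16:
                return None
            return struct.unpack_from("<QQ", data, 0)

       The point is simply that the reader keys off each block's type code
    instead of trusting that the 0x0001 block is the first one.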
       Here are some relatively small (about 14MiB each) test archives which
    illustrate the problem:
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_V.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_W.zip
       Correct info, from UnZip 6.00 ("unzip -lv"):
    Archive:  test_4g.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_V.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_W.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    (In these reports, "Length" is the uncompressed size; "Size" is the
    compressed size.)
       Incorrect info, from (Windows 7) Windows Explorer:
    Archive        Name          Compressed size   Size
    test_4g.zip    test_4g.txt         14,454 KB   562,951,376,907,238 KB
    test_4g_V.zip  test_4g.txt         14,454 KB   8,796,110,221,518 KB
    test_4g_W.zip  test_4g.txt         14,454 KB   1,464,940,363,777 KB
       Faced with these unrealistic sizes, Windows Explorer refuses to
    extract the member file, for lack of (petabytes of) free disk space.
       The archive test_4g.zip has the following Extra Blocks: universal
    time (type = 0x5455) and 64-bit sizes (type = 0x0001).  test_4g_V.zip
    has: PKWARE VMS (type = 0x000c) and 64-bit sizes (type = 0x0001).
    test_4g_W.zip has: NT security descriptor (type = 0x4453), universal
    time (type = 0x5455), and 64-bit sizes (type = 0x0001).  Obviously,
    Info-ZIP UnZip has no trouble correctly finding the 64-bit size info in
    these archives, but Windows Explorer is clearly confused.  (Note that
    "1,464,940,363,777 KB" translates to 0x0005545500000400 (bytes), and
    "0x00055455" looks exactly like the size, "0x0005" and the type code
    "0x5455" for a "UT" universal time Extra Block, which was present in
    that archive.  This is consistent with the hypothesis that the wrong
    data in the Extra Field are being interpreted as the 64-bit size data.)
       Without being able to see the source code involved here, it's hard to
    know exactly what it's doing wrong, but it does appear that the .zip
    reader used by Windows Explorer is using a very (too) simple-minded
    method to extract 64-bit size data from the Extra Field, causing it to
    get bad data from a properly formed archive.
       I suspect that the engineer involved will have little trouble finding
    and fixing the code which parses an Extra Field to extract the 64-bit
    sizes correctly, but if anyone has any questions, we'd be happy to help.
       For the Info-ZIP (http://info-zip.org/) team,
       Steven Schweda

    > We can't get the source (info-zip) program for test.
       I don't know why you would need to, but yes, you can:
          http://www.info-zip.org/
          ftp://ftp.info-zip.org/pub/infozip/src/
    You can also get pre-built executables for Windows:
          ftp://ftp.info-zip.org/pub/infozip/win32/unz600xn.exe
          ftp://ftp.info-zip.org/pub/infozip/win32/zip300xn.zip
    > In addition, since other zip application runs correctly. Since it should
    > be your software itself issue.
       You seem to misunderstand the situation.  The facts are these:
       1.  For your convenience, I've provided three test archives, each of
    which includes a file larger than 4GiB.  These archives are valid.
       2.  Info-ZIP UnZip (version 6.00 or newer) can process these archives
    correctly.  This is consistent with the fact that these archives are
    valid.
       3.  Programs from other vendors can process these archives correctly.
    I've supplied a screenshot showing one of them (7-Zip) doing so, as you
    requested.  This is consistent with the fact that these archives are
    valid.
       4.  Windows Explorer (on Windows 7) cannot process these archives
    correctly, apparently because it misreads the (Zip64) file size data.
    I've supplied a screenshot of Windows Explorer showing the bad file size
    it gets, and the failure that occurs when one tries to use it to extract
    the file from one of these archives, as you requested.  This is
    consistent with the fact that there's a bug in the .zip reader used by
    Windows Explorer.
       Yes, "other zip application runs correctly."  Info-ZIP UnZip runs
    correctly.  Only Windows Explorer does _not_ run correctly.

  • Create AAC Version failing in iTunes 8 for large files

    Hi,
    I'm having trouble converting a few large (1gb+) MP3 files to AAC versions.
    The files are Audiobooks which have been ripped from CD, then merged into a single MP3 file (with a track CUE file created so I can add the chapter marks and create a final resulting M4B file).
    The MP3 files are all encoded at a 44.1 kHz sample rate, 64 kbps bitrate, mono audio. When converting anything under 1 GB (using iTunes' Create AAC Version option), it seems to convert fine; however, as soon as I try a file over 1 GB, it seems to start as expected, gets approximately two-thirds of the way through, then restarts. The restart, however, does not start from the beginning; it starts from the point it had reached. The resulting M4A file is corrupt and unplayable.
    Am I doing anything wrong here, or is there some sort of size/length limitation on AAC encoded audio in MP4 containers?
    On a side note, I have set the AAC import options to a 44.1 kHz sample rate, 64 kbps bitrate, mono audio, but these do not seem to be adhered to when converting an MP3 file; iTunes instead takes whatever settings it thinks are right...

    Hi,
    That software looks to be working perfectly so far - not only has it changed me from using three tools to do what this one does on its own, but it's also the first that's given me a clue as to why the other tools were failing!
    When I first got it to convert the chaptered MP3 files across to AAC, it came up with a warning message saying something along the lines of "The audio files you have queued are a total of x,xxx,xxx,xxx frames long, which is longer than the AAC standards can accommodate. The following tracks will be removed...". Looks like it wasn't the file size at all; the bitrate was too high and went over the format's limits that way instead.
    Quick note on the settings: I tried the iTunes standard "64K Podcast" settings first of all, but the sample rate got set to 22K, which caused the speech to go all robot-like, and the S's slurred. Had to bump it back up to 44K (with 32K stereo bitrate, set to mono so 16K overall).
    Overall, a very useful tool, thanks Chris!

  • Weird filenames are getting created in the archive directory.

    Hi All,
    Weird filenames are getting created in the archive directory.
    I have one FTP adapter; after a file gets processed, the files are moved to the archive directory.
    But they end up with a different, seemingly randomly generated filename, although the content of the file remains the same.
    Please suggest.
    Thanks.

    This is standard functionality; it uses the original filename with a date-time suffix.
    cheers
    James
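    For illustration only (the exact suffix format depends on the adapter and its configuration, so treat the timestamp pattern below as an assumption), the naming scheme James describes looks roughly like this:

        from datetime import datetime
        from pathlib import Path

        def archive_name(original):
            """Sketch of 'original filename + date-time suffix' archive naming."""
            stamp = datetime.now().strftime("%Y%m%d%H%M%S")   # assumed format
            p = Path(original)
            return f"{p.stem}_{stamp}{p.suffix}"

        print(archive_name("orders.csv"))   # e.g. orders_20100216190001.csv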

  • Append PDF files to create one large file

    This is what I would like to do:
    I have a .rar folder with numerous files in it.
    Within the main folder, there are about 10 subfolders.
    Each of these subfolders has about 15 PDF documents in them.
    I would like to take all 150 (15 PDF's x 10 subfolders) PDF documents and append them into one large PDF document.
    They are all in order. How could I accomplish this in LabVIEW?
    * Using 8.5
    Cory K

    I made this one a while ago.
    Enter the path to the directory containing your pdf files.
    Enter a pattern (ex. *.pdf).
    Enter a name for the file to be created.
    Run the VI.  It'll  fill the array with the files that match your pattern.
    Enter the page numbers from each file that you want in in the output file into the Pages controls.
    Hit "Go".  It'll build your pdf with the selected pages using pdftk and System Exec.
    You'll need to modify it to handle your use-case, but it should be easy (unless .rar indicates you're not working in Windows!)
    Jim
    You're entirely bonkers. But I'll tell you a secret. All the best people are. ~ Alice
    Attachments:
    LV_PDF.zip (1477 KB)
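    For readers not using LabVIEW, a rough command-line equivalent of the same idea is sketched below. It assumes pdftk is installed and on the PATH; the directory and output names are placeholders. It simply concatenates every PDF found under the source tree, all pages, in sorted order.

        """Hedged sketch: merge all PDFs under a directory tree with pdftk."""
        import subprocess
        from pathlib import Path

        SOURCE_DIR = Path("extracted_rar")   # hypothetical: the unpacked .rar contents
        OUTPUT_PDF = "combined.pdf"

        # Sort so subfolder/file ordering matches the numbering on disk.
        pdfs = sorted(str(p) for p in SOURCE_DIR.rglob("*.pdf"))

        # pdftk <inputs...> cat output <merged> : plain concatenation, all pages.
        subprocess.run(["pdftk", *pdfs, "cat", "output", OUTPUT_PDF], check=True)
        print(f"Merged {len(pdfs)} files into {OUTPUT_PDF}")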
