HTMLDB_tools, processing a very large CSV file in a collection
Hi Guys,
I'm new to APEX and I'm trying to load approximately 1,000,000 rows into a table from a CSV file. Does anyone have a better way of doing it than htmldb_tools.parse_file (which is very slow)?
Cheers
It's not APEX, but you could use SQL*Loader. It's really very fast!
greets, Dik
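Dik's suggestion can be sketched concretely. A minimal SQL*Loader setup might look like the following; the file, table, and column names here are assumptions, and direct path loading (direct=true) is usually what makes it fast for a million rows:

```sql
-- load_data.ctl (hypothetical file, table, and column names)
LOAD DATA
INFILE 'big_file.csv'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(col1, col2, col3)
```

Invoked from the command line as, e.g., `sqlldr userid=scott/tiger control=load_data.ctl direct=true`.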
Similar Messages
-
Best data structure for dealing with very large CSV files
Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler: operations like copying a column, eliminating rows, performing some equation on all values in a certain column, etc. There is also a method for printing back to a file.
However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading the data into an array isn't possible, as it produces an OutOfMemoryError.
Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, and it needs an external file that would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
Any suggestions would be greatly appreciated.
Message was edited by:
ninjarob
How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
If the data size turns out to be prohibitive of loading into memory, how about a relational database?
Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero). -
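If memory really is the constraint, one alternative to a RandomAccessFile is to stream the file one line at a time, transform it, and write it back out, so only a single row is ever in memory. A minimal Java sketch (the class and method names are made up for illustration, and it assumes simple comma-separated fields with no embedded commas; a real CSV parser is needed otherwise):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

// Streams a CSV source line by line so memory use stays constant,
// applying an operation to one column instead of materializing all rows.
public class CsvColumnScaler {
    // Multiplies the values in the given column by `factor`, writing
    // each transformed row to `out`.
    public static void scaleColumn(Reader src, Appendable out,
                                   int column, double factor) throws IOException {
        BufferedReader reader = new BufferedReader(src);
        String line;
        while ((line = reader.readLine()) != null) {
            String[] fields = line.split(",", -1);
            fields[column] = String.valueOf(
                    Double.parseDouble(fields[column]) * factor);
            out.append(String.join(",", fields)).append('\n');
        }
    }
}
```

The same pattern (read a row, act on it, emit it) covers copy-column and eliminate-row operations too; only whole-file sorts genuinely need either memory or a database.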
Import very large csv files into SAP
Hi
We're not using PI, but have middleware called Trading Networks. Our design is fixed (not my decision): we do not upload files to the Application Server and import them from there. Our design dictates that we must write RFCs, and Trading Networks will call the RFC per interface with the very large file sent as a table of strings. This takes 14 minutes to import into a plain SAP Z-table, from where we'll integrate. As a test we uploaded the file to the Application Server and integrated it into the Z-table from there. This took 4 minutes. However, our architect is not impressed, because we'll stretch the available Application Server to its limits.
I want to propose that the large file be split into e.g. 4 parts at Trading Networks level, calling 4 threads of the RFC, which could reduce integration time to e.g. 3 minutes 30 seconds. Does anyone have suggestions in this regard, especially a proposed, working, elegant solution for integrating large files in our current environment? This will form the foundation of our project.
Thank you and best regards,
Adrian
Zip compression can be tried. The RFC will receive a zip stream, which will be decompressed using CL_ABAP_ZIP.
-
HELP!! Very Large Spooling / File Size after Data Merge
My question is: If the image is the same and only the text is different why not use the same image over and over again?
Here is what happens...
Using CS3 and XP (P4 2.4Ghz, 1GB Ram, 256MB Video Card) I have taken a postcard pdf (the backside) placed it in a document, then I draw a text box. Then I select a data source and put the fields I wish to print (Name, address, zip, etc.) in the text box.
Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview multiple records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
Now that my merged document is created, with four postcards per page and the mailing data on each card, I go to print. When I print, the file spools up huge! The PDF I originally placed in the document is 2.48 MB, but when it spools I can only print 25 pages at a time, and that still takes forever. So again my question is: if the image is the same and only the text is different, why not use the same image over and over again?
How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document, with the same result. I have also tried creating a merged document with just the addresses and then adding the PDF on the master page afterward, but again, a huge file size while spooling. Am I missing something? Any help is appreciated :)
The size of the EMF spool file may become very large when you print a document that contains lots of raster data
View products that this article applies to.
Article ID : 919543
Last Review : June 7, 2006
Revision : 2.0
On This Page
SYMPTOMS
CAUSE
RESOLUTION
STATUS
MORE INFORMATION
Steps to reproduce the problem
SYMPTOMS
When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
Back to the top
CAUSE
This problem occurs because the Graphics Device Interface (GDI) does not compress raster data when it generates EMF spool files.
This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
Back to the top
RESOLUTION
To resolve this problem, bypass EMF spooling. To do this, follow these steps:
1. Open the properties dialog box for the printer.
2. Click the Advanced tab.
3. Click the Print directly to the printer option.
Note This will disable all print processor-based features, such as the following:
N-up
Watermark
Booklet printing
Driver collation
Scale-to-fit
Back to the top
STATUS
Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
Back to the top
MORE INFORMATION
Steps to reproduce the problem
1. Open the properties dialog box for any inbox printer.
2. Click the Advanced tab.
3. Make sure that the Print directly to the printer option is not selected.
4. Click to select the Keep printed documents check box.
5. Print an Adobe .pdf document that contains many groups of raster data.
6. Check the size of the EMF spool file. -
I am a scientist and run my own business. Money is tight. I have some very large Excel files (~200MB) that I need to sort and perform logic operations on. I currently use a MacBookPro (i7 core, 2.6GHz, 16GB 1600 MHz DDR3) and I am thinking about buying a multicore MacPro. Some of the operations take half an hour to perform. How much faster should I expect these operations to happen on a new MacPro? Is there a significant speed advantage in the 6 core vs 4 core? Practically speaking, what are the features I should look at, and what is the speed bump I should expect if I go to 32GB or 64GB? Related to this, I am using a 32-bit version of Excel. Is there a 64-bit spreadsheet that I can use on a Mac that has no limit on column and row size?
Grant Bennet-Alder,
It’s funny you mentioned using Activity Monitor. I use it all the time to watch when a computation cycle is finished so I can avoid a crash. I keep it up in the corner of my screen while I respond to email or work on a grant. Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so. As long as I leave Excel alone while it is working it will not crash. I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application. That is clearly a problem for this kind of work. Is there any work around for this? It seems like a 64-bit spreadsheet would help. I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns. I tried it out on my MacBook Pro but my files don’t fit.
The hatter,
This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products. When I started computing this was the sort of thing computers were designed to do. Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple? Excel is only 64-bit on their machines.
Many thanks to both of you for your quick and on point answers! -
I found some very large "frame" files; what are they, and can I delete them? (See screenshot.) I'm a film-maker (17 years old today) and can't edit in FCP X anymore because I "don't have enough space". Every time I try to delete one, another identical file creates itself...
If that can help: I just upgraded to FCP 10.0.4, and every time I launch it, it asks to convert my current projects (I knew it would do it at least once) and I accept, but every time I have to do it AGAIN. My computer is slower than ever and I have a deadline this Friday.
I also just upgraded to Mac OS X 10.7.4, and the problem hasn't been here for long, so it may be linked...
Please help me!
Alex
The first thing you should do is back up your personal data. It is possible that your hard drive is failing. If you are using Time Machine, that part is already done.
Then, I think it would be easiest to reformat the drive and restore. If you ARE using Time Machine, you can start up from your Leopard installation disc. At the first Installer screen, go up to the menu bar, and from the Utilities menu, first select to run Disk Utility. Completely erase the internal drive using the Erase tab; make sure you have the internal DRIVE (not the volume) selected in the sidebar, and make sure you are NOT erasing your Time Machine drive by mistake. After erasing, quit Disk Utility, and select the command to restore from backup from the same Utilities menu. Using that Time Machine volume restore utility, you can restore it to a time and date immediately before you went on vacation, when things were working.
If you are not using Time Machine, you can erase and reinstall the OS (after you have backed up your personal data). After restarting from the new installation and installing all the updates using Software Update, you can restore your personal data from the backup you just made. -
Best technology to navigate through a very large XML file in a web page
Hi!
I have a very large XML file that needs to be displayed in my web page, may be as a tree structure. Visitors should be able to go to any level depth nodes and access the children elements or text element of those nodes.
I thought about using DOM parser with Java but dropped that idea as DOM would be stored in memory and hence its space consuming. Neither SAX works for me as every time there is a click on any of the nodes, my SAX parser parses the whole document for the node and its time consuming.
Could anyone please tell me the best technology and best parser to use for very large XML files?
Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for each and every click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.
You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far. -
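For comparison with the database option, a pull parser such as StAX can stream a large document and stop as soon as it has the children of the requested node, without ever building an in-memory tree. A rough Java sketch (class and method names are illustrative):

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

// Pull-parses XML with StAX: the document is never held in memory,
// and parsing stops as soon as the requested element has been read.
public class XmlChildLister {
    // Returns the tag names of the direct children of the first
    // element named `target`.
    public static List<String> childNames(Reader xml, String target)
            throws XMLStreamException {
        XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(xml);
        List<String> names = new ArrayList<>();
        int depth = -1;                        // depth inside target; -1 = not inside yet
        while (r.hasNext()) {
            int ev = r.next();
            if (ev == XMLStreamConstants.START_ELEMENT) {
                if (depth < 0 && r.getLocalName().equals(target)) {
                    depth = 0;                 // entered the target element
                } else if (depth >= 0) {
                    if (depth == 0) names.add(r.getLocalName());
                    depth++;
                }
            } else if (ev == XMLStreamConstants.END_ELEMENT && depth >= 0) {
                if (depth == 0) return names;  // finished the target element
                depth--;
            }
        }
        return names;
    }
}
```

The cost of each click is then proportional to how far into the file the node sits, which is still linear in the worst case; that trade-off is exactly why the database suggestion above tends to win for deep, frequent navigation.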
Have a very large text file, and need to read lines in the middle.
I have very large txt files (around several hundred megabytes), and I want to be able to skip and read specific lines. More specifically, say the file looks like:
scan 1
scan 2
scan 3
scan 100,000
I want to be able to move the file reader immediately to scan 50,000, rather than having to read through scans 1-49,999.
Thanks for any help.
If the lines are all different lengths (as in your example), then there is nothing you can do except read and ignore the lines you want to skip over.
If you are going to be doing this repeatedly, you should consider reformatting those text files into something that supports random access. -
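The reformatting suggestion above can be as simple as a one-pass index of byte offsets: scan the file once, remember where each line starts, and then seek() directly to any line afterwards. A rough Java sketch (the class name is illustrative, and a buffered scan would be needed in practice, since single-byte reads over hundreds of megabytes are slow):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

// Builds a byte-offset index in one pass so any line can later be
// fetched with a single seek instead of reading all preceding lines.
public class LineIndex {
    private final RandomAccessFile file;
    private final List<Long> offsets = new ArrayList<>();

    public LineIndex(RandomAccessFile file) throws IOException {
        this.file = file;
        long pos = 0;
        offsets.add(0L);               // line 0 starts at byte 0
        file.seek(0);
        int b;
        while ((b = file.read()) != -1) {
            pos++;
            if (b == '\n') offsets.add(pos);  // next line starts after newline
        }
    }

    // Returns line `n` (0-based) without reading the lines before it.
    public String line(int n) throws IOException {
        file.seek(offsets.get(n));
        return file.readLine();
    }
}
```

The index for a few hundred thousand lines is only a few megabytes of longs, so it fits in memory even when the file itself does not; if the scans are fixed-width records instead, the offset is just record number times record length and no index is needed.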
How can NI FBUS Monitor display very large recorded files
NI FBUS Monitor version 3.0.1 outputs an error message "Out of memory", if I try to load a large recorded file of size 272 MB. Is there any combination of operating system (possible Vista32 or Vista64) and/or physical memory size, where NI FBUS Monitor can display such large recordings ? Are there any patches or workarounds or tools to display very large recorded files?
Hi,
NI-FBUS Monitor does not set a limit on the maximum record file size. The physical memory size in the system is one of the most important factors that affect the loading of a large record file; Monitor tries to load the entire file into memory during the file open operation.
272 MB is a really large file. To open it, your system must have sufficient physical memory available; otherwise an "Out of memory" error will occur.
I would recommend that you do not use Monitor to open a file larger than 100 MB. Loading too large a file will consume system memory quickly and decrease performance.
Message Edited by Vince Shen on 11-30-2009 09:38 PM
Feilian (Vince) Shen -
I am generating some very large XML files (600,000+ lines, 50 MB+ of characters). I finally have them all being valid XML and valid UTF-8.
But the files are so large that Safari and Chrome will often not open them. Firefox will, though.
Instead of these browsers, I was wondering if there are any other recommended apps for the Mac for opening and viewing the XML, getting an error message if it is not valid for some reason, and examining the XML tree?
I opened the file in the default app for XML which is Xcode, but that is just like opening it in a plain text editor. You can't expand/collapse the XML tree like you can with a browser, and it doesn't report errors.
Thanks,
Doug
Hi Tom,
I had not seen that list. I'll look it over.
I'm also in touch with the developer of BBEdit (they are quite responsive) and they are willing to look at the file in question and see why it is not reporting UTF-8 errors while Chrome is.
For now I have all the invalid characters quashed and things are working. But it would be useful in the future.
By the by, some of those editors are quite pricey!
doug -
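When no viewer copes with a file this size, a small streaming well-formedness check is one stopgap: a StAX parser reads the document in constant memory and reports the first error it hits, including line and column. A Java sketch (the class name is hypothetical):

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.Reader;

// Streams the whole document through a StAX parser purely to check
// well-formedness; memory use is constant regardless of file size.
public class XmlChecker {
    // Returns null if the XML is well-formed, otherwise the parser's
    // error message (which includes location information).
    public static String check(Reader xml) {
        try {
            XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(xml);
            while (r.hasNext()) r.next();   // walk every event; errors throw
            return null;
        } catch (XMLStreamException e) {
            return e.getMessage();
        }
    }
}
```

This only checks well-formedness, not schema validity or the UTF-8 byte stream itself; wrapping the input in an InputStreamReader with a strict UTF-8 decoder would catch encoding errors too.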
Very large bdump file sizes, how to solve?
Hi gurus,
I keep finding that my disk space is not enough. After checking, it is the oraclexe/admin/bdump folder: there's currently 3.2 GB in it, while my database is very small, holding only about 10 MB of data.
It didn't happen before, only recently.
I don't know why it happened. I have deleted some old files in that folder, but today I found it is still very large compared to my database.
I am running an APEX application with XE; the application works well and we don't see anything wrong, but the bdump files are very big.
Any tip to solve this? Thanks.
Here is my alert_xe.log file content:
Thu Jun 03 16:15:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5600.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:15:48 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=5452
Thu Jun 03 16:15:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:16:16 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:20:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:21:50 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:25:56 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:26:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:30:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:31:19 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:36:00 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:36:46 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=1312
Thu Jun 03 16:36:49 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:37:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:41:51 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:42:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:46:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:47:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:51:57 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:52:35 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:56:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:57:10 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=3428
Thu Jun 03 16:57:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:57:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:02:16 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:02:48 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:07:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:08:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:12:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:12:41 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:17:21 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:17:34 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=5912
Thu Jun 03 17:17:37 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:18:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:22:37 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:23:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:27:39 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:28:02 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:32:42 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:33:07 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:37:45 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:38:40 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=1660
Thu Jun 03 17:38:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:39:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:42:54 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=31, OS id=6116
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174259', 'KUPC$S_1_20100603174259', 0);
Thu Jun 03 17:43:38 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=32, OS id=2792
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174338', 'KUPC$S_1_20100603174338', 0);
Thu Jun 03 17:43:44 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:06 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:47 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=33, OS id=3492
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174448', 'KUPC$S_1_20100603174448', 0);
kupprdp: worker process DW01 started with worker id=1, pid=34, OS id=748
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM');
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5684K exceeds notification threshold (2048K)
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5681K exceeds notification threshold (2048K)
Details in trace file c:\oraclexe\app\oracle\admin\xe\bdump\xe_dw01_748.trc
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:48:47 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:49:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:53:49 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:54:28 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
Fri Jun 04 07:46:55 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:1653M/2047M, Ph+PgF:4706M/4958M, VA:1944M/2047M
Fri Jun 04 07:46:55 2010
Starting ORACLE instance (normal)
Fri Jun 04 07:47:06 2010
LICENSE_MAX_SESSION = 100
LICENSE_SESSIONS_WARNING = 80
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =33
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
System parameters with non-default values:
processes = 200
sessions = 300
license_max_sessions = 100
license_sessions_warning = 80
sga_max_size = 838860800
__shared_pool_size = 260046848
shared_pool_size = 209715200
__large_pool_size = 25165824
__java_pool_size = 4194304
__streams_pool_size = 8388608
spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
sga_target = 734003200
control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
__db_cache_size = 432013312
compatible = 10.2.0.1.0
db_recovery_file_dest = D:\
db_recovery_file_dest_size= 5368709120
undo_management = AUTO
undo_tablespace = UNDO
remote_login_passwordfile= EXCLUSIVE
dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
shared_servers = 10
job_queue_processes = 1000
audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
db_name = XE
open_cursors = 300
os_authent_prefix =
pga_aggregate_target = 209715200
PMON started with pid=2, OS id=3044
MMAN started with pid=4, OS id=3052
DBW0 started with pid=5, OS id=3196
LGWR started with pid=6, OS id=3200
CKPT started with pid=7, OS id=3204
SMON started with pid=8, OS id=3208
RECO started with pid=9, OS id=3212
CJQ0 started with pid=10, OS id=3216
MMON started with pid=11, OS id=3220
MMNL started with pid=12, OS id=3224
Fri Jun 04 07:47:31 2010
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 10 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
PSP0 started with pid=3, OS id=3048
Fri Jun 04 07:47:41 2010
alter database mount exclusive
Fri Jun 04 07:47:54 2010
Setting recovery target incarnation to 2
Fri Jun 04 07:47:56 2010
Successful mount of redo thread 1, with mount id 2601933156
Fri Jun 04 07:47:56 2010
Database mounted in Exclusive Mode
Completed: alter database mount exclusive
Fri Jun 04 07:47:57 2010
alter database open
Fri Jun 04 07:48:00 2010
Beginning crash recovery of 1 threads
Fri Jun 04 07:48:01 2010
Started redo scan
Fri Jun 04 07:48:03 2010
Completed redo scan
16441 redo blocks read, 442 data blocks need recovery
Fri Jun 04 07:48:04 2010
Started redo application at
Thread 1: logseq 1575, block 48102
Fri Jun 04 07:48:05 2010
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1575 Reading mem 0
Mem# 0 errs 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:48:07 2010
Completed redo application
Fri Jun 04 07:48:07 2010
Completed crash recovery at
Thread 1: logseq 1575, block 64543, scn 27413940
442 data blocks read, 442 data blocks written, 16441 redo blocks read
Fri Jun 04 07:48:09 2010
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=25, OS id=3288
ARC1 started with pid=26, OS id=3292
Fri Jun 04 07:48:10 2010
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 1576
Thread 1 opened at log sequence 1576
Current log# 3 seq# 1576 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Successful open of redo thread 1
Fri Jun 04 07:48:13 2010
ARC0: STARTING ARCH PROCESSES
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no FAL' ARCH
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no SRL' ARCH
Fri Jun 04 07:48:13 2010
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC0: Becoming the heartbeat ARCH
Fri Jun 04 07:48:13 2010
SMON: enabling cache recovery
ARC2 started with pid=27, OS id=3580
Fri Jun 04 07:48:17 2010
db_recovery_file_dest_size of 5120 MB is 49.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Fri Jun 04 07:48:31 2010
Successfully onlined Undo Tablespace 1.
Fri Jun 04 07:48:31 2010
SMON: enabling tx recovery
Fri Jun 04 07:48:31 2010
Database Characterset is AL32UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=28, OS id=2412
Fri Jun 04 07:48:51 2010
Completed: alter database open
Fri Jun 04 07:49:22 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:32 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:57 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:54:10 2010
Shutting down archive processes
Fri Jun 04 07:54:15 2010
ARCH shutting down
ARC2: Archival stopped
Fri Jun 04 07:54:53 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:55:08 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:56:25 2010
Starting control autobackup
Fri Jun 04 07:56:27 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
Fri Jun 04 07:56:28 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_21
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_20
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_17
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_16
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_14
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_12
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_09
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_07
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_06
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_03
ORA-27093: unable to delete directory
Fri Jun 04 07:56:29 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_21
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_20
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_17
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_16
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_14
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_12
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_09
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_07
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_06
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_03
ORA-27093: unable to delete directory
Control autobackup written to DISK device
handle 'D:\XE\AUTOBACKUP\2010_06_04\O1_MF_S_720777385_60JJ9BNZ_.BKP'
Fri Jun 04 07:56:38 2010
Thread 1 advanced to log sequence 1577
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:56:56 2010
Thread 1 cannot allocate new log, sequence 1578
Checkpoint not complete
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Thread 1 advanced to log sequence 1578
Current log# 3 seq# 1578 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Fri Jun 04 07:57:04 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 2208K exceeds notification threshold (2048K)
KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
Fri Jun 04 07:59:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:59:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Hi Gurus,
there's an ORA-00600 error in the big trace file shown below; this is only part of the file, which is more than 45 MB in size:
xe_mmon_4424.trc
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_4424.trc
Fri Jun 04 17:03:22 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:992M/2047M, Ph+PgF:3422M/4958M, VA:1011M/2047M
Instance name: xe
Redo thread mounted by this instance: 1
Oracle process number: 11
Windows thread id: 4424, image: ORACLE.EXE (MMON)
*** SERVICE NAME:(SYS$BACKGROUND) 2010-06-04 17:03:22.265
*** SESSION ID:(284.23) 2010-06-04 17:03:22.265
*** 2010-06-04 17:03:22.265
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Current SQL statement for this session:
BEGIN :success := dbms_ha_alerts_prvt.check_ha_resources; END;
----- PL/SQL Call Stack -----
object line object
handle number name
41982E80 418 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 552 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 305 package body SYS.DBMS_HA_ALERTS_PRVT
419501A0 1 anonymous block
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
ksedst+38 CALLrel ksedst1+0 0 1
ksedmp+898 CALLrel ksedst+0 0
ksfdmp+14 CALLrel ksedmp+0 3
_kgerinv+140 CALLreg 00000000 8EF0A38 3
kgeasnmierr+19 CALLrel kgerinv+0 8EF0A38 6610020 3672F70 0
6538808
kjhnpost_ha_alert CALLrel _kgeasnmierr+0 8EF0A38 6610020 3672F70 0
0+2909
__PGOSF57__kjhn_pos CALLrel kjhnpost_ha_alert 88 B21C4D0 B21C4D8 B21C4E0
t_ha_alert_plsql+43 0+0 B21C4E8 B21C4F0 B21C4F8
8 B21C500 B21C50C 0 FFFFFFFF 0
0 0 6
_spefcmpa+415 CALLreg 00000000
spefmccallstd+147 CALLrel spefcmpa+0 65395B8 16 B21C5AC 653906C 0
pextproc+58 CALLrel spefmccallstd+0 6539874 6539760 6539628
65395B8 0
__PGOSF302__peftrus CALLrel _pextproc+0
ted+115
_psdexsp+192 CALLreg 00000000 6539874
_rpiswu2+426 CALLreg 00000000 6539510
psdextp+567 CALLrel rpiswu2+0 41543288 0 65394F0 2 6539528
0 65394D0 0 2CD9E68 0 6539510
0
_pefccal+452 CALLreg 00000000
pefcal+174 CALLrel pefccal+0 6539874
pevmFCAL+128 CALLrel _pefcal+0
pfrinstrFCAL+55 CALLrel pevmFCAL+0 AF74F48 3DFB92B8
pfrrunno_tool+56 CALL??? 00000000 AF74F48 3DFBB728 AF74F84
pfrrun+781 CALLrel pfrrun_no_tool+0 AF74F48 3DFBB28C AF74F84
plsqlrun+738 CALLrel _pfrrun+0 AF74F48
peicnt+247 CALLrel plsql_run+0 AF74F48 1 0
kkxexe+413 CALLrel peicnt+0
opiexe+5529 CALLrel kkxexe+0 AF7737C
kpoal8+2165 CALLrel opiexe+0 49 3 653A4FC
_opiodr+1099 CALLreg 00000000 5E 0 653CBAC
kpoodr+483 CALLrel opiodr+0
_xupirtrc+1434 CALLreg 00000000 67384BC 5E 653CBAC 0 653CCBC
upirtrc+61 CALLrel xupirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpurcsc+100 CALLrel upirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpuexecv8+2815 CALLrel kpurcsc+0
kpuexec+2106 CALLrel kpuexecv8+0 673AE10 6736C4C 6736CD8 0 0
653EDE8
OCIStmtExecute+29 CALLrel kpuexec+0 673AE10 6736C4C 673AEC4 1 0 0
0 0 0
kjhnmmon_action+5 CALLrel _OCIStmtExecute+0 673AE10 6736C4C 673AEC4 1 0 0
26 0 0
kjhncheck_ha_reso CALLrel kjhnmmon_action+0 653EFCC 3E
urces+140
kebmronce_dispatc CALL??? 00000000
her+630
kebmronce_execute CALLrel kebmronce_dispatc
+12 her+0
_ksbcti+788 CALLreg 00000000 0 0
ksbabs+659 CALLrel ksbcti+0
kebmmmon_main+386 CALLrel _ksbabs+0 3C5DCB8
_ksbrdp+747 CALLreg 00000000 3C5DCB8
opirip+674 CALLrel ksbrdp+0
opidrv+857 CALLrel opirip+0 32 4 653FEBC
sou2o+45 CALLrel opidrv+0 32 4 653FEBC
opimaireal+227 CALLrel _sou2o+0 653FEB0 32 4 653FEBC
opimai+92 CALLrel opimai_real+0 3 653FEE8
BackgroundThreadSt CALLrel opimai+0
art@4+422
7C80B726 CALLreg 00000000
--------------------- Binary Stack Dump ---------------------
========== FRAME [1] (_ksedst+38 -> _ksedst1+0) ==========
Dump of memory from 0x065386DC to 0x065386EC
65386D0 065386EC [..S.]
65386E0 0040467B 00000000 00000001 [{F@.........]
========== FRAME [2] (_ksedmp+898 -> _ksedst+0) ==========
Dump of memory from 0x065386EC to 0x065387AC
65386E0 065387AC [..S.]
65386F0 00403073 00000000 53532E49 20464658 [[email protected] ]
6538700 54204D41 0000525A 00000000 08EF0EC0 [AM TZR..........]
6538710 6072D95A 08EF0EC5 03672F70 00000017 [Z.r`....p/g.....]
6538720 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
6538740 00000000 00000000 00000000 00000017 [................]
6538750 08EF0B3C 08EF0B34 03672F70 08F017F0 [<...4...p/g.....]
6538760 603AA0D3 065387A8 00000001 00000000 [..:`..S.........]
6538770 00000000 00000000 00000001 00000000 [................]
6538780 00000000 08EF0A38 06610020 031E1D20 [....8... .a. ...]
6538790 00000000 065386F8 08EF0A38 06538D38 [......S.8...8.S.]
65387A0 0265187C 031C8860 FFFFFFFF [|.e.`.......]
========== FRAME [3] (_ksfdmp+14 -> _ksedmp+0) ==========
and the file keeps growing; I have already deleted a lot of it, but as I noted:
time size
15:23 795 MB
16:55 959 MB
17:01 970 MB
17:19 990 MB
Any solution for this?
Thanks!! -
Hi guys,
I'm new here and have browsed some of the related topics regarding my problem, but I could not find anything to help me fix it, so I decided to post this.
I currently have a waveform saved from an oscilloscope that is quite large (around 700 MB, CSV file format), and I want to view it on my PC using SignalExpress. Unfortunately, when I try to load the file using "Load/Save Signals -> Load From ASCII", I always get the "Not enough memory to complete this operation" error. How can we view and analyze large waveform files in SignalExpress? Is there a workaround for this?
Thanks,
Louie
P.S. I'm very new to SignalExpress and haven't modified any settings on it.
Hi Louie,
Are you encountering a read-only message when you try to save the boot.ini file? If so, you can try this method: right-click on My Computer >> select "Properties", and go to the "Advanced" tab. Select "Settings", and on the next screen there is a button called "Edit". Clicking "Edit" should let you modify the "/3GB" tag in boot.ini. Are you able to change it this way? After a reboot, you can reload the file to see if it helps.
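For reference, the /3GB switch Victor mentions is appended to the operating-system line of boot.ini. A hypothetical example (the ARC path and edition string will differ on your machine):

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB
```

On 32-bit Windows XP, /3GB raises the per-process user address space from 2 GB to 3 GB; note that an application only benefits if it was linked as large-address-aware.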
To open a file in SignalExpress, a contiguous chunk of memory is required. If SignalExpress cannot find a contiguous memory chunk that is large enough to hold this file, then an error will be generated. This can happen when fragmentation occurs in memory, and fragmentation (memory management) is managed by Windows, so unfortunately this is a limitation that we have.
As an alternative, have you looked at NI DIAdem before? It is a software tool that allows users to manage and analyze large volumes of data, and has some unique memory management method that lets it open and work on large amounts of data. There is an evaluation version which is available for download; you can try it out and see if it is suitable for your application.
Best regards,
Victor
NI ASEAN
Attachments:
Clipboard01.jpg 181 KB -
Loading, processing and transforming Large XML Files
Hi all,
I realize this may have been asked before, but searching the forum history isn't easy, since it's not always obvious which words to use in the search.
Here's the situation: we're trying to load and manipulate large XML files of up to 100 MB in size.
What makes our case different from other related issues posted here is that the XML isn't big because it has a largely branched tree of data, but because it embeds large base64-encoded files in the XML itself. The size of the 'clean' XML is relatively small (a few hundred bytes to some kilobytes).
We had to deal with transferring the xml to our application using a webservice, loading the xml to memory in order to read values from it, and now we also need to transform the xml to a different format.
We solved the webservice issue using XFire.
We solved the loading of the xml using JAXB. Nevertheless, we use string manipulations to 'cut' the xml before we load it to memory - otherwise we get OutOfMemory errors. We don't need to load the whole XML to memory, but I really hate this solution because of the 'unorthodox' manipulation of the xml (i.e. the cutting of it).
Now we need to deal with the transformation of those XMLs, but obviously we can't cut them down this time. We have little experience writing XSL, and no experience using Java to apply XSL files. We're looking for suggestions on how to do this most efficiently.
The biggest problem we encounter is the OutOfMemory errors.
So I ask several questions in one post:
1. Is there a better way to transfer the large files using a webservice?
2. Is there a better way to load and manipulate the large XML files?
3. What's the best way for us to transform those large XMLs?
4. Are we missing something in terms of memory management? Is there a better way to control it? We really are struggling there.
I assume this is an important piece of information: We currently use JDK 1.4.2, and cannot upgrade to 1.5.
Thanks for the help.
I think there may be a way to do it.
First, for low RAM needs, nothing beats SAX as the first processor of the data. With SAX, you control the memory use, since SAX only processes one "chunk" of the file at a time. You supply a class with methods named startElement, endElement, and characters. It calls the startElement method when it finds a new element. It calls the characters method when it wants to pass you some or all of the text between the start and end tags. It calls endElement to signal that passing characters is over, and to let you get ready for the next element. So, if your characters method did nothing with the base64 data, you could watch the XML go by with low memory needs.
Since we know in your case that the characters will arrive in large chunks, you can expect many calls as SAX invokes your code. The only workable solution is to use a StringBuffer to accumulate the data. When endElement is called, you can decode the base64 data and keep it somewhere. The most efficient way to do this is to have one StringBuffer for the class handling the SAX calls. Instantiate it with a size big enough to hold the largest of your binary data streams. In startElement, you can set the length of the StringBuffer to zero and reuse it over and over.
You did not say what you wanted to do with the XML data once you have processed it. SAX is nice from a memory perspective, but it makes you do all the work of storing the data. Unless you build a structured set of classes "on the fly" nothing is kept. There is a way to pass the output of one SAX pass into a DOM processor (without the binary data, in this case) and then you would wind up with a nice tree object with the rest of your data and a group of binary data objects. I've never done the SAX/DOM combo, but it is called a SAXFilter, and you should be able to google an example.
So, the bottom line is that it is very possible to do what you want, but it will take some careful design on your part.
Dave Patterson -
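The approach Dave describes might be sketched roughly as follows. This is a hypothetical illustration, not code from the thread: the element name "payload" is invented, it uses modern Java for brevity (java.util.Base64 arrived in Java 8, so under JDK 1.4.2 you would need a third-party decoder), and it accumulates text in a single reusable buffer as suggested:

```java
import java.io.ByteArrayInputStream;
import java.util.Base64;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Streams the document with SAX and decodes the base64 payload of one
// (hypothetical) target element, without ever loading the whole XML.
public class Base64SaxExtractor extends DefaultHandler {
    private final String targetElement;
    // One reusable buffer, reset per element, as suggested in the reply.
    private final StringBuilder buf = new StringBuilder(1 << 20);
    private boolean inTarget = false;
    private byte[] decoded;

    public Base64SaxExtractor(String targetElement) {
        this.targetElement = targetElement;
    }

    public void startElement(String uri, String local, String qName, Attributes atts) {
        if (qName.equals(targetElement)) {
            buf.setLength(0);          // reuse the buffer instead of reallocating
            inTarget = true;
        }
    }

    public void characters(char[] ch, int start, int length) {
        if (inTarget) buf.append(ch, start, length);  // SAX may call this many times
    }

    public void endElement(String uri, String local, String qName) {
        if (qName.equals(targetElement)) {
            decoded = Base64.getDecoder().decode(buf.toString().trim());
            inTarget = false;
        }
    }

    public static byte[] extract(String xml, String element) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        Base64SaxExtractor handler = new Base64SaxExtractor(element);
        parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
        return handler.decoded;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<doc><meta>small</meta><payload>aGVsbG8gd29ybGQ=</payload></doc>";
        System.out.println(new String(extract(xml, "payload"), "UTF-8"));
    }
}
```

The key property is that only the current element's text is ever held in memory, so peak usage tracks the largest single base64 payload rather than the size of the whole document.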
InDesign data merge and a very large CSV file
Hello all,
I have a bit of an issue with data merge. I have about 35 product sell sheets, all with the same categories, and I am trying to produce them via data merge in InDesign (CS6). The CSV file I am using is very large: it has about 75 columns, and some columns contain a lot of text, like the ingredients for each product, which can run to 200 words or more. My problem is that when I create a merged document, only the first 6 columns populate the document.
I have tried everything: taking out all the spaces in the header row, eliminating all the empty columns, etc. The funny thing is that when I delete the first 6 columns and try again, the next 6 populate just fine.
Is there some sort of column limit in the CSV file?
I cannot figure this out for the life of me and have read every forum I can get my hands on, but they all seem to talk about business cards, envelopes, or building a simple catalog template, nothing that includes this much information to be merged.
Any help at all would be greatly appreciated. Thank you.
Mike
SAUTEED VEGETABLE PUREE MIX (CARROTS, ONIONS, CELERY), SALT, SUGAR, MALTODEXTRIN, CORN OIL, LESS THAN 2% OF AUTOLYZED YEAST EXTRACT, WATER, POTATO STARCH, XANTHAN GUM, NATURAL FLAVORS, CARROT JUICE CONCENTRATE.
The text above is what comes through in the ingredients field in one of my data rows. It's the last of the 6 columns that are getting carried over properly.
So you're saying to replace all the long text containing breaks with a placeholder like "ingtxt", then do a find and replace afterwards? I'll try that.
Hello,
Sorry for my bad English! I am French!
I recently bought a Sony digital recorder (ICD-PX312) that records MP3 files.
I recorded my noisy neighbour, in continuous mode for three days. (I was not at home)
The PX312 recorder produced several files whose maximum size is 511,845 kB, corresponding to a length of 24h 15min.
Every file, checked with MediaInfo, has the same properties (48 kbps, 44.1 kHz).
I can read them with VLC media player without any problem, but I need to edit these files to keep only the interesting parts.
If I open these files (drag and drop, or open file) with Audition 1.0 (I came from Cool Edit), I see very strange behavior:
the 24h 15min files are opened with a length of about an hour and a half!
I gave it a try with Audition 3.0 and the result is strange too, and more "funny":
the length of one 24h 15min file is about an hour and a half;
the length of another 24h 15min file is about 21h.
In the Audition TMP directory, Audition created several 4 GB temporary files, which correspond to the size limit of WAV files.
I made a test with a 128 kbps, 44.1 kHz file. This 511,845 kB file is 9h 5min long;
Audition reads it as 4h 40min.
The TMP file is 2,897,600 kB, far below the 4 GB limit.
It seems Audition 1 and 3 (I read that CS5 has the same problem) do not handle the WAV conversion very well.
Is it due to Audition itself or to the Fraunhofer MP3 codec?
Is there any workaround from Adobe?
As I am an AVS4YOU customer, I tried AVS Audio Editor. It opens every file with the correct length but does not have the editing functions I need. That demonstrates that it is possible to handle very large files.
Many thanks in advance for any help or idea, because Audition is the editor I need!
Best Regards
SteveG wrote:
It's not their 'bug' at all. MP3 files work fine, and Adobe didn't invent the WAV file format; Microsoft did. It's caused by the 32-bit unsigned integer used to record the file size in the header incorporated into WAV files, which limits their size to one that can be described within that limit - 4 GB.
Maybe I was not clear enough in my explanation.
I partly agree with you.
I agree that the 4 GB limit is inherent to the Microsoft WAV format.
When reading 24h MP3 files, Audition 3.0 creates, in the TMP folder, two 4 GB chunks and another smaller chunk, to try to overcome the limit. This cutting process is exactly what I expect from such software.
The problem - and the bug - is that the duration "extracted" in Audition is shorter than that of the original MP3 (e.g. 21h 30min instead of 24h 15min). Some part of the original MP3 file is skipped in the cutting/conversion process.
I don't think Microsoft is involved in this cutting process, which belongs to Adobe and/or Fraunhofer. This is why I am surprised.
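As a rough illustration of why the conversion overflows the WAV limit at all: the decoded PCM is far larger than the MP3. A back-of-the-envelope sketch, assuming the temp files hold 44.1 kHz, 16-bit stereo PCM (an assumption; the actual temp format Audition uses is not stated in the thread):

```java
public class WavSizeEstimate {
    // Rough estimate of decoded PCM size for a recording of the given length.
    // bytesPerSecond is an assumption about the temp-file sample format.
    public static long estimateBytes(long hours, long minutes, long bytesPerSecond) {
        long seconds = hours * 3600 + minutes * 60;
        return seconds * bytesPerSecond;
    }

    public static void main(String[] args) {
        long bytesPerSecond = 44100L * 2 * 2;               // 44.1 kHz, 16-bit, stereo (assumed)
        long total = estimateBytes(24, 15, bytesPerSecond); // the 24h 15min recording
        long wavLimit = 1L << 32;                           // 4 GiB: 32-bit size field in the WAV header
        System.out.println(total + " bytes of PCM vs. a " + wavLimit + "-byte WAV limit");
    }
}
```

Under that assumption, a 24h 15min recording decodes to roughly 15.4 GB of PCM, several times the 4 GiB that a WAV header's 32-bit size field can describe, which is consistent with Audition having to split the conversion across multiple temp files.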