LOAD UNIT OF COMPONENT IS VERY LARGE (GENERATION LIMIT)
We are experiencing this message when compiling an ABAP Web Dynpro application: "LOAD UNIT OF COMPONENT IS VERY LARGE (GENERATION LIMIT)"
When checking the generation limits in the context menu, I determined that our generated load size in bytes is too big.
The documented recommendation is to restructure the program. I am not clear what this means or how it would reduce the generated load in bytes. Any ideas would be appreciated.
> How should we reorganize the application and at the same time ensure smooth and user-friendly handling?
We only want to use one Explorer window.
Using multiple components doesn't mean that the user will notice any difference. Component usages can be embedded within one another. The ALV, for instance, is a perfect example of a component usage.
>- Even the SAP reference application "LORD_MAINTAIN_COMP" (37 views) is way too big, according to the recommendation. Is there a better example from SAP?
I wouldn't consider LORD_MAINTAIN_COMP a reference application. It was one of the very first WDAs shipped by SAP, before we learned some of these lessons ourselves. Have a look at the guidelines for the Floorplan Manager if you are on 7.01. The FPM provides a very good (and well-used by SAP) framework for building large-scale WDA applications.
>- How could a complex transaction be built and at the same time stay in the green limit area (< 500k)?
As described, using multiple components avoids the generation limit and is recommended for large-scale applications.
>- What at all is the problem in loading 2 Megabytes of data into memory? Could you please describe the technical background in more detail?
It has nothing to do with loading 2 MB into memory. It has to do with the generation load size in the VM for the generated class that represents your WDA component. The ABAP compiler and VM have limits (like all VMs and compilers) on total load size and on the maximum size of operations and jumps. Generated code can be extremely verbose. Under normal conditions, these load limits are almost never reached in human-created classes.
In 7.02 we backported the 7.20 ABAP compiler, which, in addition to being rewritten to support multipass compilation, also increases some of the load limits. However, the general recommendation about componentization still stands. Componentization of your WDA application improves maintainability and reusability over time. My personal rule is that if you are getting between 10 and 12 views in your component, it is time to think about breaking it out into multiple components.
>- Is there a maximum load size, which would lead to an error (reject of generation)?
Yes, there is. However, the workbench throws warnings well in advance. At some point it won't even let you add more views to a component. But if you continue to add content to the existing views, you can reach a point where generation fails.
Similar Messages
-
Error: Load unit of component very large (Generation Limit)
Dear Abapers,
I have a Web Dynpro component which has a lot of views. When I try to add another view, I get the error "Load unit of component very large (Generation Limit)".
I cannot delete my existing views. So is there any way of extending the component size to allow the addition of a new view without deleting previous views?
Please reply.
Thanks
>I cannot delete my existing views. So is there any way of extending the component size to allow the addition of a new view without deleting previous views.
Quite simply, no. You need to redesign your application so that it uses multiple components instead of one component with too many views. A Web Dynpro component dynamically generates ABAP classes, and there are limits to how large a class load can be. If you reach this limit, the class will no longer generate. -
Hi all,
I have a rather complex WD4A application with many windows and views. As it seems, one of its components has become too large; now I get the following error at every activation:
"Load unit of component is very large (generation limit)"
I have looked into the generation limit detail view, but everything is green there (except the main generation limit; it's slightly over 2 MB now). Since we only use this application within the intranet, I don't think that's a problem (maybe it is, but for now it works fine).
So the question: what to do? Do I really have to split up the component? This would be horrible because of all the views, context nodes, etc. Can I disable this error message somewhere so that it does not pop up at every activation?
Thanks in advance!
Kind regards, Matthias
Hi,
if you stick to <a href="http://help.sap.com/saphelp_nw2004s/helpdata/en/c6/58e70398244a87a2c39e700bdae4a9/frameset.htm">this</a>, you're safe
grtz,
Koen -
"Fatal error: Allowed memory size of 33554432 bytes exhausted"
I get this error message whenever I try to access very large threads at my favorite debate site using Firefox 4 or 5 on my desktop or laptop computers. I do not get the error using IE8.
The only fixes I have been able to find are for servers that have WordPress and php.ini files.
It works, thanks
-
XML Report comes up as blank when a very large sequence is run
Forum,
We have multiple test sequences which we mix and match to do testing on different products in our product line. We have no issues when we are working with small sequences (small sequences: those that generate reports of up to 12-50 MB). However, when the sequences become large (~100-200 tests in one go), we get a blank report with the following text:
Begin Sequence:
No Sequence Results Found
We notice this typically for report sizes of 60 MB and more. Is there a limit to how much TestStand's result collection memory can store?
My predominant report options:
-- On the fly reporting disabled
-- XML - expand.xsl selected
The same set of settings makes no difference for smaller reports, but gives the error for larger sequences, so I suspect it's something to do with the size of the report being generated!
I created a simple sequence file which had 100 Pass/Fail steps using the None Adapter. These 100 steps, in the form of 10 SequenceCall steps each performing 10 Pass/Fail steps, were placed in a For loop in the MainSequence. The loop was set to run 1000 iterations. The ReportOptions were set for XML horizontal, On the Fly enabled.
This I set running. Explorer was open at the folder where the result file was being stored. Two files were generated: the actual result file and a temporary file. I also had the Task Manager open to monitor performance.
As the result file got larger, about 20 MB, I noticed that the size of the file was first set to zero and then the data was written to the file. (It seemed like the file was deleted and regenerated for each step result.) I also noticed that as the file got larger and larger, the storing of the step results was having an effect on the performance of the test sequence execution.
I left this running overnight and sometime later the execution crashed (see attached image). Before closing the dialog, I checked Explorer to see what the state of the result file was. Both files were empty.
I repeated the run, but this time the number of iterations was set to 500. Again the execution crashed, but this time the result file did have some data, rather a lot of data, over 50 MB.
I tried to open the file to check the contents, but unfortunately my PC didn't seem to be able to handle a file of that size, as it was taking a long time to load, so I killed the process.
I don't think changing the iterations from 1000 to 500 had anything to do with getting the results on the second run. I just think the point where it crashed was slightly different, allowing the results to be transferred back to the file.
It would be interesting to find out what's going on in the On the Fly routine.
It also seems that On the Fly is no better than normal report generation. It also seems pointless to generate a very large file of results; generating smaller files would be the better way to go. Using HTML or XML, a top-level report file could be used to link all the smaller files together.
Regards
Ray Farmer -
In Mail on my iMac, successfully running OS X Lion, one mailbox on My Mac for "Recovered Messages (from AOL)" keeps showing 1 very large message (more than 20 MB) that I just cannot seem to delete. Each time I go into my Inbox, the "loading" symbol spins and the message appears in the "Recovered Messages" mailbox. How can I get rid of this recurrent file, please?
At the same time, I'm not receiving any new mail in my Inbox, although if I look at the same account on my MacBook Pro, I can indeed see the incoming mail (but on that machine I do not have the "recovery" problem).
The help of a clear-thinking Apple fan would be greatly appreciated.
Many thanks.
From Ian in Paris, France
Ian,
I worked it out.
Unhide your hidden files ( I used a widget from http://www.apple.com/downloads/dashboard/developer/hiddenfiles.html)
Go to your HD.
Go to Users.
Go to your House (home)
there should be a hidden Library folder there (it will be transparent)
Go to Mail in this folder
The next folder ( for me ) is V2
Click on that and the next one will be a whole list of your mail servers, and one folder called Mailboxes
Click on that and there should be a folder called recovered messages (server) . mbox
Click on that; there is a randomly numbered/lettered folder -> data
In that data folder is a list of randomly numbered folders (i.e. a folder called 2, one called 9, etc.) and in EACH of these, another numbered folder, and then a folder called messages.
In the messages folder, delete all of the emlx files (I think that's what they were, from memory; sorry, I forgot as I already deleted my trash after my golden moment).
This was GOLDEN for me. Reason being, when I went to delete my "recovered file" in Mail, it would give me an error message "cannot delete 2500 files". I knew it was only 1 file, so this was weird. Why 2500 files? Because if you click on the emlx files like I did, hey presto, it turned out that they were ALL THE SAME MESSAGE, 2500 times: in each of those randomly numbered folders, in their related messages folder.
Now remember: DON'T delete the folder. Make sure you have gone to the messages folder, found all those pesky emlx files and deleted THOSE, not the folder.
It worked for me. No restarting or anything. And recovered file. GONE.
Started receiving and syncing mail again. Woohoo.
Best wishes. -
How can we suggest a new DBA OCE certification for very large databases?
What web site, or what phone number can we call to suggest creating a VLDB OCE certification.
The largest databases that I have ever worked with were barely over 1 trillion bytes.
Some people told me that the work of a DBA totally changes when you have a VERY LARGE DATABASE.
I could guess that maybe some of the following configuration topics might be on it:
* Partitioning
* parallel
* bigger block size - DSS vs OLTP
* etc
Where could I send in a recommendation?
Thanks, Roger
I wish there were some details about the OCE data warehousing.
Look at the topics for 1Z0-515. Assume that the 'lightweight' topics (like Best Practices) will go and that more technical topics will be added.
Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
Overview of Data Warehousing
Describe the benefits of a data warehouse
Describe the technical characteristics of a data warehouse
Describe the Oracle Database structures used primarily by a data warehouse
Explain the use of materialized views
Implement Database Resource Manager to control resource usage
Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
Parallelism
Explain how the Oracle optimizer determines the degree of parallelism
Configure parallelism
Explain how parallelism and partitioning work together
Partitioning
Describe types of partitioning
Describe the benefits of partitioning
Implement partition-wise joins
Result Cache
Describe how the SQL Result Cache operates
Identify the scenarios which benefit the most from Result Set Caching
OLAP
Explain how Oracle OLAP delivers high performance
Describe how applications can access data stored in Oracle OLAP cubes
Advanced Compression
Explain the benefits provided by Advanced Compression
Explain how Advanced Compression operates
Describe how Advanced Compression interacts with other Oracle options and utilities
Data integration
Explain Oracle's overall approach to data integration
Describe the benefits provided by ODI
Differentiate the components of ODI
Create integration data flows with ODI
Ensure data quality with OWB
Explain the concept and use of real-time data integration
Describe the architecture of Oracle's data integration solutions
Data mining and analysis
Describe the components of Oracle's Data Mining option
Describe the analytical functions provided by Oracle Data Mining
Identify use cases that can benefit from Oracle Data Mining
Identify which Oracle products use Oracle Data Mining
Sizing
Properly size all resources to be used in a data warehouse configuration
Exadata
Describe the architecture of the Sun Oracle Database Machine
Describe configuration options for an Exadata Storage Server
Explain the advantages provided by the Exadata Storage Server
Best practices for performance
Employ best practices to load incremental data into a data warehouse
Employ best practices for using Oracle features to implement high performance data warehouses -
I am a scientist and run my own business. Money is tight. I have some very large Excel files (~200 MB) that I need to sort and perform logic operations on. I currently use a MacBook Pro (i7 core, 2.6 GHz, 16 GB 1600 MHz DDR3) and I am thinking about buying a multicore Mac Pro. Some of the operations take half an hour to perform. How much faster should I expect these operations to be on a new Mac Pro? Is there a significant speed advantage in the 6-core vs 4-core? Practically speaking, what are the features I should look at, and what is the speed bump I should expect if I go to 32 GB or 64 GB? Related to this, I am using a 32-bit version of Excel. Is there a 64-bit spreadsheet that I can use on a Mac that has no limit on column and row count?
Grant Bennet-Alder,
It’s funny you mentioned using Activity Monitor. I use it all the time to watch when a computation cycle is finished so I can avoid a crash. I keep it up in the corner of my screen while I respond to email or work on a grant. Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so. As long as I leave Excel alone while it is working it will not crash. I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application. That is clearly a problem for this kind of work. Is there any work around for this? It seems like a 64-bit spreadsheet would help. I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns. I tried it out on my MacBook Pro but my files don’t fit.
The hatter,
This may be the solution for me. I'm OK with assembling the unit you described (I've even etched my own boards) but feel very bad about needing to step away from Apple products. When I started computing, this was the sort of thing computers were designed to do. Is there any native 64-bit spreadsheet that allows unlimited rows/columns which will run on an Apple machine? Excel is only 64-bit on Windows.
Many thanks to both of you for your quick and on point answers! -
Capture Image Of A Very Large JPanel
Below is some code used to save an image of a JPanel to a file...
int w = panel.getSize().width;
int h = panel.getSize().height;
BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics graphics = image.getGraphics();
// Make the component believe its visible and do its layout.
panel.addNotify();
panel.setVisible(true);
panel.validate();
// Draw the graphics.
panel.print(graphics);
// Write the image to a file.
ImageFile imageFile = new ImageFile("test.png"); // ImageFile is the poster's own helper class, not a standard API
imageFile.save(image);
// Dispose of the graphics.
graphics.dispose();
This works fine, but my problem is that I am trying to save what may be a very large JPanel, perhaps as large as 10000x10000 pixels. It doesn't take long for the Java heap to be used up and an exception to be thrown.
I know I can increase the heap size of the JVM but since I can't ever be sure how large the panel may be that's a far from ideal solution.
So the question is: how do I save an image of a very large JPanel to a file?
1) Does the OoM happen while instantiating the buffered image (which probably tries to allocate a big contiguous native array of pixels)?
Or the Graphics object (same reason, though the Graphics is probably just an empty shell over the big pixel array)?
2) In which format do you need to save the image? Do you only need to be able to read it again in your own program?
If yes to both questions, then a pulled-by-the-hair solution could be to instantiate your own Graphics subclass (no BufferedImage), whose operations would save their arguments directly to the image file instead of into a big in-memory model of the panel image.
If the output format is a standard one though (GIF, JPG, ...), then maybe your custom Graphics's operations could contain the logic to encode/compress as much as possible of the arguments into an in-memory byte array of the target format?
I'm not very confident though; I don't know the GIF or JPEG encoding, but I suspect (especially for JPEG) that you need to know the "whole" image to encode it properly.
But if the target format supports encoders that work on the fly out of streams of bytes (e.g. BMP), then you can use whatever compress/uncompress technique you see fit (e.g. RLE): you know the nature of the panels, so you may be aware of some optimizations you can perform with respect to pixel storage prior to encoding (e.g., big empty areas, predictable chessboard patterns, black-and-white palettes, ...).
Edited by: jduprez on Sep 19, 2009 7:33 PM -
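A minimal sketch of the streaming idea discussed above, assuming a sequentially writable format (binary PPM here, since PNG/JPEG encoders generally want the whole image in memory): the large rendering is drawn strip by strip into one small reusable BufferedImage, so heap use is bounded by the strip size rather than the full 10000x10000 image. `StripImageWriter` and its `Renderer` callback are invented names for illustration, not part of any standard API.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

/**
 * Streams a huge rendering to a PPM file strip by strip, so only one
 * strip-sized BufferedImage is ever held in memory at a time.
 */
public class StripImageWriter {

    /** Callback that draws the full logical image; the writer shifts it per strip. */
    public interface Renderer {
        void render(Graphics2D g);
    }

    public static void write(String path, int width, int height,
                             int stripHeight, Renderer renderer) throws IOException {
        // Only stripHeight rows of pixels are allocated, never the full image.
        BufferedImage strip = new BufferedImage(width, stripHeight, BufferedImage.TYPE_INT_RGB);
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream(path))) {
            // PPM header: magic, dimensions, max channel value.
            out.write(("P6\n" + width + " " + height + "\n255\n")
                    .getBytes(StandardCharsets.US_ASCII));
            for (int y = 0; y < height; y += stripHeight) {
                int rows = Math.min(stripHeight, height - y);
                Graphics2D g = strip.createGraphics();
                g.setColor(Color.WHITE);
                g.fillRect(0, 0, width, stripHeight);  // clear the reused strip
                g.translate(0, -y);                    // map rows [y, y+rows) to the strip top
                renderer.render(g);
                g.dispose();
                // Append this strip's pixels as 24-bit RGB.
                for (int row = 0; row < rows; row++) {
                    for (int x = 0; x < width; x++) {
                        int rgb = strip.getRGB(x, row);
                        out.write((rgb >> 16) & 0xFF);
                        out.write((rgb >> 8) & 0xFF);
                        out.write(rgb & 0xFF);
                    }
                }
            }
        }
    }
}
```

For the JPanel case, the callback could simply call `panel.print(g)` (after `addNotify()`/`validate()`, as in the snippet above); each strip then re-renders only its slice of the panel into the translated graphics.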
Need help optimizing the writing of a very large array and streaming it to a file
Hi,
I have a very large array that I need to create and later write to a TDMS file. The array has 45 million entries, or 4.5x10^7 data points. These data points are of double format. The array is created by using a square pulse waveform generator and user-defined specifications of the delay, wait time, voltages, etc.
I'm not sure how to optimize the code so it doesn't take forever. It currently takes at least 40 minutes (and it's still running) to create and write this array. I know there needs to be a better way, as the array is large and consumes a lot of memory, but it's not absurdly large. The computer I'm running this on is running Windows Vista 32-bit, and has 4 GB RAM and an Intel Core 2 CPU @ 1.8 GHz.
I've read the "Managing Large Data Sets in LabVIEW" article (http://zone.ni.com/devzone/cda/tut/p/id/3625), but I'm unsure how to apply the principles here. I believe the problem lies in making too many copies of the array, as creating and writing 1x10^6 values takes < 10 seconds, but writing 4x10^6 values, which should theoretically take < 40 seconds, takes minutes.
Is there a way to work with a reference of an array instead of a copy of an array?
Attached is my current VI, Generate_Square_Pulse_With_TDMS_Stream.VI and it's two dependencies, although I doubt they are bottlenecking the program.
Any advice will be very much appreciated.
Thanks
Attachments:
Generate_Square_Pulse_With_TDMS_Stream.vi 13 KB
Square_Pulse.vi 13 KB
Write_TDMS_File.vi 27 KB
Thanks Ravens Fan, using Replace Array Subset and initializing the array beforehand sped up the process immensely. I can now generate an array of 45,000,000 doubles in about one second.
However, when I try to write all of that out to TDMS at the end LV runs out of memory and crashes. Is it possible to write out the data in blocks and make sure memory is freed up before writing out the next block? I can use a simple loop to write out the blocks, but I'm unsure how to verify that memory has been cleared before proceeding. Furthermore, is there a way to ensure that memory and all resources are freed up at the end of the waveform generation VI?
Attached is my new VI, and a refined TDMS write VI (I just disabled the file viewer at the end). Sorry that it's a tad bit messy at the moment, but most of that mess comes from doing some arithmetic to determine which indices to replace array subsets with. I currently have the TDMS write disabled.
Just to clarify the above, I understand how to write out the data in blocks; my question is: how do I ensure that memory is freed up between subsequent writes, and how do I ensure that memory is freed up after execution of the VI?
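Outside LabVIEW, the same two fixes look like the sketch below (Java rather than a VI; `BlockWriter` and its parameters are invented for illustration, and plain binary doubles stand in for the TDMS format): the block buffer is allocated once and reused, and each block is flushed to disk before the next is generated, so memory stays bounded at one block no matter how many samples are written.

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

/**
 * Writes a large square-pulse waveform to disk in fixed-size blocks,
 * reusing one buffer so memory use is independent of the total length.
 */
public class BlockWriter {
    public static long writeSquareWave(String path, long totalSamples,
                                       int blockSize, double high, double low,
                                       long halfPeriod) throws IOException {
        double[] block = new double[blockSize];  // allocated once, reused every iteration
        long written = 0;
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(path)))) {
            while (written < totalSamples) {
                int n = (int) Math.min(blockSize, totalSamples - written);
                // Fill the block in place (square wave: halfPeriod high, halfPeriod low).
                for (int i = 0; i < n; i++) {
                    long t = written + i;
                    block[i] = ((t / halfPeriod) % 2 == 0) ? high : low;
                }
                // Flush this block before generating the next one.
                for (int i = 0; i < n; i++) out.writeDouble(block[i]);
                written += n;
            }
        }
        return written;
    }
}
```

In LabVIEW terms, this is the same pattern as initializing the array once, using Replace Array Subset to fill it, and streaming each block to the TDMS file before producing the next; the stream (and its buffer) is released when the file is closed at the end.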
@Jeff: I'm generating the waveform here, not reading it. I guess I'm not generating a "waveform" but rather a set of doubles. However, converting that into an actual waveform can come later.
Thanks for the replies!
Attachments:
Generate_Square_Pulse_With_TDMS_Stream.vi 14 KB
Write_TDMS_File.vi 27 KB -
Can iCloud be used to synchronize a very large Aperture library across machines effectively?
Just purchased a new 27" iMac (3.5 GHz i7 with 8 GB and 3 TB fusion drive) for my home office to provide support. Use a 15" MBPro (Retina) 90% of the time. Have a number of Aperture libraries/files varying from 10 to 70 GB that are rapidly growing. Have copied them to the iMac using a Thunderbolt cable starting the MBP in target mode.
While this works, I can see problems keeping the files in sync. Thought briefly of putting the files in Dropbox, but when I tried that with a small test file, the load time was unacceptable, so I can imagine it really wouldn't be practical when the files get north of 100 GB. What about iCloud? There doesn't appear to be a way to do this, but I wonder if that's an option.
What are the rest of you doing when you need access to very large files across multiple machines?
David Voran
Hi David,
dvoran wrote:
Don't you have similar issues when the libraries exceed several thousand images? If not, what's your secret to image management?
No, I don't.
It's an open secret: database maintenance requires steady application of naming conventions, tagging, and backing-up. With the digitization of records, losing records by mis-filing is no longer possible. But proper, consistent labeling is all the more important, because every database functions as its own index -- and is only as useful as the index is uniform and holds content that is meaningful.
I use one, single, personal Library. It is my master index of every digital photo I've recorded.
I import every shoot into its own Project.
I name my Projects with a verbal identifier, a date, and a location.
I apply a metadata pre-set to all the files I import. This metadata includes my contact inf. and my copyright.
I re-name all the files I import. The file name includes the date, the Project's verbal identifier and location, and the original file name given by the camera that recorded the data.
I assign a location to all the Images in each Project (easy, since "Project" = shoot; I just use the "Assign Location" button on the Project Inf. dialog).
I _always_ apply a keyword specifying the genre of the picture. The genres I use are "Still-life; Portrait; Family; Friends; People; Rural; Urban; Birds; Insects; Flowers; Flora (not Flowers); Fauna; Test Shots; and Misc." I give myself ready access to these by assigning them to a Keyword Button Set, which shows in the Control Bar.
That's the core part. Should be "do-able". (Search the forum for my naming conventions, if interested.) Or course, there is much more, but the above should allow you to find most of your Images (you have assigned when, where, why, and what genre to every Image). The additional steps include using Color Labels, Project Descriptions, keywords, and a meaningful Folder structure. NB: set up your Library to help YOU. For example, I don't sell stock images, and so I have no need for anyone else's keyword list. I created my own, and use the keywords that I think I will think of when I am searching for an Image.
One thing I found very helpful was separating my "input and storage" structure from my "output" structure. All digicam files get put in Projects by shoot, and stay there. I use Folders and Albums to group my outputs. This works for me because my outputs come from many inputs (my inputs and outputs have a many-to-many relationship). What works for you will depend on what you do with the picture data you record with your cameras. (Note that "Project" is a misleading term for the core storage group in Aperture. In my system they are shoots, and all my Images are stored by shoot. For each output project I have (small "p"), I create a Folder in Aperture, and put Albums, populated with the Images I need, in the Folder. When these projects are done, I move the whole Folder into another Folder, called "Completed".)
Sorry to be windy. I don't have time right now for concision.
HTH,
--Kirby. -
Safari crashes when opening a very large pdf
I have a 1st-generation iPad running 4.3.5.
Every time I try to download a very large PDF in Safari, i.e. 175+ MB, it gets about three-quarters of the way through, then Safari crashes and I lose my progress. I will then restart Safari, and the process restarts and crashes again.
I've set auto-lock to never, but that didn't help. Any ideas how to get Safari to download this file?
PS: I've considered other methods of getting the PDF, but for this project I have to download it from a web site.
Thanks for any help.
Other apps can download PDFs. I don't know whether it can cope with a 175 MB download, but GoodReader can download files: http://www.goodreader.net/gr-man-tr-web.html
-
How can NI FBUS Monitor display very large recorded files
NI FBUS Monitor version 3.0.1 outputs the error message "Out of memory" if I try to load a large recorded file of 272 MB. Is there any combination of operating system (possibly Vista 32-bit or 64-bit) and/or physical memory size where NI FBUS Monitor can display such large recordings? Are there any patches, workarounds or tools to display very large recorded files?
Hi,
NI-FBUS Monitor does not place a limit on the maximum record file size. The physical memory size of the system is one of the most important factors affecting the loading of a large record file: the Monitor will try to load the entire file into memory during the file open operation.
272 MB is a really large file. To open it, your system must have sufficient physical memory available; otherwise the "Out of memory" error will occur.
I would recommend that you do not use the Monitor to open a file larger than 100 MB. Loading too large a file will consume system memory quickly and decrease performance.
Message Edited by Vince Shen on 11-30-2009 09:38 PM
Feilian (Vince) Shen -
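As a general illustration of why loading an entire file into memory hits this wall (this is not an NI-FBUS feature, just the contrasting pattern): a streaming scan with a fixed-size buffer keeps memory use constant regardless of file size. `StreamingScan` and the statistics it computes are invented for the sketch.

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Scans an arbitrarily large file through a fixed 64 KB buffer,
 * so memory use does not grow with file size.
 */
public class StreamingScan {
    /** Returns {total byte count, sum of unsigned byte values}. */
    public static long[] countBytesAndSum(String path) throws IOException {
        byte[] buf = new byte[64 * 1024];  // the only buffer, reused for every read
        long total = 0, sum = 0;
        try (InputStream in = new BufferedInputStream(new FileInputStream(path))) {
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
                for (int i = 0; i < n; i++) sum += buf[i] & 0xFF;
            }
        }
        return new long[] { total, sum };
    }
}
```

A viewer built this way could index record offsets on the first pass and then seek to display any window of the recording, instead of materializing all 272 MB at once.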
Very large bdump file sizes, how to solve?
Hi gurus,
I currently always find my disk space is not enough. After checking, it is oraclexe/admin/bdump: there's currently 3.2 GB in it, while my database is very small, only holding about 10 MB of data.
It didn't happen before, only recently.
I don't know why it happened. I have deleted some old files in that folder, but today I found it is still very large compared to my database.
I am running an APEX application with XE. The application works well and we didn't see anything wrong, but the bdump folder is very big.
Any tip to solve this? Thanks.
Here is my alert_xe.log file content:
Thu Jun 03 16:15:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5600.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:15:48 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=5452
Thu Jun 03 16:15:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
[... the same ORA-00600 [kjhn_post_ha_alert0-862] error then repeats every few minutes; MMON is restarted several more times (OS ids 1312, 3428, 5912), each restart followed by the same error ...]
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:23:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:27:39 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:28:02 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:32:42 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:33:07 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:37:45 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:38:40 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=1660
Thu Jun 03 17:38:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:39:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:42:54 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=31, OS id=6116
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174259', 'KUPC$S_1_20100603174259', 0);
Thu Jun 03 17:43:38 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=32, OS id=2792
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174338', 'KUPC$S_1_20100603174338', 0);
Thu Jun 03 17:43:44 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:06 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:47 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=33, OS id=3492
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174448', 'KUPC$S_1_20100603174448', 0);
kupprdp: worker process DW01 started with worker id=1, pid=34, OS id=748
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM');
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5684K exceeds notification threshold (2048K)
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5681K exceeds notification threshold (2048K)
Details in trace file c:\oraclexe\app\oracle\admin\xe\bdump\xe_dw01_748.trc
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:48:47 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:49:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:53:49 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:54:28 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
Fri Jun 04 07:46:55 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:1653M/2047M, Ph+PgF:4706M/4958M, VA:1944M/2047M
Fri Jun 04 07:46:55 2010
Starting ORACLE instance (normal)
Fri Jun 04 07:47:06 2010
LICENSE_MAX_SESSION = 100
LICENSE_SESSIONS_WARNING = 80
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =33
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
System parameters with non-default values:
processes = 200
sessions = 300
license_max_sessions = 100
license_sessions_warning = 80
sga_max_size = 838860800
__shared_pool_size = 260046848
shared_pool_size = 209715200
__large_pool_size = 25165824
__java_pool_size = 4194304
__streams_pool_size = 8388608
spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
sga_target = 734003200
control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
__db_cache_size = 432013312
compatible = 10.2.0.1.0
db_recovery_file_dest = D:\
db_recovery_file_dest_size= 5368709120
undo_management = AUTO
undo_tablespace = UNDO
remote_login_passwordfile= EXCLUSIVE
dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
shared_servers = 10
job_queue_processes = 1000
audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
db_name = XE
open_cursors = 300
os_authent_prefix =
pga_aggregate_target = 209715200
PMON started with pid=2, OS id=3044
MMAN started with pid=4, OS id=3052
DBW0 started with pid=5, OS id=3196
LGWR started with pid=6, OS id=3200
CKPT started with pid=7, OS id=3204
SMON started with pid=8, OS id=3208
RECO started with pid=9, OS id=3212
CJQ0 started with pid=10, OS id=3216
MMON started with pid=11, OS id=3220
MMNL started with pid=12, OS id=3224
Fri Jun 04 07:47:31 2010
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 10 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
PSP0 started with pid=3, OS id=3048
Fri Jun 04 07:47:41 2010
alter database mount exclusive
Fri Jun 04 07:47:54 2010
Setting recovery target incarnation to 2
Fri Jun 04 07:47:56 2010
Successful mount of redo thread 1, with mount id 2601933156
Fri Jun 04 07:47:56 2010
Database mounted in Exclusive Mode
Completed: alter database mount exclusive
Fri Jun 04 07:47:57 2010
alter database open
Fri Jun 04 07:48:00 2010
Beginning crash recovery of 1 threads
Fri Jun 04 07:48:01 2010
Started redo scan
Fri Jun 04 07:48:03 2010
Completed redo scan
16441 redo blocks read, 442 data blocks need recovery
Fri Jun 04 07:48:04 2010
Started redo application at
Thread 1: logseq 1575, block 48102
Fri Jun 04 07:48:05 2010
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1575 Reading mem 0
Mem# 0 errs 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:48:07 2010
Completed redo application
Fri Jun 04 07:48:07 2010
Completed crash recovery at
Thread 1: logseq 1575, block 64543, scn 27413940
442 data blocks read, 442 data blocks written, 16441 redo blocks read
Fri Jun 04 07:48:09 2010
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=25, OS id=3288
ARC1 started with pid=26, OS id=3292
Fri Jun 04 07:48:10 2010
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 1576
Thread 1 opened at log sequence 1576
Current log# 3 seq# 1576 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Successful open of redo thread 1
Fri Jun 04 07:48:13 2010
ARC0: STARTING ARCH PROCESSES
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no FAL' ARCH
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no SRL' ARCH
Fri Jun 04 07:48:13 2010
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC0: Becoming the heartbeat ARCH
Fri Jun 04 07:48:13 2010
SMON: enabling cache recovery
ARC2 started with pid=27, OS id=3580
Fri Jun 04 07:48:17 2010
db_recovery_file_dest_size of 5120 MB is 49.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Fri Jun 04 07:48:31 2010
Successfully onlined Undo Tablespace 1.
Fri Jun 04 07:48:31 2010
SMON: enabling tx recovery
Fri Jun 04 07:48:31 2010
Database Characterset is AL32UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=28, OS id=2412
Fri Jun 04 07:48:51 2010
Completed: alter database open
Fri Jun 04 07:49:22 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:32 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:57 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:54:10 2010
Shutting down archive processes
Fri Jun 04 07:54:15 2010
ARCH shutting down
ARC2: Archival stopped
Fri Jun 04 07:54:53 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:55:08 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:56:25 2010
Starting control autobackup
Fri Jun 04 07:56:27 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
Fri Jun 04 07:56:28 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_21
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_20
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_17
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_16
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_14
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_12
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_09
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_07
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_06
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_03
ORA-27093: unable to delete directory
Fri Jun 04 07:56:29 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_21
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_20
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_17
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_16
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_14
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_12
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_09
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_07
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_06
ORA-27093: unable to delete directory
ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_03
ORA-27093: unable to delete directory
Control autobackup written to DISK device
handle 'D:\XE\AUTOBACKUP\2010_06_04\O1_MF_S_720777385_60JJ9BNZ_.BKP'
Fri Jun 04 07:56:38 2010
Thread 1 advanced to log sequence 1577
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:56:56 2010
Thread 1 cannot allocate new log, sequence 1578
Checkpoint not complete
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Thread 1 advanced to log sequence 1578
Current log# 3 seq# 1578 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Fri Jun 04 07:57:04 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 2208K exceeds notification threshold (2048K)
KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
Fri Jun 04 07:59:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:59:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []

Hi Gurus,
there's an ORA-00600 error in a huge .trc file, shown below. This is only part of the file, which is more than 45 MB in size:
xe_mmon_4424.trc
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_4424.trc
Fri Jun 04 17:03:22 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:992M/2047M, Ph+PgF:3422M/4958M, VA:1011M/2047M
Instance name: xe
Redo thread mounted by this instance: 1
Oracle process number: 11
Windows thread id: 4424, image: ORACLE.EXE (MMON)
*** SERVICE NAME:(SYS$BACKGROUND) 2010-06-04 17:03:22.265
*** SESSION ID:(284.23) 2010-06-04 17:03:22.265
*** 2010-06-04 17:03:22.265
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Current SQL statement for this session:
BEGIN :success := dbms_ha_alerts_prvt.check_ha_resources; END;
----- PL/SQL Call Stack -----
object line object
handle number name
41982E80 418 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 552 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 305 package body SYS.DBMS_HA_ALERTS_PRVT
419501A0 1 anonymous block
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
ksedst+38 CALLrel ksedst1+0 0 1
ksedmp+898 CALLrel ksedst+0 0
ksfdmp+14 CALLrel ksedmp+0 3
_kgerinv+140 CALLreg 00000000 8EF0A38 3
kgeasnmierr+19 CALLrel kgerinv+0 8EF0A38 6610020 3672F70 0
6538808
kjhnpost_ha_alert CALLrel _kgeasnmierr+0 8EF0A38 6610020 3672F70 0
0+2909
__PGOSF57__kjhn_pos CALLrel kjhnpost_ha_alert 88 B21C4D0 B21C4D8 B21C4E0
t_ha_alert_plsql+43 0+0 B21C4E8 B21C4F0 B21C4F8
8 B21C500 B21C50C 0 FFFFFFFF 0
0 0 6
_spefcmpa+415 CALLreg 00000000
spefmccallstd+147 CALLrel spefcmpa+0 65395B8 16 B21C5AC 653906C 0
pextproc+58 CALLrel spefmccallstd+0 6539874 6539760 6539628
65395B8 0
__PGOSF302__peftrus CALLrel _pextproc+0
ted+115
_psdexsp+192 CALLreg 00000000 6539874
_rpiswu2+426 CALLreg 00000000 6539510
psdextp+567 CALLrel rpiswu2+0 41543288 0 65394F0 2 6539528
0 65394D0 0 2CD9E68 0 6539510
0
_pefccal+452 CALLreg 00000000
pefcal+174 CALLrel pefccal+0 6539874
pevmFCAL+128 CALLrel _pefcal+0
pfrinstrFCAL+55 CALLrel pevmFCAL+0 AF74F48 3DFB92B8
pfrrunno_tool+56 CALL??? 00000000 AF74F48 3DFBB728 AF74F84
pfrrun+781 CALLrel pfrrun_no_tool+0 AF74F48 3DFBB28C AF74F84
plsqlrun+738 CALLrel _pfrrun+0 AF74F48
peicnt+247 CALLrel plsql_run+0 AF74F48 1 0
kkxexe+413 CALLrel peicnt+0
opiexe+5529 CALLrel kkxexe+0 AF7737C
kpoal8+2165 CALLrel opiexe+0 49 3 653A4FC
_opiodr+1099 CALLreg 00000000 5E 0 653CBAC
kpoodr+483 CALLrel opiodr+0
_xupirtrc+1434 CALLreg 00000000 67384BC 5E 653CBAC 0 653CCBC
upirtrc+61 CALLrel xupirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpurcsc+100 CALLrel upirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpuexecv8+2815 CALLrel kpurcsc+0
kpuexec+2106 CALLrel kpuexecv8+0 673AE10 6736C4C 6736CD8 0 0
653EDE8
OCIStmtExecute+29 CALLrel kpuexec+0 673AE10 6736C4C 673AEC4 1 0 0
0 0 0
kjhnmmon_action+5 CALLrel _OCIStmtExecute+0 673AE10 6736C4C 673AEC4 1 0 0
26 0 0
kjhncheck_ha_reso CALLrel kjhnmmon_action+0 653EFCC 3E
urces+140
kebmronce_dispatc CALL??? 00000000
her+630
kebmronce_execute CALLrel kebmronce_dispatc
+12 her+0
_ksbcti+788 CALLreg 00000000 0 0
ksbabs+659 CALLrel ksbcti+0
kebmmmon_main+386 CALLrel _ksbabs+0 3C5DCB8
_ksbrdp+747 CALLreg 00000000 3C5DCB8
opirip+674 CALLrel ksbrdp+0
opidrv+857 CALLrel opirip+0 32 4 653FEBC
sou2o+45 CALLrel opidrv+0 32 4 653FEBC
opimaireal+227 CALLrel _sou2o+0 653FEB0 32 4 653FEBC
opimai+92 CALLrel opimai_real+0 3 653FEE8
BackgroundThreadSt CALLrel opimai+0
art@4+422
7C80B726 CALLreg 00000000
--------------------- Binary Stack Dump ---------------------
========== FRAME [1] (_ksedst+38 -> _ksedst1+0) ==========
Dump of memory from 0x065386DC to 0x065386EC
65386D0 065386EC [..S.]
65386E0 0040467B 00000000 00000001 [{F@.........]
========== FRAME [2] (_ksedmp+898 -> _ksedst+0) ==========
Dump of memory from 0x065386EC to 0x065387AC
65386E0 065387AC [..S.]
65386F0 00403073 00000000 53532E49 20464658 [[email protected] ]
6538700 54204D41 0000525A 00000000 08EF0EC0 [AM TZR..........]
6538710 6072D95A 08EF0EC5 03672F70 00000017 [Z.r`....p/g.....]
6538720 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
6538740 00000000 00000000 00000000 00000017 [................]
6538750 08EF0B3C 08EF0B34 03672F70 08F017F0 [<...4...p/g.....]
6538760 603AA0D3 065387A8 00000001 00000000 [..:`..S.........]
6538770 00000000 00000000 00000001 00000000 [................]
6538780 00000000 08EF0A38 06610020 031E1D20 [....8... .a. ...]
6538790 00000000 065386F8 08EF0A38 06538D38 [......S.8...8.S.]
65387A0 0265187C 031C8860 FFFFFFFF [|.e.`.......]
========== FRAME [3] (_ksfdmp+14 -> _ksedmp+0) ==========
and the file keeps growing. Though I have deleted a lot of it, the size is still climbing, as I recorded:
time       size
15:23    795 MB
16:55    959 MB
17:01    970 MB
17:19    990 MB
Any solution for that?
Thanks!!
Best data structure for dealing with very large CSV files
hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and then it has lots of methods to make manipulating and working with the CSV file simpler: operations like copying a column, eliminating rows, performing some equation on all values in a certain column, etc. Also a method for printing back to a file.
However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading them into an array isn't possible, as it produces an OutOfMemoryError.
Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, and it needs an external file which must be cleaned up after the object is removed (something very hard to guarantee occurs).
Any suggestions would be greatly appreciated.
Message was edited by:
ninjarob

How much internal storage ("RAM") is in the computer where your program should run? I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
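The doubling the reply mentions comes from how Java strings were stored at the time: on pre-Java-9 JVMs a `String` holds every character as two UTF-16 bytes, so an ASCII file roughly doubles in size once loaded as text. A back-of-envelope sketch (class name is made up for illustration):

```java
public class SizeEstimate {
    public static void main(String[] args) {
        long fileBytes = 10L * 1024 * 1024;          // a 10 MB CSV file on disk (ASCII)
        // Each char in a Java String took 2 bytes (UTF-16) on JVMs of that era,
        // so loading the whole file as text roughly doubles its footprint.
        long inMemoryBytes = fileBytes * Character.BYTES;
        System.out.println(inMemoryBytes / (1024 * 1024) + " MB"); // prints "20 MB"
    }
}
```

Even at 20 MB in memory, that is well within the 640 MB the reply describes, which supports the suggestion to just load it.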
If the data size does turn out to be too large to load into memory, how about a relational database?
Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
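If loading everything really is off the table, a streaming approach keeps memory use constant regardless of file size: read one line, transform it, write it straight back out. This is only a sketch under assumptions (class and method names are invented, and it assumes simple comma-separated values with no quoting or embedded commas):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.StringWriter;
import java.io.Writer;

public class CsvStream {
    // Multiply every value in one column by a factor, streaming line by line
    // so the whole file is never held in memory at once.
    static void scaleColumn(Reader in, Writer out, int col, double factor)
            throws IOException {
        BufferedReader reader = new BufferedReader(in);
        String line;
        while ((line = reader.readLine()) != null) {
            String[] fields = line.split(",", -1); // -1 keeps empty trailing fields
            if (col < fields.length) {
                fields[col] = String.valueOf(Double.parseDouble(fields[col]) * factor);
            }
            out.write(String.join(",", fields));
            out.write('\n');
        }
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        StringWriter result = new StringWriter();
        scaleColumn(new StringReader("1,2\n3,4\n"), result, 1, 10.0);
        System.out.print(result); // prints "1,20.0" and "3,40.0" on separate lines
    }
}
```

Column-wise operations fit this pattern well; operations that need random access by row or sorting are where the relational-database suggestion above (e.g. via JDBC) pays off, and it also removes the manual temp-file cleanup problem.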