Very large time series database
Hi,
I am planning to use BDB-JE to store time series data.
I plan to store 1 month worth of data in a single record.
My key consists of the following parts: id,year_and_month,day_in_month
My data is an array of 31 doubles (One slot per day)
For example, a data point for May 10, 2008 will be stored as follows
Input record: item_1, 20080510, 22
Key will be: 1, 200805, 9 (id, year_and_month, zero-based day index)
Data will be: double[31], with slot 9 (the 10th day, zero-based) populated with 22
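To make retrieving an id's history cheap, the key bytes should sort first by id and then by month, so one id's records are contiguous for range scans. A minimal pure-Java sketch of such an encoding (class and method names are hypothetical, not part of the BDB API; ids are assumed non-negative):

```java
import java.nio.ByteBuffer;

// Hypothetical sketch (not part of the BDB API): pack (id, year_and_month)
// into a big-endian byte[] so that BDB's default lexicographic byte ordering
// sorts first by id, then by month -- keeping one id's history contiguous.
public class MonthKey {
    public static byte[] encode(long id, int yearMonth) {
        ByteBuffer buf = ByteBuffer.allocate(12); // 8-byte id + 4-byte yyyyMM
        buf.putLong(id);                          // ByteBuffer is big-endian by default
        buf.putInt(yearMonth);                    // e.g. 200805
        return buf.array();
    }

    // Day 10 of the month lives in zero-based slot 9 of the double[31] record.
    public static int slotFor(int dayOfMonth) {
        return dayOfMonth - 1;
    }
}
```

With this layout, encode(1, 200805) yields a 12-byte key and slotFor(10) returns 9, matching the example above.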
Expected volume:
6,000,000 records per day
Usage pattern:
1) Access pattern is random (random ids). For a given id, I may have to
retrieve multiple records, depending on how much history I need to
retrieve
2) Updates happen concurrently
3) With respect to ACID properties, only durability is important
(data overwrites are very rare)
I built a few prototypes using both the BDB-JE and BDB versions. As per my estimates,
with the data I currently have, my database size will be 300 GB, growing by about
4 GB per month. This is a huge database, and the access pattern is random.
In order to scale, I plan to distribute the data across multiple nodes (the database on
each node holding a certain range of ids) and process each request in parallel.
However, I have to live with only 1 GB of RAM for every 20 GB of BDB-JE database.
I have a few questions:
1) Since the data cannot fit in memory, and I am looking for ~5 ms response time,
is BDB/BDB-JE the right solution?
2) I read about the architectural differences between BDB-JE and BDB
(Log based Vs Page based). Which is better fit for this kind of app?
3) Besides distributing the data to multiple nodes and do parallel processing,
is there anything I can do to improve throughput & scalability?
4) When do you plan to release Replication API for BDB-JE?
Thanks in advance,
Sashi
Sashi,
Thanks for taking the time to sketch out your application. It's still
hard to provide concise answers to your questions though, because so much is
specific to each application, and there can be so many factors.
1) Since the data cannot fit in memory, and I am looking for ~5 ms
response time, is BDB/BDB-JE the right solution?
2) I read about the architectural differences between BDB-JE and BDB
(log based vs. page based). Which is a better fit for this kind of app?
There are certainly applications based on BDB-JE and BDB that have
very stringent response time requirements. The BDB products try to
have low overhead and are often good matches for applications that
need good response time. But in the end, you have to do some experimentation
and some estimation to translate your platform capabilities and
application access pattern into a guess of what you might end up seeing.
For example, it sounds like a typical request might require multiple
reads and then a write operation. It sounds like you expect all these
accesses to incur I/O. As a rule of thumb,
you can think of a typical disk seek as being on the order of 10 ms, so to
have a response time of around 5ms, your data accesses need to be mainly
cached.
That doesn't mean your whole data set has to fit in memory, it means
your working set has to mostly fit. In the end, most application access
isn't purely random either, and there is some kind of working set.
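As a rough back-of-envelope check of that rule of thumb (the operation count per request and the 10 ms seek cost below are illustrative assumptions, not measurements):

```java
// Back-of-envelope: what cache hit rate does a 5 ms response-time budget imply,
// if each cache miss costs roughly one disk seek and misses dominate latency?
public class LatencyBudget {
    public static double requiredHitRate(int opsPerRequest, double seekMs, double budgetMs) {
        double affordableMisses = budgetMs / seekMs;   // misses per request we can afford
        return 1.0 - affordableMisses / opsPerRequest; // fraction of ops that must hit cache
    }
}
```

With, say, three database operations per request, requiredHitRate(3, 10.0, 5.0) comes out to about 0.83, i.e. roughly five out of every six accesses must be served from the cache.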
BDB-C has better key-based locality of data on disk, and stores data
more compactly on disk and in memory. Whether that helps your
application depends on how much locality of reference you have in the
app -- perhaps the multiple database operations you're making per
request are clustered by key. BDB-JE usually has better concurrency
and better write performance. How much that impacts your application
is a function of what degree of data collision you see.
For both products, some general principles will help, such as reducing the size
of your key as much as possible. For BDB-JE, experimenting with setting
je.evictor.lruOnly to false may give better performance. Also for JE, tuning
garbage collection to use a concurrent low-pause collector can provide smoother
response times.
But that's all secondary to what you could do in the application, which
is to make the cache as efficient as possible by reducing the size of the
record and clustering accesses as much as possible.
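For reference, the JE-side knobs mentioned above could be collected in a je.properties file along these lines (a sketch only; the values are illustrative, not recommendations):

```
# je.properties -- sketch only; values are illustrative, not recommendations
je.evictor.lruOnly=false      # let the evictor weigh node type/level, not just recency
je.maxMemory=900000000        # JE cache size in bytes; leaves headroom below a 1 GB heap

# JVM side, for the concurrent low-pause collector of that era, e.g.:
#   java -Xmx1g -XX:+UseConcMarkSweepGC ...
```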
4) When do you plan to release the Replication API for BDB-JE?
Sorry, Oracle is very firm about not announcing release estimates.
Linda
Similar Messages
-
Time-series / temporal database - design advice for DWH/OLAP???
I am facing the task of designing a DWH for time series data analysis as effectively as possible. Are there any special design advices or best practices available, or can the ordinary DWH/OLAP design concepts be used? I ask because I have seen the term 'time series database' in academic literature (but without further references), and I have also heard the term 'temporal database' (which, as far as I have heard, is not just a matter of logging data changes, etc.)
So it would be very nice if someone could give me some hints about this type of design problem.
Hi Frank,
Thanks for that - after 8 years of working with Oracle Forms and afterwards the same again with ADF, I still sometimes find it hard when using ADF to understand the best approach to a particular problem - there are so many different ways of doing things, where to put the code, how to call it, etc.! Things seemed so much simpler back in the Forms days!
Chandra - thanks for the information but this doesn't suit my requirements - I originally went down that path thinking/expecting it to be the holy grail but ran into all sorts of problems as it means that the dates are always being converted into users timezone regardless of whether or not they are creating the transaction or viewing an earlier one. I need the correct "date" to be stored in the database when a user creates/updates a record (for example in California) and this needs to be preserved for other users in different timezones. For example, when a management user in London views that record, the date has got to remain the date that the user entered, and not what the date was in London at the time (eg user entered 14th Feb (23:00) - when London user views it, it must still say 14th Feb even though it was the 15th in London at the time). Global settings like you are using in the adf-config file made this difficult. This is why I went back to stripping all timezone settings back out of the ADF application and relied on database session timezones instead - and when displaying a default date to the user, use the timestamp from the database to ensure the users "date" is displayed.
Cheers,
Brent -
How can we suggest a new DBA OCE certification for very large databases?
What web site can we visit, or what phone number can we call, to suggest creating a VLDB OCE certification?
The largest databases that I have ever worked with were barely over 1 terabyte.
Some people told me that the results of being a DBA totally change when you have a VERY LARGE DATABASE.
I could guess that maybe some of the following configuration topics might be on it:
* Partitioning
* parallel
* bigger block size - DSS vs OLTP
* etc
Where could I send in a recommendation?
Thanks, Roger
I wish there were some details about the OCE data warehousing.
Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
Overview of Data Warehousing
Describe the benefits of a data warehouse
Describe the technical characteristics of a data warehouse
Describe the Oracle Database structures used primarily by a data warehouse
Explain the use of materialized views
Implement Database Resource Manager to control resource usage
Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
Parallelism
Explain how the Oracle optimizer determines the degree of parallelism
Configure parallelism
Explain how parallelism and partitioning work together
Partitioning
Describe types of partitioning
Describe the benefits of partitioning
Implement partition-wise joins
Result Cache
Describe how the SQL Result Cache operates
Identify the scenarios which benefit the most from Result Set Caching
OLAP
Explain how Oracle OLAP delivers high performance
Describe how applications can access data stored in Oracle OLAP cubes
Advanced Compression
Explain the benefits provided by Advanced Compression
Explain how Advanced Compression operates
Describe how Advanced Compression interacts with other Oracle options and utilities
Data integration
Explain Oracle's overall approach to data integration
Describe the benefits provided by ODI
Differentiate the components of ODI
Create integration data flows with ODI
Ensure data quality with OWB
Explain the concept and use of real-time data integration
Describe the architecture of Oracle's data integration solutions
Data mining and analysis
Describe the components of Oracle's Data Mining option
Describe the analytical functions provided by Oracle Data Mining
Identify use cases that can benefit from Oracle Data Mining
Identify which Oracle products use Oracle Data Mining
Sizing
Properly size all resources to be used in a data warehouse configuration
Exadata
Describe the architecture of the Sun Oracle Database Machine
Describe configuration options for an Exadata Storage Server
Explain the advantages provided by the Exadata Storage Server
Best practices for performance
Employ best practices to load incremental data into a data warehouse
Employ best practices for using Oracle features to implement high performance data warehouses -
Profile Performance and Memory shows very large 'VI Time' value
When I run the Profile Performance and Memory tool on my project, I get very large numbers for VI Time (and Sub VIs Time and Total Time) for some VIs. For example 1844674407370752.5. I have selected only 'Timing statistics' and 'Timing details'. Sometimes the numbers start with reasonable values, then when updating the display with the snapshot button they might get large and stay large. Other VI Times remain reasonable.
LabVIEW 2011 Version 11.0 (32-bit). Windows 7.
What gives?
- lesles,
the number indicates some kind of rollover or overflow. So, do you have a VI where this happens all the time? Can you share it with us?
thanks,
Norbert
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it. -
Perhaps via USB? I have a large amount of data that I want to back up, and it is taking a very long time (35 GB is taking 3 hrs; I have 2 TB of files in total). I want to use Time Capsule as back-up for an archive which is currently stored on a 2 TB WESC HD.
No, you cannot back up via direct USB connection.
But gigabit ethernet is much faster anyway. Are you connected directly by ethernet?
Is the drive you are backing up from plugged into the TC? That will slow it down something chronic; plug that drive in by its fastest connection method. WESC, sorry, I have no idea. If it has ethernet, use that; otherwise USB direct to the computer. Always think about which way the files come and go. Since you are copying from the computer, everything has to go that way, and it makes things slower if they go over the same cable, if you catch the drift.
Fastest way to handle and store a large number of posts in a very short time?
I need to handle a very large number of HTTP posts in a very short period of time. The handling will consist of nothing more than storing the data posted and returning a redirect. The data will be quite small (email, postal code). I don't know exactly how
many posts, but somewhere between 50,000 and 500,000 over the course of a minute.
My plan is to use the traffic manager to distribute the load across several data centers, and to have a website scaled to 10-instances per data center. For storage, I thought that Azure table storage would be the ideal way to handle this, but I'm not sure
if the latency would prevent my app from handling this much data.
Has anyone done anything similar to this and have a suggestion for storing the data? Perhaps buffering everything into memory would be ideal and then batching from there to table storage. I'm starting to load-test the direct-to-table-storage solution and am not encouraged.
You are talking about a website with 500,000 posts per minute with redirection, so you are talking about designing a system that can handle at least 500,000 users. Assuming that not all users are doing posts within a one-minute timeframe, you
are talking about designing a system that can handle millions of users at any one time.
Event hub architecture is completely different from the HTTP post architecture; every device/user/session writes directly to the hub. I was just wondering if that would actually work better for you in your situation.
Frank
The site has no session or page displaying. It literally will record a few form values posted from another site and issue a redirect back to that originating site. It is purely for data collection. I'll see if it is possible to write directly to the event hub/service
bus system from a web page. If so, that might work well. -
Access very large objects using servlets from database
hai,
Please suggest me to access a very large object, for example an image file, from the database using servlets.
Thanks!
Hi, I've got the unenviable task of rewriting the data storage back end for a very complex legacy system which analyses time series data for a range of different data sets. What I want to do is bring this data kicking and screaming into the 21st century by putting it into a database. While I have worked with databases for many years, I've never really had to put large amounts of data into one, and certainly never had to make sure I can get large chunks of that data back very quickly.
The data is shaped like this: multiple data sets (about 10 normally) each with up to 100k rows with each row containing up to 300 data points (grand total of about 300,000,000 data points). In each data set all rows contain the same number of points but not all data sets will contain the same number of points as each other. I will typically need to access a whole data set at a time but I need to be able to address individual points (or at least rows) as well.
My current thinking is that storing each data point separately, while great from a access point of view, probably isn't practical from a speed point of view. Combined with the fact that most operations are performed on a whole row at a time I think row based storage is probably the best option.
Of the row based storage solutions I think I have two options: multiple columns and array based. I'm favouring a single column holding an array of data points as it fits well with the requirement that different data sets can have different numbers of points. If I have separate columns I'm probably into multiple tables for the data and dynamic table / column creation.
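Under the array-based option, one row's points could be packed into a single length-prefixed byte[] column; a pure-Java sketch (a hypothetical helper, not a Hibernate mapping):

```java
import java.nio.ByteBuffer;

// Hypothetical helper (not a Hibernate mapping): pack one row's points into a
// single length-prefixed byte[] column, so data sets with different numbers of
// points per row can share one table layout.
public class RowCodec {
    public static byte[] pack(double[] points) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 8 * points.length);
        buf.putInt(points.length);                 // self-describing length prefix
        for (double p : points) buf.putDouble(p);
        return buf.array();
    }

    public static double[] unpack(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        double[] points = new double[buf.getInt()];
        for (int i = 0; i < points.length; i++) points[i] = buf.getDouble();
        return points;
    }
}
```

The length prefix keeps the column self-describing, so rows from data sets with 100 points and 300 points can be decoded by the same code path.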
To make sure this solution is fast I was thinking of using hibernate with caching turned on. Alternatively I've used JBoss Cache with great results in the past.
Does this sound like a solution that will fly? Have I missed anything obvious? I'm hoping someone might help me check over my thinking before I commit serious amounts of time to this...
Hi,
Time series key figure:
Basically, a time series key figure is used in Demand Planning only. Whenever you create a key figure and add it to a DP planning area, it is automatically converted into a time series key figure. When you activate the planning area, you activate each key figure of the planning area with the time series planning version.
There is one more type of key figure, the order series key figure, which is mainly used in an SNP planning area.
Storage bucket profile:
The SBP is used to create space in liveCache for a periodicity, e.g. from 2003 to 2010. Whenever you create an SBP, it occupies space in liveCache for the respective periodicity, which the planning area can then use to store its data. So the storage bucket profile is used for storing the data of the planning area.
Time/planning bucket profile:
Basically, the TBP is used to define the periodicity of the data view. If you want to see the data view in yearly, monthly, weekly, and daily buckets, you have to define that in the TBP.
Hope this will help you.
Regards
Sujay -
Read optimization time-series data
I am using Berkeley DB JE to store fairly high frequency (10hz) time-series data collected from ~80 sensors. The idea is to import a large number of csv files with this data, and allow quick access to time ranges of data to plot with a web front end. I have created a "sample" entity to hold these sampled metrics, indexed by the time stamp. My entity looks like this.
@Entity
public class Sample {
    // Unix time; seconds since Unix epoch
    @PrimaryKey
    private double time;

    private Map<String, Double> metricMap = new LinkedHashMap<String, Double>();
}
as you can see, there is quite a large amount of data for each entity (~70 - 80 doubles), and I'm not sure storing them in this way is best. This is my first question.
I am accessing the db from a web front end. I am not too worried about insertion performance, as this doesn't happen that often, and generally all at one time in bulk. For smaller ranges (~1-2 hr worth of samples) the read performance is decent enough for web calls. For larger ranges, the read operations take quite a while. What would be the best approach for configuring this application?
Also, I want to define granularity of samples. Basically, If the number of samples returned by a query is very large, I want to only return a fraction of the samples. Is there an easy way to count the number of entities that will be iterated over with a cursor without actually iterating over them?
Here are my current configuration params.
environmentConfig.setAllowCreateVoid(true);
environmentConfig.setTransactionalVoid(true);
environmentConfig.setTxnNoSyncVoid(true);
environmentConfig.setCacheModeVoid(CacheMode.EVICT_LN);
environmentConfig.setCacheSizeVoid(1000000000);
databaseConfig.setAllowCreateVoid(true);
databaseConfig.setTransactionalVoid(true);
databaseConfig.setCacheModeVoid(CacheMode.EVICT_LN);
Hi Ben, sorry for the slow response.
> as you can see, there is quite a large amount of data for each entity (~70 - 80 doubles), and I'm not sure storing them in this way is best. This is my first question.
That doesn't sound like a large record, so I don't see a problem. If the map keys are repeated in each record, that's wasted space that you might want to store differently.
> For larger ranges, the read operations take quite a while. What would be the best approach for configuring this application?
What isolation level do you require? Do you need the keys and the data? If the amount you're reading is a significant portion of the index, have you looked at using DiskOrderedCursor?
> Also, I want to define granularity of samples. Basically, if the number of samples returned by a query is very large, I want to only return a fraction of the samples. Is there an easy way to count the number of entities that will be iterated over with a cursor without actually iterating over them?
Not currently. Using the DPL, reading with a key-only cursor is the best available option. If you want to drop down to the base API, you can use Cursor.skipNext and skipPrev, which are further optimized.
> environmentConfig.setAllowCreateVoid(true);
Please use the method names without the Void suffix -- those are just for bean editors.
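If dropping to the base API is not an option, capping the result size can also be done client-side after the read; a pure-Java sketch (a hypothetical helper, independent of the JE API, assuming the samples are already sorted by time):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: cap a large query result by keeping every k-th sample.
// Runs client-side after the read; does not depend on the JE API.
public class Decimator {
    public static <T> List<T> decimate(List<T> samples, int maxPoints) {
        int n = samples.size();
        if (n <= maxPoints) return new ArrayList<>(samples);
        int stride = (int) Math.ceil(n / (double) maxPoints); // keep every stride-th sample
        List<T> kept = new ArrayList<>();
        for (int i = 0; i < n; i += stride) kept.add(samples.get(i));
        return kept;
    }
}
```

Since stride is rounded up, the returned list never exceeds maxPoints; for plotting, keeping every k-th point of a sorted series is usually an acceptable first cut.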
--mark -
Data Mining Blog: New post on Time Series Multi-step Forecasting
I've posted the third part of the Time Series Forecasting series. It covers:
- How to use the SQL MODEL clause for multi-step forecasting
- Many example queries
- Two applications: a classical time series dataset and a electric load forecast competition dataset
- Accuracy comparison with a large number of other techniques
http://oracledmt.blogspot.com/2006/05/time-series-forecasting-3-multi-step.html
--Marcos
Best technology to navigate through a very large XML file in a web page
Hi!
I have a very large XML file that needs to be displayed in my web page, may be as a tree structure. Visitors should be able to go to any level depth nodes and access the children elements or text element of those nodes.
I thought about using DOM parser with Java but dropped that idea as DOM would be stored in memory and hence its space consuming. Neither SAX works for me as every time there is a click on any of the nodes, my SAX parser parses the whole document for the node and its time consuming.
Could anyone please tell me the best technology and best parser to be used for very large XML files?
Thank you for your suggestion. I have a question,
though. If I use a relational database and try to
access it for EACH and EVERY click the user makes,
wouldn't that take much time to populate the page with
data?
Isn't an XML store more efficient here? Please reply.
You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.
Can iCloud be used to synchronize a very large Aperture library across machines effectively?
Just purchased a new 27" iMac (3.5 GHz i7 with 8 GB and 3 TB fusion drive) for my home office to provide support. Use a 15" MBPro (Retina) 90% of the time. Have a number of Aperture libraries/files varying from 10 to 70 GB that are rapidly growing. Have copied them to the iMac using a Thunderbolt cable starting the MBP in target mode.
While this works I can see problems keeping the files in sync. Thought briefly of putting the files in DropBox but when I tried that with a small test file the load time was unacceptable so I can imagine it really wouldn't be practical when the files get north of 100 GB. What about iCloud? Doesn't appear a way to do this but wonder if that's an option.
What are the rest of you doing when you need access to very large files across multiple machines?
David Voran
Hi David,
dvoran wrote:
Don't you have similar issues when the libraries exceed several thousand images? If not what's your secret to image management.
No, I don't.
It's an open secret: database maintenance requires steady application of naming conventions, tagging, and backing-up. With the digitization of records, losing records by mis-filing is no longer possible. But proper, consistent labeling is all the more important, because every database functions as its own index -- and is only as useful as the index is uniform and holds content that is meaningful.
I use one, single, personal Library. It is my master index of every digital photo I've recorded.
I import every shoot into its own Project.
I name my Projects with a verbal identifier, a date, and a location.
I apply a metadata pre-set to all the files I import. This metadata includes my contact inf. and my copyright.
I re-name all the files I import. The file name includes the date, the Project's verbal identifier and location, and the original file name given by the camera that recorded the data.
I assign a location to all the Images in each Project (easy, since "Project" = shoot; I just use the "Assign Location" button on the Project Inf. dialog).
I _always_ apply a keyword specifying the genre of the picture. The genres I use are "Still-life; Portrait; Family; Friends; People; Rural; Urban; Birds; Insects; Flowers; Flora (not Flowers); Fauna; Test Shots; and Misc." I give myself ready access to these by assigning them to a Keyword Button Set, which shows in the Control Bar.
That's the core part. Should be "do-able". (Search the forum for my naming conventions, if interested.) Or course, there is much more, but the above should allow you to find most of your Images (you have assigned when, where, why, and what genre to every Image). The additional steps include using Color Labels, Project Descriptions, keywords, and a meaningful Folder structure. NB: set up your Library to help YOU. For example, I don't sell stock images, and so I have no need for anyone else's keyword list. I created my own, and use the keywords that I think I will think of when I am searching for an Image.
One thing I found very helpful was separating my "input and storage" structure from my "output" structure. All digicam files get put in Projects by shoot, and stay there. I use Folders and Albums to group my outputs. This works for me because my outputs come from many inputs (my inputs and outputs have a many-to-many relationship). What works for you will depend on what you do with the picture data you record with your cameras. (Note that "Project" is a misleading term for the core storage group in Aperture. In my system they are shoots, and all my Images are stored by shoot. For each output project I have (small "p"), I create a Folder in Aperture, and put Albums, populated with the Images I need, in the Folder. When these projects are done, I move the whole Folder into another Folder, called "Completed".)
Sorry to be windy. I don't have time right now for concision.
HTH,
--Kirby. -
Very large bdump file sizes, how to solve?
Hi gurus,
I currently always find my disk space is not enough. After checking, it is the oraclexe/admin/bdump folder; there is currently 3.2 GB in it, while my database is very small, only holding about 10 MB of data.
It didn't happen before, only recently.
I don't know why it happened. I have deleted some old files in that folder, but today I found it is still very large compared to my database.
I am running an APEX application with XE. The application works well and we didn't see anything wrong, but the bdump folder is very big.
Any tip to solve this? Thanks.
Here comes my alert_xe.log file content:
Thu Jun 03 16:15:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5600.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:15:48 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=5452
Thu Jun 03 16:15:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:16:16 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:20:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:21:50 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:25:56 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:26:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:30:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:31:19 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:36:00 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:36:46 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=1312
Thu Jun 03 16:36:49 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:37:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:41:51 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:42:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:46:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:47:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:51:57 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:52:35 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:56:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:57:10 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=3428
Thu Jun 03 16:57:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
[The same two lines repeat eight more times between 16:57:52 and 17:17:21]
Thu Jun 03 17:17:34 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=5912
Thu Jun 03 17:17:37 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
[The same two lines repeat eight more times between 17:18:01 and 17:37:45]
Thu Jun 03 17:38:40 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=1660
Thu Jun 03 17:38:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:39:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:42:54 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=31, OS id=6116
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174259', 'KUPC$S_1_20100603174259', 0);
Thu Jun 03 17:43:38 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=32, OS id=2792
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174338', 'KUPC$S_1_20100603174338', 0);
Thu Jun 03 17:43:44 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:06 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:47 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=33, OS id=3492
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174448', 'KUPC$S_1_20100603174448', 0);
kupprdp: worker process DW01 started with worker id=1, pid=34, OS id=748
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM');
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5684K exceeds notification threshold (2048K)
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5681K exceeds notification threshold (2048K)
Details in trace file c:\oraclexe\app\oracle\admin\xe\bdump\xe_dw01_748.trc
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:48:47 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
[The same two lines repeat at 17:49:17, 17:53:49 and 17:54:28]
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
Fri Jun 04 07:46:55 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:1653M/2047M, Ph+PgF:4706M/4958M, VA:1944M/2047M
Fri Jun 04 07:46:55 2010
Starting ORACLE instance (normal)
Fri Jun 04 07:47:06 2010
LICENSE_MAX_SESSION = 100
LICENSE_SESSIONS_WARNING = 80
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =33
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
System parameters with non-default values:
processes = 200
sessions = 300
license_max_sessions = 100
license_sessions_warning = 80
sga_max_size = 838860800
__shared_pool_size = 260046848
shared_pool_size = 209715200
__large_pool_size = 25165824
__java_pool_size = 4194304
__streams_pool_size = 8388608
spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
sga_target = 734003200
control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
__db_cache_size = 432013312
compatible = 10.2.0.1.0
db_recovery_file_dest = D:\
db_recovery_file_dest_size= 5368709120
undo_management = AUTO
undo_tablespace = UNDO
remote_login_passwordfile= EXCLUSIVE
dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
shared_servers = 10
job_queue_processes = 1000
audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
db_name = XE
open_cursors = 300
os_authent_prefix =
pga_aggregate_target = 209715200
PMON started with pid=2, OS id=3044
MMAN started with pid=4, OS id=3052
DBW0 started with pid=5, OS id=3196
LGWR started with pid=6, OS id=3200
CKPT started with pid=7, OS id=3204
SMON started with pid=8, OS id=3208
RECO started with pid=9, OS id=3212
CJQ0 started with pid=10, OS id=3216
MMON started with pid=11, OS id=3220
MMNL started with pid=12, OS id=3224
Fri Jun 04 07:47:31 2010
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 10 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
PSP0 started with pid=3, OS id=3048
Fri Jun 04 07:47:41 2010
alter database mount exclusive
Fri Jun 04 07:47:54 2010
Setting recovery target incarnation to 2
Fri Jun 04 07:47:56 2010
Successful mount of redo thread 1, with mount id 2601933156
Fri Jun 04 07:47:56 2010
Database mounted in Exclusive Mode
Completed: alter database mount exclusive
Fri Jun 04 07:47:57 2010
alter database open
Fri Jun 04 07:48:00 2010
Beginning crash recovery of 1 threads
Fri Jun 04 07:48:01 2010
Started redo scan
Fri Jun 04 07:48:03 2010
Completed redo scan
16441 redo blocks read, 442 data blocks need recovery
Fri Jun 04 07:48:04 2010
Started redo application at
Thread 1: logseq 1575, block 48102
Fri Jun 04 07:48:05 2010
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1575 Reading mem 0
Mem# 0 errs 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:48:07 2010
Completed redo application
Fri Jun 04 07:48:07 2010
Completed crash recovery at
Thread 1: logseq 1575, block 64543, scn 27413940
442 data blocks read, 442 data blocks written, 16441 redo blocks read
Fri Jun 04 07:48:09 2010
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=25, OS id=3288
ARC1 started with pid=26, OS id=3292
Fri Jun 04 07:48:10 2010
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 1576
Thread 1 opened at log sequence 1576
Current log# 3 seq# 1576 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Successful open of redo thread 1
Fri Jun 04 07:48:13 2010
ARC0: STARTING ARCH PROCESSES
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no FAL' ARCH
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no SRL' ARCH
Fri Jun 04 07:48:13 2010
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC0: Becoming the heartbeat ARCH
Fri Jun 04 07:48:13 2010
SMON: enabling cache recovery
ARC2 started with pid=27, OS id=3580
Fri Jun 04 07:48:17 2010
db_recovery_file_dest_size of 5120 MB is 49.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Fri Jun 04 07:48:31 2010
Successfully onlined Undo Tablespace 1.
Fri Jun 04 07:48:31 2010
SMON: enabling tx recovery
Fri Jun 04 07:48:31 2010
Database Characterset is AL32UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=28, OS id=2412
Fri Jun 04 07:48:51 2010
Completed: alter database open
Fri Jun 04 07:49:22 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
[The same two lines repeat at 07:49:32, 07:49:52 and 07:49:57]
Fri Jun 04 07:54:10 2010
Shutting down archive processes
Fri Jun 04 07:54:15 2010
ARCH shutting down
ARC2: Archival stopped
Fri Jun 04 07:54:53 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:55:08 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:56:25 2010
Starting control autobackup
Fri Jun 04 07:56:27 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
Fri Jun 04 07:56:28 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_21
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_20
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_17
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_16
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_14
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_12
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_09
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_07
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_06
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_03
ORA-27093: could not delete directory
Fri Jun 04 07:56:29 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_21
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_20
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_17
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_16
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_14
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_12
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_09
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_07
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_06
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_03
ORA-27093: could not delete directory
Control autobackup written to DISK device
handle 'D:\XE\AUTOBACKUP\2010_06_04\O1_MF_S_720777385_60JJ9BNZ_.BKP'
Fri Jun 04 07:56:38 2010
Thread 1 advanced to log sequence 1577
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:56:56 2010
Thread 1 cannot allocate new log, sequence 1578
Checkpoint not complete
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Thread 1 advanced to log sequence 1578
Current log# 3 seq# 1578 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Fri Jun 04 07:57:04 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 2208K exceeds notification threshold (2048K)
KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
Fri Jun 04 07:59:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:59:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Hi Gurus,
there's an ORA-00600 error in a very big trace file, excerpted below; this is only part of the file, which is more than 45 MB in size:
xe_mmon_4424.trc
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_4424.trc
Fri Jun 04 17:03:22 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:992M/2047M, Ph+PgF:3422M/4958M, VA:1011M/2047M
Instance name: xe
Redo thread mounted by this instance: 1
Oracle process number: 11
Windows thread id: 4424, image: ORACLE.EXE (MMON)
*** SERVICE NAME:(SYS$BACKGROUND) 2010-06-04 17:03:22.265
*** SESSION ID:(284.23) 2010-06-04 17:03:22.265
*** 2010-06-04 17:03:22.265
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Current SQL statement for this session:
BEGIN :success := dbms_ha_alerts_prvt.check_ha_resources; END;
----- PL/SQL Call Stack -----
object line object
handle number name
41982E80 418 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 552 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 305 package body SYS.DBMS_HA_ALERTS_PRVT
419501A0 1 anonymous block
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
ksedst+38 CALLrel ksedst1+0 0 1
ksedmp+898 CALLrel ksedst+0 0
ksfdmp+14 CALLrel ksedmp+0 3
_kgerinv+140 CALLreg 00000000 8EF0A38 3
kgeasnmierr+19 CALLrel kgerinv+0 8EF0A38 6610020 3672F70 0
6538808
kjhnpost_ha_alert CALLrel _kgeasnmierr+0 8EF0A38 6610020 3672F70 0
0+2909
__PGOSF57__kjhn_pos CALLrel kjhnpost_ha_alert 88 B21C4D0 B21C4D8 B21C4E0
t_ha_alert_plsql+43 0+0 B21C4E8 B21C4F0 B21C4F8
8 B21C500 B21C50C 0 FFFFFFFF 0
0 0 6
_spefcmpa+415 CALLreg 00000000
spefmccallstd+147 CALLrel spefcmpa+0 65395B8 16 B21C5AC 653906C 0
pextproc+58 CALLrel spefmccallstd+0 6539874 6539760 6539628
65395B8 0
__PGOSF302__peftrus CALLrel _pextproc+0
ted+115
_psdexsp+192 CALLreg 00000000 6539874
_rpiswu2+426 CALLreg 00000000 6539510
psdextp+567 CALLrel rpiswu2+0 41543288 0 65394F0 2 6539528
0 65394D0 0 2CD9E68 0 6539510
0
_pefccal+452 CALLreg 00000000
pefcal+174 CALLrel pefccal+0 6539874
pevmFCAL+128 CALLrel _pefcal+0
pfrinstrFCAL+55 CALLrel pevmFCAL+0 AF74F48 3DFB92B8
pfrrunno_tool+56 CALL??? 00000000 AF74F48 3DFBB728 AF74F84
pfrrun+781 CALLrel pfrrun_no_tool+0 AF74F48 3DFBB28C AF74F84
plsqlrun+738 CALLrel _pfrrun+0 AF74F48
peicnt+247 CALLrel plsql_run+0 AF74F48 1 0
kkxexe+413 CALLrel peicnt+0
opiexe+5529 CALLrel kkxexe+0 AF7737C
kpoal8+2165 CALLrel opiexe+0 49 3 653A4FC
_opiodr+1099 CALLreg 00000000 5E 0 653CBAC
kpoodr+483 CALLrel opiodr+0
_xupirtrc+1434 CALLreg 00000000 67384BC 5E 653CBAC 0 653CCBC
upirtrc+61 CALLrel xupirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpurcsc+100 CALLrel upirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpuexecv8+2815 CALLrel kpurcsc+0
kpuexec+2106 CALLrel kpuexecv8+0 673AE10 6736C4C 6736CD8 0 0
653EDE8
OCIStmtExecute+29 CALLrel kpuexec+0 673AE10 6736C4C 673AEC4 1 0 0
0 0 0
kjhnmmon_action+5 CALLrel _OCIStmtExecute+0 673AE10 6736C4C 673AEC4 1 0 0
26 0 0
kjhncheck_ha_reso CALLrel kjhnmmon_action+0 653EFCC 3E
urces+140
kebmronce_dispatc CALL??? 00000000
her+630
kebmronce_execute CALLrel kebmronce_dispatc
+12 her+0
_ksbcti+788 CALLreg 00000000 0 0
ksbabs+659 CALLrel ksbcti+0
kebmmmon_main+386 CALLrel _ksbabs+0 3C5DCB8
_ksbrdp+747 CALLreg 00000000 3C5DCB8
opirip+674 CALLrel ksbrdp+0
opidrv+857 CALLrel opirip+0 32 4 653FEBC
sou2o+45 CALLrel opidrv+0 32 4 653FEBC
opimaireal+227 CALLrel _sou2o+0 653FEB0 32 4 653FEBC
opimai+92 CALLrel opimai_real+0 3 653FEE8
BackgroundThreadSt CALLrel opimai+0
art@4+422
7C80B726 CALLreg 00000000
--------------------- Binary Stack Dump ---------------------
========== FRAME [1] (_ksedst+38 -> _ksedst1+0) ==========
Dump of memory from 0x065386DC to 0x065386EC
65386D0 065386EC [..S.]
65386E0 0040467B 00000000 00000001 [{F@.........]
========== FRAME [2] (_ksedmp+898 -> _ksedst+0) ==========
Dump of memory from 0x065386EC to 0x065387AC
65386E0 065387AC [..S.]
65386F0 00403073 00000000 53532E49 20464658 [[email protected] ]
6538700 54204D41 0000525A 00000000 08EF0EC0 [AM TZR..........]
6538710 6072D95A 08EF0EC5 03672F70 00000017 [Z.r`....p/g.....]
6538720 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
6538740 00000000 00000000 00000000 00000017 [................]
6538750 08EF0B3C 08EF0B34 03672F70 08F017F0 [<...4...p/g.....]
6538760 603AA0D3 065387A8 00000001 00000000 [..:`..S.........]
6538770 00000000 00000000 00000001 00000000 [................]
6538780 00000000 08EF0A38 06610020 031E1D20 [....8... .a. ...]
6538790 00000000 065386F8 08EF0A38 06538D38 [......S.8...8.S.]
65387A0 0265187C 031C8860 FFFFFFFF [|.e.`.......]
========== FRAME [3] (_ksfdmp+14 -> _ksedmp+0) ==========
and the file keeps growing. I have already deleted a lot of it, but as I noted:
time     size
15:23    795 MB
16:55    959 MB
17:01    970 MB
17:19    990 MB
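A quick sketch of the arithmetic those measurements imply (using only the times and sizes listed above):

```python
# Growth rate of the trace file implied by the sizes noted above.
from datetime import datetime

samples = [("15:23", 795), ("16:55", 959), ("17:01", 970), ("17:19", 990)]
times = [datetime.strptime(hhmm, "%H:%M") for hhmm, _ in samples]
elapsed_hours = (times[-1] - times[0]).total_seconds() / 3600
growth_mb_per_hour = (samples[-1][1] - samples[0][1]) / elapsed_hours
print(round(growth_mb_per_hour))  # roughly 100 MB per hour
```

So the file is adding on the order of 100 MB per hour — well over 2 GB a day if it keeps going.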
Any solution for that?
Thanks!! -
TDMS Shell - DB Export from source/sender system taking a VERY long time
We're trying to build a TDMS Receiver system using the TDMS Shell technique. We've run into a situation wherein the initial DB export from the source/sender system is taking a VERY long time.
We are on ECC 6.0, running on AIX 6.1 and DB2 UDB v9.7. We're executing the DB export from sapinst, per instructions. Our DB export parallelizes, then the parallel processes one by one whittle away to just one remaining, and there we find out that the export is at that point single-threaded, exporting table BSIS.
BSIS is an FI transactional data table. We're wondering why the DB export is trying to get BSIS and its contents out. Isn't the DB export in the TDMS Shell technique only supposed to pull essential SAP configuration and master data, and NOT transactional data?
Our BSIS table is nearly 700 GB in size by itself. That export has been running for nearly a week now, with no end in sight.
What are we doing wrong? We suspect we may have missed something, but really don't think we did. We also suspect that the EXCLUSION table in the TDMS Shell technique may be the KEY to this whole thing. It's supposed to automatically exclude very large tables, but in this case it certainly failed to exclude BSIS for some reason.
Anyway, we're probably going to open an OSS message with SAP Support to help us address this perplexing issue. Just thought we'd throw it out to the board to see if anyone else has run into similar circumstances and challenges. In the meantime, any feedback and/or advice would be dearly appreciated. Cheers,
Hello
Don't be bothered by the other TPL file, DDLDB6_LRG.TPL; we are only concerned with DDLDB6.TPL.
Please answer the following questions to help me analyze the situation:
1) What is the current size of the export dump?
2) Since when has the export been running?
3) What is the size of the source DB? Do you have a huge amount of custom developments?
4) Did you try to use table splitting?
5) Do you suspect there may be other transaction tables (like BSIS) which have been exported completely?
6) Did you update the SAP kernel of your source system to the latest version before starting the Shell package?
7) Were the DB statistics updated during the Shell run, or were they already updated before starting Shell?
8) Is your system a distributed system, i.e. Central instance and Database instance on different application servers? -
Keeping two very large datastores in sync
I'm looking at options for keeping a very large (potentially 400GB) TimesTen (11.2.2.5) datastore in sync between a Production server and a [warm] Standby.
Replication has been ruled out because it doesn't support compressed tables, nor the types of table our closed-source application is creating (tables without non-null PKs).
I've done some testing with smaller datastores to get indicative numbers, and a 7.4GB datastore (according to dssize) resulted in a 35GB backup set (using ttBackup -type fileIncrOrFull). Is that large increase in volume expected, and would it extrapolate up for a 400GB data store (2TB backup set??)?
I've seen that there are incremental backups, but to keep our standby warm we'll be restoring these backups, and from what I've read and tested only a ttDestroy/ttRestore is possible, i.e. a complete restore of the whole DSN each time, which is time-consuming. Am I missing a smarter way of doing this?
Other than building our application to keep the two datastores in sync, are there any other tricks we can use to efficiently keep the two datastores in sync?
Random last question - I see "datastore" and "database" (and to an extent, "DSN") used apparently interchangeably - are they the same thing in TimesTen?
Update: the 35GB compresses down with 7za to just over 2.2GB, but takes 5.5 hours to do so. If I take a standalone fileFull backup it is just 7.4GB on disk, and completes faster too.
thanks,
rmoff.
Message was edited by: rmoff - add additional detail
This must be an Exalytics system, right? I ask this because compressed tables are not licensed for use outside of an Exalytics system...
As you note, currently replication is not possible in an Exalytics environment, but that is likely to change in the future and then it will definitely be the preferred mechanism for this. There is not really any other viable way to do this other than through the application.
With regard to your specific questions:
1. A backup consists primarily of the most recent checkpoint file plus all log files/records that are newer than that file. So, to minimise the size of a full backup, ensure that a checkpoint occurs (for example 'call ttCkpt' from a ttIsql session) immediately prior to starting the backup.
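A minimal sketch of that checkpoint-then-backup sequence. The DSN name and backup directory below are placeholder assumptions, and the script only prints the commands so it can be inspected without a TimesTen installation:

```python
# Dry-run sketch: build the checkpoint-then-backup command sequence.
# "prod_ds" and "/backups/prod_ds" are hypothetical placeholders.
dsn = "prod_ds"
backup_dir = "/backups/prod_ds"

steps = [
    # Force a checkpoint first, so the backup contains a fresh
    # checkpoint file plus only the log records newer than it.
    ["ttIsql", "-e", "call ttCkpt; quit", dsn],
    # Then take the full file backup.
    ["ttBackup", "-type", "fileFull", "-dir", backup_dir, dsn],
]
for cmd in steps:
    print(" ".join(cmd))
    # On a real TimesTen host: subprocess.run(cmd, check=True)
```

Run on the production side, this keeps the log-file tail that has to go into the backup set as short as possible, which is what drives the size difference you observed.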
2. No, only complete restore is possible from an incremental backup set. Also note that due to the large amount of rollforward needed, restoring a large incremental backup set may take quite a long time. Backup and restore are not really intended for this purpose.
3. If you cannot use replication then some kind of application level sync is your only option.
4. Datastore and database mean the same thing - a physical TimesTen database. We prefer the term database nowadays; datastore is a legacy term. A DSN is a different thing (Data Source Name) and should not be used interchangeably with datastore/database. A DSN is a logical entity that defines the attributes for a database and how to connect to it. It is not the same as a database.
Chris