Micro completely chokes on very large audio track
I am having a lot of problems with my Zen Micro (..0) trying to deal with a large audio track. In my specific case the file is about 35M, and 24 hours and some odd minutes. Yes, a book. It's WMA, 32kbps, 22kHz.
Problems:
1) Micro shows the track duration as only 6:27:02.
2) You can't fast forward past 6:27:02, but if you just let it play, it will continue playing past that, showing the correct playing time on the left, and showing 00:00:00 for the remaining time.
3) If you set a bookmark in the track, the Micro fails to shut down. It hangs on "Shutting Down..." and the battery drains. You must remove the battery, and the bookmark is lost.
4) Attempting to shut down while the long track is the Now Playing track also hangs the device.
1 and 2 are annoying, but 3 and 4 make it absolutely impossible to listen to this book.
Is this a known issue? Any workaround?
Thanks,
Curt
Thanks for trying to help, but I don't follow the logic. Let me recap my data:
File A: 4 hours. Works fine in WMP & Micro.
File B: 6 hours. Works fine in WMP & Micro.
File C: 10 hours. Works fine in WMP & Micro.
File D: 12 hours. Works fine in WMP & Micro.
File E: 24 hours. Works fine in WMP, but chokes in Micro.
You say it's definitely the file, although I don't see how that follows. I say it's more likely that there's a problem with the Micro's handling of files over a certain size. I don't know what the magic size is; I just know that this is the largest file I've ever tried.
As for the suggested utilities:
http://audacity.sourceforge.net does not support WMA at all, apparently (I tried both versions).
http://www.dbpoweramp.com/dmc.htm does not support DRM WMA apparently (even after installing the WMA codec).
Note that I forgot to mention a possibly pertinent detail: this is a DRM track.
As for the "HOW THE ID TRACK INFO WORKS:" stuff, I'm aware of all that. But speaking of which, the tag shows the correct time of 24:39:8 both in WMP, and when just looking at the tags through the file properties (in Windows Explorer). But NOT in Micro. So Micro is the one not showing the duration correctly.
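Incidentally, the bogus 6:27:02 reading smells like a 16-bit overflow. This is pure speculation on my part (nothing confirmed by Creative), but if the true duration is around 24:39:18 (an assumed figure; the tag digits quoted above may be garbled), then storing the track length in an unsigned 16-bit seconds field would wrap it to exactly 6:27:02. A quick sketch in Python:

```python
# Speculative check: would an unsigned 16-bit seconds counter
# wrap a ~24h39m track length to the 6:27:02 the Micro displays?

def hms_to_sec(h, m, s):
    return h * 3600 + m * 60 + s

def sec_to_hms(total):
    return total // 3600, (total % 3600) // 60, total % 60

true_len = hms_to_sec(24, 39, 18)   # assumed real duration: 24:39:18
wrapped = true_len % 2**16          # value a 16-bit field would keep
print(sec_to_hms(wrapped))          # -> (6, 27, 2), i.e. 6:27:02
```

That would also fit the other symptoms: playback continues past the displayed duration, while the remaining-time counter bottoms out at 00:00:00.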
I appreciate the effort to help find a utility to break apart the file (still looking--feel free to send more suggestions), but what I really want is for someone from Creative to either confirm or refute the problem. I want this fixed.
Message Edited by curtc on 02-08-2006 :2 PM
Similar Messages
-
I inadvertently created a very large file on my hard drive. Not needing it, I sent it to the Trash. However, the deletion process never completes. Any suggestions as to how to delete it? I re-installed the OS but the file was still there, taking up needed space.
Command-click all of the files you want to delete and don't actually move them to the Trash until you've gone through the entire folder.
-
I wonder if I can get the lyrics to scroll on the screen, since most lyrics are very long and almost never fit in the text box. Is it possible to make each verse appear while the audio plays? Please help me.
Just connect the new iPod to your computer and set up the iPod via iTunes (instead of via wifi).
If you want to copy all the info from an old iPod touch to the new iPod see:
iOS: Transferring information from your current iPhone, iPad, or iPod touch to a new device -
I am a multimedia design student working on an interactive
educational program for a local company. I took this project for
the opportunity to learn more about Director than was covered in
our class, which wasn't much. The project they have given me is
quite large. It contains roughly 10 modules with an average of 15
pages per module. Each page is a combination of text, photos, video
and audio. As you can see, this project is going to get very large.
The first module is complete and is roughly 400MB.(I'm told that
this will be the largest of all modules) My question is, is it
possible to create a separate movie for each module and then
reference them from a main menu page, or will I have to compile all
of it into one movie? Any input would be helpful.
Fanaka72 wrote:
> I do have a couple of questions, though. It seems by using the go to movie
> command, that the .dir file is accessed.
> Will this work when the project is published to a cd. Most of the people using
> this cd will not have director on their computer.
> Or do I need to publish a projector file for each module and link to the
> projector? Any additional publishing information would be helpful.
Hi again,
When you publish your project for CD, you should create the
projector out of the
first movie ONLY. This executable will be able to open DIR
files and will not
require the end user to have Director installed. You can
create a stub projector
out of a movie that has minimal content. I have info on this
at:
http://www.fbe.unsw.edu.au/learning/director/publishing/projector.asp
It is common to break up a project into multiple movies and
have a single projector
to start the presentation. However, if all your files are in
DIR format, the end
user will be able to open them and see what's inside if
he/she has Director. So, it
is good practice to protect your movies. You can convert them
to a protected DXR
format through the Xtras > Update Movies menu. Make sure
you make a backup of all
your DIR files (outside of Director) before you do this. The
Protect option in
Director does let you back up movies, but I'd suggest you do
it outside the program
to ensure you have a second copy of all the files in case
anything goes wrong. DXR
files cannot be converted back to DIRs.
If you have all your 'go' statements as
go to movie "movie2"
then this will open 'movie2.dir' or 'movie2.dxr'
If you have
go to movie "movie2.dir"
then it won't work if you use protected movies. That's why
it's best not to put the
3 letter extension when linking one movie to another.
Hope that provides the additional info you were after.
regards
Dean
Director Lecturer / Consultant
http://www.fbe.unsw.edu.au/learning/director/
http://www.multimediacreative.com.au -
I am a scientist and run my own business. Money is tight. I have some very large Excel files (~200MB) that I need to sort and perform logic operations on. I currently use a MacBookPro (i7 core, 2.6GHz, 16GB 1600 MHz DDR3) and I am thinking about buying a multicore MacPro. Some of the operations take half an hour to perform. How much faster should I expect these operations to happen on a new MacPro? Is there a significant speed advantage in the 6 core vs 4 core? Practically speaking, what are the features I should look at and what is the speed bump I should expect if I go to 32GB or 64GB? Related to this I am using a 32 bit version of Excel. Is there a 64 bit spreadsheet that I can use on a Mac that has no limit on column and row size?
Grant Bennet-Alder,
It’s funny you mentioned using Activity Monitor. I use it all the time to watch when a computation cycle is finished so I can avoid a crash. I keep it up in the corner of my screen while I respond to email or work on a grant. Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so. As long as I leave Excel alone while it is working it will not crash. I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application. That is clearly a problem for this kind of work. Is there any work around for this? It seems like a 64-bit spreadsheet would help. I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns. I tried it out on my MacBook Pro but my files don’t fit.
The hatter,
This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products. When I started computing this was the sort of thing computers were designed to do. Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple? Excel is only 64-bit on their machines.
Many thanks to both of you for your quick and on point answers! -
Unable to copy very large file to eSATA external HDD
I am trying to copy a VMWare Fusion virtual machine, 57 GB, from my Macbook Pro's laptop hard drive to an external, eSATA hard drive, which is attached through an ExpressPort adapter. VMWare Fusion is not running and the external drive has lots of room. The disk utility finds no problems with either drive. I have excluded both the external disk and the folder on my laptop hard drive that contains my virtual machine from my Time Machine backups. At about the 42 GB mark, an error message appears:
The Finder cannot complete the operation because some data in "Windows1-Snapshot6.vmem" could not be read or written. (Error code -36)
After I press OK to remove the dialog, the copy does not continue, and I cannot cancel the copy. I have to force-quit the Finder to make the copy dialog go away before I can attempt the copy again. I've tried rebooting between attempts, still no luck. I have tried a total of 4 times now, exact same result at the exact same place, 42 GB / 57 GB.
Any ideas?
Still no breakthrough from Apple. They're telling me to terminate the VMWare processes before attempting the copy, but had they actually read my description of the problem first, they would have known that I already tried this. Hopefully they'll continue to investigate.
From a correspondence with Tim, a support representative at Apple:
Hi Tim,
Thank you for getting back to me, I got your message. Although it is true that at the time I ran the Capture Data program there were some VMWare-related processes running (PID's 105, 106, 107 and 108), this was not the case when the issue occurred earlier. After initially experiencing the problem, this possibility had occurred to me so I took the time to terminate all VMWare processes using the activity monitor before again attempting to copy the files, including the processes mentioned by your engineering department. I documented this in my posting to apple's forum as follows: (quote is from my post of Feb 19, 2008, 1:28pm, to the thread "Unable to copy very large file to eSATA external HDD", relevant section in >bold print<)
Thanks for the suggestions. I have since tried this operation with 3 different drives through two different interface types. Two of the drives are identical - 3.5" 7200 RPM 1TB Western Digital WD10EACS (WD Caviar SE16) in external hard drive enclosures, and the other is a smaller USB2 100GB Western Digital WD1200U0170-001 external drive. I tried the two 1TB drives through eSATA - ExpressPort and also over USB2. I have tried the 100GB drive only over USB2 since that is the only interface on the drive. In all cases the result is the same. All 3 drives are formatted Mac OS Extended (Journaled).
I know the files work on my laptop's hard drive. They are a VMWare virtual machine that works just fine when I use it every day. >Before attempting the copy, I shut down VMWare and terminated all VMWare processes using the Activity Monitor for good measure.< I have tried the copy operation both through the finder and through the Unix command prompt using the drive's mount point of /Volumes/jfinney-ext-3.
Any more ideas?
Furthermore, to prove that there were no file locks present on the affected files, I moved them to a different location on my laptop's HDD and renamed them, which would not have been possible if there had been interference from vmware-related processes. So, that's not it.
Your suggested workaround, to compress the files before copying them to the external drive, may serve as a temporary workaround but it is not a solution. This VM will grow over time to the point where even the compressed version is larger than the 42GB maximum, and compressing and uncompressing the files will take me a lot of time for files of this size. Could you please continue to pursue this issue and identify the underlying cause?
Thank you,
- Jeremy -
I found very large "frame" files, what are they & can I delete them? (See screenshot). I'm a (17 today)-year-old film-maker and can't edit in FCP X anymore because I "don't have enough space". Every time I try to delete one, another identical file creates itself...
If it helps: I just upgraded to FCP 10.0.4, and every time I launch it, it asks to convert my current projects (I know it would do it at least once). I accept, but every time I have to do it AGAIN. My computer is slower than ever and I have a deadline this Friday.
I also just upgraded to Mac OS X 10.7.4, and the problem hasn't been here for long, so it may be linked...
Please help me!
Alex
The first thing you should do is to back up your personal data. It is possible that your hard drive is failing. If you are using Time Machine, that part is already done.
Then, I think it would be easiest to reformat the drive and restore. If you ARE using Time Machine, you can start up from your Leopard installation disc. At the first Installer screen, go up to the menu bar, and from the Utilities menu, first select to run Disk Utility. Completely erase the internal drive using the Erase tab; make sure you have the internal DRIVE (not the volume) selected in the sidebar, and make sure you are NOT erasing your Time Machine drive by mistake. After erasing, quit Disk Utility, and select the command to restore from backup from the same Utilities menu. Using that Time Machine volume restore utility, you can restore it to a time and date immediately before you went on vacation, when things were working.
If you are not using Time Machine, you can erase and reinstall the OS (after you have backed up your personal data). After restarting from the new installation and installing all the updates using Software Update, you can restore your personal data from the backup you just made. -
How do I share a very large file?
Do you want to send a GarageBand project or the bounced audio file? To send an audio file is not critical, but if you want to send the project use "File > Compress" to create a .zip file of the project before you send it.
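If you'd rather do this in a script, the same kind of .zip archive can be produced with Python's standard library. A generic sketch (the paths here are placeholders, not real ones):

```python
import shutil

def zip_project(project_dir, out_base):
    """Create out_base + '.zip' containing project_dir; returns the
    path of the archive that was written."""
    return shutil.make_archive(out_base, "zip", root_dir=project_dir)

# Example (hypothetical paths):
# zip_project("/Users/me/Music/MySong.band", "/Users/me/Desktop/MySong")
```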
If you have a Dropbox account, I'd simply copy the file into the Dropbox "public" folder and mail the link. Right-click the file in the Dropbox, then choose Dropbox > Copy Public Link. This copies an Internet link to your file that you can paste anywhere: emails, instant messages, blogs, etc.
2 GB on Dropbox are free. https://www.dropbox.com/help/category/Sharing -
Can iCloud be used to synchronize a very large Aperture library across machines effectively?
Just purchased a new 27" iMac (3.5 GHz i7 with 8 GB and 3 TB fusion drive) for my home office to provide support. Use a 15" MBPro (Retina) 90% of the time. Have a number of Aperture libraries/files varying from 10 to 70 GB that are rapidly growing. Have copied them to the iMac using a Thunderbolt cable starting the MBP in target mode.
While this works I can see problems keeping the files in sync. Thought briefly of putting the files in DropBox but when I tried that with a small test file the load time was unacceptable so I can imagine it really wouldn't be practical when the files get north of 100 GB. What about iCloud? Doesn't appear a way to do this but wonder if that's an option.
What are the rest of you doing when you need access to very large files across multiple machines?
David Voran
Hi David,
dvoran wrote:
Don't you have similar issues when the libraries exceed several thousand images? If not, what's your secret to image management?
No, I don't.
It's an open secret: database maintenance requires steady application of naming conventions, tagging, and backing-up. With the digitization of records, losing records by mis-filing is no longer possible. But proper, consistent labeling is all the more important, because every database functions as its own index -- and is only as useful as the index is uniform and holds content that is meaningful.
I use one, single, personal Library. It is my master index of every digital photo I've recorded.
I import every shoot into its own Project.
I name my Projects with a verbal identifier, a date, and a location.
I apply a metadata pre-set to all the files I import. This metadata includes my contact inf. and my copyright.
I re-name all the files I import. The file name includes the date, the Project's verbal identifier and location, and the original file name given by the camera that recorded the data.
I assign a location to all the Images in each Project (easy, since "Project" = shoot; I just use the "Assign Location" button on the Project Inf. dialog).
I _always_ apply a keyword specifying the genre of the picture. The genres I use are "Still-life; Portrait; Family; Friends; People; Rural; Urban; Birds; Insects; Flowers; Flora (not Flowers); Fauna; Test Shots; and Misc." I give myself ready access to these by assigning them to a Keyword Button Set, which shows in the Control Bar.
That's the core part. Should be "do-able". (Search the forum for my naming conventions, if interested.) Of course, there is much more, but the above should allow you to find most of your Images (you have assigned when, where, why, and what genre to every Image). The additional steps include using Color Labels, Project Descriptions, keywords, and a meaningful Folder structure. NB: set up your Library to help YOU. For example, I don't sell stock images, and so I have no need for anyone else's keyword list. I created my own, and use the keywords that I think I will think of when I am searching for an Image.
One thing I found very helpful was separating my "input and storage" structure from my "output" structure. All digicam files get put in Projects by shoot, and stay there. I use Folders and Albums to group my outputs. This works for me because my outputs come from many inputs (my inputs and outputs have a many-to-many relationship). What works for you will depend on what you do with the picture data you record with your cameras. (Note that "Project" is a misleading term for the core storage group in Aperture. In my system they are shoots, and all my Images are stored by shoot. For each output project I have (small "p"), I create a Folder in Aperture, and put Albums, populated with the Images I need, in the Folder. When these projects are done, I move the whole Folder into another Folder, called "Completed".)
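The renaming scheme above (date, the Project's verbal identifier, location, then the camera's original file name) can be sketched as a tiny script. The field order and separators here are my own guesses for illustration, not anything Aperture produces:

```python
from datetime import date

def versioned_name(shoot_date, identifier, location, camera_name):
    """Build a file name from the convention described above: date,
    Project identifier, location, then the camera's original name."""
    return f"{shoot_date.isoformat()}_{identifier}_{location}_{camera_name}"

print(versioned_name(date(2013, 11, 2), "harvest-fair", "Lawrence-KS",
                     "IMG_4127.CR2"))
# -> 2013-11-02_harvest-fair_Lawrence-KS_IMG_4127.CR2
```

The point is only that every file name carries the same when/what/where fields, so a plain file-system search finds images even outside Aperture.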
Sorry to be windy. I don't have time right now for concision.
HTH,
--Kirby. -
Very large bdump file sizes, how to solve?
Hi gurus,
I currently always find my disk space is not enough. After checking, it turns out to be oraclexe/admin/bdump: there's currently 3.2G in it, while my database is very small, holding only about 10 MB of data.
It didn't happen before; it started only recently.
I don't know why it happened. I have deleted some old files in that folder, but today I found it is still very large compared to my database.
I am running an APEX application with XE. The application works well, and we haven't seen anything wrong, only the bdump folder getting very big.
Any tips to solve this? Thanks.
Here is my alert_xe.log file content:
Thu Jun 03 16:15:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5600.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:15:48 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=5452
Thu Jun 03 16:15:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:16:16 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:20:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:21:50 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:25:56 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:26:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:30:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:31:19 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:36:00 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:36:46 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=1312
Thu Jun 03 16:36:49 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:37:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:41:51 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:42:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:46:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:47:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:51:57 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:52:35 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:56:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:57:10 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=3428
Thu Jun 03 16:57:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:57:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:02:16 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:02:48 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:07:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:08:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:12:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:12:41 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:17:21 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:17:34 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=5912
Thu Jun 03 17:17:37 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:18:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:22:37 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:23:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:27:39 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:28:02 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:32:42 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:33:07 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:37:45 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:38:40 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=1660
Thu Jun 03 17:38:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:39:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:42:54 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=31, OS id=6116
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174259', 'KUPC$S_1_20100603174259', 0);
Thu Jun 03 17:43:38 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=32, OS id=2792
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174338', 'KUPC$S_1_20100603174338', 0);
Thu Jun 03 17:43:44 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:06 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:47 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=33, OS id=3492
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174448', 'KUPC$S_1_20100603174448', 0);
kupprdp: worker process DW01 started with worker id=1, pid=34, OS id=748
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM');
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5684K exceeds notification threshold (2048K)
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5681K exceeds notification threshold (2048K)
Details in trace file c:\oraclexe\app\oracle\admin\xe\bdump\xe_dw01_748.trc
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:48:47 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:49:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:53:49 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:54:28 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
Fri Jun 04 07:46:55 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:1653M/2047M, Ph+PgF:4706M/4958M, VA:1944M/2047M
Fri Jun 04 07:46:55 2010
Starting ORACLE instance (normal)
Fri Jun 04 07:47:06 2010
LICENSE_MAX_SESSION = 100
LICENSE_SESSIONS_WARNING = 80
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =33
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
System parameters with non-default values:
processes = 200
sessions = 300
license_max_sessions = 100
license_sessions_warning = 80
sga_max_size = 838860800
__shared_pool_size = 260046848
shared_pool_size = 209715200
__large_pool_size = 25165824
__java_pool_size = 4194304
__streams_pool_size = 8388608
spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
sga_target = 734003200
control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
__db_cache_size = 432013312
compatible = 10.2.0.1.0
db_recovery_file_dest = D:\
db_recovery_file_dest_size= 5368709120
undo_management = AUTO
undo_tablespace = UNDO
remote_login_passwordfile= EXCLUSIVE
dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
shared_servers = 10
job_queue_processes = 1000
audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
db_name = XE
open_cursors = 300
os_authent_prefix =
pga_aggregate_target = 209715200
PMON started with pid=2, OS id=3044
MMAN started with pid=4, OS id=3052
DBW0 started with pid=5, OS id=3196
LGWR started with pid=6, OS id=3200
CKPT started with pid=7, OS id=3204
SMON started with pid=8, OS id=3208
RECO started with pid=9, OS id=3212
CJQ0 started with pid=10, OS id=3216
MMON started with pid=11, OS id=3220
MMNL started with pid=12, OS id=3224
Fri Jun 04 07:47:31 2010
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 10 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
PSP0 started with pid=3, OS id=3048
Fri Jun 04 07:47:41 2010
alter database mount exclusive
Fri Jun 04 07:47:54 2010
Setting recovery target incarnation to 2
Fri Jun 04 07:47:56 2010
Successful mount of redo thread 1, with mount id 2601933156
Fri Jun 04 07:47:56 2010
Database mounted in Exclusive Mode
Completed: alter database mount exclusive
Fri Jun 04 07:47:57 2010
alter database open
Fri Jun 04 07:48:00 2010
Beginning crash recovery of 1 threads
Fri Jun 04 07:48:01 2010
Started redo scan
Fri Jun 04 07:48:03 2010
Completed redo scan
16441 redo blocks read, 442 data blocks need recovery
Fri Jun 04 07:48:04 2010
Started redo application at
Thread 1: logseq 1575, block 48102
Fri Jun 04 07:48:05 2010
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1575 Reading mem 0
Mem# 0 errs 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:48:07 2010
Completed redo application
Fri Jun 04 07:48:07 2010
Completed crash recovery at
Thread 1: logseq 1575, block 64543, scn 27413940
442 data blocks read, 442 data blocks written, 16441 redo blocks read
Fri Jun 04 07:48:09 2010
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=25, OS id=3288
ARC1 started with pid=26, OS id=3292
Fri Jun 04 07:48:10 2010
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 1576
Thread 1 opened at log sequence 1576
Current log# 3 seq# 1576 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Successful open of redo thread 1
Fri Jun 04 07:48:13 2010
ARC0: STARTING ARCH PROCESSES
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no FAL' ARCH
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no SRL' ARCH
Fri Jun 04 07:48:13 2010
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC0: Becoming the heartbeat ARCH
Fri Jun 04 07:48:13 2010
SMON: enabling cache recovery
ARC2 started with pid=27, OS id=3580
Fri Jun 04 07:48:17 2010
db_recovery_file_dest_size of 5120 MB is 49.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Fri Jun 04 07:48:31 2010
Successfully onlined Undo Tablespace 1.
Fri Jun 04 07:48:31 2010
SMON: enabling tx recovery
Fri Jun 04 07:48:31 2010
Database Characterset is AL32UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=28, OS id=2412
Fri Jun 04 07:48:51 2010
Completed: alter database open
Fri Jun 04 07:49:22 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:32 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:57 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:54:10 2010
Shutting down archive processes
Fri Jun 04 07:54:15 2010
ARCH shutting down
ARC2: Archival stopped
Fri Jun 04 07:54:53 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:55:08 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:56:25 2010
Starting control autobackup
Fri Jun 04 07:56:27 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
Fri Jun 04 07:56:28 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_21
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_20
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_17
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_16
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_14
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_12
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_09
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_07
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_06
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\AUTOBACKUP\2009_04_03
ORA-27093: could not delete directory
Fri Jun 04 07:56:29 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_21
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_20
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_17
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_16
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_14
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_12
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_09
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_07
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_06
ORA-27093: could not delete directory
ORA-17624: Failed to delete directory D:\XE\BACKUPSET\2009_04_03
ORA-27093: could not delete directory
Control autobackup written to DISK device
handle 'D:\XE\AUTOBACKUP\2010_06_04\O1_MF_S_720777385_60JJ9BNZ_.BKP'
Fri Jun 04 07:56:38 2010
Thread 1 advanced to log sequence 1577
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:56:56 2010
Thread 1 cannot allocate new log, sequence 1578
Checkpoint not complete
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Thread 1 advanced to log sequence 1578
Current log# 3 seq# 1578 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Fri Jun 04 07:57:04 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 2208K exceeds notification threshold (2048K)
KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
Fri Jun 04 07:59:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:59:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Hi Gurus,
There is an ORA-00600 error in a very large trace file, shown below. This is only part of the file, which is more than 45 MB in size:
xe_mmon_4424.trc
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_4424.trc
Fri Jun 04 17:03:22 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:992M/2047M, Ph+PgF:3422M/4958M, VA:1011M/2047M
Instance name: xe
Redo thread mounted by this instance: 1
Oracle process number: 11
Windows thread id: 4424, image: ORACLE.EXE (MMON)
*** SERVICE NAME:(SYS$BACKGROUND) 2010-06-04 17:03:22.265
*** SESSION ID:(284.23) 2010-06-04 17:03:22.265
*** 2010-06-04 17:03:22.265
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Current SQL statement for this session:
BEGIN :success := dbms_ha_alerts_prvt.check_ha_resources; END;
----- PL/SQL Call Stack -----
object line object
handle number name
41982E80 418 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 552 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 305 package body SYS.DBMS_HA_ALERTS_PRVT
419501A0 1 anonymous block
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
ksedst+38 CALLrel ksedst1+0 0 1
ksedmp+898 CALLrel ksedst+0 0
ksfdmp+14 CALLrel ksedmp+0 3
_kgerinv+140 CALLreg 00000000 8EF0A38 3
kgeasnmierr+19 CALLrel kgerinv+0 8EF0A38 6610020 3672F70 0
6538808
kjhnpost_ha_alert CALLrel _kgeasnmierr+0 8EF0A38 6610020 3672F70 0
0+2909
__PGOSF57__kjhn_pos CALLrel kjhnpost_ha_alert 88 B21C4D0 B21C4D8 B21C4E0
t_ha_alert_plsql+43 0+0 B21C4E8 B21C4F0 B21C4F8
8 B21C500 B21C50C 0 FFFFFFFF 0
0 0 6
_spefcmpa+415 CALLreg 00000000
spefmccallstd+147 CALLrel spefcmpa+0 65395B8 16 B21C5AC 653906C 0
pextproc+58 CALLrel spefmccallstd+0 6539874 6539760 6539628
65395B8 0
__PGOSF302__peftrus CALLrel _pextproc+0
ted+115
_psdexsp+192 CALLreg 00000000 6539874
_rpiswu2+426 CALLreg 00000000 6539510
psdextp+567 CALLrel rpiswu2+0 41543288 0 65394F0 2 6539528
0 65394D0 0 2CD9E68 0 6539510
0
_pefccal+452 CALLreg 00000000
pefcal+174 CALLrel pefccal+0 6539874
pevmFCAL+128 CALLrel _pefcal+0
pfrinstrFCAL+55 CALLrel pevmFCAL+0 AF74F48 3DFB92B8
pfrrunno_tool+56 CALL??? 00000000 AF74F48 3DFBB728 AF74F84
pfrrun+781 CALLrel pfrrun_no_tool+0 AF74F48 3DFBB28C AF74F84
plsqlrun+738 CALLrel _pfrrun+0 AF74F48
peicnt+247 CALLrel plsql_run+0 AF74F48 1 0
kkxexe+413 CALLrel peicnt+0
opiexe+5529 CALLrel kkxexe+0 AF7737C
kpoal8+2165 CALLrel opiexe+0 49 3 653A4FC
_opiodr+1099 CALLreg 00000000 5E 0 653CBAC
kpoodr+483 CALLrel opiodr+0
_xupirtrc+1434 CALLreg 00000000 67384BC 5E 653CBAC 0 653CCBC
upirtrc+61 CALLrel xupirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpurcsc+100 CALLrel upirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpuexecv8+2815 CALLrel kpurcsc+0
kpuexec+2106 CALLrel kpuexecv8+0 673AE10 6736C4C 6736CD8 0 0
653EDE8
OCIStmtExecute+29 CALLrel kpuexec+0 673AE10 6736C4C 673AEC4 1 0 0
0 0 0
kjhnmmon_action+5 CALLrel _OCIStmtExecute+0 673AE10 6736C4C 673AEC4 1 0 0
26 0 0
kjhncheck_ha_reso CALLrel kjhnmmon_action+0 653EFCC 3E
urces+140
kebmronce_dispatc CALL??? 00000000
her+630
kebmronce_execute CALLrel kebmronce_dispatc
+12 her+0
_ksbcti+788 CALLreg 00000000 0 0
ksbabs+659 CALLrel ksbcti+0
kebmmmon_main+386 CALLrel _ksbabs+0 3C5DCB8
_ksbrdp+747 CALLreg 00000000 3C5DCB8
opirip+674 CALLrel ksbrdp+0
opidrv+857 CALLrel opirip+0 32 4 653FEBC
sou2o+45 CALLrel opidrv+0 32 4 653FEBC
opimaireal+227 CALLrel _sou2o+0 653FEB0 32 4 653FEBC
opimai+92 CALLrel opimai_real+0 3 653FEE8
BackgroundThreadSt CALLrel opimai+0
art@4+422
7C80B726 CALLreg 00000000
--------------------- Binary Stack Dump ---------------------
========== FRAME [1] (_ksedst+38 -> _ksedst1+0) ==========
Dump of memory from 0x065386DC to 0x065386EC
65386D0 065386EC [..S.]
65386E0 0040467B 00000000 00000001 [{F@.........]
========== FRAME [2] (_ksedmp+898 -> _ksedst+0) ==========
Dump of memory from 0x065386EC to 0x065387AC
65386E0 065387AC [..S.]
65386F0 00403073 00000000 53532E49 20464658 [[email protected] ]
6538700 54204D41 0000525A 00000000 08EF0EC0 [AM TZR..........]
6538710 6072D95A 08EF0EC5 03672F70 00000017 [Z.r`....p/g.....]
6538720 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
6538740 00000000 00000000 00000000 00000017 [................]
6538750 08EF0B3C 08EF0B34 03672F70 08F017F0 [<...4...p/g.....]
6538760 603AA0D3 065387A8 00000001 00000000 [..:`..S.........]
6538770 00000000 00000000 00000001 00000000 [................]
6538780 00000000 08EF0A38 06610020 031E1D20 [....8... .a. ...]
6538790 00000000 065386F8 08EF0A38 06538D38 [......S.8...8.S.]
65387A0 0265187C 031C8860 FFFFFFFF [|.e.`.......]
========== FRAME [3] (_ksfdmp+14 -> _ksedmp+0) ==========
The file keeps growing; I have already deleted a lot of it, but here is what I recorded:

time     size
15:23    795 MB
16:55    959 MB
17:01    970 MB
17:19    990 MB

Any solution for this?
Thanks!! -
Keeping two very large datastores in sync
I'm looking at options for keeping a very large (potentially 400GB) TimesTen (11.2.2.5) datastore in sync between a Production server and a [warm] Standby.
Replication has been discounted because it doesn't support compressed tables, nor the types of table our closed-code application is creating (without non-null PKs)
I've done some testing with smaller datastores to get indicative numbers, and a 7.4GB datastore (according to dssize) resulted in a 35GB backup set (using ttBackup -type fileIncrOrFull). Is that large increase in volume expected, and would it extrapolate up for a 400GB data store (2TB backup set??)?
I've seen that there are incremental backups, but to keep our standby warm we'll be restoring these backups, and from what I've read and tested only a ttDestroy/ttRestore is possible, i.e. a complete restore of the entire DSN each time, which is time-consuming. Am I missing a smarter way of doing this?
Other than building our application to keep the two datastores in sync, are there any other tricks we can use to efficiently keep the two datastores in sync?
Random last question - I see "datastore" and "database" (and to an extent, "DSN") used apparently interchangeably - are they the same thing in TimesTen?
Update: the 35GB compresses down with 7za to just over 2.2GB, but takes 5.5 hours to do so. If I take a standalone fileFull backup it is just 7.4GB on disk, and completes faster too.
thanks,
rmoff.
Message was edited by: rmoff - added additional detail
This must be an Exalytics system, right? I ask because compressed tables are not licensed for use outside of an Exalytics system...
As you note, currently replication is not possible in an Exalytics environment, but that is likely to change in the future and then it will definitely be the preferred mechanism for this. There is not really any other viable way to do this other than through the application.
With regard to your specific questions:
1. A backup consists primarily of the most recent checkpoint file plus all log files/records that are newer than that file. So, to minimise the size of a full backup, ensure that a checkpoint occurs (for example 'call ttCkpt' from a ttIsql session) immediately prior to starting the backup.
2. No, only complete restore is possible from an incremental backup set. Also note that due to the large amount of rollforward needed, restoring a large incremental backup set may take quite a long time. Backup and restore are not really intended for this purpose.
3. If you cannot use replication then some kind of application level sync is your only option.
4. Datastore and database mean the same thing - a physical TimesTen database. We prefer the term database nowadays; datastore is a legacy term. A DSN is a different thing (Data Source Name) and should not be used interchangeably with datastore/database. A DSN is a logical entity that defines the attributes for a database and how to connect to it. It is not the same as a database.
Chris -
Hello
Here [http://download.oracle.com/docs/cd/E11882_01/server.112/e10839/appi_vlm.htm] there is a guide for using very large memory on Linux32-bit
Now my question is: can I use this method for using large memory on 64-bit Linux?
If yes, are there restrictions, such as not being able to use SGA_TARGET and MEMORY_TARGET, or no support for "multiple database block sizes", on 64-bit Linux?
Thank you so much.
TakhteJamshid wrote:
Here [http://download.oracle.com/docs/cd/E11882_01/server.112/e10839/appi_vlm.htm] there is a guide for using very large memory on Linux32-bit
Now my question is: can I use this method for using large memory on 64-bit Linux?
If yes, are there restrictions, such as not being able to use SGA_TARGET and MEMORY_TARGET, or no support for "multiple database block sizes", on 64-bit Linux?
You'll have to check the details for yourself, but one trap to watch out for with very large memories is whether you can use the O/S feature "large pages" (aka "huge pages").
If you have a very large buffer cache then you can waste a lot of memory on the memory maps used by each process that attaches to the SGA. If you enable "large pages" this reduces the size of the memory maps dramatically. But if you enable memory_target (which allows 11g to shift memory between SGA and PGA usage) this may make it impossible for Oracle to use large pages.
For more information on large pages see: http://www.pythian.com/news/741/pythian-goodies-free-memory-swap-oracle-and-everything/
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
JTree - Problem creating a very large one
Hi,
I need to create a multi-level JTree that is very large (around 600-650 nodes in all). At present I am doing this by creating DefaultMutableTreeNodes and adding them one by one to the tree. It takes a very long time to complete (obviously).
Is there any way of doing this faster? The tree is being constructed from run-time data.
Thanks in advance,
Regards,
Achyuth
Two thoughts. First, I create hundreds of nodes and it's pretty fast. It could be how you are creating/adding them; make sure your code is not the bottleneck. If you are traversing the entire tree, or adding nodes in some other odd way, that could be the problem. I only say that because I am able to create an object structure, parse XML as I build that structure, and create all the nodes in less than 1 second on a PIV 1.7GHz system. Maybe on slower systems it's a lot slower, but for me it's up immediately.
Another way, however, is to keep a Map of "paths" to child items in memory. As you build your model, use the Map to keep path info. Then add listeners to the tree for expand/collapse. Each time a node is about to expand, build its child nodes and add them to the expanding node at that point. When it collapses, discard them. This does two things. First, for large trees you aren't wasting time building the whole tree at the start, and it's likely that most of those nodes won't be expanded/shown right away anyway. Second, it conserves resources. If your tree MUST open in a fully expanded state, you may be out of luck, but I would much prefer building the child nodes at expansion time rather than doing the whole thing first. -
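The lazy-loading idea above can be sketched roughly as follows. This is a minimal illustration, not code from the thread: the `childPaths` map stands in for whatever run-time data the tree is built from, and names like `LazyTreeDemo` and the "loading..." placeholder are invented for the example. Each node starts with a single placeholder child, and a TreeWillExpandListener swaps in the real children the first time the node expands.

```java
import java.util.List;
import java.util.Map;
import javax.swing.JTree;
import javax.swing.event.TreeExpansionEvent;
import javax.swing.event.TreeWillExpandListener;
import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.DefaultTreeModel;

public class LazyTreeDemo {
    static final String PLACEHOLDER = "loading...";

    // Stand-in for the run-time data: parent path -> child names.
    static final Map<String, List<String>> childPaths = Map.of(
            "root", List.of("a", "b"),
            "root/a", List.of("a1", "a2"),
            "root/b", List.of("b1"));

    // A node that starts with one placeholder child if it has children to load.
    static DefaultMutableTreeNode lazyNode(String path, String label) {
        DefaultMutableTreeNode n = new DefaultMutableTreeNode(label);
        if (childPaths.containsKey(path)) {
            n.add(new DefaultMutableTreeNode(PLACEHOLDER));
        }
        return n;
    }

    // Replace the placeholder with real children, built only on demand.
    static void loadChildren(DefaultMutableTreeNode node, String path, DefaultTreeModel model) {
        if (node.getChildCount() == 1
                && PLACEHOLDER.equals(((DefaultMutableTreeNode) node.getChildAt(0)).getUserObject())) {
            node.removeAllChildren();
            for (String child : childPaths.getOrDefault(path, List.of())) {
                node.add(lazyNode(path + "/" + child, child));
            }
            model.nodeStructureChanged(node);
        }
    }

    public static void main(String[] args) {
        DefaultMutableTreeNode root = lazyNode("root", "root");
        DefaultTreeModel model = new DefaultTreeModel(root);

        // In a real UI (on the EDT), wire loading to the expand event:
        JTree tree = new JTree(model);
        tree.addTreeWillExpandListener(new TreeWillExpandListener() {
            public void treeWillExpand(TreeExpansionEvent e) {
                DefaultMutableTreeNode n =
                        (DefaultMutableTreeNode) e.getPath().getLastPathComponent();
                // Path reconstruction is application-specific; here we join the labels.
                StringBuilder p = new StringBuilder();
                for (Object o : e.getPath().getPath()) {
                    if (p.length() > 0) p.append('/');
                    p.append(((DefaultMutableTreeNode) o).getUserObject());
                }
                loadChildren(n, p.toString(), model);
            }
            public void treeWillCollapse(TreeExpansionEvent e) { }
        });

        System.out.println(root.getChildCount());   // placeholder only
        loadChildren(root, "root", model);
        System.out.println(root.getChildCount());   // real children loaded
    }
}
```

The tree still looks expandable to the user (because of the placeholder), but only the nodes actually opened ever get built, so startup cost no longer scales with the full node count.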
Large Audio Files Skip on Resume from Bookmark
For years I have been making large audio recordings using Audio Hijack Pro, importing them into iTunes, and listening to them on my various iPods. My current iPod is a 5.5gen, 80 GB unit. The recordings are AAC, 128 kbps, Stereo, Bookmarkable, with Silence Monitor = Remove (Analog). The files are about 6 hours long (just short of that because of occasional small silences removed) and about 300 MB (301.4 MB, for example).
If I listen to the track straight through, no problem. If I pause it, let the iPod sleep, and resume, no problem. But if I pause it, "go away from the track", then come back and rely on the bookmark to resume in the right place, I have my problem. Specifically, after listening for about 1 minute the track abruptly ends and the next track starts. Incidentally, this does increment the track play count. Earlier I said "go away from the track"; some examples of that are: 1) playing another track, 2) syncing the iPod.
I assume this has something to do with the iPod's cache.
I have tried a few workarounds... for example, once I resume using the bookmark I have tried scrubbing backwards or forwards a few seconds or a few minutes in an attempt to reload the cache... to no avail.
If the bookmarked location is near the beginning of the track, approximately the first 15 minutes, it does not skip and will play through just fine. Also, every once in a while it will work later in the track; in this case, "every once in a while" is something like 1 out of 50 times.
Any ideas?
Thanks,
Chris
I have exactly the same problem.
I have a number of long audio recordings - ranging from 5 to 16 hours in length. All are encoded as M4A. When resuming any of them it seems completely random as to whether they will start at the bookmark, or will immediately skip and increment the play count.
I haven't found any reliable workaround to ensure they start.
iPod 5th gen 80G running 1.2.3. -
Large audio file: want to cut it into tracks. Please recommend utility software
I have a large audio file which needs to be cut into its different tracks, and also matched against the CDDB database. Can someone recommend a simple piece of software? In GarageBand it takes me hours. Thank you!
If you want to try out a free open source audio editor you could have a look at this: Audacity
I use Sound Studio as I record a lot from vinyl, it's very user friendly but it's a commercial program. If you are only doing occasional edits it's probably not an economical option: Sound Studio