Build Distribution - PXI version VERY large

I have an app that has a PC and PXI version. The PC version is just a CAN (communications) interface, while the PXI version allows control of cards in the PXI box (Switch, DMM, ...).
In previous versions of LabWindows/CVI I would not include the drivers, only the runtime engine.
In version 9.0 of CVI, I cannot remove the drivers that get pulled into the distribution build, so while the PC version is just under 15 MB, the PXI version is over 800 MB.
How can I get the drivers to not be included in the build (they are already installed on the PXI boxes)?
Or, how can I figure out exactly what the heck-fire is so big?

Hi Ken,
It is possible to remove all additional drivers from a distribution in CVI by going to Build->Distributions->Manage Distributions. You can then select the distribution and click "Edit". You can then select the "Drivers and Components" tab and uncheck all the components that are unnecessary, including the runtime engine and any drivers. This will ensure that your distribution only includes the files within your application.
Raj
National Instruments
Applications Engineer

Similar Messages

  • Re: building resultset using a very large sqlstatement  fails

    Well, it should be an error in JasperReports, because the maximum length of an SQL statement is 65536 characters.
    I do not know which package you are using, but do you think such a big statement makes sense?
    You should probably do some single statements, or maybe just select * ? ;-)

    Yes, the problem seems to be graphics related.
    I am not sure that upgrading the jvm will help.
    From what I can see, graphics is drawn outside the EDT.
    Doing that is bad and can cause random crashes.
    Reading the java tutorial on swing and threads would be a good idea.
    /robo

  • Hello, I am having issues opening very large files I created, one being 1,241,776 KB. I have PS 12.1, 64-bit version. I am not sure if the issues I am having are because of the PS version I have, and whether or not I have to upgrade?


    I think more likely, it's a memory / scratch disk issue.  1.25 gigabytes is a very big image file!!
    Nancy O.

  • Building secondary index fails for large number (25,000,000) of records

    I am inserting 25,000,000 records of the type:
    Key --> Data
    [long,String,long] --> [{long,long}, {String}]
    using setSecondaryBulkLoad(true) and then build two Secondary indexes on {long,long} and {String} of the data portion.
         private void buildSecondaryIndex(DataAccessLayer dataAccessLayer) {
              try {
                   SecondaryIndex<TDetailSecondaryKey, TDetailStringKey, TDetailStringRecord> secondaryIndex =
                        store.getSecondaryIndex(dataAccessLayer.getPrimaryIndex(), TDetailSecondaryKey.class, SECONDARY_KEY_NAME);
              } catch (DatabaseException e) {
                   throw new RuntimeException(e);
              }
         }
    It fails when I build the SecondaryIndex probably due to Java Heap Space Error. See the failure trace below.
    I do not face this problem when I deal with 250,000 records.
    Is there a workaround for this without having to change the JVM memory settings?
    Failure Trace:
    java.lang.RuntimeException: Environment invalid because of previous exception: com.sleepycat.je.RunRecoveryException
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:444)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.insertCellSetInOneTxn(TDetailStringDAOInsertTest.java:280)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.mainTest(TDetailStringDAOInsertTest.java:93)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99)
         at org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81)
         at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
         at org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75)
         at org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45)
         at org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:66)
         at org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35)
         at org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42)
         at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
         at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
         at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:38)
         at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
    Caused by: Environment invalid because of previous exception: com.sleepycat.je.RunRecoveryException
         at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:976)
         at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:584)
         at com.sleepycat.je.txn.Txn.undo(Txn.java:713)
         at com.sleepycat.je.txn.Txn.abortInternal(Txn.java:631)
         at com.sleepycat.je.txn.Txn.abort(Txn.java:599)
         at com.sleepycat.je.txn.AutoTxn.operationEnd(AutoTxn.java:36)
         at com.sleepycat.je.Environment.openDb(Environment.java:505)
         at com.sleepycat.je.Environment.openSecondaryDatabase(Environment.java:382)
         at com.sleepycat.persist.impl.Store.openSecondaryIndex(Store.java:684)
         at com.sleepycat.persist.impl.Store.getSecondaryIndex(Store.java:579)
         at com.sleepycat.persist.EntityStore.getSecondaryIndex(EntityStore.java:286)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:441)
         ... 22 more
    Caused by: java.lang.OutOfMemoryError: Java heap space
         at java.util.HashMap.resize(HashMap.java:462)
         at java.util.HashMap.addEntry(HashMap.java:755)
         at java.util.HashMap.put(HashMap.java:385)
         at java.util.HashSet.add(HashSet.java:200)
         at com.sleepycat.je.txn.Txn.addReadLock(Txn.java:964)
         at com.sleepycat.je.txn.Txn.addLock(Txn.java:952)
         at com.sleepycat.je.txn.LockManager.attemptLockInternal(LockManager.java:347)
         at com.sleepycat.je.txn.SyncedLockManager.attemptLock(SyncedLockManager.java:43)
         at com.sleepycat.je.txn.LockManager.lock(LockManager.java:178)
         at com.sleepycat.je.txn.Txn.lockInternal(Txn.java:295)
         at com.sleepycat.je.txn.Locker.nonBlockingLock(Locker.java:288)
         at com.sleepycat.je.dbi.CursorImpl.lockLNDeletedAllowed(CursorImpl.java:2357)
         at com.sleepycat.je.dbi.CursorImpl.lockLN(CursorImpl.java:2297)
         at com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2227)
         at com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1296)
         at com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1442)
         at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1368)
         at com.sleepycat.je.Cursor.retrieveNextAllowPhantoms(Cursor.java:1587)
         at com.sleepycat.je.Cursor.retrieveNext(Cursor.java:1397)
         at com.sleepycat.je.SecondaryDatabase.init(SecondaryDatabase.java:182)
         at com.sleepycat.je.SecondaryDatabase.initNew(SecondaryDatabase.java:118)
         at com.sleepycat.je.Environment.openDb(Environment.java:484)
         at com.sleepycat.je.Environment.openSecondaryDatabase(Environment.java:382)
         at com.sleepycat.persist.impl.Store.openSecondaryIndex(Store.java:684)
         at com.sleepycat.persist.impl.Store.getSecondaryIndex(Store.java:579)
         at com.sleepycat.persist.EntityStore.getSecondaryIndex(EntityStore.java:286)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:441)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.insertCellSetInOneTxn(TDetailStringDAOInsertTest.java:280)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.mainTest(TDetailStringDAOInsertTest.java:93)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

    1. Does the speed of building the secondary index depend on the type of the data in the key? Will having integers in the secondary key, as opposed to strings, be better?
    The byte size of the key and data is significant of course, but the data type is not.
    2. How much are we bound by memory? Let's assume my memory setting is fixed.
    a. I know that with the current memory settings, if I set txn on, I get a Java heap error. So will I be limited in the size of the secondary index, or will it just get really slow, swapping tree information from disk as it builds it?
    No. The out-of-memory error was caused by a very large transaction that holds locks. When using small transactions or non-transactional access, you won't have this problem. In general, like most databases, JE writes and reads information to/from disk as needed.
    b. Is there any other way of speeding up the build of the secondary database?
    No, other than general performance tuning, nothing I know of.
    c. Will it be more beneficial not to bulk load when the data size gets large, so that the secondary database is built incrementally?
    It's up to you whether you want to pay the price during an initial load or incrementally.
    d. Do you think it will help to partition the original database into smaller databases using some criteria, and thus build smaller trees?
    Why? You can use deferred write or non-transactional access to load any size database.
    The only weak point in this is that if we have to bulk load into one partition at some time, increasing its size, we may face the same problem again.
    Face what problem?
    --mark
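
    As a rough illustration of the non-transactional, deferred-write load mark describes, here is a minimal sketch using the same DPL options already mentioned in this thread (setSecondaryBulkLoad, getSecondaryIndex). The entity class Detail, the store name, and the environment directory are simplified stand-ins for the poster's TDetail* classes, not their actual code:

        import java.io.File;

        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;
        import com.sleepycat.persist.EntityStore;
        import com.sleepycat.persist.PrimaryIndex;
        import com.sleepycat.persist.SecondaryIndex;
        import com.sleepycat.persist.StoreConfig;
        import com.sleepycat.persist.model.Entity;
        import com.sleepycat.persist.model.PrimaryKey;
        import com.sleepycat.persist.model.Relationship;
        import com.sleepycat.persist.model.SecondaryKey;

        public class BulkLoadSketch {

            // Simplified stand-in for the TDetail* entity classes in the post.
            @Entity
            static class Detail {
                @PrimaryKey
                long id;

                @SecondaryKey(relate = Relationship.MANY_TO_ONE)
                String label;

                Detail() {}
                Detail(long id, String label) { this.id = id; this.label = label; }
            }

            public static void main(String[] args) throws Exception {
                File dir = new File("bulk-load-env");
                dir.mkdirs();

                // Non-transactional environment: no single huge transaction
                // accumulating read locks while the secondary is populated.
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                envConfig.setTransactional(false);
                Environment env = new Environment(dir, envConfig);

                // Deferred-write store; secondary population is postponed until
                // getSecondaryIndex() is called (secondary bulk load).
                StoreConfig storeConfig = new StoreConfig();
                storeConfig.setAllowCreate(true);
                storeConfig.setTransactional(false);
                storeConfig.setDeferredWrite(true);
                storeConfig.setSecondaryBulkLoad(true);
                EntityStore store = new EntityStore(env, "detailStore", storeConfig);

                PrimaryIndex<Long, Detail> primary =
                        store.getPrimaryIndex(Long.class, Detail.class);
                for (long i = 0; i < 1000000L; i++) {
                    primary.put(new Detail(i, "label-" + (i % 1000)));
                }
                store.sync(); // flush deferred writes before building the secondary

                // Populating the secondary here scans the primary with a
                // non-transactional cursor, so locks are not held for the whole scan.
                SecondaryIndex<String, Long, Detail> byLabel =
                        store.getSecondaryIndex(primary, String.class, "label");
                System.out.println("secondary entries: " + byLabel.count());

                store.close();
                env.close();
            }
        }

    With this configuration the data is durable only after sync(), so it trades durability during the load for memory, which matches the "pay the price during an initial load or incrementally" trade-off discussed above.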

  • Is there a way to split a very large 1 page pdf into letter size multiple page pdf?

    I often have very large single page pdfs that need to be printed onto letter size paper.  Usually I don't have access to the printer where I'm working so I have to send the file to someone for printing. 
    I have AXI pro, they don't. 
    I want to make sure the job is printed as I specify and most of the users are using Reader.  So I want to give the someone the pdf ready to print sized in legal.  This requires manipulation of the pdf that I don't seem to be able to figure out how to do.
    In older versions of Acrobat, I could print to a new pdf and designate the page size.  Acrobat would create the multipage pdf.  The newer versions don't allow this. 
    With OSX 10.8 & AXI you can't save, export, split a one page (68" x 16") document into multiple page letter size (16 pages) pdf.
    Perhaps this can be done by printing to eps and running through distiller again or something else, but I'm stumped at the moment.
    Any suggestions on how to attack this would be appreciated.
    Thanks.

    That's a tough one. Acrobat is not designed for tiling PDF files to create another PDF. That's really what you're asking. There is the option to PRINT to a PDF, and turn on the Poster feature. If you were in Windows, where there is a real Adobe PDF printer driver, you could probably use that feature. But for various reasons (too complicated to describe here), that was withdrawn on the Macintosh.
    If you have a copy of Adobe InDesign, and if you installed an Adobe PDF 9 PPD file (see description below), it could be done in a somewhat awkward way. InDesign allows you to place PDF files so you would need to make a page of the proper size and place your large PDF:
    Then after installing the Adobe PDF 9 PPD file, you could choose File > Print. Then choose to print a PostScript file to the Adobe PDF 9.0 PPD file. In the Setup panel, you'd choose a Letter size page. Then you'd choose the Tile option at the bottom and set the Overlap amount:
    Then you'd save the PostScript file and process through Distiller.
    My blog post below describes how to find and install the Adobe PDF 9.0 PPD file:
    http://indesignsecrets.com/creating-postscript-files-in-snow-leopard-for-older-print-workflows.php

  • Profile Performance and Memory shows very large 'VI Time' value

    When I run the Profile Performance and Memory tool on my project, I get very large numbers for VI Time (and Sub VIs Time and Total Time) for some VIs.  For example 1844674407370752.5.  I have selected only 'Timing statistics' and 'Timing details'.  Sometimes the numbers start with reasonable values, then when updating the display with the snapshot button they might get large and stay large.  Other VI Times remain reasonable.
    LabVIEW 2011 Version 11.0 (32-bit).  Windows 7.
    What gives?
     - les

    les,
    the number indicates some kind of rollover... So, do you have a VI where this happens all the time? Can you share it with us?
    thanks,
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • I need to sort very large Excel files and perform other operations.  How much faster would this be on a MacPro rather than my MacBook Pro i7, 2.6, 15R?

    I am a scientist and run my own business.  Money is tight.  I have some very large Excel files (~200MB) that I need to sort and perform logic operations on.  I currently use a MacBookPro (i7 core, 2.6GHz, 16GB 1600 MHz DDR3) and I am thinking about buying a multicore MacPro.  Some of the operations take half an hour to perform.  How much faster should I expect these operations to happen on a new MacPro?  Is there a significant speed advantage in the 6 core vs 4 core?  Practically speaking, what are the features I should look at and what is the speed bump I should expect if I go to 32GB or 64GB?  Related to this I am using a 32 bit version of Excel.  Is there a 64 bit spreadsheet that I can use on a Mac that has no limit on column and row size?

    Grant Bennet-Alder,
    It’s funny you mentioned using Activity Monitor.  I use it all the time to watch when a computation cycle is finished so I can avoid a crash.  I keep it up in the corner of my screen while I respond to email or work on a grant.  Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so.  As long as I leave Excel alone while it is working it will not crash.  I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application.  That is clearly a problem for this kind of work.  Is there any work around for this?   It seems like a 64-bit spreadsheet would help.  I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns.  I tried it out on my MacBook Pro but my files don’t fit.
    The hatter,
    This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products.  When I started computing this was the sort of thing computers were designed to do.  Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple?  Excel is only 64-bit on their machines.
    Many thanks to both of you for your quick and on point answers!

  • Unable to copy very large file to eSATA external HDD

    I am trying to copy a VMWare Fusion virtual machine, 57 GB, from my Macbook Pro's laptop hard drive to an external, eSATA hard drive, which is attached through an ExpressPort adapter. VMWare Fusion is not running and the external drive has lots of room. The disk utility finds no problems with either drive. I have excluded both the external disk and the folder on my laptop hard drive that contains my virtual machine from my Time Machihne backups. At about the 42 GB mark, an error message appears:
    The Finder cannot complete the operation because some data in "Windows1-Snapshot6.vmem" could not be read or written. (Error code -36)
    After I press OK to remove the dialog, the copy does not continue, and I cannot cancel the copy. I have to force-quit the Finder to make the copy dialog go away before I can attempt the copy again. I've tried rebooting between attempts, still no luck. I have tried a total of 4 times now, exact same result at the exact same place, 42 GB / 57 GB.
    Any ideas?

    Still no breakthrough from Apple. They're telling me to terminate the VMWare processes before attempting the copy, but had they actually read my description of the problem first, they would have known that I already tried this. Hopefully they'll continue to investigate.
    From a correspondence with Tim, a support representative at Apple:
    Hi Tim,
    Thank you for getting back to me, I got your message. Although it is true that at the time I ran the Capture Data program there were some VMWare-related processes running (PID's 105, 106, 107 and 108), this was not the case when the issue occurred earlier. After initially experiencing the problem, this possibility had occurred to me so I took the time to terminate all VMWare processes using the activity monitor before again attempting to copy the files, including the processes mentioned by your engineering department. I documented this in my posting to apple's forum as follows: (quote is from my post of Feb 19, 2008, 1:28pm, to the thread "Unable to copy very large file to eSATA external HDD", relevant section in >bold print<)
    Thanks for the suggestions. I have since tried this operation with 3 different drives through two different interface types. Two of the drives are identical - 3.5" 7200 RPM 1TB Western Digital WD10EACS (WD Caviar SE16) in external hard drive enclosures, and the other is a smaller USB2 100GB Western Digital WD1200U0170-001 external drive. I tried the two 1TB drives through eSATA - ExpressPort and also over USB2. I have tried the 100GB drive only over USB2 since that is the only interface on the drive. In all cases the result is the same. All 3 drives are formatted Mac OS Extended (Journaled).
    I know the files work on my laptop's hard drive. They are a VMWare virtual machine that works just fine when I use it every day. >Before attempting the copy, I shut down VMWare and terminated all VMWare processes using the Activity Monitor for good measure.< I have tried the copy operation both through the finder and through the Unix command prompt using the drive's mount point of /Volumes/jfinney-ext-3.
    Any more ideas?
    Furthermore, to prove that there were no file locks present on the affected files, I moved them to a different location on my laptop's HDD and renamed them, which would not have been possible if there had been interference from vmware-related processes. So, that's not it.
    Your suggested workaround, to compress the files before copying them to the external drive, may serve as a temporary workaround but it is not a solution. This VM will grow over time to the point where even the compressed version is larger than the 42GB maximum, and compressing and uncompressing the files will take me a lot of time for files of this size. Could you please continue to pursue this issue and identify the underlying cause?
    Thank you,
    - Jeremy

  • I support a very large school district currently running Firefox 3.6. What will happen at end of life date? We're in the middle of online testing this week.

    I run the test center for a very large school district with over 120k students. We've got a current deployed base of 54k client machines using Firefox 3.6. We haven't upgraded due to multiple reasons, the most important of which is removing the possibility of using In Private Browsing from the students, and dealing with plugin-updates for the non digital natives (read dumber than a bag of hammers users) that make up the majority of the client base.
    We're testing ESR now, but just found out that end of life for 3.6 is tomorrow, 4/24. We are currently in the middle of statewide online testing. The question is, what will happen tomorrow when the browser goes end of life. The ESR wiki mentions that "an update to the current version of Desktop Firefox will be offered through the Application Update Service"
    So the main question is, are my students/teachers going to get a popup telling them they have to update the browser if we have the updates already turned off? If so, can I turn it off remotely using SCCM, because it will cause all kinds of havoc.
    Please advise asap, and thanks in advance.

    We had to do some serious gymnastics to remove at least most of the ability to use IPB. We removed it from the gui, but unfortunately, if they know the hotkey, they can still bring it up. Security has some serious headaches with this, as by law they have to be able to track where students go, and going with private browsing removes their ability to do forensic work they're required to be able to do. Not a very well thought out feature from Mozilla in my opinion, but it is what it is. Successive versions have made it even more difficult to remove even the gui portion.
    We do plan to release ESR due to the aforementioned security issues, but testing has been slow.
    But thanks for the reply. I think we can turn off the updates if it isn't already done.

  • Please help!! "Can't open the illustration. This artwork contains a very large image that can not...

    Hi all, I subscribe to Illustrator CS6 16.0.0 and use it on Windows 7.
    A few days ago I was working with one file. I saved it successfully, and now when I try to open it an error message occurs: "Can't open the illustration. This artwork contains a very large image that can not be read on this version of AI. Please try with 64-bit version." and ALMOST ALL of the objects (vector) are missing as if they were deleted!
    It's kind of strange since I created the file with the same program and everything was working properly before.
    Please Please advice further steps for recovering my file.

    Thank you so much! the file is recovered (as well as my emotional state )
    The finding of the day - apparently I have two versions of AI in my PC!

  • How can NI FBUS Monitor display very large recorded files

    NI FBUS Monitor version 3.0.1 outputs an error message "Out of memory", if I try to load a large recorded file of size 272 MB. Is there any combination of operating system (possible Vista32 or Vista64) and/or physical memory size, where NI FBUS Monitor can display such large recordings ? Are there any patches or workarounds or tools to display very large recorded files?

    Hi,
    NI-FBUS Monitor does not set a limitation on the maximum record file size. The physical memory size in the system is one of the most important factors that affect the loading of a large record file. Monitor will try to load the entire file into memory during the file open operation.
    272 MB is a really large file size. To open the file, your system must have sufficient physical memory available; otherwise an "Out of memory" error will occur.
    I would recommend you do not use Monitor to open a file larger than 100 MB. Loading too large a file will consume system memory quickly and decrease performance.
    Feilian (Vince) Shen

  • Very large bdump file sizes, how to solve?

    Hi gurus,
    I keep finding that my disk space is not enough. After checking, it is the oraclexe/admin/bdump directory: it currently takes 3.2 GB, while my database is very small, holding only about 10 MB of data.
    It didn't happen before; it only started recently.
    I don't know why it happened. I have deleted some old files in that folder, but today I found it is still very large compared to my database.
    I am running an APEX application with XE. The application works well, we didn't see anything wrong, but the bdump files are very big.
    Any tip to solve this? Thanks.
    Here is my alert_xe.log file content:
    Thu Jun 03 16:15:43 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5600.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:15:48 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=5452
    Thu Jun 03 16:15:52 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:16:16 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:20:54 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:21:50 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:25:56 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:26:18 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:30:58 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:31:19 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:36:00 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:36:46 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=1312
    Thu Jun 03 16:36:49 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:37:13 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:41:51 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:42:13 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:46:54 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:47:17 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:51:57 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:52:35 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:56:58 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:57:10 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=3428
    Thu Jun 03 16:57:13 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:57:52 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:02:16 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:02:48 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:07:18 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:08:01 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:12:18 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:12:41 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:17:21 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:17:34 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=5912
    Thu Jun 03 17:17:37 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:18:01 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:22:37 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:23:01 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:27:39 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:28:02 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:32:42 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:33:07 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:37:45 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:38:40 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=1660
    Thu Jun 03 17:38:43 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:39:17 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:42:54 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=31, OS id=6116
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174259', 'KUPC$S_1_20100603174259', 0);
    Thu Jun 03 17:43:38 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=32, OS id=2792
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174338', 'KUPC$S_1_20100603174338', 0);
    Thu Jun 03 17:43:44 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:44:06 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:44:47 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=33, OS id=3492
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174448', 'KUPC$S_1_20100603174448', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=34, OS id=748
    to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM');
    Thu Jun 03 17:45:28 2010
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 5684K exceeds notification threshold (2048K)
    KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
    Thu Jun 03 17:45:28 2010
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 5681K exceeds notification threshold (2048K)
    Details in trace file c:\oraclexe\app\oracle\admin\xe\bdump\xe_dw01_748.trc
    KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
    Thu Jun 03 17:48:47 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:49:17 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:53:49 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:54:28 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
    Fri Jun 04 07:46:55 2010
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Windows XP Version V5.1 Service Pack 3
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:1653M/2047M, Ph+PgF:4706M/4958M, VA:1944M/2047M
    Fri Jun 04 07:46:55 2010
    Starting ORACLE instance (normal)
    Fri Jun 04 07:47:06 2010
    LICENSE_MAX_SESSION = 100
    LICENSE_SESSIONS_WARNING = 80
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =33
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
    processes = 200
    sessions = 300
    license_max_sessions = 100
    license_sessions_warning = 80
    sga_max_size = 838860800
    __shared_pool_size = 260046848
    shared_pool_size = 209715200
    __large_pool_size = 25165824
    __java_pool_size = 4194304
    __streams_pool_size = 8388608
    spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
    sga_target = 734003200
    control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
    __db_cache_size = 432013312
    compatible = 10.2.0.1.0
    db_recovery_file_dest = D:\
    db_recovery_file_dest_size= 5368709120
    undo_management = AUTO
    undo_tablespace = UNDO
    remote_login_passwordfile= EXCLUSIVE
    dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
    shared_servers = 10
    job_queue_processes = 1000
    audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
    background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
    user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
    core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
    db_name = XE
    open_cursors = 300
    os_authent_prefix =
    pga_aggregate_target = 209715200
    PMON started with pid=2, OS id=3044
    MMAN started with pid=4, OS id=3052
    DBW0 started with pid=5, OS id=3196
    LGWR started with pid=6, OS id=3200
    CKPT started with pid=7, OS id=3204
    SMON started with pid=8, OS id=3208
    RECO started with pid=9, OS id=3212
    CJQ0 started with pid=10, OS id=3216
    MMON started with pid=11, OS id=3220
    MMNL started with pid=12, OS id=3224
    Fri Jun 04 07:47:31 2010
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 10 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    PSP0 started with pid=3, OS id=3048
    Fri Jun 04 07:47:41 2010
    alter database mount exclusive
    Fri Jun 04 07:47:54 2010
    Setting recovery target incarnation to 2
    Fri Jun 04 07:47:56 2010
    Successful mount of redo thread 1, with mount id 2601933156
    Fri Jun 04 07:47:56 2010
    Database mounted in Exclusive Mode
    Completed: alter database mount exclusive
    Fri Jun 04 07:47:57 2010
    alter database open
    Fri Jun 04 07:48:00 2010
    Beginning crash recovery of 1 threads
    Fri Jun 04 07:48:01 2010
    Started redo scan
    Fri Jun 04 07:48:03 2010
    Completed redo scan
    16441 redo blocks read, 442 data blocks need recovery
    Fri Jun 04 07:48:04 2010
    Started redo application at
    Thread 1: logseq 1575, block 48102
    Fri Jun 04 07:48:05 2010
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 1575 Reading mem 0
    Mem# 0 errs 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
    Fri Jun 04 07:48:07 2010
    Completed redo application
    Fri Jun 04 07:48:07 2010
    Completed crash recovery at
    Thread 1: logseq 1575, block 64543, scn 27413940
    442 data blocks read, 442 data blocks written, 16441 redo blocks read
    Fri Jun 04 07:48:09 2010
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=25, OS id=3288
    ARC1 started with pid=26, OS id=3292
    Fri Jun 04 07:48:10 2010
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    Thread 1 advanced to log sequence 1576
    Thread 1 opened at log sequence 1576
    Current log# 3 seq# 1576 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
    Successful open of redo thread 1
    Fri Jun 04 07:48:13 2010
    ARC0: STARTING ARCH PROCESSES
    Fri Jun 04 07:48:13 2010
    ARC1: Becoming the 'no FAL' ARCH
    Fri Jun 04 07:48:13 2010
    ARC1: Becoming the 'no SRL' ARCH
    Fri Jun 04 07:48:13 2010
    ARC2: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    ARC0: Becoming the heartbeat ARCH
    Fri Jun 04 07:48:13 2010
    SMON: enabling cache recovery
    ARC2 started with pid=27, OS id=3580
    Fri Jun 04 07:48:17 2010
    db_recovery_file_dest_size of 5120 MB is 49.00% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Fri Jun 04 07:48:31 2010
    Successfully onlined Undo Tablespace 1.
    Fri Jun 04 07:48:31 2010
    SMON: enabling tx recovery
    Fri Jun 04 07:48:31 2010
    Database Characterset is AL32UTF8
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=28, OS id=2412
    Fri Jun 04 07:48:51 2010
    Completed: alter database open
    Fri Jun 04 07:49:22 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:49:32 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:49:52 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:49:57 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:54:10 2010
    Shutting down archive processes
    Fri Jun 04 07:54:15 2010
    ARCH shutting down
    ARC2: Archival stopped
    Fri Jun 04 07:54:53 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:55:08 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:56:25 2010
    Starting control autobackup
    Fri Jun 04 07:56:27 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
    Fri Jun 04 07:56:28 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_21
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_20
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_17
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_16
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_14
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_12
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_09
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_07
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_06
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_03
    ORA-27093: unable to delete directory
    Fri Jun 04 07:56:29 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_21
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_20
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_17
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_16
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_14
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_12
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_09
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_07
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_06
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_03
    ORA-27093: unable to delete directory
    Control autobackup written to DISK device
    handle 'D:\XE\AUTOBACKUP\2010_06_04\O1_MF_S_720777385_60JJ9BNZ_.BKP'
    Fri Jun 04 07:56:38 2010
    Thread 1 advanced to log sequence 1577
    Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
    Fri Jun 04 07:56:56 2010
    Thread 1 cannot allocate new log, sequence 1578
    Checkpoint not complete
    Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
    Thread 1 advanced to log sequence 1578
    Current log# 3 seq# 1578 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
    Fri Jun 04 07:57:04 2010
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 2208K exceeds notification threshold (2048K)
    KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
    Fri Jun 04 07:59:54 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:59:58 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []

    Hi Gurus,
    There's an ORA-00600 error in the big .trc files, as below. This is only part of the file, which is more than 45 MB in size:
    xe_mmon_4424.trc
    Dump file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_4424.trc
    Fri Jun 04 17:03:22 2010
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Windows XP Version V5.1 Service Pack 3
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:992M/2047M, Ph+PgF:3422M/4958M, VA:1011M/2047M
    Instance name: xe
    Redo thread mounted by this instance: 1
    Oracle process number: 11
    Windows thread id: 4424, image: ORACLE.EXE (MMON)
    *** SERVICE NAME:(SYS$BACKGROUND) 2010-06-04 17:03:22.265
    *** SESSION ID:(284.23) 2010-06-04 17:03:22.265
    *** 2010-06-04 17:03:22.265
    ksedmp: internal or fatal error
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Current SQL statement for this session:
    BEGIN :success := dbms_ha_alerts_prvt.check_ha_resources; END;
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    41982E80 418 package body SYS.DBMS_HA_ALERTS_PRVT
    41982E80 552 package body SYS.DBMS_HA_ALERTS_PRVT
    41982E80 305 package body SYS.DBMS_HA_ALERTS_PRVT
    419501A0 1 anonymous block
    ----- Call Stack Trace -----
    calling call entry argument values in hex
    location type point (? means dubious value)
    ksedst+38           CALLrel  ksedst1+0 0 1
    ksedmp+898          CALLrel  ksedst+0 0
    ksfdmp+14           CALLrel  ksedmp+0 3
    _kgerinv+140         CALLreg  00000000             8EF0A38 3
    kgeasnmierr+19      CALLrel  kgerinv+0 8EF0A38 6610020 3672F70 0
    6538808
    kjhnpost_ha_alert CALLrel _kgeasnmierr+0       8EF0A38 6610020 3672F70 0
    0+2909
    __PGOSF57__kjhn_pos CALLrel kjhnpost_ha_alert 88 B21C4D0 B21C4D8 B21C4E0
    t_ha_alert_plsql+43 0+0 B21C4E8 B21C4F0 B21C4F8
    8 B21C500 B21C50C 0 FFFFFFFF 0
    0 0 6
    _spefcmpa+415        CALLreg  00000000            
    spefmccallstd+147   CALLrel  spefcmpa+0 65395B8 16 B21C5AC 653906C 0
    pextproc+58         CALLrel  spefmccallstd+0 6539874 6539760 6539628
    65395B8 0
    __PGOSF302__peftrus CALLrel _pextproc+0         
    ted+115
    _psdexsp+192         CALLreg  00000000             6539874
    _rpiswu2+426         CALLreg  00000000             6539510
    psdextp+567         CALLrel  rpiswu2+0 41543288 0 65394F0 2 6539528
    0 65394D0 0 2CD9E68 0 6539510
    0
    _pefccal+452         CALLreg  00000000            
    pefcal+174          CALLrel  pefccal+0 6539874
    pevmFCAL+128 CALLrel _pefcal+0           
    pfrinstrFCAL+55 CALLrel pevmFCAL+0 AF74F48 3DFB92B8
    pfrrunno_tool+56 CALL??? 00000000 AF74F48 3DFBB728 AF74F84
    pfrrun+781          CALLrel  pfrrun_no_tool+0 AF74F48 3DFBB28C AF74F84
    plsqlrun+738 CALLrel _pfrrun+0            AF74F48
    peicnt+247          CALLrel  plsql_run+0 AF74F48 1 0
    kkxexe+413          CALLrel  peicnt+0
    opiexe+5529         CALLrel  kkxexe+0 AF7737C
    kpoal8+2165         CALLrel  opiexe+0 49 3 653A4FC
    _opiodr+1099         CALLreg  00000000             5E 0 653CBAC
    kpoodr+483          CALLrel  opiodr+0
    _xupirtrc+1434       CALLreg  00000000             67384BC 5E 653CBAC 0 653CCBC
    upirtrc+61          CALLrel  xupirtrc+0 67384BC 5E 653CBAC 653CCBC
    653D990 60FEF8B8 653E194
    6736CD8 1 0 0
    kpurcsc+100         CALLrel  upirtrc+0 67384BC 5E 653CBAC 653CCBC
    653D990 60FEF8B8 653E194
    6736CD8 1 0 0
    kpuexecv8+2815      CALLrel  kpurcsc+0
    kpuexec+2106        CALLrel  kpuexecv8+0 673AE10 6736C4C 6736CD8 0 0
    653EDE8
    OCIStmtExecute+29   CALLrel  kpuexec+0 673AE10 6736C4C 673AEC4 1 0 0
    0 0 0
    kjhnmmon_action+5 CALLrel _OCIStmtExecute+0    673AE10 6736C4C 673AEC4 1 0 0
    26 0 0
    kjhncheck_ha_reso CALLrel kjhnmmon_action+0 653EFCC 3E
    urces+140
    kebmronce_dispatc CALL??? 00000000
    her+630
    kebmronce_execute CALLrel kebmronce_dispatc
    +12 her+0
    _ksbcti+788          CALLreg  00000000             0 0
    ksbabs+659          CALLrel  ksbcti+0
    kebmmmon_main+386 CALLrel _ksbabs+0            3C5DCB8
    _ksbrdp+747          CALLreg  00000000             3C5DCB8
    opirip+674          CALLrel  ksbrdp+0
    opidrv+857          CALLrel  opirip+0 32 4 653FEBC
    sou2o+45            CALLrel  opidrv+0 32 4 653FEBC
    opimaireal+227 CALLrel _sou2o+0             653FEB0 32 4 653FEBC
    opimai+92           CALLrel  opimai_real+0 3 653FEE8
    BackgroundThreadSt  CALLrel  opimai+0
    art@4+422
    7C80B726 CALLreg 00000000
    --------------------- Binary Stack Dump ---------------------
    ========== FRAME [1] (_ksedst+38 -> _ksedst1+0) ==========
    Dump of memory from 0x065386DC to 0x065386EC
    65386D0 065386EC [..S.]
    65386E0 0040467B 00000000 00000001 [{F@.........]
    ========== FRAME [2] (_ksedmp+898 -> _ksedst+0) ==========
    Dump of memory from 0x065386EC to 0x065387AC
    65386E0 065387AC [..S.]
    65386F0 00403073 00000000 53532E49 20464658 [s0@.....I.SSXFF ]
    6538700 54204D41 0000525A 00000000 08EF0EC0 [AM TZR..........]
    6538710 6072D95A 08EF0EC5 03672F70 00000017 [Z.r`....p/g.....]
    6538720 00000000 00000000 00000000 00000000 [................]
    Repeat 1 times
    6538740 00000000 00000000 00000000 00000017 [................]
    6538750 08EF0B3C 08EF0B34 03672F70 08F017F0 [<...4...p/g.....]
    6538760 603AA0D3 065387A8 00000001 00000000 [..:`..S.........]
    6538770 00000000 00000000 00000001 00000000 [................]
    6538780 00000000 08EF0A38 06610020 031E1D20 [....8... .a. ...]
    6538790 00000000 065386F8 08EF0A38 06538D38 [......S.8...8.S.]
    65387A0 0265187C 031C8860 FFFFFFFF [|.e.`.......]
    ========== FRAME [3] (_ksfdmp+14 -> _ksedmp+0) ==========
    And the file keeps increasing; though I have deleted a lot of it, here is what I noted:
    time       size
    15:23 pm   795 MB
    16:55 pm   959 MB
    17:01 pm   970 MB
    17:19 pm   990 MB
    Any solution for that?
    Thanks!!

  • Keeping two very large datastores in sync

    I'm looking at options for keeping a very large (potentially 400GB) TimesTen (11.2.2.5) datastore in sync between a Production server and a [warm] Standby.
    Replication has been discounted because it doesn't support compressed tables, nor the kinds of tables our closed-code application creates (tables without non-nullable primary keys).
    I've done some testing with smaller datastores to get indicative numbers, and a 7.4GB datastore (according to dssize) resulted in a 35GB backup set (using ttBackup -type fileIncrOrFull). Is such a large increase in volume expected, and would it extrapolate up to a 400GB datastore (a 2TB backup set?)?
    I've seen that there are incremental backups, but to keep our standby warm we'd be restoring these backups, and from what I've read and tested only a ttDestroy/ttRestore is possible, i.e. a complete restore of the whole DSN each time, which is time-consuming. Am I missing a smarter way of doing this?
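    For reference, the refresh we've been testing on the standby side is roughly the following (the DSN name and backup directory are placeholders, and the exact ttRestore options should be checked against the documentation for our release):

        import subprocess

        DSN = "standby_dsn"           # placeholder standby DSN
        BACKUP_DIR = "/backups/tt"    # placeholder: where the backup set has been copied

        # Drop the existing standby database, then rebuild it from the backup set.
        subprocess.run(["ttDestroy", DSN], check=True)
        subprocess.run(["ttRestore", "-dir", BACKUP_DIR, DSN], check=True)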
    Other than building our application to keep the two datastores in sync, are there any other tricks we can use to efficiently keep the two datastores in sync?
    Random last question - I see "datastore" and "database" (and to an extent, "DSN") used apparently interchangeably - are they the same thing in TimesTen?
    Update: the 35GB compresses down with 7za to just over 2.2GB, but takes 5.5 hours to do so. If I take a standalone fileFull backup it is just 7.4GB on disk, and completes faster too.
    thanks,
    rmoff.

    This must be an Exalytics system, right? I ask this because compressed tables are not licensed for use outside of an Exalytics system...
    As you note, currently replication is not possible in an Exalytics environment, but that is likely to change in the future and then it will definitely be the preferred mechanism for this. There is not really any other viable way to do this other than through the application.
    With regard to your specific questions:
    1.   A backup consists primarily of the most recent checkpoint file plus all log files/records that are newer than that file. So, to minimise the size of a full backup, ensure that a checkpoint occurs (for example, 'call ttCkpt' from a ttIsql session) immediately prior to starting the backup; a rough scripted sketch of this follows the list below.
    2.   No, only complete restore is possible from an incremental backup set. Also note that due to the large amount of rollforward needed, restoring a large incremental backup set may take quite a long time. Backup and restore are not really intended for this purpose.
    3.   If you cannot use replication then some kind of application level sync is your only option.
    4.   Datastore and database mean the same thing - a physical TimesTen database. We prefer the term database nowadays; datastore is a legacy term. A DSN is a different thing (Data Source Name) and should not be used interchangeably with datastore/database. A DSN is a logical entity that defines the attributes for a database and how to connect to it. It is not the same as a database.
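    For point 1, a rough scripted sketch of the checkpoint-then-backup sequence (the DSN name, backup directory and exact utility flags are placeholders; check ttIsql and ttBackup in the reference documentation for your release):

        import subprocess

        DSN = "prod_dsn"              # placeholder DSN name
        BACKUP_DIR = "/backups/tt"    # placeholder backup directory

        def run(cmd):
            # Run a command and raise if it exits with a non-zero status.
            print("running:", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Force a checkpoint first, so the backup is dominated by the checkpoint
        # file rather than a long tail of transaction log files (point 1 above).
        run(["ttIsql", "-e", "call ttCkpt; quit;", DSN])

        # Then take the full file backup straight away.
        run(["ttBackup", "-type", "fileFull", "-dir", BACKUP_DIR, DSN])

    Run on the production host, this keeps the tail of log files short, so the backup set stays close to the size of the checkpoint file.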
    Chris

  • LOAD UNIT OF COMPONENT IS VERY LARGE (GENERATION LIMIT)

    We are experiencing this message when compiling an ABAP Web Dynpro application: "LOAD UNIT OF COMPONENT IS VERY LARGE (GENERATION LIMIT)".
    When checking the Generation Limits in the context menu, I have determined that our size of Generated Load in bytes is too big.
    The documented recommendation is to restructure the program. I am not clear what this means or how it would reduce the Generated Load in bytes. Any ideas would be appreciated.

    > How should we reorganize the application and at the same time ensure smooth and user-friendly handling?
    > We only want to use one Explorer window.
    Using multiple components doesn't mean that the user will notice any difference. Component usages can be embedded within one another; using the ALV, for instance, is a perfect example of a component usage.
    >- Even the SAP reference application "LORD_MAINTAIN_COMP" (37 views) is way too big, according to the recommendation. Is there a better example from SAP?
    I wouldn't consider LORD_MAINTAIN_COMP a reference application. It was one of the very first WDAs shipped by SAP, before we learned some of these lessons ourselves. Have a look at the guidelines for the Floorplan Manager if you are on 7.01; the FPM provides a very good (and well used by SAP) framework for building large-scale WDA applications.
    >- How could a complex transaction be built and at the same time stay in the green limit area (< 500k)?
    As described, using multiple components avoids the generation limit and is recommended for large-scale applications.
    >- What at all is the problem in loading 2 Megabytes of data into memory? Could you please describe the technical background in more detail?
    It has nothing to do with loading 2 MB into memory. It has to do with the generation load size in the VM for the generated class that represents your WDA component. The ABAP compiler and VM have limits (like all VMs and compilers) on total load size and the maximum size for operations and jumps. Generated code can be extremely verbose. Under normal conditions, these load limits are almost never reached in human-written classes.
    In 7.02 we backported the 7.20 ABAP compiler, which, in addition to being rewritten to support multi-pass compilation, also increases some of the load limits. However, the general recommendation about componentization still stands. Componentization of your WDA application improves maintainability and reusability over time. My personal rule is that if you are getting between 10 and 12 views in your component, it is time to think about breaking it out into multiple components.
    >- Is there a maximum load size, which would lead to an error (reject of generation)?
    Yes, there is. However, the workbench throws warnings well in advance, and at some point it won't even let you add more views to a component. If you continue to add content to the existing views, though, you can reach a point where generation fails.

  • Very Large Snapshot

    Hi
    We are using Hyper-V on Server 2012, we have 4 nodes in a cluster and the machines are kept on clustered storage.
    During some maintenance on one of our SQL servers I noticed that the VM had been snapshotted way back in January 2014.  This has led to some very large snapshots.
    The VM has 3 disks - an OS disk (running Server 2008R2 with SQL Server 2008R2), a disk for the DBs and a disk for the SQL transaction logs.  ALL disks have large (70GB+) snapshots against them.
    I don't know why this machine was snapshotted - I don't remember ever doing it myself (and, from past experience with VMware snapshots, I certainly steer well clear of these things).
    My main concern is that not only are they large snapshots, but the VM is a busy SQL server. We migrated from VMware a year ago and I have never used Hyper-V snapshots; a bit of research turns up what I would expect (don't use them in production unless necessary, and don't use them for DB/AD/Exchange servers, etc.).
    So my question to the community is what course of action should I take?  Delete the snapshots and hope for the best?
    Our DBs are backed up every 15mins and the OS is replicated offsite so if it went wrong then recovery is possible (but not painless!)
    Does anyone have any experience of such a situation?
    Any advice greatly appreciated!

    I have very little information on the client's setup beyond that it is Server 2012, and he has an unknown version of SQL Server.
    The VM has 3 what, VHDs or bare disks?
    I need more facts.
    No, you don't, the problem was quite clearly stated in the original post. He has some large snapshots, doesn't know who took them or why, and wants to know if it is safe to delete them. A very simple question and one which has already been answered by Sam.
    It is obvious from your responses that you simply don't understand what this thread is all about. Providing you with more information is pointless. It would just be more information that you don't understand.
    I assumed he wanted new ones
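    Back to the original question: before deleting anything, it is worth seeing exactly which checkpoints exist and how old they are. A minimal sketch, assuming the Hyper-V PowerShell module is present on the host and Python is available (the VM name is a placeholder):

        import json
        import subprocess

        VM_NAME = "SQL01"   # placeholder: substitute the real VM name

        # Ask PowerShell (Hyper-V module) for the VM's checkpoints as JSON.
        ps_cmd = (
            f"Get-VMSnapshot -VMName '{VM_NAME}' | "
            "Select-Object Name, CreationTime | ConvertTo-Json"
        )
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command", ps_cmd],
            capture_output=True, text=True, check=True,
        )

        snapshots = json.loads(result.stdout) if result.stdout.strip() else []
        if isinstance(snapshots, dict):
            # A single checkpoint comes back as one object rather than a list.
            snapshots = [snapshots]

        for snap in snapshots:
            print(snap["Name"], snap["CreationTime"])

    Deleting a checkpoint merges its AVHD/AVHDX differencing files back into the parent disks, which generates heavy disk I/O while the merge runs, so once you are confident nothing needs the checkpoints, schedule the deletion for a quiet period.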
