"rdmgr -n" command hangs after purging database

Hi,
Version: Sun ONE Portal 6.0
I did the following steps:
1. Started the robot process.
2. Ran "rdmgr -n". It returned a count of 25.
3. Purged the database without stopping the robot.
4. Ran "rdmgr -n" again; it hangs.
I stopped the robot process, restarted the robot, and restarted the portal server, but to no avail. It still hangs.
I also tried the Database - Analysis option from the admin console, but that hangs as well.
The problem is resolved only by reinstalling Portal Server 6.0.
Any clue what went wrong? Or should I install Portal Server 6.1 instead of 6.0, since I have seen that some bugs related to "purge" were fixed in that version?
Any help will be appreciated.

I forgot to mention that I also ran the Database/Expire option from the admin console, and there is a bug related to this in 6.0 whose fix was integrated into 6.1.
Can this bug make the rdmgr command hang in 6.0?
Any comments?
From 6.1 release notes:
4812074 - Resource descriptions are not expired cleanly from the main database. The rdmgr -E command leaves the resource descriptions in the index but not in the main database.

Similar Messages

  • Check Writer Hangs after 10G database upgrade

    Hi Gurus,
    Can anybody let me know how I can trace the Check Writer concurrent process?
    It seems to hang after we upgraded the database to the 10g version.
    Thanks,
    S.

    The log file shows
    /u31/oracle/fimsprodappl/pay/11.5.0/bin/PYUGEN
    Program was terminated by signal 11
    Should I do something with PYUGEN (rebuild/relink) after the 10g database upgrade?
    We are on HP-UX 11 (PA-RISC) 64 bit.
    S
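    For what it's worth, relinking PYUGEN is commonly done with adrelink; a hedged sketch, assuming an 11.5.x install with the applications environment sourced (paths and ownership are site-specific):

    ```shell
    # Run as the applications (applmgr) owner with the APPL_TOP environment set.
    # Force-relink the PYUGEN executable in the Payroll (pay) product:
    cd $AD_TOP/bin
    adrelink.sh force=y "pay PYUGEN"
    ```

    If the relink completes cleanly, retry the Check Writer request before troubleshooting further.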

  • Refreshing mview is hanging after a database level gather stats

    hi guys,
    Can you please help me identify the root cause of this issue?
    The scenario is this:
    1. We have a scheduled Unix job that refreshes an mview every day, Tuesday to Saturday.
    2. Database maintenance is done on weekends (Sundays), gathering stats at the database level.
    3. The refresh-mview Unix job hangs every Tuesday.
    4. Our workaround is to kill the job, request a schema-level gather stats, then re-run the job. And voila, the mview refresh then succeeds.
    5. For the rest of the week, through Saturday, the mview refresh has no problems.
    We already identified during testing that the refresh fails precisely after we gather stats at the database level; after a schema-level gather stats, the refresh succeeds.
    Can you please help me understand why the mview refresh fails after we gather stats at the database level?
    We are using Oracle 9i.
    The creation of the mview goes something like this:
    create materialized view hanging_mview
    build deferred
    refresh on demand
    disable query rewrite
    Appreciate all your help.
    Thanks a lot in advance.

    You know Tuesday's MV refresh "hangs".
    You don't know why it does not complete.
    You desire solution so that it does complete.
    You don't really know what it is doing on Tuesdays, but hope an automagical solution will be offered here.
    The ONLY way I know how to possibly get some clues is SQL_TRACE.
    Only after knowing where time is being spent will you have a chance to take corrective action.
    The ball is in your court.
    Enjoy your mystery!
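    To act on that advice, a minimal SQL_TRACE run might look like the sketch below, assuming the refresh can be launched manually (the connect string is a placeholder; the mview name is taken from the post):

    ```shell
    sqlplus user/password <<'SQL'
    ALTER SESSION SET timed_statistics = TRUE;
    ALTER SESSION SET sql_trace = TRUE;
    EXEC DBMS_MVIEW.REFRESH('HANGING_MVIEW');
    ALTER SESSION SET sql_trace = FALSE;
    SQL
    # Format the raw trace file written to user_dump_dest:
    tkprof <tracefile>.trc refresh_report.txt sys=no sort=exeela
    ```

    The tkprof report shows where the elapsed time goes, which is exactly the clue the reply is asking for.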

  • Database Hanging after Alter Database Suspend Command

    I issued the following command for testing:
    Alter database suspend
    It has been hanging for more than 2 hours. I suspect there are some users connected to it.
    How do I log in and resume the DB for the users?
    Can I ask them to disconnect from the DB?

    And your version number is?
    You suspect there are connected users?
    Is there some reason you can't query gv$session?
    Can you ask them to disconnect from the database?
    I don't know ... can you? <g>
    I know I certainly would.
    What is the business case that you are doing the suspend?
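    For reference, a hedged sketch of checking and resuming from a sysdba session (ALTER SYSTEM RESUME is the documented counterpart of SUSPEND; the queries are generic):

    ```shell
    sqlplus / as sysdba <<'SQL'
    -- Confirm the instance really is suspended
    SELECT database_status FROM v$instance;
    -- See who is connected (RAC-aware view)
    SELECT inst_id, username, status FROM gv$session WHERE username IS NOT NULL;
    -- Resume I/O for all sessions
    ALTER SYSTEM RESUME;
    SQL
    ```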

  • Database hangs after retrieving some records....

    hi,
    I created a two-level B+-tree index: for the first level I use the logical database name as the key; for the second-level B+-tree I use another field in my data. I retrieve records based on the logical database name. I am using C++ to implement my index.
    In the following code I retrieve records from the database. The program behaves differently for different logical database names. For example, my database has around 4 lakh (400,000) records with logical database name 'A' and around 1 lakh (100,000) records with logical database name 'B'. The following code displays all the 'B' records, but the program hangs after retrieving some of the 'A' records.
    I'm using PAGE_SIZE=8192 and numBuffers=2048, running on Fedora Core 3.
    DbEnv myEnv(0);
    myEnv.set_cachesize(0, PAGE_SIZE * numBuffers, 0);
    myEnv.set_data_dir("/home/raviov/new/database");
    try {
        myEnv.open("./", DB_CREATE | DB_INIT_MPOOL, 0);
    } catch (DbException &e) {
        cerr << "Exception occurred: " << e.what() << endl;
        exit(1);
    }
    db = new Db(&myEnv, 0);
    db->set_pagesize(PAGE_SIZE);
    db->set_bt_compare(compare_int);
    dbname = itoa(p);
    try {
        db->open(NULL, "treedb.db", dbname, DB_BTREE, 0, 0);
    } catch (DbException &e) {
        cerr << "Failed to open DB object" << endl;
        exit(1);
    }
    db->cursor(NULL, &cursorp, 0);
    key = new Dbt();
    data = new Dbt();
    // Note the parentheses: "ret = (...) == 0" would store the comparison
    // result in ret, not the Berkeley DB return code.
    while ((ret = cursorp->get(key, data, DB_NEXT)) == 0) {
        j = *((int *)key->get_data());
        q = ((struct tuple *)data->get_data())->startPos;
        r = ((struct tuple *)data->get_data())->endPos;
        s = ((struct tuple *)data->get_data())->level;
        cout << "position : " << j << "\t";
        cout << "start : " << q << "\t";
        cout << "end : " << r << "\t";
        cout << "level : " << s << "\n";
    }
    if (ret == DB_NOTFOUND) {
        cout << "no records";
        exit(1);
    }
    if (cursorp != NULL)
        cursorp->close();
    try {
        db->close(0);
    } catch (DbException &e) {
        cerr << "Failed to close DB object" << endl;
        exit(1);
    }
    myEnv.close(0);

    Hi Andrei,
    Thank you for the reply. I'm not using any secondary indices, subdatabases, or threads.
    The following code builds the index:
    #include <stdio.h>
    #include <db_cxx.h>
    #include <iostream.h>
    #include <db.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    int PAGE_SIZE = 8192;
    int numBuffers = 2048;
    char *itoa(const int x)
    {
        char buf[100];
        snprintf(buf, sizeof(buf), "%d", x);
        return strdup(buf);
    }
    int compare_int(Db *dbp, const Dbt *a, const Dbt *b)
    {
        int ai;
        int bi;
        memcpy(&ai, a->get_data(), sizeof(int));
        memcpy(&bi, b->get_data(), sizeof(int));
        return (ai - bi);
    }
    struct tuple
    {
        int startPos;
        int endPos;
        int level;
    };
    int main()
    {
        FILE *fp;
        int i, j, k, l, m, n, total = 0;
        char dbname[500], filename[500], str[500];
        tuple t;
        Db *db;
        Dbt *key, *data;
        DbEnv myEnv(0);
        myEnv.set_cachesize(0, PAGE_SIZE * numBuffers, 0);
        try {
            myEnv.open("/home/raviov/Desktop/example", DB_CREATE | DB_INIT_MPOOL, 0);
        } catch (DbException &e) {
            cerr << "Exception occurred: " << e.what() << endl;
            exit(1);
        }
        for (n = 0; n <= 84; n++) {
            db = new Db(&myEnv, 0);     // Instantiate the Db object
            db->set_bt_compare(compare_int);
            db->set_pagesize(PAGE_SIZE);
            strcpy(filename, "/home/raviov/Desktop/GTCReport/code/sequence/splitter/tree/");
            strcat(filename, itoa(n));
            fp = fopen(filename, "r");
            if (fp == NULL) {
                cout << "error in opening the file";
                exit(0);
            }
            while (fgets(str, 500, fp) != NULL) {
                sscanf(str, "%d(%d,%d,%d,%d)", &i, &j, &k, &l, &m);
                key = new Dbt(&i, sizeof(int));
                if (total == 0)
                    strcpy(dbname, itoa(j));
                t.startPos = k;
                t.endPos = l;
                t.level = m;
                data = new Dbt(&t, sizeof(t));
                if (total == 0) {
                    try {
                        db->open(NULL, "tree.db", dbname, DB_BTREE, DB_CREATE, 0);
                    } catch (DbException &e) {
                        cerr << "Failed to create DB object" << endl;
                        exit(1);
                    }
                    total = 99;
                }
                int ret = db->put(NULL, key, data, DB_NOOVERWRITE);
                if (ret == DB_KEYEXIST) {
                    cout << "key already exist\n";
                    exit(1);
                }
                delete key;
                delete data;
            }
            total = 0;
            fclose(fp);
            try {
                db->close(0);
            } catch (DbException &e) {
                cerr << "Failed to close DB object" << endl;
                exit(1);
            }
        }
        myEnv.close(0);
    }
    The following code retrieves the records from the database built above:
    #include <stdio.h>
    #include <db_cxx.h>
    #include <iostream.h>
    #include <db.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    int PAGE_SIZE = 8192;
    int numBuffers = 2048;
    char *itoa(const int x)
    {
        char buf[100];
        snprintf(buf, sizeof(buf), "%d", x);
        return strdup(buf);
    }
    int compare_int(Db *dbp, const Dbt *a, const Dbt *b)
    {
        int ai;
        int bi;
        memcpy(&ai, a->get_data(), sizeof(int));
        memcpy(&bi, b->get_data(), sizeof(int));
        return (ai - bi);
    }
    struct tuple
    {
        int startPos;
        int endPos;
        int level;
    };
    int main()
    {
        FILE *fp;
        int i, j, k, l, m, n, total = 0;
        char *dbname;
        char filename[200], str[100];
        tuple t;
        Db *db;
        Dbt *key, *data;
        Dbc *cursorp = NULL;
        int ret, x = 4;
        char *ravi;
        int p = 84763;
        int y = 2134872;
        int r, s, q, count = 0;
        DbEnv myEnv(0);
        myEnv.set_cachesize(0, PAGE_SIZE * numBuffers, 0);
        myEnv.set_data_dir("/home/raviov/new/database");
        try {
            myEnv.open("./", DB_CREATE | DB_INIT_MPOOL, 0);
        } catch (DbException &e) {
            cerr << "Exception occurred: " << e.what() << endl;
            exit(1);
        }
        db = new Db(&myEnv, 0);
        db->set_pagesize(PAGE_SIZE);
        db->set_bt_compare(compare_int);
        dbname = itoa(p);
        try {
            db->open(NULL, "tree.db", dbname, DB_BTREE, 0, 0);
        } catch (DbException &e) {
            cerr << "Failed to open DB object" << endl;
            exit(1);
        }
        db->cursor(NULL, &cursorp, 0);
        key = new Dbt();
        data = new Dbt();
        while ((ret = cursorp->get(key, data, DB_NEXT)) == 0) {
            j = *((int *)key->get_data());
            q = ((struct tuple *)data->get_data())->startPos;
            r = ((struct tuple *)data->get_data())->endPos;
            s = ((struct tuple *)data->get_data())->level;
            cout << "position : " << j << "\t";
            cout << "start : " << q << "\t";
            cout << "end : " << r << "\t";
            cout << "level : " << s << "\n";
        }
        // Do not delete key/data inside the loop: the cursor reuses them on
        // the next get() call, so freeing them mid-loop is undefined behavior.
        delete key;
        delete data;
        if (ret == DB_NOTFOUND) {
            cout << "no records";
            exit(1);
        }
        if (cursorp != NULL)
            cursorp->close();
        try {
            db->close(0);
        } catch (DbException &e) {
            cerr << "Failed to close DB object" << endl;
            exit(1);
        }
        myEnv.close(0);
    }

  • iPhoto hangs after upgrading to Yosemite and the latest iPhoto version

    It tells me the library needs upgrading and starts to search the library. The progress bar suggests something is happening, then it hangs at about 99% completion. I have had to force quit twice after several hours stuck at that point, and then start the whole process from scratch when I restart.

    Have you followed these instructions - http://www.fatcatsoftware.com/iplm/Help/rebuilding%20a%20corrupted%20iphoto%20library.html
    iPhoto Library Manager > Help > Rebuilding a corrupted iPhoto library
    Printable help
    Rebuilding a corrupted iPhoto library
    If you have an iPhoto library that is corrupt and causing iPhoto to crash or otherwise be unusable, iPhoto Library Manager provides the ability to rebuild your library based on the information found in its library data files. Note that iPhoto also has a built-in rebuild function that can sometimes be used to repair a corrupted library database. You can find instructions on how to use that on Apple's website at http://support.apple.com/kb/HT2638 (iPhoto 6 or later) or http://support.apple.com/kb/HT2042 (iPhoto 5 or earlier).
    iPhoto Library Manager's rebuild works differently, in that instead of trying to repair the library in place, it creates a brand new library and tries to reimport the entire contents of the original library into the new one, including reconstructing albums, photo metadata, etc. Note that rebuilding a library has all the same limitations as other photo transfer operations as far as what can and can't be copied between libraries. Also, depending on how badly damaged the library is, iPhoto Library Manager may or may not be able to piece together some or all of the library metadata.
    To start a rebuild, select the library you would like to rebuild, then choose the "Rebuild Library" item from the "Library" menu. You will be prompted to choose a location for the rebuilt library, and whether or not you want iPLM to scavenge orphaned photos it finds in the library package. Once you've made your choices, iPLM will examine the library, rebuild the library structure and photos as best it can, then display a preview of what it was able to find. If your library is badly damaged and the preview is missing a lot of content from the original library, this can save you from going through with a rebuild that won't end up being of much help.
    Once you've had a chance to examine the preview, if you want to go forward and create the rebuilt library, click the "Rebuild" button in the upper right. iPhoto Library Manager will create a new library and start importing the contents of the original library into the new one.
    Scavenging photos
    In some cases, either the iPhoto library database is too damaged for iPhoto Library Manager to be able to salvage any information from it, or the library data is incomplete and there are photos that still exist inside the library package, but iPhoto has lost track of them. In these cases, you may want to check the "Scavenge orphaned photos" checkbox when choosing a location for the rebuilt library. After iPhoto Library Manager has read the library data as best it can, it will perform an additional pass through the package and locate any photos that are no longer referenced in the library database. Any additional photos that are found will be included in the rebuild, and a new "Scavenged Photos" album will be created in the rebuilt library containing any scavenged photos.
    LN

  • Disk Utility hangs ("not responding") when attempting to Repair Disk on Time Capsule sparsebundle.  Help.

    I am trying to improve the performance of my Time Machine backup and retrievals via my 2nd Gen. Time Capsule from my iMac Core 2 Duo hard drive with SL 10.6.8.  The incremental backups generally take 20 - 40 minutes each hour generally with fewer than 100 MB of data being backed up.  I have read through numerous threads on these forums about Time Capsule and Time Machine issues as well as the very helpful Pondini site (http://web.me.com/pondini/Time_Machine/Home.html) and have tried many other suggestions (listed below), but have been unable to attempt a Disk Utility repair disk on the Time Capsule sparsebundle file.
    After locating the sparsebundle 'file' in the finder under Shared/TC/Data, I am only sometimes able to drag the sparsebundle into the Disk Utility volumes and disk window.  About fifty percent of the time, Disk Utility hangs when attempting to drag in the sparsebundle.  If I am able to get it listed and click on "repair disk", then the beach ball begins to spin and nothing happens.  This occurs whether or not I first use AirPort Utility to "Disconnect All Users" from the Time Capsule.  I have given it over one hour to attempt the repair disk, but nothing happens, i.e., not a single message shows up in the detail window.  The beach ball just spins, and Disk Utility shows up in red as "not responding" under the Force Quit menu.
    Also note that on my new Macbook Air, Time Machine runs quite quickly via this same Time Capsule from my new Macbook Air using OS X Lion (which seems to indicate that the Time Capsule drive itself is OK).
    I would appreciate any tips on getting Disk Utility to run "repair disk" on my sparsebundle, as I seem to have exhausted most other options (see below) for improving Time Capsule performance -- aside of course, from wiping the Time Capsule clean and starting all over.... (which I really don't want to do, and from what I've read, has not helped some people who've tried this anyway).
    Many thanks!
    Although I would like to focus this thread on the Disk Utility / sparsebundle "Repair Disk" problem, I should mention that I have tried the following other things:
    1.  Repaired disk permissions on my hard drive
    2.  Used Disk Utility to run "repair disk" on my hard drive after starting up from a SuperDuper external backup
    3.  Run DiskWarrior on my hard drive, again, from a separate startup disk
    4.  Did the "full reset of Time Machine", a la Pondini Troubleshooting #A4
    5.  Changed my computer and Time Capsule names to simple short names with only alphanumeric characters and no spaces
    6.  Ensured that my Time Capsule is being accessed only via ethernet
    7.  Relaunched the finder
    8.  And finally, attempting to use Disk Utility to repair the sparsebundle - thus the nature of this post

    OK, no takers out there, but I answered my own question (sort of).
    The answer is that even though Disk Utility shows up as "not responding" under the Force Quit menu and appears to hang, it is actually still working. A bit of patience shows that after about an hour, my Disk Utility was able to show the 1.44 TB Time Machine sparsebundle from Time Capsule in the disk window on the left hand side. Once this appeared, I was able to click on "Repair Disk" (note that prior to doing this, I had to "Disconnect All Users" from my Time Capsule using the AirPort Utility program; otherwise you will get a "could not unmount disk" error). Again, the Repair Disk function appeared to hang Disk Utility, as it showed up as "not responding" under the Force Quit menu. However, after about 8 hours, Disk Utility did run the entire Repair Disk protocol on the sparsebundle. The results DO NOT show up in the Disk Utility details window - so even if you are patient enough to let it complete, you will not see anything and may think that nothing happened. However, you can find the familiar results of the Repair Disk function by looking in Console - find "DiskUtility.log" under Files: ~/Library/Logs.
    Unfortunately, I was hoping to find a problem that would have been subsequently repaired which would result in dramatically increased Time Capsule speed, but this was not the case.  Disk Utility showed everything to be "OK" and I am stuck with a very slow Time Capsule....
    (And of note, my new Macbook Air (OS Lion) Time Machine functions blazingly fast on both backup and retrieval to and from the very same Time Capsule - even though, these are occuring wirelessly via AirPort -- granted the size of this sparsebundle is an order of magnitude smaller at this point, i.e., only 10 GB right now)
    Would still be happy to hear of any similar experiences or tips on getting Time Capsule to work more quickly.
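    For anyone else stuck here, a command-line alternative that at least shows progress is to run fsck_hfs against the sparsebundle directly. A sketch, assuming the Time Capsule share is mounted and all users are disconnected first (the bundle name is a placeholder; read the actual /dev/diskX node off the attach output):

    ```shell
    # Attach the backup image without mounting its volume
    hdiutil attach -nomount -readwrite /Volumes/Data/YourMac.sparsebundle
    # Using the device node printed above, check/repair the HFS+ volume
    # (-d verbose, -r rebuild catalog b-tree, -f force, -y answer yes)
    fsck_hfs -drfy /dev/diskXs2
    # Detach when finished
    hdiutil detach /dev/diskX
    ```

    Unlike Disk Utility, fsck_hfs prints its progress to the Terminal, so you can tell it has not actually hung.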

  • Disk utility hangs during disk verify

    Initially 'disk verify' registered errors on exit - a minor error/issue, it said, some kind of header problem - so I ran Disk Utility off the install disk. Now the machine's native version of Disk Utility hangs during disk verify, blue & white spirals forever.
    This happened, coincidentally, straight after my Gmail/Google address/account was hacked a couple of days ago.

    Have you tried booting in SafeBoot mode (shift key held on start until the login screen appears), then launching Disk Utility and running its 'repair disk permissions' item, selecting the Macintosh HD, and then noting the repair window? It should fix something.
    And when Disk Utility is done, quit it, and restart the Mac normally.
    {...It is unlikely something from Gmail affected your computer unless you open unknown-sourced emails or go to odd web sites where something may be downloaded, which you would almost have to install (as admin, etc.) for malware to work. You can get and try ClamXAV for free to see if it sees any malware virus content.}
    If you use Gmail and do not log in using https:// (secure HTTP), you should change the habit and bookmark the secure login site.
    And when booted from the installer disk and running Disk Utility from there, see about running "repair disk" and not verify. The repair choice will verify, but verify will not repair. If you cannot repair the issues with that utility, nor with SafeBoot mode and the computer's own Disk Utility, you may need to troubleshoot further and/or get third-party tools.
    In any event...
    Good luck & happy computing!

  • Disk Utility hangs attempting to erase SimpleTech 100gb external USB disk

    I'm attempting to erase and format a SimpleTech 100gb external USB disk. Disk Utility hangs. The system error log says:
    "AppleUSBEHCI[0x1d59000]::Found a transaction past the completion deadline on bus 91, timing out!
    Jul 21 17:30:35 Kari-Hohnbaums-Computer-2 kernel[0]: USBF:"
    The console log (and Disk Utility log) shows:
    Preparing to erase : “PBEX1”
    Creating Partition Map
    error writing partition map: Device not configured (6)
    Disk Erase failed with the error:
    What is the problem? This disk is recognized by Tiger, but only as read-only, since it ships as an NTFS volume and must be erased and reformatted before it can be used by a Mac.
    All help appreciated.

    I tried the partition approach and it unfortunately failed as well, with the same "hung" symptoms as before. The green light (disk activity) on the disk flashed several times at the initial request, but that was all. I eventually interrupted it after an hour, which generated the same system log message: AppleUSBEHCI[0x1d59000]::Found a transaction past the completion deadline on bus 91, timing out!
    The disk utility log says:
    Preparing to partition disk: “SAMSUNG HM100JC Media”
    Partition Scheme: Apple Partition Scheme
    Mac OS 9 Disk Drivers installed
    1 partitions will be created
    Partition 1
    Name : “PBEX1”
    Size : 93.2 GB
    Filesystem : Mac OS Extended (Journaled)
    Creating Partition Map
    Partition failed for disk SAMSUNG HM100JC Media Device not configured
    Partition complete.
    I don't have easy access to a Windows box. I do have access to a Linux system. Perhaps I can configure it there and it will then work on my PB.

  • Disk Utility hangs when erasing external disk

    In Time Machine I wasn't able to figure out how to attach an external drive via USB. I guessed that it probably wouldn't work on a MS-DOS filesystem, so I started Disk Utility, which I never have used before. I have formatted disks before on Linux and on Windows, so it's nice to see a great app for this.
    The brief Time Machine help text does not mention what kind of filesystem is required, or whether it works with any filesystem. So I searched, and it seems that HFS+ journaled, case-insensitive, is the filesystem one should choose. Can anyone confirm this?
    When I click erase then the Disk Utility program hangs and I see the spinning beachball. The Activity Monitor program shows that disk activity stops after a few seconds. There is no more activity, so after 5 minutes I stop the program.
    I have tried several times, and I see the same behavior. I even tried restarting the computer. But no luck.
    Now my external disk no longer shows as an MS-DOS filesystem, but as HFS+ journaled case-insensitive. Probably not OK so far. I tried creating a single partition, with format HFS+ journaled case-insensitive, without OS 9 drivers installed. When I click Apply, the spinning beachball appears again, disk activity stops after a few seconds, and I stop the program after some minutes of inactivity.
    My guess is that the disk-erasing task failed and has left the disk in a broken state. During this task Disk Utility began to hang for a reason that I don't know.
    My second guess to why the create-partition task failed, is that the disk is in a broken state, and thus the partition cannot be created, and Disk Utility hangs.
    How do I format the external disk from the command line?

    In Disk Utility, select the drive mechanism (not the volume on it) and click the Partition button. Use the Options button to choose the GUID partition scheme, then make one HFS+ (Journaled) volume.
    There is no point in using the command line for this task, because if Disk Utility cannot deal properly with the disk, Mac OS X will not either. Disk Utility is using diskutil to do the partitioning and formatting.
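    That said, if you do want the command-line equivalent, diskutil can do the same partition-and-format in one step; a sketch (disk2 is a placeholder - confirm the device with diskutil list first, because this erases the whole drive):

    ```shell
    diskutil list
    # GUID partition map, one Mac OS Extended (Journaled) volume named "Backup"
    diskutil partitionDisk /dev/disk2 GPT JHFS+ Backup 100%
    ```

    As the reply notes, though, if Disk Utility itself cannot handle the disk, diskutil will likely hit the same underlying problem.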

  • PowerBook G4 hangs after a few minutes of usage

    Since February '06 I have had a PB G4: 1.67 GHz, model PowerBook5,8.
    Since this weekend it hangs after a few minutes of usage.
    I tried a whole load of possible solutions, like resetting the RAM, and removing the battery and holding the power button for 5 seconds... I even held it once for half a minute, but that didn't help. I also checked the hard drive with Disk Utility (tried it often, but only once did the PB work long enough to get the end result: the hard drive is healthy).
    Often, when it hangs, I can get it to work by simply turning it off and on again. That gives me another window of 2 minutes of usage! Handy throughout the day when I need a file, or a document: I simply boot the machine, get the file and shut it down.
    A carbon copy to an external HD does not work: the system hangs a few minutes into copying.
    I installed a utility named AppleJack, and the first time I tried it, I got an interesting error: there were errors with the USB controller or USB bus, and the system hangs. I didn't make a screenshot, so this is off the top of my head.
    This was interesting because when I tried to make an HD copy (and believe me: I tried lots and lots of times), the system freezes first, and then an error about not unmounting a device popped up on the screen. It is as if the touchpad, keyboard, and all other I/O devices stop working, and the system goes on for a few seconds more.
    Sometimes the system just hangs, sometimes I get that black kernel panic screen.
    Just before writing this, I tried the hardware test on the install CD once more:
    I held down the option key, chose the hardware test -> quick test, and...
    the system hangs after completion of the test.
    A message appeared in the bottom-left corner. Several lines of information, starting with:
    "ERROR, Write to location ZERO detected!!, Loc zero was originally zero
    Current File: Localization/en/Scripts/enAHTscript.sc"
    etc
    I live on a nice island in the Caribbean, so popping into the nearest Apple Store means getting on a flight to Miami first.
    What - if any - are my options? Is there something I can try myself before shipping it to Apple? Opening the case, checking this or that?
    By the way, I originally posted this in the Titanium folder, but someone mentioned that I might actually have an Aluminum PB.
    see http://discussions.apple.com/message.jspa?messageID=2074854#2074854

    hi joeuu,
    I have 1.5 GB of memory: one 512 MB stick and one 1 GB stick, and I have already tried the sticks one at a time: it hangs with the original 512 MB, it hangs with the 1 GB, and it hangs with both together in both possible slot arrangements.
    I managed to copy the panic.log from /Library/Logs/, and here it is:
    Fri Apr 7 15:27:21 2006
    panic(cpu 0 caller 0x000A8D00): Uncorrectable machine check: pc = 00000000000AF340, msr = 0000000000141020, dsisr = 42000000, dar = 000000000280C200
    AsyncSrc = 0000000000000000, CoreFIR = 0000000000000000
    L2FIR = 0000000000000000, BusFir = 0000000000000000
    Latest stack backtrace for cpu 0:
    Backtrace:
    0x00095698 0x00095BB0 0x0002683C 0x000A8D00 0x000A7F90 0x000ABC80
    Proceeding back via exception chain:
    Exception state (sv=0x41381A00)
    PC=0x000AF340; MSR=0x00141020; DAR=0x0280C200; DSISR=0x42000000; LR=0x000AF158; R1=0x2211B8E0; XCP=0x00000008 (0x200 - Machine check)
    Backtrace:
    0x404E9F24 0x404E1F6C 0x404E7B2C 0x002CEFB8 0x41C3F2C8 0x002BC0E8
    0x002CDA70 0x002BC690 0x002BBB4C 0x002BBA5C 0x404E3954 0x404E3B98 0x002CE900 0x002CD7C8
    0x000A9814
    Kernel loadable modules in backtrace (with dependencies):
    com.apple.driver.AppleUSBTrackpad(1.3.0f1)@0x41c3d000
    dependency: com.apple.iokit.IOUSBFamily(2.2.5)@0x404aa000
    dependency: com.apple.iokit.IOHIDFamily(1.4.3)@0x32671000
    com.apple.driver.AppleUSBOHCI(2.2.5)@0x404df000
    dependency: com.apple.iokit.IOUSBFamily(2.2.5)@0x404aa000
    dependency: com.apple.iokit.IOPCIFamily(1.7)@0x323bf000
    Exception state (sv=0x402ABA00)
    PC=0x00000000; MSR=0x0000D030; DAR=0x00000000; DSISR=0x00000000; LR=0x00000000; R1=0x00000000; XCP=0x00000000 (Unknown)
    Kernel version:
    Darwin Kernel Version 8.2.2: Mon Aug 22 18:43:11 PDT 2005; root:xnu-792.5.11.obj~1/RELEASE_PPC
    Now, what can we read from this?
    I see "AppleUSBTrackpad" and "AppleUSBOHCI" in here.
    Referring to my original post: I saw an error (a few days ago, when working with AppleJack) that also mentioned USB.
    I'm very afraid that this is a hardware issue. Reading through some other posts, I saw something about a ribbon cable that gave problems. What are your thoughts?

  • Queries on oracle hang after index creation

    Hi,
    I have a problem where queries issued against an Oracle database hang after I create indexes on some of the tables used by those queries.
    I have a script where I drop indexes:
    DROP INDEX EVP_PCON00_CDSITC_IX;
    DROP INDEX EVP_PCS202_STIMP1_IX;
    DROP INDEX EVP_PECT00_ONAECT_IX;
    DROP INDEX EVP_PVAL01_ONAVAL_IX;
    After this script I load data into those tables.
    Then I launch another script where I recreate the indexes I just dropped :
    CREATE INDEX EVP_PCON00_CDSITC_IX  ON EVP_PCON00(CDSITC) TABLESPACE TS_ODS_INDEX;
    COMMIT;
    CREATE INDEX EVP_PCS202_STIMP1_IX  ON EVP_PCS202(STIMP1) TABLESPACE TS_ODS_INDEX;
    COMMIT;
    CREATE INDEX EVP_PVAL01_CDORIV_IX  ON EVP_PVAL01(CDORIV) TABLESPACE TS_ODS_INDEX;
    COMMIT;
    When the script ends, I try to execute a query using some of the tables I created indexes on :
    SELECT ...
    FROM ....
    WHERE ....
    The query never returns a result set; it just hangs. Watching the session browser in Toad, I can see the scan of the first table used in the query hang.
    When I drop those indexes and re-execute the query, everything works fine.
    Thanks
    Edited by: 946359 on Jul 13, 2012 9:20 AM
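
    A hedged suggestion, not confirmed from the thread: after a bulk load followed by index creation, the optimizer may be working from stale or missing statistics, which can produce a plan that runs orders of magnitude slower and looks like a hang. A minimal sketch, assuming the poster's table names, using the standard DBMS_STATS API:

    ```sql
    -- Sketch: refresh optimizer statistics after the load and index creation.
    -- EVP_PCON00 is one of the poster's tables; repeat for the others.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,        -- current schema; adjust if needed
        tabname => 'EVP_PCON00',
        cascade => TRUE         -- also gathers stats on the new indexes
      );
    END;
    /
    ```

    As a side note, the COMMIT statements after each CREATE INDEX are redundant: DDL statements in Oracle commit implicitly.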

    This is the result I get :
    Execution Plan
    Plan hash value: 1198043594
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 1343 | 12 (17)| 00:00:01 |
    | 1 | HASH UNIQUE | | 1 | 1343 | 12 (17)| 00:00:01 |
    |* 2 | HASH JOIN OUTER | | 1 | 1343 | 11 (10)| 00:00:01 |
    | 3 | NESTED LOOPS OUTER | | 1 | 1257 | 9 (12)| 00:00:01 |
    | 4 | NESTED LOOPS OUTER | | 1 | 1033 | 8 (13)| 00:00:01 |
    | 5 | NESTED LOOPS OUTER | | 1 | 809 | 7 (15)| 00:00:01 |
    |* 6 | HASH JOIN | | 1 | 585 | 6 (17)| 00:00:01 |
    | 7 | NESTED LOOPS | | | | | |
    | 8 | NESTED LOOPS | | 1 | 490 | 3 (0)| 00:00:01 |
    | 9 | TABLE ACCESS FULL | EVP_PTIP00 | 1 | 452 | 2 (0)| 00:00:01 |
    |* 10 | INDEX RANGE SCAN | EVP_PTIC00_CDROLE_IX | 1 | | 1 (0)| 00:00:01 |
    |* 11 | TABLE ACCESS BY INDEX ROWID| EVP_PTIC00 | 1 | 38 | 1 (0)| 00:00:01 |
    | 12 | TABLE ACCESS FULL | EVP_PTIF00 | 1 | 95 | 2 (0)| 00:00:01 |
    |* 13 | TABLE ACCESS BY INDEX ROWID | EVP_PTAB00 | 1 | 224 | 1 (0)| 00:00:01 |
    |* 14 | INDEX RANGE SCAN | EVP_PTAB00_CLTABL_IX | 1 | | 1 (0)| 00:00:01 |
    |* 15 | TABLE ACCESS BY INDEX ROWID | EVP_PTAB00 | 1 | 224 | 1 (0)| 00:00:01 |
    |* 16 | INDEX RANGE SCAN | EVP_PTAB00_CLTABL_IX | 1 | | 1 (0)| 00:00:01 |
    |* 17 | TABLE ACCESS BY INDEX ROWID | EVP_PTAB00 | 1 | 224 | 1 (0)| 00:00:01 |
    |* 18 | INDEX RANGE SCAN | EVP_PTAB00_CLTABL_IX | 1 | | 1 (0)| 00:00:01 |
    | 19 | TABLE ACCESS FULL | EVP_PRIB00 | 1 | 86 | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("EVP_PRIB00"."NOTIER"(+)="EVP_PTIP00"."NOTIER")
    6 - access("EVP_PTIF00"."NOTIER"="EVP_PTIP00"."NOTIER")
    10 - access("EVP_PTIC00"."CDROLE"='EMP')
    11 - filter("EVP_PTIC00"."CDDDRX"='CL' AND "EVP_PTIP00"."NOTIER"="EVP_PTIC00"."NOTIER")
    13 - filter("EVP_PTAB00_SITF"."CDTABL"(+)="EVP_PTIP00"."CDSITF")
    14 - access("EVP_PTAB00_SITF"."CLTABL"(+)='SITF')
    15 - filter("EVP_PTAB00_TITR"."CDTABL"(+)="EVP_PTIP00"."CDTITR")
    16 - access("EVP_PTAB00_TITR"."CLTABL"(+)='TITR')
    17 - filter("EVP_PTAB00_RMAT"."CDTABL"(+)="EVP_PTIP00"."CDRMAT")
    18 - access("EVP_PTAB00_RMAT"."CLTABL"(+)='RMAT')
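
    When a statement appears hung, a second session can show what it is actually waiting on (a lock, I/O, or simply a long-running plan). A diagnostic sketch against V$SESSION, whose EVENT and BLOCKING_SESSION columns exist in 10g and later; the filter is illustrative:

    ```sql
    -- Sketch: inspect active sessions to see the current wait event and any blocker.
    SELECT sid, serial#, sql_id, event, wait_class,
           seconds_in_wait, blocking_session
    FROM   v$session
    WHERE  status = 'ACTIVE'
    AND    username IS NOT NULL;
    ```

    If BLOCKING_SESSION is populated, the query is stuck behind a lock; if the event is a user I/O wait, the new plan is probably just slow rather than truly hung.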

  • Oracle 11g R2 hangs after parameter processes increased

    Hi,
    I am running Oracle 11g R2 on a Solaris box. I was getting Critical alert messages about the "Process Limit %" metric, so I decided to increase the processes database parameter from 200 to 400. After that the system was very slow, hanging with a lot of "log file sync" and "log file parallel write" wait events.
    I have rolled it back to 200 and the system is working fine again, but I am concerned about getting the "Process Limit %" alert again.
    Is there any other parameter I need to tune together with the increase of processes?
    Regards,
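
    One thing worth checking before retrying the change (a hedged suggestion, not a confirmed fix): SESSIONS and TRANSACTIONS default to values derived from PROCESSES (in 11g, SESSIONS defaults to roughly 1.5 × PROCESSES + 22), so raising PROCESSES alone also enlarges those limits and the memory reserved for them. A sketch of the change, which requires an instance restart:

    ```sql
    -- Sketch: review the related limits, then raise PROCESSES in the spfile.
    SHOW PARAMETER processes
    SHOW PARAMETER sessions
    SHOW PARAMETER transactions

    ALTER SYSTEM SET processes = 400 SCOPE = SPFILE;
    -- Restart the instance for the new value to take effect.
    ```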

    It looks like the issue is not related to the processes change.
    If I reboot the db and then restart the process, it runs fine. Something happens on the db during the day that makes the nightly scheduled task take forever. When I checked it this morning, there was a lot of commit activity. If I cancel the process and then try to restart it without rebooting the db, the problem persists. But if I bounce the database and then run it, it completes fine.
    Below is an AWR period comparison report. The first period is when I cancelled the process and then tried to run it again; the second one is after the database reboot.
    I appreciate any advice.
    Snapshot Set DB Name DB Id Instance Inst num Release Cluster Host Std Block Size
    First (1st) DMFEPRD 3155815743 dmfeprd 1 11.2.0.1.0 NO uscndb18 8192
    Second (2nd) DMFEPRD 3155815743 dmfeprd 1 11.2.0.1.0 NO uscndb18 8192
    Snapshot Set Begin Snap Id Begin Snap Time End Snap Id End Snap Time Avg Active Users Elapsed Time (min) DB time (min)
    1st 2700 25-Jun-11 12:17:51 (Sat) 2701 25-Jun-11 12:23:40 (Sat) 2.69 5.82 15.65
    2nd 2702 25-Jun-11 12:47:17 (Sat) 2703 25-Jun-11 12:52:52 (Sat) 2.40 5.58 13.39
    %Diff -10.78 -4.12 -14.44
    Host Configuration Comparison
    1st 2nd Diff %Diff
    Number of CPUs: 32 32 0 0.0
    Number of CPU Cores: 4 4 0 0.0
    Number of CPU Sockets: 1 1 0 0.0
    Physical Memory: 65408M 65408M 0M 0.0
    Load at Start Snapshot: 6.14 2.03 -4.12 -66.9
    Load at End Snapshot: 6.95 4.67 -2.28 -32.8
    %User Time: 13.43 13.04 -.38 -2.9
    %System Time: 6.95 1.26 -5.69 -81.9
    %Idle Time: 79.62 85.7 6.07 7.6
    %IO Wait Time: 0 0 0 0.0
    Cache Sizes
    1st (M) 2nd (M) Diff (M) %Diff
    Memory Target 10,240.0 10,240.0 0.0 0.0
    .....SGA Target 6,656.0 6,656.0 0.0 0.0
    ..........Buffer Cache 672.0 672.0 0.0 0.0
    ..........Shared Pool 1,472.0 1,472.0 0.0 0.0
    ..........Large Pool 64.0 64.0 0.0 0.0
    ..........Java Pool 224.0 224.0 0.0 0.0
    ..........Streams Pool 64.0 64.0 0.0 0.0
    .....PGA Target 3,584.0 3,584.0 0.0 0.0
    Log Buffer 18.6 18.6 0.0 0.0
    Load Profile
    1st per sec 2nd per sec %Diff 1st per txn 2nd per txn %Diff
    DB time: 2.69 2.40 -10.78 0.96 0.14 -85.42
    CPU time: 0.90 2.13 136.67 0.32 0.12 -62.50
    Redo size: 28,238.40 149,564.99 429.65 10,057.71 8,577.01 -14.72
    Logical reads: 8,840.64 21,941.38 148.19 3,148.78 1,258.26 -60.04
    Block changes: 176.06 1,032.02 486.18 62.71 59.18 -5.63
    Physical reads: 1.82 37.36 1,952.75 0.65 2.14 229.23
    Physical writes: 6.69 25.16 276.08 2.38 1.44 -39.50
    User calls: 118.44 318.49 168.90 42.19 18.26 -56.72
    Parses: 91.31 492.38 439.24 32.52 28.24 -13.16
    Hard parses: 4.73 31.89 574.21 1.68 1.83 8.93
    W/A MB processed: 908,952.26 4,708,929.40 418.06 323,742.82 270,039.92 418.06
    Logons: 18.65 41.19 120.86 6.64 2.36 -64.46
    Executes: 109.63 584.01 432.71 39.05 33.49 -14.24
    Transactions: 2.81 17.44 520.64
    1st 2nd Diff
    % Blocks changed per Read: 1.99 4.70 2.71
    Recursive Call %: 84.16 92.70 8.53
    Rollback per transaction %: 0.00 0.00 0.00
    Rows per Sort: 1,620.17 1,194.14 -426.02
    Avg DB time per Call (sec): 0.02 0.01 -0.02
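
    The "Process Limit %" metric compares current usage against the PROCESSES limit, so before raising the parameter again it may help to see the actual high-water mark. A sketch using V$RESOURCE_LIMIT (resource names are lowercase in this view):

    ```sql
    -- Sketch: compare current and peak usage against the configured limits.
    SELECT resource_name, current_utilization, max_utilization, limit_value
    FROM   v$resource_limit
    WHERE  resource_name IN ('processes', 'sessions');
    ```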

  • Oracle 11g R1 Installation hangs at Cloning Database .

    When I install Oracle 11g R1 on RHEL 5.4, the installation hangs at the "Cloning Database" step. I stopped the installation and then executed the scripts mentioned during installation:
    "Run the below as the root user:
    /opt1/Oracle/OracleDB/oraInventory/orainstRoot.sh
    /opt1/Oracle/OracleDB/product/11.1.0/db_2/root.sh"
    After that, I ran sqlplus and tried to log on:
    SJP2VM0140:work2}-/home/work2# sqlplus
    SQL*Plus: Release 11.1.0.6.0 - Production on Wed Jul 21 20:17:31 2010
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    Enter user-name: scott
    Enter password:
    ERROR:
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    Linux-x86_64 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    Any help in this regard is greatly appreciated.
    Edited by: Durga Pulipati on Jul 22, 2010 12:13 AM

    # sqlplus /nolog
    SQL> connect / as sysdba
    Connected to an idle instance.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 1269366784 bytes
    Fixed Size 2144024 bytes
    Variable Size 738199784 bytes
    Database Buffers 520093696 bytes
    Redo Buffers 8929280 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    The alert log :
    ORA-00210: cannot open the specified control file
    ORA-00202: control file: '/opt1/Oracle/OracleDB/oradata/orcl/control03.ctl'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00210: cannot open the specified control file
    ORA-00202: control file: '/opt1/Oracle/OracleDB/oradata/orcl/control02.ctl'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00210: cannot open the specified control file
    ORA-00202: control file: '/opt1/Oracle/OracleDB/oradata/orcl/control01.ctl'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Could you please help me with this?
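
    The ORA-00205/ORA-00210 errors mean the instance cannot open any of the files listed in the CONTROL_FILES parameter, which is expected if the interrupted installation never finished creating the database. A sketch of how to confirm where the instance is looking, assuming a SQL*Plus session as sysdba:

    ```sql
    -- Sketch: the instance can reach NOMOUNT without control files,
    -- so check which paths it is trying to open.
    STARTUP NOMOUNT
    SHOW PARAMETER control_files
    -- If those files were never created because the install was aborted,
    -- recreating the database with DBCA is simpler than repairing it by hand.
    ```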

  • Forms gets hang after 15 mins in 11.5.10.2

    Hi,
    I have cloned an instance from production. Now my forms in production hang after 15 minutes, but the development instance runs fine. The EBS version is 11.5.10.2, on Red Hat Linux 4.0 with JServ 1.1.
    This happens every time at a 15-20 minute interval. I have checked the log files in iAS_HOME/Apache, but they are clean.
    Can anyone please help with this?
    Thanks,
    Neeraj.

    Rapid Clone does not modify the source system. adpreclone.pl prepares the source system to be cloned by collecting information about the database and creating generic templates of files containing source specific hardcoded values.
    I suggest you use Forms Runtime Diagnostics (FRD) to debug the issue, for more details on how to use it please check Note: 150168.1 - Obtaining Forms Runtime Diagnostics (FRD) In Oracle Applications 11i
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=150168.1
