Deadlocks and more on a large database.

Hi,
We are running a multithreaded server. The database serves as a cache, which is queried/updated about 500 times/sec by about 500 threads.
Most queries are misses.
We are experiencing a large number of deadlocks in the database, and sometimes get/put operations fail to complete even after 20 retries.
We would like to know the following (not all questions are related to the deadlock issue):
1. How can we reduce the number of deadlocks?
2. Is using deferred mode favorable in this use case?
3. How do we find the optimal log file size?
4. Are there any more configuration properties we need to consider?
Thanks,
Lior

We've tried changing the lock timeout to the default, as specified in the je.properties file (50000). However, it seems the change has no effect, and the deadlocks occur at the same frequency as before.
OK. We've also tried the following, without much success:
envConfig.setConfigParam("je.env.sharedLatches",
"true");
envConfig.setConfigParam("je.lock.nLockTables",
"401");The above params have no impact on deadlocks.
Are there any other configuration properties I can change?
No, except perhaps to increase the cache size so that operations complete more quickly. An operation that does a read from disk will take much longer than one that finds the record in cache.
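For reference, here is a minimal sketch of how those properties might be set in code (JE 3.x-era API, matching the snippets above; the 512 MB cache size is an arbitrary example, not a recommendation):

import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class CacheEnvSetup {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);

        // Params tried above (reported to have no effect on the deadlocks):
        envConfig.setConfigParam("je.env.sharedLatches", "true");
        envConfig.setConfigParam("je.lock.nLockTables", "401");

        // A larger cache keeps records in memory, so operations finish
        // (and release their locks) sooner; 512 MB is an arbitrary example.
        envConfig.setCacheSize(512L * 1024 * 1024);

        Environment env = new Environment(new File("/path/to/env"), envConfig);
        // ... open databases and run the cache workload ...
        env.close();
    }
}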
Here's the stack trace, after modifying the lock timeout property:
2008-07-23 16:55:54,977 WARN [com.flash.http.postOptCache.service.PostOptCacheFacadeImpl] com.sleepycat.je.DeadlockException: (JE 3.2.68) Lock expired. Locker -1_HPI-executor707_ThreadLocker: waited for lock on database=postoptcache node=4211902 type=WRITE grant=WAIT_NEW timeoutMillis=500 startTime=1216821354475 endTime=1216821354977
Owners: [<LockInfo locker="-1_HPI-executor825_ThreadLocker" type="WRITE"/>]
Waiters: []
com.sleepycat.je.DeadlockException: (JE 3.2.68) Lock expired. Locker -1_HPI-executor707_ThreadLocker: waited for lock on database=postoptcache node=4211902 type=WRITE grant=WAIT_NEW timeoutMillis=500 startTime=1216821354475 endTime=1216821354977
This information tells us that thread HPI-executor707 is trying to perform a write operation and times out after 500 millis. The write lock on the record is not granted because thread HPI-executor825 owns the write lock on that record.
Thread HPI-executor707 is performing a Database.delete. We don't know what thread HPI-executor825 is doing, but we can assume that it is holding the record lock for more than 500 millis.
You have a couple of options:
1) Find out why HPI-executor825 is holding the lock for so long, and try to correct that problem. For example, if it is keeping a cursor open, try to close the cursor earlier.
2) If you're sure that no thread is keeping a cursor open for any longer than necessary, then it is possible that something else is slowing things down. Is I/O very slow on this system? Are full GCs taking a long time? If so, you can try to correct these problems, but you can also increase the lock timeout to a value larger than any delay caused by I/O or GC.
Basically, you need to find out why the record lock is being held by thread HPI-executor825 for more than 500 millis. Then either correct that situation so that the lock is not held for so long, or increase the lock timeout if there is no way to reduce the time that the lock is held.
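If raising the lock timeout is the route you take, here is a minimal sketch (JE 3.x-era API; the je.lock.timeout value appears to be in microseconds in this release, and the 5-second figure is an arbitrary example, not a tuned recommendation):

import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class LockTimeoutSetup {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);

        // Raise the lock timeout above the worst-case I/O or GC pause.
        // Value is in microseconds here: 5,000,000 us = 5 seconds.
        envConfig.setConfigParam("je.lock.timeout", "5000000");

        Environment env = new Environment(new File("/path/to/env"), envConfig);
        // ...
        env.close();
    }
}

The same value can also be set in je.properties, as you did above.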
--mark

Similar Messages

  • Will SP1 take more time to install if we have more databases and/or larger databases?

    We are preparing to install SP1 (CU4?) for Exchange 2013 (e2013) on our production system.
    We have done that in our test system.
    The production system has more databases and more mailboxes and data in each database, than the test system.
    Seems like we read someplace that SP1 has an update to the "database schema" (in addition to the AD Schema update).
    Thought that this database work might require extra time.
    Will the SP1 install run longer because of the larger and more numerous databases?
    Thanks.
    =====
    tags -- mailbox server, e2013, implement Service Pack 1,

    Thanks for the information.
    So... is there a database schema upgrade as part of SP1 for Exchange 2013? (yes or no)
    If the answer is "yes", then...
    I understand all the mailboxes would be "offline".
    But that does not really answer my question.
    Just for an example, if they are simply adding one new attribute to the schema, that is probably done in the "schema master", and assuming that field is "null" for all current data, no other work needs to be done, regardless of the number of databases or
    the size (or the number of mailboxes).
    But let's say that new attribute needs to be "set" based on something in the data, and if it is "per database", that could have an impact, or if it was "per message", ...many messages to evaluate...etc.
    I thought maybe some people had real-life experience and would base the answer on what they saw.
        --- I assume that is what @CanKILIC  above is saying...  :-)
    Thanks.

  • How to make address book larger and more readable?

    Extra large isn't big enough. Is there a way to make the Address Book interface larger and more readable?

    This won't cover the entire interface (although there is a somewhat experimental way to do that as well), but try quitting "Address Book.app" if it is running, opening "/Applications" > "Utilities" > "Terminal.app" and entering this command:
    defaults write com.apple.AddressBook ABTextSizeIncrement -int 10
    Then relaunch "Address Book.app". Substitute whatever integer value you prefer in place of "10" (the default sizes are 1, 2 and 3 respectively for "regular", "large" and "extra large").

  • I need your help with a decision to use iPhoto.  I have been a PC user since the mid 1980's and more recently have used ACDSee to manage my photo images and Photoshop to edit them.  I have used ProShow Gold to create slideshows.  I am comfortable with my

    I need your help with a decision to use iPhoto.  I have been a PC user since the mid 1980’s and more recently have used ACDSee to manage my photo images and Photoshop to edit them.  I have used ProShow Gold to create slideshows.  I am comfortable with my own folder and file naming conventions. I currently have over 23,000 images of which around 60% are scans going back 75 years.  Since I keep a copy of the originals, the storage requirements for over 46,000 images is huge.  180GB plus.
    I now have a Macbook Pro and will add an iMac when the new models arrive.  For my photos, I want to stay with Photoshop which also gives me the Bridge.  The only obvious reason to use iPhoto is to take advantage of Faces and the link to iMovie to make slideshows.  What am I missing and is using iPhoto worth the effort?
    If I choose to use iPhoto, I am not certain whether I need to load the originals and the edited versions. I suspect that just the latter is sufficient.  If I set Photoshop as my external editor, I presume that iPhoto will keep track of all changes moving forward.  However, over 23,000 images in iPhoto makes me twitchy, and they appear hidden within iPhoto.  In the past, I have experienced syncing problems with, and database errors in, large databases.  If I break up the images into a number of projects, I lose the value of Faces reaching back over time.
    Some guidance and insight would be appreciated.  I have a number of Faces questions which I will save for later. 

    Bridge and Photoshop form a common file-based management system. (Not sure why you'd have used ACDSee as well as Bridge.) In any event, it's on the way out. You won't be using it in 5 years' time.
    Until now, the lack of processing power on your computer left no choice but to organise this way. But file-based organisation is as sensible as organising a shoe warehouse based on the colour of the boxes. It's also ultimately data-destructive.
    Modern systems are Database driven. Files are managed, Images imported, virtual versions, lossless processing and unlimited editing are the way forward.
    For a Photographer Photoshop is overkill. It's an enormously powerful app, a staple of the Graphic Designers' trade. A Photographer uses maybe 15% to 20% of its capability.
    Apps like iPhoto, Lightroom, Aperture are the way forward - for photographers. There's the 20% of Photoshop that shooters actually use, coupled with management and lossless processing. Pop over to the Aperture or Lightroom forums (on the Adobe site) and one comment shows up over and over again... "Since I started using Aperture/ Lightroom I hardly ever use Photoshop any more..." and if there is a job that these apps can't do, then the (much) cheaper Elements will do it.
    The change is not easy though, especially if you have a long-standing and well thought out filing system of your own. The first thing I would strongly advise is that you experiment before making any decisions. So I would create a Library, import 300 or 400 shots and play. You might as well do this in iPhoto to begin with - though if you’re a serious hobbyist or a Pro then you'll find yourself looking further afield pretty soon. iPhoto is good for the family snapper, taking shots at birthdays and sharing them with friends and family.
    Next: If you're going to successfully use these apps you need to make a leap: Your files are not your Photos.
    The illustration I use is as follows: In my iTunes Library I have a file called 'Let_it_Be_The_Beatles.mp3'. So what is that, exactly? It's not the song. The Beatles never wrote an mp3. They wrote a tune and lyrics. They recorded it and a copy of that recording is stored in the mp3 file. So the file is just a container for the recording. That container is designed in a specific way attuned to the characteristics and requirements of the data. Hence, mp3.
    Similarly, that Jpeg is not your photo, it's a container designed to hold that kind of data. iPhoto is all about the data and not about the container. So, regardless of where you choose to store the file, iPhoto will manage the photo, edit the photo, add metadata to the Photo but never touch the file. If you choose to export - unless you specifically choose to export the original - iPhoto will export the Photo into a new container - a new file containing the photo.
    When you process an image in iPhoto the file is never touched, instead your decisions are recorded in the database. When you view the image then the Master is presented with these decisions applied to it. That's why it's lossless. You can also have multiple versions and waste no disk space because they are all just listings in the database.
    These apps replace the Finder (File Browser) for managing your Photos. They become the Go-To app for anything to do with your photos. They replace Bridge too as they become a front-end for Photoshop.
    So, want to use a photo for something - Export it. Choose the format, size and quality you want and there it is. If you're emailing, uploading to websites then these apps have a "good enough for most things" version called the Preview - this will be missing some metadata.
    So it's a big change from a file-based to Photo-based management, from editing files to processing Photos and it's worth thinking it through before you decide.

  • How to read *.pdf files and store them in a database?

    Dear programmers,
    I have a problem with reading *.pdf files and storing them in a database.
    Can anyone help me, please!
    Is it possible to read more than one file from the local system and store them in a database?
    Thanks in advance.
    Bye

    What "problem" are you encountering?
    Depending on your choice of database software, it may or may not support the storage of binary large objects (BLOBs).
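    If your database does support BLOBs, here is a minimal JDBC sketch of the idea; the connection URL, table, and column names are hypothetical, so adjust them for your database and driver:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class PdfToBlob {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL and credentials; documents(name, content BLOB).
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mydb://localhost/docs", "user", "password")) {
                String sql = "INSERT INTO documents (name, content) VALUES (?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (String file : args) {          // several local files
                        Path path = Paths.get(file);
                        try (InputStream in = Files.newInputStream(path)) {
                            ps.setString(1, path.getFileName().toString());
                            ps.setBinaryStream(2, in);  // stream the PDF bytes
                            ps.executeUpdate();
                        }
                    }
                }
            }
        }
    }

    Reading the files back out is the reverse: select the BLOB column and copy its InputStream to disk.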

  • Problem with  large databases.

    Lightroom doesn't seem to like large databases.
    I am playing catch-up using Lightroom to enter keywords to all my past photos. I have about 150K photos spread over four drives.
    Even placing a separate database on each hard drive is causing problems.
    The program crashes when importing large numbers of photos from several folders. (I do not ask it to render previews.) If I relaunch the program, and try the import again, Lightroom adds about 500 more photos and then crashes, or freezes again.
    I may have to go back and import them one folder at a time, or use iView instead.
    This is a deal-breaker for me.
    I also note that it takes several minutes after opening a database before the HD activity light stops flashing.
    I am using XP on a dual-core machine with 3 GB of RAM.
    Anyone else finding this?
    What is your work-around?

    Christopher,
    True, but given the number of posts where users have had similar problems ingesting images into LR--where LR runs without crashes and further trouble once the images are in--the probative evidence points to some LR problem ingesting large numbers.
    It may also be that users are attempting to use LR for editing during the ingestion of large numbers--I found that I simply could not do that without a crash occurring. When I limited it to 2k at a time--leaving my hands off the keyboard--while the import occurred, everything went without a hitch.
    However, as previously pointed out, it shouldn't require that--none of my other DAMs using SQLite do that, and I can multitask while they are ingesting.
    But, you are right--multiple single causes--and complexly interrelated multiple causes--could account for it on a given configuration.

  • How can we suggest a new DBA OCE certification for very large databases?

    How can we suggest a new DBA OCE certification for very large databases?
    What web site, or what phone number can we call to suggest creating a VLDB OCE certification.
    The largest databases that I have ever worked with were barely over 1 trillion bytes.
    Some people told me that the results of being a DBA totally change when you have a VERY LARGE DATABASE.
    I could guess that maybe some of the following configuration topics might be on it:
    * Partitioning
    * parallel
    * bigger block size - DSS vs OLTP
    * etc
    Where could I send in a recommendation?
    Thanks Roger

    I wish there were some details about the OCE data warehousing.
    Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
    Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
    Overview of Data Warehousing
      Describe the benefits of a data warehouse
      Describe the technical characteristics of a data warehouse
      Describe the Oracle Database structures used primarily by a data warehouse
      Explain the use of materialized views
      Implement Database Resource Manager to control resource usage
      Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
    Parallelism
      Explain how the Oracle optimizer determines the degree of parallelism
      Configure parallelism
      Explain how parallelism and partitioning work together
    Partitioning
      Describe types of partitioning
      Describe the benefits of partitioning
      Implement partition-wise joins
    Result Cache
      Describe how the SQL Result Cache operates
      Identify the scenarios which benefit the most from Result Set Caching
    OLAP
      Explain how Oracle OLAP delivers high performance
      Describe how applications can access data stored in Oracle OLAP cubes
    Advanced Compression
      Explain the benefits provided by Advanced Compression
      Explain how Advanced Compression operates
      Describe how Advanced Compression interacts with other Oracle options and utilities
    Data integration
      Explain Oracle's overall approach to data integration
      Describe the benefits provided by ODI
      Differentiate the components of ODI
      Create integration data flows with ODI
      Ensure data quality with OWB
      Explain the concept and use of real-time data integration
      Describe the architecture of Oracle's data integration solutions
    Data mining and analysis
      Describe the components of Oracle's Data Mining option
      Describe the analytical functions provided by Oracle Data Mining
      Identify use cases that can benefit from Oracle Data Mining
      Identify which Oracle products use Oracle Data Mining
    Sizing
      Properly size all resources to be used in a data warehouse configuration
    Exadata
      Describe the architecture of the Sun Oracle Database Machine
      Describe configuration options for an Exadata Storage Server
      Explain the advantages provided by the Exadata Storage Server
    Best practices for performance
      Employ best practices to load incremental data into a data warehouse
      Employ best practices for using Oracle features to implement high performance data warehouses

  • Move large database to other server using RMAN in less downtime

    Hi,
    We have a large database, around 20 TB. We want to migrate (move) the database from one server to another. We do not want to use the standby option.
    1)     How can we move the database using RMAN with minimal downtime?
    2)     Other than RMAN, is there any option available to move the database to the new server?
    For option 1 (restore using RMAN),
    are the options below valid?
    If they are valid, how do we implement them?
    1)     How can we move the database using RMAN with minimal downtime?
    a)     Take the full backup from source (source db is up)
    b)     Restore the full backup in target (source db is up)
    c)     Take the incremental backup from source (source db is up)
    d)     Restore incremental backup in target (source db is up)
    e)     Do steps c and d, before taking downtime (source db is up)
    f)     Shutdown and mount the source db, and take the incremental backup (source db is down)
    g)     Restore the last incremental backup and start the target database (target is up and the application is accessing this new db)
    database version: 10.2.0.4
    OS: SUN solaris 10
    Edited by: Rajak on Jan 18, 2012 4:56 AM

    Simple:
    I do this all the time to relocate file system files... But the principle is the same. You can do this in iterations so you do not need to do it all at once:
    Starting at 8 AM, move less-used files, and more active files in the afternoon, using the following backup method.
    SCRIPT-1
    RMAN> BACKUP AS COPY
          DATAFILE 4   # "/some/orcl/datafile/users.dbf"
          FORMAT "+USERDATA";
    Do as many files as you think you can handle during your downtime window.
    During your downtime window: stop all applications so there is no contention in the database
    SCRIPT-2
    ALTER DATABASE DATAFILE 4 offline;
    SWITCH DATAFILE 4 TO COPY;
    RECOVER DATAFILE 4;
    ALTER DATABASE DATAFILE 4 online;
    I then execute the delete of the original file at some point later - after we make sure everything has recovered and been successfully brought back online.
    SCRIPT-3
    DELETE DATAFILECOPY "/some/orcl/datafile/users.dbf";
    For datafiles/tablespaces that are really busy, I typically copy them later in the afternoon as there are fewer archivelogs that it has to go through in order to make them consistent. The ones in the morning have more to go through, but less likelihood of there being anything to do.
    Using this method, we have moved upwards of 600 GB at a time, and the actual downtime to do the switchover is < 2 hrs. YMMV. As I said, this can be done in stages to minimize overall downtime.
    If you need some documentation support see:
    http://docs.oracle.com/cd/E11882_01/server.112/e18951/asm_rman.htm#CHDBDJJG
    And before you do ANYTHING... TEST TEST TEST TEST TEST. Create a dummy tablespace on QFS and use this procedure to move it to ASM to ensure you understand how it works.
    Good luck! (hint: scripts to generate these scripts can be your friend.)

  • Find deadlock and send notification

    Hi World,
    I am looking for a trigger to detect a deadlock and send me an email so that I can kill the offending session.
    Can you help me, please?

    proora, the condition you are speaking about is probably just a lock-waited task rather than a deadlock. A deadlock (simple form) is where task_A wants resources held by task_B and task_B wants resources held by task_A. Since both processes hold resources needed by the other, neither will ever be able to get the resources it needs to continue. Oracle will detect this situation and automatically choose a process and kill it so the other process can get the newly freed resources and complete. Oracle records when this occurs.
    Your situation is more likely a lock-waited task caused by one session performing DML and not issuing a timely commit to release the locks. Oracle provides a lock tree script in the $ORACLE_HOME/rdbms/admin directory, named something like utlltree.sql. Numerous lock-wait scripts have been posted on this and other forums.
    The other possibility is that the entire database is hung, but this situation is unlikely. Oracle support provides articles on what to do in the case of a hung database.
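    For completeness, here is a small sketch of one way to spot lock-waited sessions from Java, using the BLOCKING_SESSION column of V$SESSION (available in 10g and later); the connection details are hypothetical, and the account needs SELECT access on the V$ views:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FindBlockers {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "monitor", "password");
                 Statement st = con.createStatement();
                 // Sessions currently waiting on a lock held by another session.
                 ResultSet rs = st.executeQuery(
                     "SELECT sid, serial#, username, blocking_session " +
                     "FROM v$session WHERE blocking_session IS NOT NULL")) {
                while (rs.next()) {
                    System.out.printf("session %d,%d (%s) is blocked by session %d%n",
                            rs.getLong(1), rs.getLong(2),
                            rs.getString(3), rs.getLong(4));
                }
            }
        }
    }

    Run on a schedule, something like this could send the notification email instead of printing.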
    HTH -- Mark D Powell --

  • Sql Server Management Assistant (SSMA) Oracle okay for large database migrations?

    All:
    We don't have much experience with the SSMA (Oracle) tool and need some advice from those of you familiar with it.  We must migrate an Oracle 11.2.0.3.0 database to SQL Server 2014.  The Oracle database consists of approximately 25,000 tables and 30,000
    views and related indices.  The database is approximately 2.3 TB in size.
    Is this do-able using the latest version of SSMA-Oracle?  If so, how much horsepower would you throw at this to get it done?
    Any other gotchas and advice appreciated.
    Kindest Regards,
    Bill
    Bill Davidson

    Hi Bill,
    SSMA supports migrating large database of Oracle. To migrate Oracle database to SQL Server 2014, you could use the latest version:
    Microsoft SQL Server Migration Assistant v6.0 for Oracle. Before the migration, you should pay attention to the points below.
    1.The account that is used to connect to the Oracle database must have at least CONNECT permissions. This enables SSMA to obtain metadata from schemas owned by the connecting user. To obtain metadata for objects in other schemas and then convert objects
    in those schemas, the account must have the following permissions: CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE, SELECT ANY TABLE, SELECT ANY SEQUENCE, CREATE ANY TYPE, CREATE ANY TRIGGER, SELECT ANY DICTIONARY.
    2.Metadata about the Oracle database is not automatically refreshed. The metadata in Oracle Metadata Explorer is a snapshot of the metadata when you first connected, or the last time that you manually refreshed metadata. You can manually update metadata
    for all schemas, a single schema, or individual database objects. For more information about the process, please refer to the similar article: 
    https://msdn.microsoft.com/en-us/library/hh313203(v=sql.110).
    3.The account that is used to connect to SQL Server requires different permissions depending on the actions that the account performs as the following:
     • To convert Oracle objects to Transact-SQL syntax, to update metadata from SQL Server, or to save converted syntax to scripts, the account must have permission to log on to the instance of SQL Server.
     • To load database objects into SQL Server, the account must be a member of the sysadmin server role. This is required to install CLR assemblies.
     • To migrate data to SQL Server, the account must be a member of the sysadmin server role. This is required to run the SQL Server Agent data migration packages.
     • To run the code that is generated by SSMA, the account must have Execute permissions for all user-defined functions in the ssma_oracle schema of the target database. These functions provide equivalent functionality of Oracle system functions, and
    are used by converted objects.
     • If the account that is used to connect to SQL Server is to perform all migration tasks, the account must be a member of the sysadmin server role.
    For more information about the process, please refer to the  similar article: 
    https://msdn.microsoft.com/en-us/library/hh313158(v=sql.110)
    4.Metadata about SQL Server databases is not automatically updated. The metadata in SQL Server Metadata Explorer is a snapshot of the metadata when you first connected to SQL Server, or the last time that you manually updated metadata. You can manually update
    metadata for all databases, or for any single database or database object.
    5.If the engine being used is Server Side Data Migration Engine, then, before you can migrate data, you must install the SSMA for Oracle Extension Pack and the Oracle providers on the computer that is running SSMA. The SQL Server Agent service must also
    be running. For more information about how to install the extension pack, see Installing Server Components (OracleToSQL). And when SQL Express edition is used as the target database, only client side data migration is allowed and server side data migration
    is not supported. For more information about the process, please refer to the  similar article: 
    https://msdn.microsoft.com/en-us/library/hh313202(v=sql.110)
    For how to migrate Oracle Databases to SQL Server, please refer to the  similar article: 
    https://msdn.microsoft.com/en-us/library/hh313159(v=sql.110).aspx
    Regards,
    Michelle Li

  • Lockers become more and more numerous

    My application forks child processes periodically. These processes read DBs, do some analysis, and exit; they close all cursors, DBs, and the environment before exiting.
    I found that db_stat -CA shows more and more lockers as time goes on:
    chenyajun@netwise:data$ db_stat -CA
    Default locking region information:
    5027 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    5 Number of lock modes
    20000 Maximum number of locks possible
    1000 Maximum number of lockers possible
    1000 Maximum number of lock objects possible
    0 Number of current locks
    30 Maximum number of locks at any one time
    68 Number of current lockers
    93 Maximum number of lockers at any one time
    0 Number of current lock objects
    30 Maximum number of lock objects at any one time
    362597 Total number of locks requested
    362597 Total number of locks released
    0 Total number of lock requests failing because DB_LOCK_NOWAIT
    was set
    1 Total number of locks not immediately available due to
    conflicts
    0 Number of deadlocks
    60M Lock timeout value (60000000)
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    4MB 144KB The size of the lock region
    2 The number of region locks that required waiting (0%)
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock REGINFO information:
    Lock Region type
    3 Region ID
    __db.003 Region name
    0xb78da000 Original region address
    0xb78da000 Region address
    0xb7cfdf40 Region primary address
    0 Region maximum allocation
    0 Region allocated
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock region parameters:
    1031 locker table size
    1031 object table size
    4333280 obj_off
    0 osynch_off
    4325024 locker_off
    0 lsynch_off
    0 need_dd
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock conflict matrix:
    0 0 0 0 0
    0 0 1 0 0
    0 1 1 1 1
    0 0 0 0 0
    0 0 1 0 1
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by lockers:
    Locker Mode Count Status ----------------- Object
    c75 dd= 0 locks held 0 write locks 0
    64 dd=19 locks held 0 write locks 0
    c7c dd= 0 locks held 0 write locks 0
    6c dd=18 locks held 0 write locks 0
    87e dd= 0 locks held 0 write locks 0
    c86 dd= 0 locks held 0 write locks 0
    885 dd= 0 locks held 0 write locks 0
    c8d dd= 0 locks held 0 write locks 0
    c97 dd= 0 locks held 0 write locks 0
    87 dd=17 locks held 0 write locks 0
    c9e dd= 0 locks held 0 write locks 0
    8e dd=16 locks held 0 write locks 0
    498 dd= 0 locks held 0 write locks 0
    ca8 dd= 0 locks held 0 write locks 0
    49f dd= 0 locks held 0 write locks 0
    caf dd= 0 locks held 0 write locks 0
    a2 dd=15 locks held 0 write locks 0
    cb9 dd= 0 locks held 0 write locks 0
    cc0 dd= 0 locks held 0 write locks 0
    10cb dd= 0 locks held 0 write locks 0
    af dd=14 locks held 0 write locks 0
    cca dd= 0 locks held 0 write locks 0
    10d2 dd= 0 locks held 0 write locks 0
    cd1 dd= 0 locks held 0 write locks 0
    c7 dd=13 locks held 0 write locks 0
    d2 dd=12 locks held 0 write locks 0
    968 dd= 0 locks held 0 write locks 0
    96f dd= 0 locks held 0 write locks 0
    979 dd= 0 locks held 0 write locks 0
    572 dd= 0 locks held 0 write locks 0
    980 dd= 0 locks held 0 write locks 0
    579 dd= 0 locks held 0 write locks 0
    11a5 dd= 0 locks held 0 write locks 0
    11ac dd= 0 locks held 0 write locks 0
    1cb dd=11 locks held 0 write locks 0
    1d2 dd=10 locks held 0 write locks 0
    dfe dd= 0 locks held 0 write locks 0
    e05 dd= 0 locks held 0 write locks 0
    62d dd= 0 locks held 0 write locks 0
    634 dd= 0 locks held 0 write locks 0
    63e dd= 0 locks held 0 write locks 0
    645 dd= 0 locks held 0 write locks 0
    64f dd= 0 locks held 0 write locks 0
    656 dd= 0 locks held 0 write locks 0
    a71 dd= 0 locks held 0 write locks 0
    a78 dd= 0 locks held 0 write locks 0
    129f dd= 0 locks held 0 write locks 0
    12a6 dd= 0 locks held 0 write locks 0
    12b0 dd= 0 locks held 0 write locks 0
    296 dd= 9 locks held 0 write locks 0
    12b7 dd= 0 locks held 0 write locks 0
    12c1 dd= 0 locks held 0 write locks 0
    12c8 dd= 0 locks held 0 write locks 0
    2ac dd= 8 locks held 0 write locks 0
    ed8 dd= 0 locks held 0 write locks 0
    edf dd= 0 locks held 0 write locks 0
    b4b dd= 0 locks held 0 write locks 0
    b52 dd= 0 locks held 0 write locks 0
    38f dd= 0 locks held 0 write locks 0
    396 dd= 0 locks held 0 write locks 0
    7a4 dd= 0 locks held 0 write locks 0
    3a0 dd= 0 locks held 0 write locks 0
    7ab dd= 0 locks held 0 write locks 0
    3a7 dd= 0 locks held 0 write locks 0
    fc2 dd= 0 locks held 0 write locks 0
    fc9 dd= 0 locks held 0 write locks 0
    fd3 dd= 0 locks held 0 write locks 0
    fda dd= 0 locks held 0 write locks 0
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by object:
    Locker Mode Count Status ----------------- Object
    bdb version is 4.3.29
    Though the processes close all the relevant resources (cursors, DBs, environment) before exiting, why are there still so many lockers?
    I use CDB, not transaction-protected.
    init flags for my environment:
    DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB | DB_CDB_ALLDB

    Hi,
    Isn't it normal for the number of lockers to grow while the application is running? (The counter "93 Maximum number of lockers at any one time" will continue to grow once the application is running.) I'm not sure that your question was very clear.
    The maximum number of lockers can be estimated as follows:
    - If the database environment is using transactions, the maximum number of lockers can be estimated by adding the number of simultaneously active non-transactional cursors and open database handles to the number of simultaneously active transactions and child transactions (where a child transaction is active until it commits or aborts, not until its parent commits or aborts).
    - If the database environment is not using transactions, the maximum number of lockers can be estimated by adding the number of simultaneously active non-transactional cursors and open database handles to the number of simultaneous non-cursor operations.
    Doc: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/lock/max.html
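    To make the second rule concrete with made-up numbers: 40 concurrent processes, each with one open database handle and one non-cursor get/put in flight, would come to roughly 40 x (1 + 1) = 80 lockers, which is the right order of magnitude for the "93 Maximum number of lockers at any one time" in your output.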
    Regards,
    Bogdan Coman, Oracle

  • Migrating and Backing up Schemas (complex database structures)

    Hey guys, I need to figure out a way to back up and also migrate our Oracle database from our production schema to the dev schema and the other way around.
    We have a bunch of config tables that drive how systems on our platform run, and when setting up new systems or doing maintenance, we need to update our config tables. We want to be able to work on the dev schemas and, after setting up a system/feature, migrate all those configs over.
    I thought of running a procedure where we give the ID of the system (from the main table), and I would go through all the tables and select nvl(..): if the row doesn't exist, I would insert it, and if it does exist, I would just run an update on that row.
    This code will get very messy and complicated, especially since the whole config schema is very complex and it might be hard to handle all the keys properly.
    Another option I was looking at was triggers, so when setting up a new system, there would be a log of all the statements we ran while setting up/editing a system; then we would run it on our production schema.
    I'm on a co-op term and have only been working with databases for 6 months, so I don't know that much; any information/advice would be greatly appreciated.

    Yes, I guess it might not be migration; it's just what the project is called around here :)
    Regardless, yes, the initial idea was replication, which would create all the keys from scratch when creating the entries in the production schema, but we abandoned that idea; we'll be using the production sequences for creating the primary keys in development, and just making an exact copy back and forth...
    Do you think I can just start from the top of the hierarchy (even though it's not clearly defined, so the hierarchy might not be very straightforward) and loop through that table, insert or update, then move one level down, copy and update, and so on?
    Also, are there any sources where I can get more information regarding triggers?
    Edited by: tolgaek on Oct 27, 2010 10:04 AM

  • Best Practice to Atomically Read and Write a Field in a Database

    I come from a Java desktop application background. May I know what is the best practice in J2EE for atomically reading and writing a field in a database? Currently, here is what I do:
    // In Servlet.
    synchronized (private_static_final_object) {
        int counter = read_counter_from_database();
        counter++;
        write_counter_back_to_database(counter);
    }
    However, I am not sure the above method will work all the time.
    My observation is that, if I have several web requests at the same time, I am executing code within a single instance of the servlet, using different threads. The above method should work, as the different web request threads all refer to the same "private_static_final_object".
    However, my guess is that a "single instance of servlet" is not guaranteed. After some time span, the previous instance of the servlet may be destroyed, with another new instance of the servlet being created.
    I also came across http://code.google.com/appengine/docs/java/datastore/transactions.html in JDO. I am not sure whether they are going to solve the problem.
    // In Servlet.
    Transaction tx = pm.currentTransaction();
    tx.begin();
        int counter = read_counter_from_database();   // Line 1
        counter++;                                    // Line 2
        write_counter_back_to_database(counter);      // Line 3
    tx.commit();
    Does this code guarantee that only after Thread A finishes executing Line 1 through Line 3 atomically can Thread B continue to execute Line 1 through Line 3 atomically?
    I do not wish the following situation to happen:
    (1) Thread A read counter from Database as 0
    (2) Thread A increment counter to 1
    (3) Thread B read counter from Database as 0
    (4) Thread A write counter as 1 to database
    (5) Thread B increment counter to 1
    (6) Thread B write counter as 1 to database
    What I wish is
    (1) Thread A read counter from Database as 0
    (2) Thread A increment counter to 1
    (4) Thread A write counter as 1 to database
    (3) Thread B read counter from Database as 1
    (5) Thread B increment counter to 2
    (6) Thread B write counter as 2 to database
    Thank you.
    Edited by: yccheok on Oct 27, 2009 8:39 PM

    This is my understanding of the issue (you should research it further on your own to get a better understanding):
    I suggest you use local variables (i.e., defined within a function) and keep away from static variables. Those local variables are thread safe. If you call functions within functions, it's still thread safe. If you read or write one record in a database using SQL, it's thread safe (you don't need a transaction). If you read/write multiple tables and/or records, you probably need a transaction. Servlets are thread safe. You usually don't need the 'synchronized' keyword anywhere unless you have a function updating/reading a static variable and therefore want to ensure only one user is accessing the static variable at a time. Also do so if you are writing to some resource that everyone uses (such as a file, a variable in application or session scope, or an email server): you don't want more than one person at a time to write to it. Note the database is one of those shared resources, but it is handled by transactions rather than the synchronized keyword (the synchronized keyword applies to your application only, not other applications someone is running, whereas the transaction ensures all applications are locked out while you update those records in the database).
    By the way, if you have a static variable, you should have one and only one (synchronized) function that updates it, which everyone uses. If you have more than one synchronized function that updates it, it's probably not thread safe.
    An example of a static variable you would use is a Datasource object (to obtain your database connections). You only need one connection pool in your application and you access it via the datasource variable.
    If you're unsure your code is thread safe, you can create two separate threads that call the same block of functions repeatedly to ensure it works as expected.
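    One way to push the atomicity into the database itself, so it holds across servlet instances and even multiple JVMs, is to lock the row for the read-modify-write with SELECT ... FOR UPDATE inside a transaction. Here is a sketch of the idea; the counters table, its columns, and the DataSource wiring are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.sql.DataSource;

    public class CounterDao {
        private final DataSource ds;

        public CounterDao(DataSource ds) {
            this.ds = ds;
        }

        /** Atomically increments the counter row and returns the new value. */
        public int increment(String counterName) throws Exception {
            try (Connection con = ds.getConnection()) {
                con.setAutoCommit(false);
                try {
                    int counter;
                    // Row lock: another thread/JVM blocks here until we commit.
                    try (PreparedStatement sel = con.prepareStatement(
                            "SELECT value FROM counters WHERE name = ? FOR UPDATE")) {
                        sel.setString(1, counterName);
                        try (ResultSet rs = sel.executeQuery()) {
                            rs.next();
                            counter = rs.getInt(1);
                        }
                    }
                    counter++;
                    try (PreparedStatement upd = con.prepareStatement(
                            "UPDATE counters SET value = ? WHERE name = ?")) {
                        upd.setInt(1, counter);
                        upd.setString(2, counterName);
                        upd.executeUpdate();
                    }
                    con.commit();   // releases the row lock
                    return counter;
                } catch (Exception e) {
                    con.rollback();
                    throw e;
                }
            }
        }
    }

    If you never need the old value in the application, a single statement like UPDATE counters SET value = value + 1 WHERE name = ? is simpler and just as atomic.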

  • OS 10.4.11 - MacBook Pro Crashing more and more every day - why?

    Crashes more and more often - several times in a day.
    Hope someone can help. This has happened to some of my friends who just put up their hands and bought one of the newer macs. I've only had this one for a year, and I don't/can't put up the funds all over again. I really really need this to work for school and for my job. Please help!
    I have a MacBook Pro with OS 10.4.11.
    2.33 GHz Intel Core 2 Duo
    2 GB 667 MHz DDR2 SDRAM
    I tried booting from the Apple CD and doing verify/repair, but that hasn't fixed the crashes. I am not a tech-type person and I live VERY far from an Apple Store. My panic log and other recent logs are pasted below.
    Recent Panic Log:
    Sun Nov 9 13:26:50 2008
    panic(cpu 0 caller 0x001A49CB): Unresolved kernel trap (CPU 0, Type 14=page fault), registers:
    CR0: 0x8001003b, CR2: 0x0002003c, CR3: 0x00ecf000, CR4: 0x000006e0
    EAX: 0x00020020, EBX: 0x05909880, ECX: 0x00000000, EDX: 0x03df3640
    CR2: 0x0002003c, EBP: 0x251fbb48, ESI: 0x314d0000, EDI: 0x00000513
    EFL: 0x00010202, EIP: 0x003a42b6, CS: 0x00000008, DS: 0x00000010
    Backtrace, Format - Frame : Return Address (4 potential args on stack)
    0x251fb938 : 0x128d0d (0x3cc65c 0x251fb95c 0x131f95 0x0)
    0x251fb978 : 0x1a49cb (0x3d2a94 0x0 0xe 0x3d22b8)
    0x251fba88 : 0x19b3a4 (0x251fbaa0 0x1403b54 0x251fbab8 0x19a840)
    0x251fbb48 : 0x39ed48 (0x5909880 0x8000 0x251fbb78 0x3bf483)
    0x251fbb98 : 0x3bf414 (0x5909880 0x0 0x1 0x0)
    0x251fbbb8 : 0x3bf4e3 (0x5909880 0x0 0x251fbbe8 0x3b17000)
    0x251fbbd8 : 0x650cd0 (0x5909880 0x0 0x251fbc08 0x6518e9)
    0x251fbc08 : 0x651b06 (0x5963000 0x18 0x0 0x1)
    0x251fbc48 : 0x66d77b (0x5963000 0x3b17000 0x1 0x3a5b540)
    0x251fbc68 : 0x650dd3 (0x5963000 0x3b17000 0x5963004 0x12db2d)
    0x251fbc88 : 0x3b5499 (0x5963000 0x4b069c 0x251fbcd8 0x0)
    0x251fbca8 : 0x187b8a (0x5963000 0x596c6c0 0x1d 0x251fbcd8)
    0x251fbcf8 : 0x12b502 (0x4337fa8 0x578156c 0x2 0x11cc32)
    0x251fbd38 : 0x11f1f0 (0x4337f00 0x251fbdcc 0x24 0x0)
    0x251fbd78 : 0x12b6e5 (0x4337f00 0x10000 0x0 0x251fbd88)
    0x251fbda8 : 0x1494a1 (0x251fbdcc 0x24 0x0 0x0) Backtrace continues...
    Kernel loadable modules in backtrace (with dependencies):
    com.apple.ATIRadeonX1000(4.5.6)@0x64a000
    dependency: com.apple.iokit.IOPCIFamily(2.2)@0x610000
    dependency: com.apple.iokit.IOGraphicsFamily(1.4.8)@0x620000
    dependency: com.apple.iokit.IONDRVSupport(1.4.8)@0x63b000
    Kernel version:
    Darwin Kernel Version 8.11.1: Wed Oct 10 18:23:28 PDT 2007; root:xnu-792.25.20~1/RELEASE_I386
    Mon Nov 10 16:52:19 2008
    panic(cpu 1 caller 0x001A49CB): Unresolved kernel trap (CPU 1, Type 14=page fault), registers:
    CR0: 0x80010033, CR2: 0x33331e00, CR3: 0x00ecf000, CR4: 0x000006e0
    EAX: 0x33331e00, EBX: 0x016c4c60, ECX: 0x042d6ef0, EDX: 0x016c4c68
    CR2: 0x33331e00, EBP: 0x2512bb08, ESI: 0x00000014, EDI: 0x04b94940
    EFL: 0x00210206, EIP: 0x00141502, CS: 0x00000008, DS: 0x00000010
    Backtrace, Format - Frame : Return Address (4 potential args on stack)
    0x2512b8b8 : 0x128d0d (0x3cc65c 0x2512b8dc 0x131f95 0x0)
    0x2512b8f8 : 0x1a49cb (0x3d2a94 0x1 0xe 0x3d22b8)
    0x2512ba08 : 0x19b3a4 (0x2512ba20 0x3aff7000 0x5 0x0)
    0x2512bb08 : 0x12db2d (0x16c4c60 0x1 0x0 0x3)
    0x2512bb38 : 0x12db4c (0x14 0x1 0x2512bb98 0x2512bbb0)
    0x2512bb58 : 0x3bf5d1 (0x14 0x2512bb8c 0x131e99 0x131ffe)
    0x2512bb78 : 0x3c63d3 (0x14 0x3851460 0x3851464 0x2512bbac)
    0x2512bb98 : 0x3c4c1a (0x2512bbb0 0x3ebb54 0x35 0x3c6d3d)
    0x2512bbd8 : 0x3c3710 (0x4b94940 0x3e93940 0x1 0x3e48cb)
    0x2512bc38 : 0x3c2c90 (0x3e93940 0x4b94940 0x3f10bc 0x3c4dd8)
    0x2512bc78 : 0x3c01b0 (0x4ace8c0 0x4b94940 0x3f10a4 0x3c4d73)
    0x2512bca8 : 0x3c2c90 (0x4b9a200 0x4b94940 0x3f10bc 0x162a88)
    0x2512bce8 : 0x387b66 (0x3cb9180 0x4b94940 0x2512bd28 0x3c2528)
    0x2512bd08 : 0x38b367 (0x38b6a00 0x4b94940 0x23b2e470 0x38b6a00)
    0x2512bd28 : 0x3b2954 (0x38b6a00 0x4b94940 0x2512bd58 0x3bf4ff)
    0x2512bd78 : 0x18873c (0x38b6a00 0x4a04fb4 0x4a04fc8 0x11cc32) Backtrace continues...
    Kernel version:
    Darwin Kernel Version 8.11.1: Wed Oct 10 18:23:28 PDT 2007; root:xnu-792.25.20~1/RELEASE_I386
    Wed Nov 12 00:07:32 2008
    panic(cpu 0 caller 0x003BE004): OSObject::_RESERVEDOSObject9 called
    Backtrace, Format - Frame : Return Address (4 potential args on stack)
    0x2532b498 : 0x128d0d (0x3cc65c 0x2532b4bc 0x131f95 0x0)
    0x2532b4d8 : 0x3be004 (0x3f2a84 0x3f2c08 0x3f2c08 0x9)
    0x2532b4f8 : 0x3bf27c (0x4b090c 0x9 0x2532b528 0x19a706)
    0x2532b518 : 0x3c0ebf (0x3d679e0 0x4b0970 0x2532b558 0x19a809)
    0x2532b538 : 0x3c0f13 (0x3b7b9e0 0x3d679e0 0x2532b558 0x0)
    0x2532b558 : 0x3a6fb7 (0x3d679e0 0x4820048 0x25320010 0x2f28e000)
    0x2532b5b8 : 0x3a6e37 (0x4825e00 0x4825e00 0x3831da0 0x0)
    0x2532b5e8 : 0x66e80c (0x4825e00 0x3831da0 0x0 0x1)
    0x2532b6b8 : 0x66f838 (0x40ee800 0x480c380 0x2f750000 0x533ac000)
    0x2532b718 : 0x671e6e (0x40ee800 0x480c380 0x45cf00 0xd0)
    0x2532bc68 : 0x65254c (0x40ee800 0x2532bcd0 0x5afa9a4 0x5afa9a8)
    0x2532bcf8 : 0x3b19b9 (0x40ee800 0x1 0x2532bd2c 0x2532bd28)
    0x2532bd38 : 0x3b4e75 (0x40ee800 0x1 0x382ecb0 0x1)
    0x2532bd68 : 0x189fdc (0x40ee800 0x1 0x382ecb0 0x5afa9c0)
    0x2532bdb8 : 0x12b4ee (0x5afa98c 0x5afa7a0 0x2532bdf8 0x11e042)
    0x2532bdf8 : 0x124b17 (0x5afa900 0x37fb4f8 0x3fe6238 0x0) Backtrace continues...
    Kernel loadable modules in backtrace (with dependencies):
    com.apple.ATIRadeonX1000(4.5.6)@0x64a000
    dependency: com.apple.iokit.IOPCIFamily(2.2)@0x610000
    dependency: com.apple.iokit.IOGraphicsFamily(1.4.8)@0x620000
    dependency: com.apple.iokit.IONDRVSupport(1.4.8)@0x63b000
    Kernel version:
    Darwin Kernel Version 8.11.1: Wed Oct 10 18:23:28 PDT 2007; root:xnu-792.25.20~1/RELEASE_I386
    Mon Nov 17 00:12:18 2008
    panic(cpu 1 caller 0x001A49CB): Unresolved kernel trap (CPU 1, Type 14=page fault), registers:
    CR0: 0x8001003b, CR2: 0xeeeeeef2, CR3: 0x00ecf000, CR4: 0x000006e0
    EAX: 0x00010001, EBX: 0xffffffff, ECX: 0x04297b8c, EDX: 0xeeeeeeee
    CR2: 0xeeeeeef2, EBP: 0x2520bec8, ESI: 0x7fffffff, EDI: 0x03b70288
    EFL: 0x00210887, EIP: 0x0013d832, CS: 0x00000008, DS: 0x044f0010
    Backtrace, Format - Frame : Return Address (4 potential args on stack)
    0x2520bc98 : 0x128d0d (0x3cc65c 0x2520bcbc 0x131f95 0x0)
    0x2520bcd8 : 0x1a49cb (0x3d2a94 0x1 0xe 0x3d22b8)
    0x2520bde8 : 0x19b3a4 (0x2520be00 0x5a 0x2520bf08 0x1f15d027)
    0x2520bec8 : 0x130542 (0x3b70288 0xffffffff 0x7fffffff 0x2520beec)
    0x2520bf08 : 0x195f2e (0x2520bf44 0x0 0x0 0x0)
    0x2520bfc8 : 0x19b81e (0x42ec960 0x0 0x19e0b5 0x384d718) No mapping exists for frame pointer
    Backtrace terminated-invalid frame pointer 0xbfffea38
    Kernel version:
    Darwin Kernel Version 8.11.1: Wed Oct 10 18:23:28 PDT 2007; root:xnu-792.25.20~1/RELEASE_I386
    Fri Nov 21 02:59:20 2008
    panic(cpu 0 caller 0x0019A516): simple lock deadlock detection - l=00466260, cpu=0, ret=00000000
    Backtrace, Format - Frame : Return Address (4 potential args on stack)
    0x2522bb58 : 0x128d0d (0x3cc65c 0x2522bb7c 0x131f95 0x0)
    0x2522bb98 : 0x19a516 (0x3d1478 0x466260 0x0 0x0)
    0x2522bbb8 : 0x13d46a (0x466260 0x283 0x2522bc08 0x140a17)
    0x2522bbd8 : 0x72ae7a (0x3e7eea0 0x38c0b40 0x2522bc08 0x168ec68)
    0x2522bc18 : 0x72a610 (0x3e6bc00 0x2522bc5c 0x2522bc48 0x3c0d78)
    0x2522bc38 : 0x73c3c3 (0x3e6bc00 0x2522bc5c 0x2522bc68 0x3bf37a)
    0x2522bc78 : 0x38aff4 (0x3e6bc00 0x0 0x47a3f00 0x3bf4e3)
    0x2522bca8 : 0x38bbb0 (0x47a3f00 0x3e66d40 0x0 0x37fec80)
    0x2522bce8 : 0x38f5a3 (0x47a3f00 0x381de60 0x0 0xffffffff)
    0x2522bd28 : 0x38f755 (0x47a3f00 0x4 0x47a3f04 0x199793)
    0x2522bd48 : 0x5b9696 (0x47a3f00 0x0 0x2522bd78 0x1a3736)
    0x2522bd68 : 0x3b2efa (0x47a3f00 0x4b069c 0x2522bdb8 0x1985e1)
    0x2522bd88 : 0x188817 (0x47a3f00 0x418c6cc 0x2522bda8 0x19e23a)
    0x2522bdb8 : 0x12b4ee (0x4c27cb4 0x4c3cfa8 0x0 0x0)
    0x2522bdf8 : 0x124b17 (0x4c27c00 0x0 0x18 0x2522bedc)
    0x2522bf08 : 0x195f2e (0x2522bf44 0x0 0x0 0x0) Backtrace continues...
    Kernel loadable modules in backtrace (with dependencies):
    com.apple.driver.ACPISMCPlatformPlugin(2.7.3d4)@0x73a000
    dependency: com.apple.iokit.IOPCIFamily(2.2)@0x610000
    dependency: com.apple.driver.IOPlatformPluginFamily(2.7.3d4)@0x724000
    dependency: com.apple.iokit.IOACPIFamily(1.2.0)@0x6b2000
    dependency: com.apple.driver.AppleSMC(1.3.0d1)@0x732000
    com.apple.driver.IOPlatformPluginFamily(2.7.3d4)@0x724000
    com.apple.iokit.IOUSBFamily(2.7.7)@0x5a5000
    Kernel version:
    Darwin Kernel Version 8.11.1: Wed Oct 10 18:23:28 PDT 2007; root:xnu-792.25.20~1/RELEASE_I386
    Fri Nov 21 18:54:52 2008
    panic(cpu 1 caller 0x001A49CB): Unresolved kernel trap (CPU 1, Type 14=page fault), registers:
    CR0: 0x8001003b, CR2: 0x00008400, CR3: 0x00e99000, CR4: 0x000006e0
    EAX: 0x00008400, EBX: 0x0000001c, ECX: 0x00000000, EDX: 0x0168ec68
    CR2: 0x00008400, EBP: 0x2523bbc8, ESI: 0x00000018, EDI: 0x00000000
    EFL: 0x00010206, EIP: 0x0033ebbf, CS: 0x00000008, DS: 0x00ab0010
    Backtrace, Format - Frame : Return Address (4 potential args on stack)
    0x2523b9b8 : 0x128d0d (0x3cc65c 0x2523b9dc 0x131f95 0x0)
    0x2523b9f8 : 0x1a49cb (0x3d2a94 0x1 0xe 0x3d22b8)
    0x2523bb08 : 0x19b3a4 (0x2523bb20 0x35275038 0x2523bb48 0x127487)
    0x2523bbc8 : 0x314208 (0x18 0x50 0x0 0x3d53de4)
    0x2523bc18 : 0x2fe8a8 (0xeb2000 0x0 0xeb2fff 0x0)
    0x2523bd28 : 0x1e5ab1 (0x2523bd54 0x297 0x2523bd88 0x1d172b)
    0x2523bd88 : 0x1e07dc (0x4baa39c 0x2523be9c 0x11 0x2523bde8)
    0x2523be08 : 0x35071b (0x4114120 0x2523be9c 0x3ed6a04 0x1)
    0x2523bef8 : 0x350893 (0x40dd1f4 0x4114120 0x15 0x1bc77840)
    0x2523bf58 : 0x37b300 (0x40dd1f4 0x4112878 0x41128bc 0x0)
    0x2523bfc8 : 0x19b77e (0x42ed180 0x0 0x19e0b5 0x4510530) No mapping exists for frame pointer
    Backtrace terminated-invalid frame pointer 0xb021ce18
    Kernel version:
    Darwin Kernel Version 8.11.1: Wed Oct 10 18:23:28 PDT 2007; root:xnu-792.25.20~1/RELEASE_I386
    Fri Nov 21 20:35:27 2008
    panic(cpu 1 caller 0x001A49CB): Unresolved kernel trap (CPU 1, Type 14=page fault), registers:
    CR0: 0x80010033, CR2: 0x0041fa4c, CR3: 0x00e99000, CR4: 0x000006e0
    EAX: 0x0041fa40, EBX: 0x04658224, ECX: 0x0397cc80, EDX: 0x03aa3274
    CR2: 0x0041fa4c, EBP: 0x25253c88, ESI: 0x0465821c, EDI: 0x03aa3270
    EFL: 0x00010002, EIP: 0x0013f301, CS: 0x00000008, DS: 0x037c0010
    Backtrace, Format - Frame : Return Address (4 potential args on stack)
    0x25253a58 : 0x128d0d (0x3cc65c 0x25253a7c 0x131f95 0x0)
    0x25253a98 : 0x1a49cb (0x3d2a94 0x1 0xe 0x3d22b8)
    0x25253ba8 : 0x19b3a4 (0x25253bc0 0x1103 0x168ecb8 0x0)
    0x25253c88 : 0x13f3d7 (0x465821c 0x3aa3270 0x41fa40 0x19e23a)
    0x25253ca8 : 0x11f6e9 (0x465821c 0x3aa3270 0x25253ce8 0x121b41)
    0x25253d18 : 0x12195c (0x465821c 0x3aa3270 0x25253d38 0x37c49a0)
    0x25253d38 : 0x12720d (0x3aa3258 0x46581e0 0x1 0x25253d6c)
    0x25253d78 : 0x14b516 (0x37c49a0 0x2803 0x2003 0x11cc32)
    0x25253db8 : 0x12b4ee (0x49dd3a4 0x443f2a8 0x0 0x0)
    0x25253df8 : 0x124b17 (0x49dd300 0x0 0x28 0x25253edc)
    0x25253f08 : 0x195f2e (0x25253f44 0x0 0x0 0x0)
    0x25253fc8 : 0x19b81e (0x43ab2e0 0x0 0x19e0b5 0x38140d8) No mapping exists for frame pointer
    Backtrace terminated-invalid frame pointer 0xbffff738
    Kernel version:
    Darwin Kernel Version 8.11.1: Wed Oct 10 18:23:28 PDT 2007; root:xnu-792.25.20~1/RELEASE_I386
    Sat Nov 22 22:11:45 2008
    panic(cpu 0 caller 0x003BDB14): OSMetaClass::_RESERVEDOSMetaClass0 called
    Backtrace, Format - Frame : Return Address (4 potential args on stack)
    0x2506baf8 : 0x128d0d (0x3cc65c 0x2506bb1c 0x131f95 0x0)
    0x2506bb38 : 0x3bdb14 (0x3f28c4 0x0 0x2506bb68 0x3a0e140)
    0x2506bb58 : 0x3b540f (0x3e1b180 0x4b0958 0x4ae90c4 0x12db2d)
    0x2506bb78 : 0x3b5467 (0x4ae90c0 0x1e 0x2506bbc8 0x4473980)
    0x2506bb98 : 0x187b8a (0x4ae90c0 0x47339c0 0x1e 0x2506bbc8)
    0x2506bbe8 : 0x12b502 (0x474baa8 0x4e6556c 0x6 0x11cc32)
    0x2506bc28 : 0x11f1f0 (0x474ba00 0x2506bcbc 0x24 0x0)
    0x2506bc68 : 0x12b6e5 (0x474ba00 0x10000 0x0 0x0)
    0x2506bc98 : 0x1494a1 (0x2506bcbc 0x24 0x0 0x10c)
    0x2506bce8 : 0x120221 (0x47339c0 0x1 0x23b0a000 0x0)
    0x2506bd08 : 0x12291e (0x47339c0 0x1 0x23b0a5b0 0x23b0a5b0)
    0x2506bd48 : 0x126501 (0x37c54a0 0x5b07 0x23b0a5b0 0x12ce10)
    0x2506bd78 : 0x149ea6 (0x37c54a0 0x5b07 0x7 0x11cc32)
    0x2506bdb8 : 0x12b4ee (0x38da2a8 0x47463a8 0x0 0x0)
    0x2506bdf8 : 0x124b17 (0x38da200 0x0 0x24 0x2506bedc)
    0x2506bf08 : 0x195f2e (0x2506bf44 0x0 0x0 0x0) Backtrace continues...
    Kernel version:
    Darwin Kernel Version 8.11.1: Wed Oct 10 18:23:28 PDT 2007; root:xnu-792.25.20~1/RELEASE_I386
    (I think the crash log below is just from my iPhone - but I am including it)
    Recent crash log in mobile folder in crash folder:
    Incident Identifier: CE90C089-EDC2-4D5D-AF95-340D094C3463
    CrashReporter Key: 385c0d9384a508ee9988570e5f38ee35a04fd3d0
    OS Version: iPhone OS 2.1 (5F136)
    Date: 2008-11-19 17:49:46 -0500
    2056192 bytes free
    30715904 bytes wired
    0 bytes purgeable
    Memory status: 8
    About to jettison: MobileSafari
    Processes
    PID RPRVT RSHRD RSIZE Command
    1 252K 240K 364K launchd
    13 72.0K 144K 92.0K update
    14 224K 184K 288K syslogd
    15 620K 292K 920K lockdownd
    16 1.51M 348K 1.95M mediaserverd
    17 304K 156K 428K mDNSResponder
    19 652K 252K 1.31M iapd
    20 232K 252K 408K fairplayd
    21 596K 268K 984K configd
    22 6.53M 16.3M 9.54M SpringBoard
    25 984K 336K 1.39M CommCenter
    26 1000K 424K 1.36M BTServer
    28 240K 168K 288K notifyd
    36 7.09M 10.4M 8.71M MobilePhone
    1473 232K 156K 344K SCHelper
    1628 19.8M 14.7M 30.8M MobileSafari
    1632 296K 216K 1.12M ReportCrash
    *End*

    Back up your files. Erase the drive and reinstall the OS. If that doesn't fix it, then it's probably a hardware issue.

  • Why are my Word and PDF files so large? How can I reduce the size?

    My document contains mostly photos from a database, via a path to the actual JPEG photos on disk.
    There are 39 JPEG photos plus a small amount of text with each photo.
    The 39 JPEG photos on the disk add up to 5.79 MB.
    If I export the Crystal Reports document as [Microsoft Word (97-2003) Non Editable], the file size is 36.8 MB.
    If I export the same data as [Microsoft Word (97-2003) Editable], the file size is only 3.1 MB.
    That is a difference of 33.7 MB! Why?
    Also, when I export the same data to a PDF file, it is much larger than I think it should be: 12.7 MB.
    Why are my Word and PDF files so large?
    How can I reduce the size of the Word and PDF files (without reducing the photo quality)?
    Are there any tools to post process the Word or PDF files to reduce the file size?

    You can't; the extra size is needed to hold the details.
