OVM Repository and VM Guest Backups - Best Practice?

Hey all,
Does anybody out there have any tips or best practices for backing up the OVM Repository as well as (of course) the VMs? We are using NFS exclusively and have the ability to take snapshots at the storage level.
The main things we'd like to be able to do (without using a backup agent within each VM):
backup/recovery of the entire VM Guest
single file restore of a file within a VM Guest
backup/recovery of the entire repository.
The single file restore is probably the most difficult and manual of these. The rest can be done by hand from the .snapshot directories, but when we're talking about hundreds and hundreds of guests within OVM, that isn't overly appealing to me.
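To illustrate how manual that single file restore currently is, one rough (and unsupported) sketch would be to loop-mount a guest's virtual disk straight out of the filer's read-only .snapshot directory on a spare server. Every path and image name below is a made-up placeholder, and it assumes the guest disk is plain-partitioned (no LVM) with losetup/kpartx available:
# placeholder paths throughout; the repository is an NFS mount and the filer
# exposes read-only snapshots under .snapshot
SNAP=/OVS/Repositories/0004fb00example/.snapshot/hourly.0/VirtualDisks
IMG=$SNAP/0004fb00exampledisk.img             # placeholder: the guest's virtual disk image
LOOPDEV=$(losetup --show -f -r "$IMG")        # attach the image read-only
kpartx -a -v "$LOOPDEV"                       # creates /dev/mapper/loopXpN partition entries
mkdir -p /mnt/restore
mount -o ro /dev/mapper/$(basename "$LOOPDEV")p1 /mnt/restore
cp /mnt/restore/path/to/the/lost/file /tmp/   # pull out the single file
umount /mnt/restore                           # clean up
kpartx -d "$LOOPDEV"
losetup -d "$LOOPDEV"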
OVM has this lovely manner of naming its underlying VM directories after some ambiguous number which has nothing to do with the name of the VM (I've been told this is changing in an upcoming release).
Brent

Please find below the response from Oracle support on this.
In short:
- First, "manual" copies of files into the repository are neither recommended nor supported.
- Second, we have to go back and forth through templates and an HTTP (or FTP) server.
Note that when creating a template or creating a new VM from a template, we're talking about full copies. No "fast clones" (snapshots) are involved.
This is ridiculous.
How to back up a VM:
1) Create a template from the OVM Manager console.
Note: Creating a template requires the VM to be stopped (if the copy of the virtual disk is done while the VM is running, the data will be corrupted), and the process of creating the template makes changes to the vm.cfg.
2) Enable storage repository backups using the steps described here:
http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-storage-repo-config.html#vmusg-repo-backup
3) Mount the NFS export created above on another server.
4) Then create a compressed file (tgz) containing the relevant files (vm.cfg + .img) from the repository NFS mount.
Here is an example listing of such a template archive:
$ tar tf OVM_EL5U2_X86_64_PVHVM_4GB.tgz
OVM_EL5U2_X86_64_PVHVM_4GB/
OVM_EL5U2_X86_64_PVHVM_4GB/vm.cfg
OVM_EL5U2_X86_64_PVHVM_4GB/System.img
OVM_EL5U2_X86_64_PVHVM_4GB/README
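For reference, assembling such an archive from the repository NFS mount might look something like the rough sketch below; every path and ID is a placeholder, and it assumes the usual Oracle VM 3 repository layout (Templates and VirtualDisks directories):
REPO=/mnt/ovm-repo                     # placeholder: repository NFS export mounted on the backup server
TEMPLATE_ID=0004fb00exampletemplate    # placeholder: template directory created in step 1
DISK_ID=0004fb00exampledisk            # placeholder: the template's virtual disk ID
mkdir -p /backup/OVM_EL5U2_X86_64_PVHVM_4GB
cp "$REPO/Templates/$TEMPLATE_ID/vm.cfg" /backup/OVM_EL5U2_X86_64_PVHVM_4GB/
cp "$REPO/VirtualDisks/$DISK_ID.img"     /backup/OVM_EL5U2_X86_64_PVHVM_4GB/System.img
cd /backup && tar czf OVM_EL5U2_X86_64_PVHVM_4GB.tgz OVM_EL5U2_X86_64_PVHVM_4GB/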
How to restore a VM:
1) Upload the compressed file (tgz) to an HTTP, HTTPS or FTP server.
2) Import it into OVM Manager as a template using the following instructions:
http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-repo.html#vmusg-repo-template-import
3) Clone the virtual machine from the template imported above using the following instructions:
http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-vm-clone.html#vmusg-vm-clone-image

Similar Messages

  • Re: OVM Repository and VM Guest Backups - Best Practice?

    Hi,
    I have also been looking into how to back up an OVM Repository, and I'm currently thinking of doing it with the OCFS2 reflink command, which is what is used by the OVMM 'Thin Clone' option to create a snapshot of a virtual disk's .img file. I thought I could create a script that reflinks all the virtual disks to a separate directory within the repository, then export the repository via OVMM and back up all the snapshots. All the snapshots could then be deleted once the backup is complete.
    The VirtualMachines directory could also be backed up in the same way, or I would have thought it would be safe to back up this directory directly, as the vm.cfg files are very small and change infrequently.
    I would be interested to hear from anyone who has experience of doing a similar thing, or has any advice about whether this would be doable.
    Regards,
    Andy

    Yes, that is one common way to perform backups. Unfortunately, you'll have to script those things yourself. Some people also use the xm command to pause the VM for just the time it takes to create the reflinks (especially if there is more than one virtual disk in the machine and you want to make sure they are consistent); see the sketch at the end of this reply.
    You can read a bit more about it here:
    VM Manager and VM Server - backup and recovery options
    There's a great article about this (in German, but you can follow the scripts): http://www.trivadis.com/fileadmin/user_upload/PDFs/Trivadis_in_der_Presse/120601_DOAG-News_Snapshot_einer_VM_mit_Oracle_VM_3.pdf
    I have also blogged about an alternative approach where you clone a running machine and then back up that cloned (and stopped) machine:
    http://portrix-systems.de/blog/brost/taking-hot-backups-with-oracle-vm/
    cheers
    bjoern
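    To make that a little more concrete, here is a minimal sketch of the reflink approach described above. The repository mount, VM ID and staging directory are all placeholders; it assumes the repository is OCFS2 with the reflink utility from ocfs2-tools present, and the optional xm pause/unpause keeps multiple disks consistent as suggested above:
    REPO=/OVS/Repositories/0004fb00example        # placeholder: OCFS2 repository mount on the Oracle VM Server
    VM=0004fb00examplevm                          # placeholder: the VM's ID / directory name
    SNAPDIR=$REPO/Backups/$(date +%Y%m%d-%H%M)    # placeholder: staging area for the reflink snapshots
    mkdir -p "$SNAPDIR"
    xm pause "$VM"                                # optional: pause so all disks are consistent
    for disk in "$REPO"/VirtualDisks/*.img; do    # in practice, only the disks belonging to $VM
        reflink "$disk" "$SNAPDIR/$(basename "$disk")"
    done
    xm unpause "$VM"
    cp "$REPO/VirtualMachines/$VM/vm.cfg" "$SNAPDIR/"   # the small cfg file mentioned above
    # ... back up $SNAPDIR with your usual tools, then remove the snapshots ...
    rm -rf "$SNAPDIR"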

  • Portal backup best practice!!

    Hi ,
    Can anyone let me know where I can get Portal backup best practices? We are using SAP EP 7.
    Thanks,
    Raghavendra Pothula

    Hi Tim: Here's my basic approach for this -- I create either a portal dynamic page or a stored procedure that renders an HTML parameter form. You can connect to the database and render whatever sort of drop-downs, check boxes, etc. you desire. To tie everything together, just make sure that when you create the form, the names of the fields match those of the page parameters created on the page. This way, when the form posts to the same page, it appends the values for the page parameters to the URL.
    By coding the entire form yourself, you avoid the inherent limitations of the simple parameter form. You can also use advanced JavaScript to dynamically update the drop downs based on the values selected or can cause the form to be submitted and update the other drop downs from the database if desired.
    Unfortunately, it is beyond the scope of this forum to give you full technical details, but that is the approach I have used on a number of portal sites. Hope it helps!
    Rgds/Mark M.

  • E-business backup best practices

    What are E-Business Suite backup best practices, tools or techniques?
    For example, what we do now is take a copy of the Oracle folder on the D partition every two days,
    but it takes a long time, and I believe there is a better way.
    We are on Windows Server 2003 and E-Business Suite 11i.

    user11969921 wrote:
    What are E-Business Suite backup best practices, tools or techniques?
    For example, what we do now is take a copy of the Oracle folder on the D partition every two days,
    but it takes a long time, and I believe there is a better way.
    We are on Windows Server 2003 and E-Business Suite 11i.
    Please see previous threads for the same topic/discussion -- https://forums.oracle.com/search.jspa?view=content&resultTypes=&dateRange=all&q=backup+EBS&rankBy=relevance
    Please also see RMAN manuals (incremental backup, hot backup, ..etc) -- http://www.oracle.com/pls/db112/portal.portal_db?selected=4&frame=#backup_and_recovery
    Thanks,
    Hussein
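    As a very rough illustration of the RMAN route mentioned above (not EBS-specific; it assumes the database runs in ARCHIVELOG mode, and the SID and paths are placeholders; on Windows Server 2003 the equivalent would be a command file run by a scheduled task rather than this Unix-style wrapper):
    export ORACLE_SID=PROD                            # placeholder SID
    rman target / cmdfile=/backup/scripts/level0.rcv  # placeholder path to the RMAN command file
    # where level0.rcv might contain, for example:
    #   CONFIGURE CONTROLFILE AUTOBACKUP ON;
    #   BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
    #   DELETE NOPROMPT OBSOLETE;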

  • When I share a file to YouTube, where does the output file live? I also want to make a DVD. And is this a best practice, or is there a better way?

    I also want to make a DVD, but can't see where the .mov files are.
    And is this a best practice, or is there a better way to do this, such as with a master file?
    thanks,
    /john

    I would export to a file saved on your drive as H.264, same frame size, then upload that to YouTube.
    I have never used FCP X to make a DVD, but I assume that it will build the needed VOB/MPEG-2 source material for the disc.
    I used to use Toast and iDVD. Toast is great.
    To "see" the files created by FCP X 10.1.1 for YouTube, right-click (Control-click) the library icon in your Movies folder, choose Show Package Contents, then browse to "project"/share.

  • Backup "best practices" scenarios?

    I'd be grateful for a discussion of different "best practices" regarding backups, taking into consideration that a Time Machine backup is only as good as the external disk it is on.
    My husband & I currently back up our two Macs, each to its own 500 GB hard disk using TM. We have a 1TB disk and I was going to make that a periodic repository for backups for each of our Macs, in case one of the 500GB disks fails, but was advised by a freelance Mac tech not to do that. This came up in the middle of talking about other things, and now I cannot remember if he said why.
    We need the general backup, and also (perhaps a separate issue) I am particularly interested in safeguarding our iPhoto libraries. Other forms of our data can be redundantly backed up to DVDs, but with 50GB iPhoto libraries that doesn't seem feasible, nor is backing up to some online storage facility. My own iPhoto library is too large for my internal disk; putting the working library on an external disk is discouraged by the experts in the iPhoto forum (slow access, possible loss if the connection between computer & disk is interrupted during transfer), so my options seem to be getting a larger internal disk, or using the internal disk just for the most recent photos and keeping the entire library on an external disk, independent of incremental TM backups.

    It's probably more than you ever wanted to know about backups, but some of this might be useful:
    There are three basic types of backup applications: *Bootable Clone, Archive, and Time Machine.*
    This is a general explanation and comparison of the three types. Many variations exist, of course, and some combine features of others.
    |
    _*BOOTABLE "CLONE"*_
    |
    These make a complete, "bootable" copy of your entire system on an external disk/partition, a second internal disk/partition, or a partition of your internal disk.
    Advantages
    If your internal HD fails, you can boot and run from the clone immediately. Your Mac may run a bit slower, but it will run, and contain everything that was on your internal HD at the time the clone was made or last updated. (But of course, if something else critical fails, this won't work.)
    You can test whether it will run, just by booting-up from it (but of course you can't be positive that everything is ok without actually running everything).
    If it's on an external drive, you can easily take it off-site.
    Disadvantages
    Making an entire clone takes quite a while. Most of the cloning apps have an update feature, but even that takes a long time, as they must examine everything on your system to see what's changed and needs to be backed-up. Since this takes lots of time and CPU, it's usually not practical to do this more than once a day.
    Normally, it only contains a copy of what was on your internal HD when the clone was made or last updated.
    Some do have a feature that allows it to retain the previous copy of items that have been changed or deleted, in the fashion of an archive, but of course that has the same disadvantages as an archive.
    |
    _*TRADITIONAL "ARCHIVE" BACKUPS*_
    |
    These copy specific files and folders, or in some cases, your entire system. Usually, the first backup is a full copy of everything; subsequently, they're "incremental," copying only what's changed.
    Most of these will copy to an external disk; some can go to a network location, some to CDs/DVDs, or even tape.
    Advantages
    They're usually fairly simple and reliable. If the increments are on separate media, they can be taken off-site easily.
    Disadvantages
    Most have to examine everything to determine what's changed and needs to be backed-up. This takes considerable time and lots of CPU. If an entire system is being backed-up, it's usually not practical to do this more than once, or perhaps twice, a day.
    Restoring an individual item means you have to find the media and/or file it's on. You may have to dig through many incremental backups to find what you're looking for.
    Restoring an entire system (or large folder) usually means you have to restore the most recent Full backup, then each of the increments, in the proper order. This can get very tedious and error-prone.
    You have to manage the backups yourself. If they're on an external disk, sooner or later it will get full, and you have to do something, like figure out what to delete. If they're on removable media, you have to store them somewhere appropriate and keep track of them. In some cases, if you lose one in the "string" (or it can't be read), you've lost most of the backup.
    |
    _*TIME MACHINE*_
    |
    Similar to an archive, TM keeps copies of everything currently on your system, plus changed/deleted items, on an external disk, Time Capsule (or USB drive connected to one), internal disk, or shared drive on another Mac on the same local network.
    Advantages
    Like many Archive apps, it first copies everything on your system, then does incremental backups of additions and changes. But TM's magic is, each backup appears to be a full one: a complete copy of everything on your system at the time of the backup.
    It uses an internal OSX log of what's changed to quickly determine what to copy, so most users can let it do its hourly incremental backups without much effect on system performance. This means you have a much better chance to recover an item that was changed or deleted in error, or corrupted.
    Recovery of individual items is quite easy, via the TM interface. You can browse your backups just as your current data, and see "snapshots" of the entire contents at the time of each backup. You don't have to find and mount media, or dig through many files to find what you're looking for.
    You can also recover your entire system (OSX, apps, settings, users, data, etc.) to the exact state it was in at the time of any backup, even if that's a previous version of OSX.
    TM manages its space for you, automatically. When your backup disk gets near full, TM will delete your oldest backup(s) to make room for new ones. But it will never delete its copy of anything that's still on your internal HD, or was there at the time of any remaining backup. So all that's actually deleted are copies of items that were changed or deleted long ago.
    TM examines each file it's backing up; if it's incomplete or corrupted, TM may detect that and fail, with a message telling you which file it is. That way, you can fix it immediately, rather than days, weeks, or months later when you try to use it.
    Disadvantages
    It's not bootable. If your internal HD fails, you can't boot directly from your TM backups. You must restore them, either to your repaired/replaced internal HD or an external disk. This is a fairly simple, but of course lengthy, procedure.
    TM doesn't keep its copies of changed/deleted items forever, and you're usually not notified when it deletes them.
    It is fairly complex, and somewhat new, so may be a bit less reliable than some others.
    |
    RECOMMENDATION
    |
    For most non-professional users, TM is simple, workable, and maintenance-free. But it does have its disadvantages.
    That's why many folks use both Time Machine and a bootable clone, to have two, independent backups, with the advantages of both. If one fails, the other remains. If there's room, these can be in separate partitions of the same external drive, but it's safer to have them on separate drives, so if either app or drive fails, you still have the other one.
    |
    _*OFF-SITE BACKUPS*_
    |
    As great as external drives are, they may not protect you from fire, flood, theft, or direct lightning strike on your power lines. So it's an excellent idea to get something off-site, to your safe deposit box, workplace, relative's house, etc.
    There are many ways to do that, depending on how much data you have, how often it changes, how valuable it is, and your level of paranoia.
    One of the best strategies is to follow the above recommendation, but with a pair of portable externals, each 4 or more times the size of your data. Each has one partition the same size as your internal HD for a "bootable clone" and another with the remainder for TM.
    Use one drive for a week or so, then take it off-site and swap with the other. You do have to tell TM when you swap drives, via TM Preferences > Change Disk; and you shouldn't go more than about 10 days between swaps.
    There are other options, instead of the dual drives, or in addition to them. Your off-site backups don't necessarily have to be full backups, but can be just copies of critical information.
    If you have a MobileMe account, you can use Apple's Backup app to get relatively-small amounts of data (such as Address book, preferences, settings, etc.) off to iDisk daily. If not, you can use a 3rd-party service such as Mozy or Carbonite.
    You can also copy data to CDs or DVDs and take them off-site. Re-copy them every year or two, as their longevity is questionable.
    Backup strategies are not a "One Size Fits All" sort of thing. What's best varies by situation and preference.
    Just as an example, I keep full Time Machine backups; plus a CarbonCopyCloner clone (updated daily, while I'm snoozing) locally; plus small daily Backups to iDisk; plus some other things on CDs/DVDs in my safe deposit box. Probably overkill, but as many of us have learned over the years, backups are one area where +Paranoia is Prudent!+
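    (One small addition: on OS X 10.7 and later, the drive-swapping routine above can also be driven from Terminal with tmutil. This is just a sketch, and the volume name is a placeholder.)
    sudo tmutil setdestination /Volumes/OffsiteBackup1   # point TM at the drive you just swapped in
    tmutil destinationinfo                               # confirm the current backup destination
    tmutil startbackup                                   # kick off a backup right away
    tmutil listbackups                                   # see the snapshots stored on this drive
    tmutil latestbackup                                  # path of the most recent snapshot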

  • SQL backup best practice on VMs that are backed up as a complete VM

    Hi,
    Apologies, as I am sure this has been asked many times before, but I can't really find an answer to my question. My situation is this: I have two types of backups, agent-based and snapshot-based.
    For the VMs that are being backed up by snapshots, the process is: VMware takes the snapshot, then the SAN takes a snapshot of the storage, and then the backup is taken from the SAN. We then have full VM backups.
    The agent-based backups only back up file-level data, so we use these for our SQL cluster and some other servers. These are not snapshot/full-VM backups, but simply backups of databases, files, etc.
    This works well, but there are a couple of servers that need to be in the full-VM snapshot category and therefore can't have the backup agent installed, as they are already being backed up by the snapshot technology. So what would be the best practice on these
    snapshotted VMs that also have SQL installed? Should I configure a recurring backup in SQL Server Management Studio (if this is possible?) that runs before the VM snapshot backup, or is there another way I should be backing up the databases?
    Any suggestions would be very welcome.
    Thanks,
    Aaron

    Hello Aaron,
    If I understand correctly, you perform a snapshot backup of the complete VM.
    In that case you also need to create a SQL Server backup schedule to perform Full and Transaction Log backups.
    (if you do a file-level backup of the .mdf and .ldf files with an agent, you also need to do this)
    I would run a database backup before the VM snapshot (to a SAN location if possible), then perform the Snapshot backup.
    You should set up the transaction log backups depending on business recovery needs.
    For instance: if your company accepts a maximum of 30 minutes data loss make sure to perform a transaction log backup every 30 minutes.
    In case of emergency you could revert to the VM snapshot, restore the full database backup, and restore transaction log backups up to the point in time you need.
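    As a sketch of what that pre-snapshot backup could look like when scripted (the instance, database and path names are placeholders; it assumes sqlcmd is available on the SQL VM and that the commands run from a scheduled task or SQL Agent job shortly before the snapshot window):
    REM placeholder names throughout; full backup just before the VM snapshot
    sqlcmd -S MYSQLVM\PROD -E -Q "BACKUP DATABASE [AppDB] TO DISK = N'D:\SQLBackups\AppDB_full.bak' WITH INIT, CHECKSUM"
    REM transaction log backups on a separate 30-minute schedule (requires the FULL recovery model)
    sqlcmd -S MYSQLVM\PROD -E -Q "BACKUP LOG [AppDB] TO DISK = N'D:\SQLBackups\AppDB_log.trn' WITH CHECKSUM"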

  • Development System Backup - Best Practice / Policy for offsite backups

    Hi, I have not found any recommendations from SAP on best practices for backing up Development systems offsite, so I would appreciate some input on what policies other companies have for backing up Development systems. We continuously make enhancements to our SAP systems and perform daily backups; however, we do not send any Development system backups offsite, which I feel is a risk (losing development work, losing transport & change logs...).
    Does anyone know whether SAP has any recommendations on backing up Development systems offsite? What policies does your company have?
    Thanks,
    Thomas

    Thomas,
    Your question does not mention consideration of both sides of the equation - you have mentioned the risk only.  What about the incremental cost of frequent backups stored offsite?  Shouldn't the question be how the 'frequent backup' cost matches up with the risk cost?
    I have never worked on an SAP system where the developers had so much unique work in progress that they could not reproduce their efforts in an acceptable amount of time, at an acceptable cost. There is typically nothing in dev that is so valuable as to be irreplaceable (unlike production, where the loss of 'yesterday's' data is extremely costly). Given the frequency with which an offsite dev backup is actually required for a restore (seldom), and given that the value of the daily backed-up data is already so low, the actual risk cost is virtually zero.
    I have never seen SAP publish a 'best practice' in this area. Every business is different, and I don't see how SAP could possibly make a meaningful recommendation that would fit yours. In your business, the risk (the pro-rata cost of infrequently needing to use offsite storage to replace or rebuild 'lost' low-cost development work) may in fact outweigh the ongoing incremental costs of creating and maintaining offsite daily recovery media. Your company will have to perform that calculation to make the business decision. I personally have never seen a situation where daily offsite backup storage of dev was even close to making any kind of economic sense.
    Best Regards,
    DB49

  • Grid Control and SOA suite monitoring best practice

    Hi there,
    I’m trying to monitor a SOA implementation on Grid Control.
    Are there some best practices about it?
    Thanks,     
    Nisti

    If they use it to access and monitor the database without making any other changes, then it should be fine. But if they start scheduling stuff like oradba mentioned above, then that is where they will clash.
    You do not want a situation where different jobs are running on the same database from different setups by different teams (cron, DB Control, dbms_job, Grid Control).
    Just remember there will be additional resource usage on the database/server with both running, and the Grid Control repository cannot be in the same database as the DB Console repository.

  • Unity Backup Best Practices

    We are looking at disaster recovery for our Unity voicemail servers, and I am looking for some best practices for backing up Unity. Does anyone have any whitepapers or suggestions? Thanks.

    If you mean Unity backup as in backing up the data, then the suggested method is fourfold:
    1) Using Backup Exec or the like, with a SQL agent, back up the SQL database(s) on Unity.
    2) Using Backup Exec or the like, with an Exchange agent, back up the Exchange database(s) used by Unity.
    3) Using Backup Exec or the like, perform a Windows 2000/2003 full backup including System State (an open-file option is also available for Backup Exec).
    4) For DR purposes, back up the Unity system using DiRT.
    See this link for more Unity backup info:
    http://www.cisco.com/en/US/products/sw/voicesw/ps2237/products_white_paper09186a00801f4ba7.shtml
    If you mean backing up Unity in the sense of failover, then you can use the following link to create a failover environment for your Unity:
    http://www.cisco.com/en/US/products/sw/voicesw/ps2237/prod_installation_guides_list.html

  • UDDI and deployed Web Services Best Practice

    Which would be considered a best practice?
    1. To run the UDDI Registry in its own OC4J container with Web Services deployed in another container
    2. To run the UDDI Registry in the same OC4J container as the deployed Web Services

    The reason you don't see your services in the drop-down is because CE does lazy initialization of EJB components (this gives you a faster startup time for the server itself). But your services are still available to you. You do not need to redeploy each time you start the server. One thing you could do is create a logical destination (in NWA) for each service and use the "search by logical destination" button. You should always see your logical names in that drop-down, which you can use to invoke your services. Hope it helps.
    Rao

  • What is the Account and Contact workflow or best practice?

    I'm just learning the use of the web services. I have written something to upload my customers into accounts using the web services. I now need to include a contact for each account. I'm trying to understand the workflow. It looks like I need to first call the web service to create the account, then call a separate web service to create the contact and include the account's ID with the contact so that they are linked. Is this correct?
    Is there a place I can go to find the "best practices" for work flows?
    Can I automatically create the contact within my call to create the account in the web service?
    Thanks,

    Probably a poor choice of words. Sorry.
    So basically, I have gotten further, but I just noticed a related problem.
    I'm using Web Services (WS) 1.0. I insert an account, then, on a separate WS call, I insert my contacts for the account. I include the AccountID and a user-defined key from the Account when creating the Contact.
    When I look at my Contact on the CRMOD web page, it shows the appropriate links back to the Account. But when I look at my Account on the CRMOD web page, it shows no Contacts.
    So when I say workflow or best practice, I was hoping for guidance on how to properly write my code to accomplish all of the necessary steps - as in: this is how you insert an account with its contact(s) and update the appropriate IDs so that everything shows up properly on the CRMOD web pages.
    Based on the above, it looks like I need to, as the next step, take the ContactID and update the Account with it so that there is a bi-directional link.
    I'm thinking there is a better way in doing this.
    Here is my pseudocode:
    NewAcctRec = AccountInsert()            // returns the new AccountID
    NewContRec = ContactInsert(NewAcctRec)  // create the contact linked to that AccountID
    AccountUpdate(NewContRec)               // write the ContactID back to the account
    Thanks,

  • GRC AACG/TCG and CCG control migration best practice.

    Are there any best practice documents which illustrate the step-by-step migration of AACG/TCG and CCG controls from the development instance to production? Also, how should one take a backup of the same?
    Thanks,
    Arka

    There are no automated out-of-the-box tools to migrate anything from CCG. In AACG/TCG you can export and import Access Models (which include the Entitlements) and Global Conditions. You will have to manually set up roles, users, path conditions, etc.
    You can't clone AACG/TCG or CCG.
    Regards,
    Roger Drolet
    OIC

  • Oracle SLA Metrics and System Level Metrics Best Practices

    I hope this is the right forum...
    Hey everyone,
    This is what I am looking for. We have several SLAs set up, and we have defined many business metrics that we are trying to map to system-level metrics. One key area for us is Oracle. I was wondering if there is a best practice guide out there for SLAs when dealing with Oracle or, even better, system-level metric best practices.
    Any help would be much appreciated.

    Hi
    Can you also include the following in the FAQ?
    1) ODP.NET, if installed prior to this beta version - what is the best practice? De-install it prior to installing this, etc.?
    2) As multiple Oracle homes have become the norm these days - this being a client-only install, it should probably be non-intrusive and non-invasive. Hope that is getting addressed.
    3) Is this a precursor to future happenings, like some of the App Server evolving to support .NET natively and so on?
    4) Where is BPEL in this scheme of things? Is that getting added to this also, so that Eclipse and .NET VS 2003 developers can use some common web service framework?
    Regards
    Sundar
    It was interesting to see the options for changing the spelling of "Webservice" [the first one was WEBSTER].

  • Bring CRM to clients and partners, Architecture DMZ best practices

    Hi, we need to provide access to CRM from the internet for clients and partners,
    so we need to know the best practices for the architecture design.
    We have many questions about these aspects:
    - We will use SAP Portal, SAP Gateways and Web Dispatchers with a DMZ:
           do you have examples of this kind of architecture?
    - The new users will be added in 3 steps: 1000, 10000 and 50000:
           how can we regulate the load on the internal system? Is that possible?
    - The system can't show any problems to the clients:
           we need a 24x7 system, because the clients are big clients.
    - At the moment we have 1000 internal users.
    Thanks

    I use the Panel Close? filter event and discard it and use the event to signal to my other loops/modules that my software should shut down. I normally do this either via user events or if I'm using a queued state machine (which I generally do for each of my modules) then I enqueue a 'shutdown' message where each VI will close its references (e.g. hardware/file) and stop the loop.
    If it's just a simple VI, I can sometimes be lazy and use local variables to tell a simple loop to exit.
    Finally, once all of the modules have finished, use the FP.Close method to close the top level VI and the application should leave memory (once everything else has finished running).
    This *seems* to be the most recommended way of doing things but I'm sure others will pipe up with other suggestions!
    The main thing is discarding the panel close event and using it to signal the rest of your application to shut down. You can leave your global for 'stopping' the other loops - just write a True to that inside the Panel Close? event but a better method is to use some sort of communications method (queue/event) to tell the rest of your application to shut down.
    Certified LabVIEW Architect, Certified TestStand Developer
    NI Days (and A&DF): 2010, 2011, 2013, 2014
    NI Week: 2012, 2014
    Knowledgeable in all things Giant Tetris and WebSockets
