Clean up database processes

Dear all,
When I run the query below it returns around 900 rows, and checking the Oracle database I also have 145 sessions open.
This is a test server. Please advise on how to clean up these processes, because the system is running out of memory.
Thanks in advance!
select manager_type, process_status_code, PROCESS_START_DATE,concurrent_queue_id from fnd_concurrent_processes;
MANAGER_TYPE  PRO  PROCESS_START_D  CONCURRENT_QUEUE_ID
1             A    28-JUL-08        10
1             A    28-JUL-08        40
1             A    28-JUL-08        1063
1             A    28-JUL-08        1046
1             A    28-JUL-08        1025
1             A    29-JUL-08        0
3             A    30-JUL-08        15
909 rows selected.
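A quick way to see how many of those 909 rows are live manager processes rather than completed/terminated history is to group by the status column already selected above; a minimal sketch, run as APPS:
select process_status_code, count(*)
from fnd_concurrent_processes
group by process_status_code
order by process_status_code;
Rows whose status is not 'A' (which appears to mark the active ones in the output above) are historical records, normally removed by the purge program rather than by killing anything at the OS level.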

Thanks mdtaylor,
Below is the list. For the Standard Manager the target was set to 12; is that too many?
Which ones do you think I can turn off?
Name                                            Actual  Target
Standard Manager                                    13      12
Internal Manager                                     1       1
Conflict Resolution Manager                          1       1
Output Post Processor                                2       2
Scheduler/Prereleaser Manager                        1       1
Service Manager                                      1       1
Session History Cleanup                              1       1
UWQ Worklist Items Release for Crashed session       1       1
Inventory Manager                                    1       1
INV Remote Procedure Manager                         1       1
OAM Metrics Collection Manager                       1       1
Contracts Core Concurrent Manager                    1       1
PA Streamline Manager                                1       1
PO Document Approval Manager                         1       1
Receiving Transaction Manager                        1       1
Workflow Agent Listener Service                      1       1
Workflow Mailer Service                              1       1
Workflow Document Web Services Service               1       1
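To cross-check that list from SQL (a sketch, assuming access to the APPS schema; the column names are the standard ones in FND_CONCURRENT_QUEUES), something like this shows the target versus actual processes per manager:
select concurrent_queue_name, max_processes target, running_processes actual
from fnd_concurrent_queues
where enabled_flag = 'Y'
order by concurrent_queue_name;
Whether 12 Standard Manager processes is too many depends on the workload; on a small test instance it is common to lower that target rather than disable managers outright.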
By the way, this command:
$ FNDCPPUR apps/apps 0 Y ALL MODE age=10
returns "You have specified invalid arguments for the program."
Would you please correct it?
Thanks in advance!
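For what it's worth on the FNDCPPUR error: the purge arguments need to be passed as keyword=value pairs, not as bare words. The safest route is to run the seeded "Purge Concurrent Request and/or Manager Data" concurrent program from the System Administrator responsibility with Entity = ALL, Mode = Age, and Mode Value = 10. A command-line form along these lines is often quoted, but the argument names here are from memory, so verify them against your documentation before running:
FNDCPPUR apps/apps 0 Y ENTITY=ALL MODE=AGE MODE_VALUE=10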

Similar Messages

  • Any way to Lock the Decoration Free Label in place so that the Clean Up Diagram process doesn't move it someplace unreasonable?

    Any way to lock the Decoration Free Label in place so that the Clean Up Diagram process doesn't move it someplace unreasonable? The Clean Up Diagram process moves things to very strange places and often makes the diagram more complicated than it needs to be.
    There should be a way to lock a Decoration Free Label in a location that the Clean Up Diagram will not move it somewhere else.
    Is there such a feature?
    How do you get your Decorations to stay where you put them even after Clean Up Diagram?

    Hey dbaechtel,
    There is a way to do this; however, you will need to put your Decoration Free Label within a structure. Check out the following KnowledgeBase article for more information:
    KnowledgeBase 4TKECSYP: Block Diagram Cleanup Tool Moves Decorations
    Best,
    Chris LS
    National Instruments
    Applications Engineer
    Visit ni.com/gettingstarted for step-by-step help in setting up your system.

  • Dimensions and Cubes Process OK / database 'Process Full' fails

    Hello,
    I am having trouble processing an SSAS database (SQL 2008 R2).
    Within the BIDS environment I can process all dimensions and two cubes with no problem. But when I try to process the database (Process Full – Sequential / All in one transaction), I keep getting errors about one specific dimension (however, the very same dimension is processed separately just fine).
    Any ideas why this would happen?
    Thanks,
    Lana

    Thanks so much for all your replies! Your help is greatly appreciated.
    Here is the error message I am getting (it is the same for a bunch of other attributes of the same dimension, called Item):
    Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Item', Name of 'Item' was being processed.
    Errors in the OLAP storage engine: An error occurred while the 'Product Group' attribute of the 'Item' dimension from the 'InvCube' database was being processed.
    OLE DB error: OLE DB or ODBC error: Operation canceled; HY008.
    Errors in the OLAP storage engine: A duplicate attribute key has been found when processing: Table: 'dbo_Item', Column: 'Colour', Value: ''. The attribute is 'Colour'.
    Keep in mind that the key for this dimension is called ItemID and there are no duplicates (I checked). Before processing the cube, I have dozens of procedures (within the SSIS package that brings the data into the DW) that check that all the keys (FK columns on the fact table) exist within the dim tables (PK columns). I also do not allow NULLs or blanks for the dim keys within fact tables; I replace them with special “DimName-000” keys that, again, exist within the dim tables.
    What I am trying to figure out is why I can process the Item dimension just fine on its own (I also process all the other dimensions OK), after that I can process both cubes that use this (and other) dimension(s), and I can browse the cubes, getting breakdowns by Item / Colour / Product Group, etc. So everything seems to work perfectly. However, when I try to process the whole database (with all previously processed dimensions and both cubes - there are only two cubes for now, to make it simple), the process fails, giving me an error about a duplicate attribute key for this specific Item dimension.
    Any thoughts?
    Thanks again!
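    One hedged thing to check for this error: a duplicate attribute key on a column like Colour often comes from values that collapse to the same key once trailing spaces, case differences, or NULL-to-blank conversion are applied during processing (the Value: '' in the message hints at that). A quick diagnostic sketch against the table named in the error (dbo.Item and Colour are taken from the message; adjust as needed):
    SELECT LTRIM(RTRIM(Colour)) AS TrimmedColour, COUNT(DISTINCT Colour) AS Variants
    FROM dbo.Item
    GROUP BY LTRIM(RTRIM(Colour))
    HAVING COUNT(DISTINCT Colour) > 1;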

  • Advise on SQL database process

    Dear all,
    We have built our own product, a CMS platform configuration which runs on SQL Server with the CMS web site linked to it.
    We will launch our product soon, but we need to identify the correct hosting plan so that we don't run into any further issues.
    For example, for now everything is hosted in our company Azure account with a Standard Basic 5 DTU plan.
    When we go to production we can expect a number of connections and a large amount of data in our database.
    My questions are as follows:
    - Would using a single database for storage for my potential customers be enough?
    - How should I plan backups efficiently?
    - Should I plan a safety replication process somewhere in Azure, or duplicate the storage so we can switch if we run out of space?
    My concern is to avoid my database becoming full while the number of customers increases.
    Thanks for your advice and help.
    Regards,
    serge

    Hi Solatys,
    - If I select the STANDARD Azure database by default, which size, DTU, etc. should I select?
    For this question, you may reference the link below. You can scale your Azure database performance based on the real-time requirement.
    Azure SQL Database introduces new near real-time performance metrics
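    As an aside, the tier of an existing Azure SQL database can also be changed from T-SQL; a minimal sketch (the database name and target tier here are placeholders, typically run while connected to the logical server's master database):
    ALTER DATABASE [MyCmsDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');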
    - If, for instance, the initial URL of the web portal connected to SQL server 1 has a problem, how can I switch users transparently to a second replicated server so that users won't notice anything?
    Azure SQL Database has a built-in high availability subsystem that protects your database from failures of individual servers and devices in a datacenter. Azure SQL
    Database maintains multiple copies of all data in different physical nodes located across fully independent physical sub-systems to mitigate outages due to failures of individual server components, such as hard drives, network interface adapters, or even entire
    servers. At any one time, three database replicas are running—one primary replica and two or more
    secondary replicas. Data is written to the primary and one secondary replica using a quorum based commit scheme before the transaction is considered committed.
    If the hardware fails on the primary replica, Azure SQL Database detects the failure and fails over to the secondary replica. In case of a physical loss of a replica, a new replica is automatically created.
    So there are always at minimum two physical, transactionally consistent copies of your data in the datacenter.
    Azure SQL Database Business Continuity
    - If I select the maximum size of the database, what will happen in case it is nearly full and I need to provide more space?
    Insert and update transactions that exceed the upper limit will be rejected, because the database will be in read-only mode. The maximum size has been raised from 50 GB to 500 GB as of now, so you may not need to worry about it. You can set a monitoring alert to notify you about storage utilization, CPU percentage, etc.
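    A hedged sketch for keeping an eye on how close the database is to its size cap, using a standard DMV (run in the user database):
    SELECT SUM(reserved_page_count) * 8.0 / 1024 AS used_mb
    FROM sys.dm_db_partition_stats;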
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • Unable to start the database (Process m000 died, see its trace file)

    Hi,
    Oracle Version: 10.2.0.1
    Operating System: Linux
    Suddenly the database went down, and the alert log file is showing errors like this.
    Fri Feb 18 01:40:51 2011
    Process m000 died, see its trace file
    Fri Feb 18 01:40:51 2011
    ksvcreate: Process(m000) creation failed
    Fri Feb 18 01:41:18 2011
    Errors in file /u01/app/oracle/admin/apdtest/bdump/apdtest_ora_15998.trc:
    ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []
    Fri Feb 18 01:41:19 2011
    Process m000 died, see its trace file
    Fri Feb 18 01:41:19 2011
    ksvcreate: Process(m000) creation failed
    Fri Feb 18 01:42:19 2011
    Errors in file /u01/app/oracle/admin/apdtest/bdump/apdtest_ora_16032.trc:
    ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []
    Fri Feb 18 01:42:20 2011
    Process m000 died, see its trace file
    Fri Feb 18 01:42:20 2011
    ksvcreate: Process(m000) creation failed
    Fri Feb 18 01:43:20 2011
    Errors in file /u01/app/oracle/admin/apdtest/bdump/apdtest_ora_16036.trc:
    ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []
    Fri Feb 18 01:43:21 2011
    Process m000 died, see its trace file
    Fri Feb 18 01:43:21 2011
    ksvcreate: Process(m000) creation failed
    Fri Feb 18 01:44:21 2011
    Errors in file /u01/app/oracle/admin/apdtest/bdump/apdtest_ora_16042.trc:
    ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []
    Fri Feb 18 01:44:22 2011
    Process m000 died, see its trace file
    Fri Feb 18 01:44:22 2011
    ksvcreate: Process(m000) creation failed
    and it is generating a lot of trace files.
    Please help me figure out how to solve this.
    Thanks & Regards,
    Poorna Prasad.

    In my alert log file I also find this error:
    Wed Feb 16 06:11:13 2011
    Process J000 died, see its trace file
    Wed Feb 16 06:11:13 2011
    kkjcre1p: unable to spawn jobq slave process
    Wed Feb 16 06:11:13 2011
    Errors in file /u01/app/oracle/admin/apdtest/bdump/apdtest_cjq0_26196.trc:
    Wed Feb 16 06:11:13 2011
    Errors in file /u01/app/oracle/admin/apdtest/bdump/apdtest_j000_4172.trc:
    ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []
    Wed Feb 16 06:11:14 2011
    Errors in file /u01/app/oracle/admin/apdtest/bdump/apdtest_j000_4172.trc:
    ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []
    Process J000 died, see its trace file
    Wed Feb 16 06:11:14 2011
    kkjcre1p: unable to spawn jobq slave process
    Wed Feb 16 06:11:14 2011
    Errors in file /u01/app/oracle/admin/apdtest/bdump/apdtest_cjq0_26196.trc:
    Thanks & Regards,
    Poorna Prasad.S
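    For what it's worth, this ORA-600 [keltnfy-ldmInit] signature is often associated with host name resolution problems on the database server (for example an incomplete /etc/hosts entry), so that is worth checking alongside the trace files. Once the instance can be opened, a hedged sanity check from SQL*Plus using the standard UTL_INADDR package:
    select utl_inaddr.get_host_name as host_name,
           utl_inaddr.get_host_address as host_address
    from dual;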

  • The Clone Database process has not finished 12 hours later!

    On my Windows XP system, I have made a Data Guard configuration with the DGMGRL tool (without EM Grid Control). The current database is the primary database. I want to add a standby database. So, at first, I clone the current database using the EM Clone Database tool. But the clone process has not finished after running for 12 hours.
    What is the reason? Is it normal?

    To answer your question then: no, you do not need an SGA size >= 800 MB to clone the database. You may want to increase the SGA on the clone database to speed up the clone process if possible. 500 MB isn't much memory for a database server. What is the size of the SGA on the source database?

  • Database processing slowness (V.V URGENT)

    Dear Friends,
    We have a banking system on an Oracle Database 10g, and we have two tables: deal and deal_tmp. Data first comes into the tmp table and then we store it in the deal table. At first everything was OK; now this process takes far too much time. I have run statistics and rebuilt the indexes as well, but the problem is not solved. Each deal takes 20 minutes to complete, while before it was taking 1 minute at most.
    Can anybody help me solve this issue?
    Ali Haroon Nawaz

    Ali Haroon wrote:
    Dear Friends,
    We have a banking system on an Oracle Database 10g, and we have two tables: deal and deal_tmp. Data first comes into the tmp table and then we store it in the deal table. At first everything was OK; now this process takes far too much time. I have run statistics and rebuilt the indexes as well, but the problem is not solved. Each deal takes 20 minutes to complete, while before it was taking 1 minute at most.
    Can anybody help me solve this issue?
    Ali Haroon Nawaz
    Check this link:
    HOW TO: Post a SQL statement tuning request - template posting
    --neeraj
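    Before filling in the tuning template, it can help to capture the statement that loads the deal table and its runtime statistics; a rough sketch against V$SQL (the LIKE filter simply assumes the statement mentions the DEAL table, so adjust it to your actual SQL):
    select sql_id, executions, round(elapsed_time/1e6) elapsed_secs, buffer_gets
    from v$sql
    where upper(sql_text) like '%DEAL%'
    order by elapsed_time desc;
    The plan for the worst SQL_ID can then be pulled with dbms_xplan.display_cursor and posted along with the template.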

  • Clean up the process repository

    Hello,
    we have undeployed several DCs and are now facing the problem that the old processes are still listed in the process repository. Is there a possibility to clean this up?
    Thanks in advance.
    Regards,
    André

    To my knowledge, at least in CE 7.2, it is not possible.

  • My computer crashed and had to be scrubbed clean--in the process, my bookmarks were saved to a backup folder on the hard drive. How do I import it back into Firefox?

    All my Firefox settings (home page, etc.) had to be reset, but I can't figure out how to import my bookmarks from my backup folder (not in Firefox Library) to the toolbar. Can I drag and drop?

    The answer is yes you can drag and drop, but I am not too clear what you have done, so I will attempt to briefly answer likely possibilities.
    If you will bear with me, I will try to explain some of my understanding of Firefox and its bookmarks. Although you see them in the library or sidebar, they are stored in a database, along with other info, in a file called ''places.sqlite''. That resides in something called the profile. Firefox also does some clever tricks to back those up and restore them as necessary, so it has a backups folder. Additionally, it allows the bookmarks to be exported or imported as two types of file: .html (try to avoid) or .json.
    So one question is: what sort of files have you managed to keep?
    Now, talking about the Firefox Bookmarks Library, you may note that it has by default a folder or directory called 'Bookmarks Toolbar'.
    Anything in the Bookmarks Toolbar folder within the library will appear on the toolbar you see (space permitting, and sometimes as nested folders).
    You can drag and drop to your heart's content within the library, including to the folder for the toolbar.
    (It allows a lot more, for instance:
    - dragging onto a desktop to make a shortcut
    - to or from another open library or browser window
    - dragging and dropping to become a tab, or as a toolbar item)
    If you exported your bookmarks as a standard .json file before working on your computer, then you should have them stored somewhere as a .json file. If you open the library it gives you the option under import and export to browse for and import files.
    It is rather more complicated if you stored the raw data files/folders from a profile.
    See also
    * [[Backing up and restoring bookmarks]]
    *[[profiles]]

  • Cleanly stop a process in an XI adapter module

    Hi,
    I have a java adapter module that decides whether the process should carry on or not.
    When I want to stop the process I throw an UnsupportedOperationException that is reported in Runtime Workbench -> system monitoring.
    Are there any other means to stop the process (other exception....) from an adapter module?
    Thanks
    Yann


  • How to clean Oracle database?

    Dear Gurus and DBAs. My company wishes to give an old computer to a local school. We have an Oracle database on this server. My boss wants to be sure that the school does not get any Oracle data. Some of the data is bank material and should stay secret. The bank is an important client, and if the information got out it would mean big problems for us. Any ideas you have would be appreciated. Thank you.

    Hi Rajiv,
    > Dear Gurus and DBAs. My company wishes to give an old computer to a local school. We have an Oracle database on this server. My boss wants to be sure that the school does not get any Oracle data. Some of the data is bank material and should stay secret. The bank is an important client, and if the information got out it would mean big problems for us. Any ideas you have would be appreciated. Thank you.
    DBAN is your man - not even God could get information off a DBAN'ed disk - get it here: http://www.dban.org/
    I did some work for a charity that sends computers to East Africa a few years ago.
    These PCs came from banks, insurance companies, legal firms, etc. - lots of potentially sensitive data. Many of the companies just simply deleted the files, which, as we all know, is not truly effective.
    The first thing we did with every machine was DBAN it - no work was allowed on any PC until it had a DBAN sticker on it. We never had any issues.
    HTH,
    Paul...
    Edited by: Paulie on 22-Jul-2012 00:26
    Edited by: Paulie on 22-Jul-2012 00:27

  • Mac. MobileMe-to-iCloud. Upgrading Mail Database process seems to be hung - has been running for over 1 hour - progress bar stuck at about 20%.

    I am wondering whether to 'Force Quit' and restart, or to let it keep running. I noticed that Activity Monitor is reporting very high CPU use (98% to 100%) for an activity named "AddressBookSourceSync". I am not sure if that's related to this issue. Any suggestions? Thanks.
    MacBook Pro 17" Intel Core 2 Duo 2.3 GHZ, Memory 4 GB Mac OS X Lion 10.7.2

    Did you ever resolve this? I have the same problem. Now it is March 2012; your question was in October of 2011.

  • BPM process to manage business data vs Business Data in RDBMS

    Hi all,
    I have so far seen BPM as a pure business process tool rather than a data management one, even though BPM provides for managing data. If we have a nice business process which also collects plenty of related business data, what would be the suggestion?
    1. To store Business Data in the RDBMS and provide BPM with just enough info for the process.
    (or)
    2. To store Business Data in the BPM Business Catalog and do away with the RDBMS (of course BPM uses an RDBMS for dehydration).
    In our project we are discussing this, and some points in favour of approach (1) are:
    i. For web-based applications with multiple UI forms to collect data, storing data in an RDBMS performs much better than accessing/storing it in the BPM business catalog
    ii. Data retention of business data in BPM needs special consideration, which may lead to eventually dumping the data in an RDBMS anyway
    iii. UI frameworks help build UIs quickly against a known data model rather than over APIs exposed by BPM
    iv. Reporting over an RDBMS data model is easier than over a Business Catalog in BPM
    Are these points valid, or does approach (2) have other advantages to consider?

    I think I'm just backing up what you had on your original post, but here's what we typically tell customers when this comes up.
    This has long been a best practice recommended by Oracle. In Oracle’s Performance Tuning for Oracle Business Process Management Suite 11g document, on page 17, it states:
    "Minimize the amount of data stored in the process instance. Obviously, there is a tradeoff between the cost of storing data locally compared to storing keys and retrieving data for use within the process, which needs to be considered.
    A reasonable starting point would be to model the process state to hold only values that are needed to control the process flow and keys to get any other (external) data on an ‘as needed’ basis. If retrieval is too frequent/slow, or the systems holding that data are not always available, then move more data into the process."
    You touched on this, but we recommend decoupling the process payload and the underlying data for these reasons:
    1. The underlying data and the processes typically have different lifecycles and need to be independent of one another
    There is a need to maintain each at different times
    They are typically modified by different groups of people with differing skills
    The data stored in a database is typically the “source of truth” that sometimes must be able to be accessed and easily manipulated by applications outside of Oracle BPM; if stored as process instance data, instead of SQL extracting data from a database, the outside applications would need to access it through Oracle BPM APIs they are not necessarily familiar with
    2. Lightweight process data persistence improves performance
    The underlying message contract between the process instance and the engine that persists the payload should leverage key values where possible (think primary keys / relational keys from classic DBMS design patterns), rather than defining instance variables for every data element.  The performance of the Oracle BPM engine is improved and the data for the instances are rendered faster.
    The process instance is carrying the necessary process payload, rather than a bloated payload.  Only the information germane to the current activity should be retrieved and rendered.  This allows the application server to run more efficiently.
    At each step in the process, the process payload is hydrated and then dehydrated (read from the engine’s underlying database tables and then written back to the tables).  If this information is stored in an external database, there is no need for the overhead of this hydration and dehydration of large amounts of data to occur.
    At each step in the process, if stored externally in a separate database outside of Oracle BPM, only the data required is read and / or updated when it is required to do so.
    3. Decoupling helps speed development
    Oracle BPM was built with the Decoupled Model View Controller (MVC) pattern in mind
    One of its strengths is the architecture‘s business services layer that can make the source of the data transparent.  Given a single key value stored in the process instance payload, services can be invoked from the process and the human steps in the process that represent the “real source of truth” that the business needs.
    The MVC pattern’s model layer assumes that given the process’s key value, it is then possible to easily access underlying business data from a variety of sources including databases, EJBs and web services.   Although storing all of the information inside the process payload can be considered one of the model’s business service sources, the overhead of using this in production systems is not recommended.
    Once exposed, the business services can be  reused by any business process needing the information.
    User interfaces created with Oracle’s Application Development Framework (ADF) have out-of-the-box components and operations that take advantage of this MVC pattern.  Some examples of these out-of-the-box patterns that do not have to be programmed include:
    Database table information can easily be displayed using Next and Previous controls that automatically retrieve the next or previous sets of rows
    Similarly, scrolling in a table with many rows up and down renders data automatically
    Both server and client side validations and rules
    Database dropdowns and cascading dropdowns 
    Forms automatically created with Master / Detail patterns
    4. Decoupling reduces the complexities arising from data synchronization
    When orchestrating various external systems into a process, care must be given to account for “Systems of Record” and the purview these systems have over data values
    Decoupling process instance data so that only key values are in the payload allows the Systems of Record to continually update the subservient element values without fear of stagnant data in the process
    Participants in the process receive the most current data values when dealing with process instances
    When data objects span several process instances, finding and updating data is easier if it is stored in a database. Example: process instances based on Orders. Several process instances may involve orders for a single customer. When an order changes, no problem: just find the process instance using correlations and update it. When customer info changes, you would need to synchronize any number of process instances. Placing the data in an RDBMS makes the solution simple: simply update the customer tables and all orders now have the latest info. No need to find related process instances and update them.
    Some BPM events don’t carry sufficient information and need enrichment before the events can be processed. With data stored in the payload, there is no easy way to enrich the event data. This is especially true with ACM events: events in ACM do not have instance information. Storing data in the database avoids a costly workaround that uses a dedicated process and correlations to get the info needed.
    5. Decoupling facilitates the data capture for reporting and archiving
    Keeping data in the BPM payload takes away the option to do custom reporting (outside of BAM) and archiving of business data.
    Storing data in the RDBMS makes it possible to create custom reports (outside of BAM), which would not be possible, or would be hard to do, if all data lived in the BPM payload. Also, if you wanted to capture custom data changes or progressions through a BPM and/or BPEL instance, RDBMS tables have a clear advantage over payload information. The payload (in most cases) would not have the data progression captured, and reading data progression from the logs is not a recommended option either.
    Many organizations have data retention policy, which requires data to be archived and be accessible. Archiving and data accessibility is very limited if data is stored in the BPM payload.
    6. The need for process Intelligence goes beyond the instance life cycle
    Instances get cleaned up from the process database, and many organizations are interested in keeping not only the business data but also all the BPM-related intelligence such as the audit trail, KPIs, etc. BAM data flowing into BI cubes is one of the ways to ensure that intelligence lives on, but viewing process audit maps and audit trails and knowing what attachments were part of the process is a very common use case. For the latter, the common pattern is the use of UCM or other ECM products to store the correlated set of documentation, which can be brought together in a WebCenter-like content portal for historical research and auditability purposes. That, coupled with an application database strategy to keep correlated application data, would paint the full picture for the business users.
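    To make the key-only payload idea concrete, here is a tiny hedged sketch (the ORDERS table and its columns are invented for illustration): the process instance carries only ORDER_ID, and a step fetches the rest from the system of record when it needs it.
    -- hypothetical application-owned table; the BPM payload stores only ORDER_ID
    select order_id, customer_id, order_status, total_amount
    from orders
    where order_id = :order_id;  -- bind value supplied from the process payload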
    Hope this helps,
    Dan

  • Upgrade of the database where the GC repository resides

    I have GC 10.2.0.3 running with the repository stored in a 9.2.0.8 database.
    I would like to upgrade the database to 10.2.0.3 using dbua if possible. When the dbua sees an upgrade to 10.2 it creates a new SYSMAN schema, but I already have one that the Grid Control install created when I used the option "install into an existing database."
    I've searched MetaLink on how to do this, and created a SR but am having trouble getting support to understand what I'm attempting.
    I'm open to anything, creating a new database, etc. The only thing I want to be sure of is not to lose the information that I've already established in Grid, and I'm assuming it's stored in the SYSMAN schema.

    Totally fresh clean 10gR2 database on a different host and platform.
    2.3 Export/Import
    If the source and destination databases are non-10g, then export/import is the only option for cross-platform database migration.
    To improve export/import performance, set higher values for BUFFER and RECORDLENGTH. Do not export to NFS, as it will slow down the process considerably.
    Direct path can be used to increase performance. Note – as EM uses VPD, conventional mode will only be used by Oracle on tables where a policy is defined.
    Also, the user running the export should have the EXEMPT ACCESS POLICY privilege to export all rows, as that user is then exempt from VPD policy enforcement. SYS is always exempted from VPD or Oracle Label Security policy enforcement, regardless of the export mode, application, or utility that is used to extract data from the database.
    2.3.1 Prepare for Export/Import
    * Mgmt_metrics_raw partitions check
    select table_name, partitioning_type type, partition_count count, subpartitioning_type subtype
    from dba_part_tables
    where table_name = 'MGMT_METRICS_RAW';
    If MGMT_METRICS_RAW has more than 3276 partitions, please see Bug 4376351 – this bug is fixed in 10.2. Old partitions should be dropped before export/import to avoid this issue – this will also speed up the export/import process.
    To drop old partitions, run: exec emd_maintenance.partition_maintenance
    (This requires a shutdown of the OMS and setting job_queue_processes to 0 while the partition drop is run) – please refer to the EM Performance Best Practices document for more details on usage.
    A workaround to avoid bug 4376351 is to export mgmt_metrics_raw in conventional mode – this is needed only if the partition drop is not run. Note: running the old-partition drop is highly recommended.
    * Shut down OMS instances and prepare for migration
    Shut down the OMS, set job_queue_processes to 0, and remove the DBMS jobs using these commands:
    connect /as sysdba
    alter system set job_queue_processes=0;
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
    2.3.2 Export
    Before running the export, make sure that the NLS_LANG variable matches the database character set. For example, after running this query:
    SQL> select value from nls_database_parameters where PARAMETER='NLS_CHARACTERSET';
    VALUE
    WE8ISO8859P1
    the NLS_LANG environment variable should be set to AMERICAN_AMERICA.WE8ISO8859P1
    * Export data
    exp full=y constraints=n indexes=n compress=y file=fullem102_1.dmp log=fullem102exp_1.log
    Provide system username and password when prompted.
    Verify the log file and make sure that no character set conversion happened (the line “possible charset conversion” should not be present in the log file).
    * Export without data and with constraints
    exp full=y constraints=y indexes=y rows=n file=fullem102_2.dmp log=fullem102exp_2.log
    Provide system username and password when prompted
    2.3.3 Import
    Before running the import, make sure that the NLS_LANG variable matches the database character set.
    * Run RepManager to drop target repository (if target database has EM repository installed)
    cd ORACLE_HOME/ sysman/admin/emdrep/bin
    RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
    * Pre-create the tablespaces and the users in target database
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
    For the first two scripts, provide the input arguments when prompted, or you can provide them on the command line, for example:
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql MGMT_TABLESPACE <path>/mgmt.dbf <size of mgmt.dbf> <autoextend size> MGMT_ECM_DEPOT_TS <path>/mgmt_ecm_depot1.dbf <size of mgmt_ecm_depot1.dbf> <autoextend size> MGMT_TABLESPACE <path>/mgmt.dbf <size of mgmt.dbf> <autoextend size>
    @/scratch/nagrawal/OracleHomes/oms10g/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql sysman <sysman password> MGMT_TABLESPACE TEMP CENTRAL ON
    * Import data -
    imp constraints=n indexes=n FROMUSER=sysman TOUSER=sysman buffer=2097152 file=fullem102_1.dmp log=fullem102imp_1.log
    * Import without data and with constraints -
    imp constraints=y indexes=y FROMUSER=sysman TOUSER=sysman buffer=2097152 rows=n ignore=y file=fullem102_2.dmp log=fullem102imp_2.log
    Verify the log file and make sure that no character set conversion happened (the line “possible charset conversion” should not be present in the log file).
    2.3.4 Post Import EM Steps
    * Please refer to Section 3.1 for Post Migration EM Specific Steps
    3 Post Repository Migration Activities
    3.1 Post Migration EM Steps
    The following EM-specific steps should be carried out post-migration:
    * Recompile all invalid objects in sysman schema using
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_recompile_invalid.sql
    * Run post plugin steps to recompile any invalids, create public synonyms, create other users, enable VPD policy, repin packages-
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
    Provide <ORACLE_HOME>/sysman/admin/emdrep/sql for em_sql_root
    SYSMAN for em_repos_user
    MGMT_TABLESPACE for em_tablespace_name
    TEMP for em_temp_tablespace_name
    Note – the users created by admin_post_import will have the same password as their username.
    Check for invalid objects – compare source and destination schemas for any discrepancy in counts and invalids.
    * The following queues are not enabled after running admin_post_import.sql (per EM bug 6439035); enable them manually by running:
    connect sysman/<password>
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_TASK_Q');
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_PAF_RESPONSE_Q');
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_PAF_REQUEST_Q');
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_LOADER_Q');
    * Please check for the contexts using the following query:
    connect sysman/<password>
    select * from dba_context where SCHEMA='SYSMAN';
    If any of the following contexts are missing, create them using:
    connect sysman/<password>
    create or replace context storage_context using storage_ui_util_pkg;
    create or replace context em_user_context using setemusercontext;
    * Partition management
    Check whether the necessary partitions are created so that the OMS does not run into problems loading into non-existent partitions (this problem can occur only if there is a gap of days between export and import) –
    exec emd_maintenance.analyze_emd_schema('SYSMAN');
    This will create all necessary partitions up to date.
    * Submit EM dbms jobs
    Reset job_queue_processes back to its original value and resubmit the EM DBMS jobs:
    connect /as sysdba
    alter system set job_queue_processes=10;
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
    * Update OMS properties and startup OMS
    Update emoms.properties to reflect the migrated repository (oracle.sysman.eml.mntr.emdRepConnectDescriptor).
    Update the host name and port with the correct values and start the OMS.
    * Relocate “Management Services and Repository” target
    If the “Management Services and Repository” target needs to be migrated to the destination host, delete the old "Management Services and Repository" target. Add it again with the same name, "Management Services and Repository", on the agent running on the new machine.
    * Run following sql to verify the repository collections are enabled for emrep target
    SELECT
    target_name,
    metric_name,
    task.task_id,
    task.interval,
    task.error_message,
    trunc((mgmt_global.sysdate_utc-next_collection_timestamp )/1440) delay
    from mgmt_collection_metric_tasks mtask,
    mgmt_collection_tasks task,
    mgmt_metrics met,
    mgmt_targets tgt
    where met.metric_guid = mtask.metric_guid AND
    tgt.target_guid = mtask.target_guid AND
    mtask.task_id = task.task_id(+) AND
    met.source_type > 0 AND
    met.source != ' '
    AND tgt.target_type='oracle_emrep'
    ORDER BY mtask.task_id;
    This query should return the same records in both the source and destination databases. If you find any collections missing in the destination database, run the following to schedule them in the destination database:
    DECLARE
    traw RAW(16);
    tname VARCHAR2(256);
    ttype VARCHAR2(64);
    BEGIN
    SELECT target_name, target_type, target_guid
    INTO tname, ttype, traw
    FROM mgmt_targets
    WHERE target_type = 'oracle_emrep';
    mgmt_admin_data.add_emrep_collections(tname,ttype,traw);
    END;
    * Discover/relocate Database and database Listener targets
    Delete the old repository database target and listener and rediscover the target database and listener in EM
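    As a final hedged cross-check tied to the "check for invalid objects" step above (standard dictionary views only, nothing EM-specific), the same query can be run on source and destination and the counts compared:
    select object_type, count(*)
    from dba_objects
    where owner = 'SYSMAN'
    and status = 'INVALID'
    group by object_type;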

  • Tablespace not getting cleaned after using the free method (permanent delete)

    Hi,
    We are using the free method of the LIB OBJ to permanently delete the objects. As per the documentation, the ContentGarbageCollectionAgent, which runs on a schedule, will be cleaning the database. But the log of that ContentGarbageCollectionAgent shows all zeros for objects without reference, objects cleared, etc. I.e., the tablespace remains the same before and after deleting all the contents in the CM SDK database. The agent is running as per the schedule but just comes out having done nothing.
    Can anybody shed some light on this issue?
    Thanks,
    Raj.

    Hi Matt,
    Thanks for replying. It's been a very long time waiting for you ;)
    ---"Are you running the 9.2.0.1, 9.2.0.2, or 9.2.0.3 version of the Database?"
    We are using 9.2.0.1.
    ---"If you installed the CM SDK schema in the "users" tablespace ......."
    Yes, we are using the USERS tablespace for our development.
    I ran the query. The result is:
    SYSTEM MANUAL NOT AFFECTED
    USERS MANUAL NOT AFFECTED
    CTXSYS_DATA MANUAL NOT AFFECTED
    CMSDK1_DATA MANUAL NOT AFFECTED
    (USERS belongs to the development CM SDK schema, and CMSDK1 to the prod CM SDK schema.)
    From the results I see only "Manual", but I still don't see the tablespace size coming down. Both tablespace sizes (USERS and CMSDK1) just keep growing.
    Also, to let you know, we use the Oracle EM Console (standalone) application to view the Oracle database information online. Could the tool we use to view the tablespace sizes have anything to do with it? We make sure we always refresh it before taking a reading.
    So is there anything else I can check? Once I saw the ContentGarbageCollection agent free 1025 objects and delete 0 objects, but I didn't see any change in the tablespace size. I am a little confused between freed and deleted.
    Thanks once again for your response, Matt.
    -Raj.
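    One hedged point worth separating out here: freeing or deleting documents releases space inside the tablespace, but it does not shrink the datafiles, so the overall tablespace size shown in the EM Console will not go down by itself. Checking the free space inside the tablespaces gives a better picture of whether the garbage collector is actually reclaiming anything; a sketch using the tablespace names mentioned in this thread:
    select tablespace_name, round(sum(bytes)/1024/1024) as free_mb
    from dba_free_space
    where tablespace_name in ('USERS', 'CMSDK1_DATA')
    group by tablespace_name;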
