Passivation table ps_txn not being cleaned up

ADF 11gR1 PS1
Hello,
I have a small application using one unbounded task flow and one bounded task flow.
Each task flow uses a different application module.
The unbounded task flow calls the bounded task flow in a modeless inline popup via a button.
When I run the application and click the button, the bounded task flow is called and a new row is inserted
into the PS_TXN table.
However, when the inline popup is closed via the "x" on the popup window, the row is not removed from the PS_TXN table.
If the button is clicked again, another row is added to the PS_TXN table.
Is this the normal behaviour? Looking at section 40.5.3 in the Dev Guide, it would seem that the record should be deleted or reused.
I understand that there are scripts for cleaning up the table, but shouldn't it be automatic?
What am I missing?
Regards
Paul

Hi Paul,
Do you use failover (jbo.dofailover)?
If not, I would expect records to be deleted from PS_TXN at activation.
I tested with the ADF BC Component Browser, selecting the Save/Restore Transaction State menu items, with jbo.debugoutput=console:
[277] (0) OraclePersistManager.deleteAll(2126) **deleteAll** collid=17461
[278] (0) OraclePersistManager.deleteAll(2140)    stmt: delete "PS_TXN" where collid=:1
[279] (0) OraclePersistManager.commit(217) **commit** #pending ops=1
But I have also already noticed orphaned records in the table.
Do you use jbo.internal_connection so that the same connection is used whichever AM instance is passivated/activated, or do you have an instance of the PS_TXN table in each AM's connection?
Regards,
Didier.
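
For anyone who lands here with the same orphaned-row build-up: the cleanup scripts mentioned in the question boil down to an age-based delete of old snapshots. Below is a minimal JDBC sketch of that delete, assuming the default PS_TXN layout whose last column is a creation timestamp (commonly CREATION_DATE) that BC4J fills with sysdate on insert; the class name, URL and credentials are placeholders, not from this thread.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PsTxnPurge {
    public static void main(String[] args) throws Exception {
        // Placeholder URL/credentials; point at the schema that owns PS_TXN
        // (the one behind jbo.server.internal_connection, if you set it).
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "bc4j_internal", "secret")) {
            con.setAutoCommit(false);
            // Assumption: the default PS_TXN layout includes a CREATION_DATE column.
            // Delete snapshots older than N days; pick a retention safely longer
            // than your session timeout.
            try (PreparedStatement ps = con.prepareStatement(
                    "delete from PS_TXN where CREATION_DATE < sysdate - ?")) {
                ps.setInt(1, 1); // retention in days
                int rows = ps.executeUpdate();
                con.commit();
                System.out.println("Purged " + rows + " old PS_TXN rows");
            }
        }
    }
}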

Similar Messages

  • Passivation Tables (PS_TXN) Not Created on First Passivation Attempt

    Fusion Middleware Version: 11.1.1.5
    WebLogic: 10.3.5.0
    JDeveloper Build: Build JDEVADF_11.1.1.5.0_GENERIC_110409.0025.6013
    Project: Custom WebCenter Portal Application integrated with custom ADF task flows.
    Hi
    We are trying to use the jbo.server.internal_connection property in the Business Components layer to change the database to which passivated application module data is written (PS_TXN table).
    We have set the property to a valid data source using the JNDI name. The entry in the bc4j.xcfg file is as follows:
    <Database jbo.locking.mode="optimistic" jbo.server.internal_connection="jdbc/mds/CustomPortalDS"/>
After making the change, when I run the application through my integrated WLS instance within JDeveloper, the first instance to passivate its data results in the necessary passivation database objects being created in the target instance (PS_TXN, PS_TXN_SEQ & PCOLL_CONTROL).
If, however, we then remove those objects from the target instance and run the application on our standalone WebLogic server, we get an error accessing the application because the BC framework cannot find the PS_TXN table to write to. The following error appears in the log files:
    Caused by: oracle.jbo.PCollException: JBO-28030: Could not insert row into table PS_TXN, collection id 10, persistent id 1
         at oracle.jbo.PCollException.throwException(PCollException.java:36)
         at oracle.jbo.pcoll.OraclePersistManager.insert(OraclePersistManager.java:1901)
         at oracle.jbo.pcoll.PCollNode.passivateElem(PCollNode.java:564)
         at oracle.jbo.pcoll.PCollNode.passivate(PCollNode.java:688)
         at oracle.jbo.pcoll.PCollNode.passivateBranch(PCollNode.java:647)
         at oracle.jbo.pcoll.PCollection.passivate(PCollection.java:465)
         at oracle.jbo.server.DBSerializer.passivateRootAM(DBSerializer.java:294)
         at oracle.jbo.server.DBSerializer.passivateRootAM(DBSerializer.java:267)
         at oracle.jbo.server.ApplicationModuleImpl.passivateStateInternal(ApplicationModuleImpl.java:5975)
         at oracle.jbo.server.ApplicationModuleImpl.passivateState(ApplicationModuleImpl.java:5835)
         at oracle.jbo.server.ApplicationModuleImpl.passivateStateForUndo(ApplicationModuleImpl.java:8857)
         at oracle.adf.model.bc4j.DCJboDataControl.createSavepoint(DCJboDataControl.java:3180)
         at oracle.adf.model.dcframe.LocalTransactionHandler.createSavepoint(LocalTransactionHandler.java:75)
         at oracle.adf.model.dcframe.DataControlFrameImpl.createSavepoint(DataControlFrameImpl.java:797)
         at oracle.adfinternal.controller.util.model.DCFrameImpl.createSavepoint(DCFrameImpl.java:31)
         at oracle.adfinternal.controller.activity.TaskFlowCallActivityLogic.initializeModel(TaskFlowCallActivityLogic.java:1015)
         at oracle.adfinternal.controller.activity.TaskFlowCallActivityLogic.enterTaskFlow(TaskFlowCallActivityLogic.java:615)
         at oracle.adfinternal.controller.activity.TaskFlowCallActivityLogic.invokeLocalTaskFlow(TaskFlowCallActivityLogic.java:337)
         at oracle.adfinternal.controller.activity.TaskFlowCallActivityLogic.invokeTaskFlow(TaskFlowCallActivityLogic.java:229)
         at oracle.adfinternal.controller.engine.ControlFlowEngine.invokeTaskFlow(ControlFlowEngine.java:217)
         at oracle.adfinternal.controller.state.ChildViewPortContextImpl.invokeTaskFlow(ChildViewPortContextImpl.java:104)
         at oracle.adfinternal.controller.state.ControllerState.createChildViewPort(ControllerState.java:1380)
         at oracle.adfinternal.controller.ControllerContextImpl.createChildViewPort(ControllerContextImpl.java:78)
         at oracle.adf.controller.internal.binding.DCTaskFlowBinding.createRegionViewPortContext(DCTaskFlowBinding.java:440)
         at oracle.adf.controller.internal.binding.DCTaskFlowBinding.getViewPort(DCTaskFlowBinding.java:358)
         at oracle.adf.controller.internal.binding.TaskFlowRegionModel.doProcessBeginRegion(TaskFlowRegionModel.java:164)
         at oracle.adf.controller.internal.binding.TaskFlowRegionModel.processBeginRegion(TaskFlowRegionModel.java:112)
         at oracle.adf.view.rich.component.fragment.UIXRegion$RegionContextChange.doChangeImpl(UIXRegion.java:1199)
         at oracle.adf.view.rich.context.DoableContextChange.doChange(DoableContextChange.java:91)
         at oracle.adf.view.rich.component.fragment.UIXRegion._beginInterruptibleRegion(UIXRegion.java:693)
         at oracle.adf.view.rich.component.fragment.UIXRegion.processRegion(UIXRegion.java:498)
         at oracle.adfinternal.view.faces.taglib.region.RegionTag.doStartTag(RegionTag.java:127)
         at oracle.jsp.runtime.tree.OracleJspBodyTagNode.executeHandler(OracleJspBodyTagNode.java:50)
         at oracle.jsp.runtime.tree.OracleJspCustomTagNode.execute(OracleJspCustomTagNode.java:261)
         at oracle.jsp.runtime.tree.OracleJspClassicTagNode.evalBody(OracleJspClassicTagNode.java:87)
         at oracle.jsp.runtime.tree.OracleJspBodyTagNode.executeHandler(OracleJspBodyTagNode.java:58)
         at oracle.jsp.runtime.tree.OracleJspCustomTagNode.execute(OracleJspCustomTagNode.java:261)
         at oracle.jsp.runtime.tree.OracleJspClassicTagNode.evalBody(OracleJspClassicTagNode.java:87)
         at oracle.jsp.runtime.tree.OracleJspBodyTagNode.executeHandler(OracleJspBodyTagNode.java:58)
         at oracle.jsp.runtime.tree.OracleJspCustomTagNode.execute(OracleJspCustomTagNode.java:261)
         at oracle.jsp.runtime.tree.OracleJspClassicTagNode.evalBody(OracleJspClassicTagNode.java:87)
         at oracle.jsp.runtime.tree.OracleJspBodyTagNode.executeHandler(OracleJspBodyTagNode.java:58)
         at oracle.jsp.runtime.tree.OracleJspCustomTagNode.execute(OracleJspCustomTagNode.java:261)
         at oracle.jsp.runtime.tree.OracleJspClassicTagNode.evalBody(OracleJspClassicTagNode.java:87)
         at oracle.jsp.runtime.tree.OracleJspBodyTagNode.executeHandler(OracleJspBodyTagNode.java:58)
         at oracle.jsp.runtime.tree.OracleJspCustomTagNode.execute(OracleJspCustomTagNode.java:261)
         at oracle.jsp.runtime.tree.OracleJspClassicTagNode.evalBody(OracleJspClassicTagNode.java:87)
         at oracle.jsp.runtime.tree.OracleJspBodyTagNode.executeHandler(OracleJspBodyTagNode.java:58)
         at oracle.jsp.runtime.tree.OracleJspCustomTagNode.execute(OracleJspCustomTagNode.java:261)
         at oracle.jsp.runtime.tree.OracleJspClassicTagNode.evalBody(OracleJspClassicTagNode.java:87)
         at oracle.jsp.runtime.tree.OracleJspBodyTagNode.executeHandler(OracleJspBodyTagNode.java:58)
         at oracle.jsp.runtime.tree.OracleJspCustomTagNode.execute(OracleJspCustomTagNode.java:261)
         at oracle.jsp.runtime.tree.OracleJspClassicTagNode.evalBody(OracleJspClassicTagNode.java:87)
         at oracle.jsp.runtime.tree.OracleJspBodyTagNode.executeHandler(OracleJspBodyTagNode.java:58)
         at oracle.jsp.runtime.tree.OracleJspCustomTagNode.execute(OracleJspCustomTagNode.java:261)
         at oracle.jsp.runtime.tree.OracleJspClassicTagNode.evalBody(OracleJspClassicTagNode.java:87)
         at oracle.jsp.runtime.tree.OracleJspIterationTagNode.executeHandler(OracleJspIterationTagNode.java:45)
         at oracle.jsp.runtime.tree.OracleJspCustomTagNode.execute(OracleJspCustomTagNode.java:261)
         at oracle.jsp.runtime.tree.OracleJspNode.execute(OracleJspNode.java:89)
         at oracle.jsp.runtimev2.ShortCutServlet._jspService(ShortCutServlet.java:89)
         at oracle.jsp.runtime.OracleJspBase.service(OracleJspBase.java:29)
         at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:422)
         at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:802)
         at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:726)
         ... 122 more
    Caused by: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:457)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
         at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:889)
         at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:476)
         at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:204)
         at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:540)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1079)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1466)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3752)
         at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3937)
         at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1535)
         at weblogic.jdbc.wrapper.PreparedStatement.execute(PreparedStatement.java:99)
         at oracle.jbo.pcoll.OraclePersistManager.insert(OraclePersistManager.java:1887)
         ... 184 more
    My understanding is that it is the adfbc_create_statesnapshottables.sql script that creates these database objects. The description in the file is as follows:
    "By default, BC4J will create these objects in the schema of the internal database user the first time that the application makes a passivation request. This script is intended for advanced users who require more control over the creation and naming of these objects."
My question then is: why is this not happening when the application is run on the standalone WebLogic server, but is happening when run on JDeveloper's integrated WLS?
    Any help greatly appreciated.

    Hi
    Thanks for your reply.
    We have already considered the points in the referenced links.
    The correct privileges exist for the schema used by the data source and we don't see any messages other than ORA-00942.
    We have run a 'Finest' trace on package oracle.jbo.* and there is no further information. I have pasted the diagnostic output from the first passivation attempt below:
    [SRC_METHOD: passivate] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.server.Serializer] <AM MomVer="0">[[
    <<PASSIVATION DATA FOLLOWS NOT PASTED INTO THREAD>>
    </AM>
    [SRC_METHOD: insert] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.pcoll.OraclePersistManager] [2226] **insert** id=1, parid=-1, collid=10, keyArr.len=-1, cont.len=981
    [SRC_METHOD: insert] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.pcoll.OraclePersistManager] [2227] stmt: insert into "PS_TXN" values (:1, :2, :3, :4, sysdate)
    [SRC_METHOD: getInternalConnection] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.server.DBTransactionImpl] [2228] Getting a connection for internal use...
    [SRC_METHOD: getInternalConnection] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.server.DBTransactionImpl] [2229] Creating internal connection...
    [ADF_MESSAGE_ACTION_NAME: Establish database connection] [APP: XXJLPPartnerLinkApp#V2.0] [ADF_MESSAGE_STATUS: begin] [ADF_MESSAGE_ACTION_DESC: ] [URI: /jlpportal/faces/home] [ADF_MESSAGE_CONTEXT_DATA: Is datasource?=true;#;Connection identifier=weblogic.jdbc.common.internal.RmiDataSource@2b002b00] Establish database connection
    [SRC_METHOD: establishNewConnection] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.server.DBTransactionImpl] [2230] Trying connection: DataSource='weblogic.jdbc.common.internal.RmiDataSource@2b002b00'...
    [ADF_MESSAGE_ACTION_NAME: Establish database connection] [APP: XXJLPPartnerLinkApp#V2.0] [ADF_MESSAGE_STATUS: add_context_data] [URI: /jlpportal/faces/home] [ADF_MESSAGE_CONTEXT_DATA: Success?=true] Establish database connection
    [SRC_METHOD: establishNewConnection] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.server.DBTransactionImpl] [2231] Before getNativeJdbcConnection='weblogic.jdbc.wrapper.PoolConnection_oracle_jdbc_driver_T4CConnection
    [SRC_METHOD: establishNewConnection] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.server.DBTransactionImpl] [2232] After getNativeJdbcConnection='weblogic.jdbc.wrapper.PoolConnection_oracle_jdbc_driver_T4CConnection
    [SRC_METHOD: insert] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.pcoll.OraclePersistManager] [2233] **insert** error, sqlStmt=null
    [SRC_METHOD: insert] [URI: /jlpportal/faces/home] [SRC_CLASS: oracle.jbo.pcoll.OraclePersistManager] [2234] java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
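Since the ORA-00942 happens on the very first insert, it is worth confirming what the data-source account actually sees and whether it holds the privileges that automatic creation of the passivation objects needs. A small diagnostic sketch, not from the original thread; the credentials are placeholders and must be the account behind jdbc/mds/CustomPortalDS:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PassivationSchemaProbe {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "portal_internal", "secret");
             Statement st = con.createStatement()) {
            // 1. Does the user already own the passivation objects?
            try (ResultSet rs = st.executeQuery(
                    "select object_name from user_objects " +
                    "where object_name in ('PS_TXN','PS_TXN_SEQ','PCOLL_CONTROL')")) {
                while (rs.next()) {
                    System.out.println("found: " + rs.getString(1));
                }
            }
            // 2. Does the session hold CREATE TABLE / CREATE SEQUENCE,
            //    which creating the objects on first passivation requires?
            try (ResultSet rs = st.executeQuery("select privilege from session_privs")) {
                while (rs.next()) {
                    System.out.println("priv: " + rs.getString(1));
                }
            }
        }
    }
}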

  • [svn] 4533: Bug: BLZ-301 - Selector expressions are not being cleaned up properly on unsubscribe

    Revision: 4533
    Author: [email protected]
    Date: 2009-01-14 15:55:31 -0800 (Wed, 14 Jan 2009)
    Log Message:
    Bug: BLZ-301 - Selector expressions are not being cleaned up properly on unsubscribe
    QA: Yes
    Doc: No
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-301
    Modified Paths:
blazeds/trunk/modules/core/src/flex/messaging/services/messaging/SubscriptionManager.java

  • Table extension not being saved

    Hi,
I have implemented a table extension at item level for SC (shopping cart), and from the config perspective all looks fine.
1) Created an append structure with the required fields.
First step: append the field to the correct structure in SE11 or via transaction SPRO.
    Structure names for table extensions are according to the pattern:
    INCL_EEW_PD_<x>_CST_<y> with
    <x>: HEADER, ITEM
    2)
    Path in IMG (transaction SPRO):
    SAP SRM -> SRM server ->
    -> Cross-Application Basic Settings
    -> Extensions and Field Control (Personalization)
    -> Create Table Extensions and Supply with Data
    -> Define Customer Table Extensions on Item Level
Second step: you additionally need to append the new table extension field to the
corresponding INCL_EEW_PD_<x>_CST structure, the so-called 'cross-document' structure. This can
also be done in the IMG.
You can explicitly enter the very same field(s) (name and type) into both append structures.
It is recommended, however, to first create another structure with the new field (in SE11) and
then include that structure in both append structures
(menu path in the SE11 screen: Edit -> Include -> Insert).
Default append structure names in this example are
ZAINCL_EEW_PD_ITEM_CST_SC and
ZAINCL_EEW_PD_ITEM_CST
when using the IMG path.
3)
Third step: Set the visibility of the table extension. Create an entry with TICUS for item data (THCUS for header data).
Don't forget to populate the Transaction Type field - you could save the table entry without it,
but it will not work!
    4)
Fourth step: Configure Control of Fields of Table Extensions. Create an entry with TICUS for item data (THCUS for header data). Don't forget to populate the Set Level field - you could save the table entry without it, but it will not work!
    Fifth step: Configure Control of Actions
    Sixth (final) step: SM30 - maintain view /SAPSRM/V_MDFSBC_DEFAULT
I have rechecked the config several times but could not find any reason for the data not to be saved.
When I add data to the table structure and press Enter or the Check button, the data disappears. The entries are not captured in the BBP_PDICF table either. I can see the Z structure in SE11 for BBP_PDICF.
I have checked the DOC_CHANGE BADI for any issues but could not find the problem. Any idea where I should check for the data not being saved?

    Hello Srinivas,
    Please do this step:
    In SM30-maintain view /SAPSRM/V_MDFSBC_DEFAULT
    Object set type :TICUS
    Structure field name : custom field
    Object type : BUS2121
    set level : item
    Field visible : X
    field enable : X
    Regards,
    Neelima

  • Session not being clean up by JRun

My application uses the iPlanet Web Server and the JRun 3.02 application server. I am having a problem with active sessions not being cleaned up by the app server. When the user goes through the application and finishes the process, I invalidate the session by calling session.invalidate(). I have also set a 30-minute timeout value in the JRun global.properties file to invalidate the session if the user starts but does not finish going through the application. However, the active session count in the JRun log doesn't seem to go down. After a few days, I run out of sessions and the application hangs. I keep a few objects on the session, including a pretty big 'pdfObject' that I use to create a PDF document on the fly.
Any idea why JRun is not able to clean up the sessions after the 30-minute timeout has passed? Does the fact that I have stored objects on the session prevent JRun from invalidating and cleaning up the session?
    Thanks in advance.

    Hi afikru
According to the Servlet specification, the session.invalidate() method should unbind any objects associated with the session. However, I'm not conversant with the JRun application server, so I can only provide some pointers to help you out.
    Firstly, try locating some documentation specific to your application server which may throw some light on why this may be happening.
    Secondly, I'd suggest running the Server within a Profiling tool so that you can see what objects are being created and how many of those. Try explicitly running the Garbage Collector and see if the sessions come down.
    Keep me posted on your progress.
    Good Luck!
    Eshwar R.
    Developer Technical Support
Sun Microsystems
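
One way to confirm whether invalidation actually releases the large session attributes is to have them (or a wrapper around them) implement HttpSessionBindingListener and log the unbind. A minimal sketch against the standard Servlet API; the PdfHolder wrapper is illustrative, not from the thread:

import javax.servlet.http.HttpSessionBindingEvent;
import javax.servlet.http.HttpSessionBindingListener;

// Wrapper for a heavyweight session attribute (e.g. the 'pdfObject').
// valueUnbound fires both on session.invalidate() and on timeout expiry,
// so missing "unbound" log lines point at sessions that are never reclaimed.
public class PdfHolder implements HttpSessionBindingListener {
    private final Object pdfObject;

    public PdfHolder(Object pdfObject) {
        this.pdfObject = pdfObject;
    }

    public Object getPdfObject() {
        return pdfObject;
    }

    public void valueBound(HttpSessionBindingEvent event) {
        System.out.println("bound " + event.getName()
                + " in session " + event.getSession().getId());
    }

    public void valueUnbound(HttpSessionBindingEvent event) {
        System.out.println("unbound " + event.getName()
                + " from session " + event.getSession().getId());
    }
}

Store the object as session.setAttribute("pdfObject", new PdfHolder(pdfObject)); if the "unbound" lines never appear after the 30-minute timeout, the container really is failing to expire the sessions rather than merely failing to log it.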

  • Tables are not being deployed completely from Dictionary

    hi,
I created a Dictionary project and a table with two columns in it. Then I rebuilt the project and created the .SDA archive.
After that, I went to the Navigator view, right-clicked on the previously created SDA archive and selected Deploy. Deployment was successful and the table was created, but the two columns in that table were not.
Does someone know what went wrong?
No matter how many times I repeat these steps, the columns are not created.
    thank you

    Please post this question on http://forums.oracle.com/forums/forum.jsp?id=486963 for best response.
    Ashesh Parekh
    Oracle9iAS Product Management

  • Obsolete jdb not being cleaned up

    Hi,
    Setup:
    * We are using Oracle NoSQL 1.2.123.
    * We have 3 replication groups with 3 replication nodes each.
    Problem:
* 2 of the slaves (in 2 different replication groups) occupy much more space in JDB files (10 times more) than all the others. As these are slaves, writes always go through the master, and all nodes in a replication group (eventually) have the same data, I assume this is stale data that has not been cleaned up by the BDB garbage collection (cleaner threads). Unfortunately the logs do not show anything new (since December last year) and the oldest JDB files are from February.
    Questions:
    * Any ideas what could have gone wrong?
* What can I do to trigger the cleaners to clean up the old data? Is that safe to do in a production environment and without downtime?
* Is it really safe to assume that the current data within a replication group is the same on all nodes?
    Thank you in advance
    Dimo
    PS. A thread dump shows 2 cleaner threads that do nothing.

1) The simplest and fastest way to correct the replica node is to restore it from the master node. We will send you instructions for doing this later today. Here are directions for refreshing the data storage files (.jdb files) on a target node. NoSQL DB will automatically refresh the storage files from another node after we manually stop the target node, delete its storage files, and finally restart it, as described below. Thanks to Linda Lee for these directions.
    First, be sure to make a backup.
Suppose you want to remove the storage files from rg1-rn3 and make it refresh its files from rg1-rn1. First check where the storage files for the target replication node are located using the show topology command in the Admin CLI. Start the Admin CLI this way:
    java -jar KVHOME/lib/kvstore.jar runadmin -host <host> -port <port>
Find the directory containing the target Replication Node's files.
        kv-> show topology -verbose
        store=mystore  numPartitions=100 sequence=108
          dc=[dc1] name=MyDC repFactor=3
          sn=[sn1]  dc=dc1 localhost:13100 capacity=1 RUNNING
            [rg1-rn1] RUNNING  c:/linda/work/smoke/KVRT1/dirB
                         single-op avg latency=0.0 ms   multi-op avg latency=0.67391676 ms
          sn=[sn2]  dc=dc1 localhost:13200 capacity=1 RUNNING
            [rg1-rn2] RUNNING  c:/linda/work/smoke/KVRT2/dirA
                      No performance info available
          sn=[sn3]  dc=dc1 localhost:13300 capacity=1 RUNNING
            [rg1-rn3] RUNNING  c:/linda/work/smoke/KVRT3/dirA
                         single-op avg latency=0.0 ms   multi-op avg latency=0.53694165 ms
          shard=[rg1] num partitions=100
            [rg1-rn1] sn=sn1 haPort=localhost:13111
            [rg1-rn2] sn=sn2 haPort=localhost:13210
            [rg1-rn3] sn=sn3 haPort=localhost:13310
        partitions=1-100
In this example, rg1-rn3's storage is located in
    c:/linda/work/smoke/KVRT3/dirA
Stop the target service using the stop-service command:
    kv-> plan stop-service -service rg1-rn3 -wait
In another command shell, remove the files for the target Replication Node:
    rm c:/linda/work/smoke/KVRT3/dirA/rg1-rn3/env/*.jdb
In the Admin CLI, restart the service:
    kv-> plan start-service -service rg1-rn3 -wait
The service will restart and will populate its missing files from one of the other two nodes in the shard. You can use the "verify" or the "show topology" command to check on the status of the store.
--mark

  • Table space not getting cleaned after using free method (permanent delete)

    Hi ,
We are using the free method of the LIB OBJ to permanently delete objects. As per the documentation, the ContentGarbageCollectionAgent, which runs on a schedule, should clean up the database. But the log of that ContentGarbageCollectionAgent shows all zeros for objects without reference, objects cleared, etc. That is, the table space remains the same before and after deleting all the contents in the CM SDK database. The agent runs as per the schedule but comes out having done nothing.
Can anybody shed some light on this issue?
    thanks
    Raj.

    Hi Matt,
    Thanks for replying. It's been a very long time waiting for you ;)
    ---"Are you running the 9.2.0.1, 9.2.0.2, or 9.2.0.3 version of the Database?"
    we are using 9.2.0.1
    ---"If you installed the CM SDK schema in the "users" tablespace ......."
    Yes we are using USERS tablespace for our Development.
    I ran the query. The result is:
    SYSTEM MANUAL NOT AFFECTED
    USERS MANUAL NOT AFFECTED
    CTXSYS_DATA MANUAL NOT AFFECTED
    CMSDK1_DATA MANUAL NOT AFFECTED
    (USERS belongs to develpoment cmsdk schema. And CMSDK1 for Prod CMSDK schema)
From the results I see only "Manual", but still I don't see the tablespace size coming down. Both tablespace sizes (USERS and CMSDK1) just keep growing.
Also, to let you know, we use the Oracle EM Console (standalone) application to view the Oracle database information online. Could the tool we use to view the tablespace sizes have anything to do with it? We make sure we always refresh it before making a note.
So is there anything else I can check? Once I saw the ContentGarbageCollection agent free 1025 objects and delete 0 objects, but I did not see any change in the tablespace size. I am a little confused between freed and deleted.
    thanks once again for your response Matt.
    -Raj.

  • Aq$_tab_p, aq$_tab_d filling and not being cleaned up

    Hi all.
    I have a simple 1 way streams replication setup (two node) based on examples. Replication seems to be working.
However, the AQ$_TAB_P and AQ$_TAB_D tables (on the capture side only) continue to fill, as do the number of messages in the queue and the spilled LCRs in v$buffered_queues. Nothing should be spilling, since the only things I'm sending are single-row updates to a heartbeat table and the Streams pools are a few hundred MB.
I have tried aq_tm_processes unset as well as set to 2, and the tables continue to grow.
The MSG_STATE values in aq$tab are either DEFERRED or DEFERRED SPILLED. As mentioned, all of the heartbeat updates (as well as small test transactions) replicate just fine, so transactions are being captured, propagated and applied.
For reference, I am running 10.2.0.2 on Solaris 10 with no Streams-related one-off patches to speak of. My propagation did not specify queue_to_queue.
I'm wondering if there is a step I may have missed, or what else I can look at, to ensure that these tables are cleaned up?
    Thanks.
    Edited by: user599560 on Oct 28, 2008 12:39 PM

    Hello
    I forgot to mention that you should check v$propagation_receiver on the destination and v$propagation_sender on the source. v$propagation_receiver on the source will not have records unless you are using bi-directional streams.
The aq_tm_processes parameter should be set on all databases that use Streams. This parameter is responsible for spawning the queue monitor slaves, which actually perform the spilling and remove spilled messages that are no longer needed.
It is suggested to remove this parameter from the spfile; however, SHOW PARAMETER will still show it as 0, so you should check v$spparameter to confirm whether it was actually removed. If you remove it from the spfile, the required number of slaves should be spawned automatically by the 10g autotune feature. However, I would always suggest setting this parameter to 1 so that one slave process is always spawned even if Streams is not in use, and SHOW PARAMETER will then always show 1.
If you find the slaves are not spawned, check your alert.log to see whether any errors are reported. You also need to check that the queue monitor coordinator process (qmnc) is spawned. If qmnc itself is not spawned (by default it should always be), then no q00 slaves will be spawned. If you remove the parameter from the spfile and see that no q00 slaves are spawned even though you are using Streams (capture, propagation or apply), you should log an SR with Oracle Support to investigate. You can check the qmnc and q00 slaves at the OS level using the following command:
    ps -ef | grep $ORACLE_SID | grep [q][m0][n0]
    Please mark this thread as answered if all your questions are answered well else let me know.
    Thanks,
    Rijesh
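
The two checks described above (the spfile entry versus the in-memory value of aq_tm_processes) can be scripted. A minimal JDBC sketch, assuming a user with SELECT privileges on the v$ views; the connection details are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AqTmCheck {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "secret");
             Statement st = con.createStatement()) {
            // What the spfile actually contains (value is NULL once the
            // parameter has been removed, as suggested above).
            try (ResultSet rs = st.executeQuery(
                    "select value from v$spparameter where name = 'aq_tm_processes'")) {
                while (rs.next()) {
                    System.out.println("spfile aq_tm_processes = " + rs.getString(1));
                }
            }
            // What the running instance reports (SHOW PARAMETER equivalent).
            try (ResultSet rs = st.executeQuery(
                    "select value from v$parameter where name = 'aq_tm_processes'")) {
                while (rs.next()) {
                    System.out.println("in-memory aq_tm_processes = " + rs.getString(1));
                }
            }
        }
    }
}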

  • Table DFKKQSR not being populated for 1099 withholding tax reporting

We had to run a catch-up program to populate this table in order to run program RFIDYYWT to generate 1099 forms. I am wondering what config might be missing that would allow this table to be populated automatically as the relevant transactions occur, i.e. payments to business partners that are 1099-able.
    thanks,
    Kashmir

I wouldn't mind it being closed if it weren't related to FI-CA public sector, but the table DFKKQSR is not an FI table; it's a table in contract accounting. FI folks won't know the answer to this, I don't believe. It's something someone working on FI-CA would have encountered.

  • Memory optimized DLLs not being cleaned up

    Hi,
From BOL, my understanding is that DBAs do not need to administer the DLLs created for memory-optimized tables or natively compiled stored procedures, as they are recompiled automatically when the SQL Server service starts and are removed when no longer needed.
But I am witnessing that even after a memory-optimized table has been dropped and the service restarted, the DLLs still exist in the file system AND are still loaded into SQL Server memory and attached to the process. This can be verified by the fact that they are still visible in sys.dm_os_loaded_modules, and are locked in the file system if you try to delete them while the SQL Server service is running.
    Is this a bug? Or are they cleaned up at a later date? If at a later date, what triggers the clean-up, if it isn't an instance restart?
    Pete

Most likely the DLLs are still needed during DB recovery, as there are still remnants of the tables in the checkpoint files. A couple of cycles of checkpoints and log truncation (e.g., by doing log backups) need to happen to clean up the old checkpoint files and remove the remnants of the dropped tables from disk.
    The following blog post details all the state transitions a checkpoint file goes through:
    http://blogs.technet.com/b/dataplatforminsider/archive/2014/01/23/state-transition-of-checkpoint-files-in-databases-with-memory-optimized-tables.aspx
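
A quick way to watch this clean-up happen is to poll sys.dm_os_loaded_modules between checkpoints and log backups. A minimal JDBC sketch; the connection details are placeholders, and the 'xtp_' filter assumes the usual naming convention of the generated in-memory OLTP DLLs:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class XtpDllCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; needs the Microsoft JDBC driver
        // on the classpath and VIEW SERVER STATE permission.
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://dbhost:1433;databaseName=master", "sa", "secret");
             Statement st = con.createStatement();
             // Generated in-memory OLTP modules conventionally carry 'xtp_'
             // in the file name (an assumption, not from the thread).
             ResultSet rs = st.executeQuery(
                     "select name, description from sys.dm_os_loaded_modules " +
                     "where name like '%xtp_%'")) {
            while (rs.next()) {
                System.out.println(rs.getString("name") + " :: "
                        + rs.getString("description"));
            }
        }
    }
}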

  • Application Table Index not being created

    Since this is slightly different than my last post, I created a new one.
    I am using the example as a basis to complete the Bulk process from the Staging table to the Application table with the following code snippet:
public void completeBulk(String completeBulkFlags) throws Exception {
    GraphOracleSem graph = null;
    try {
        graph = getModel().getGraph();
        try {
            graph.dropApplicationTableIndex();
        } catch (SQLException e) {
            // ignore: the index may not exist yet
        }
        graph.getBulkUpdateHandler().completeBulk(completeBulkFlags, null);
        graph.rebuildApplicationTableIndex();
    } finally {
        if (graph != null) {
            graph.close();
        }
    }
}
It appears to be dropping the application table index; however, after the call to rebuildApplicationTableIndex the <MODEL_NAME>_IDX index does not exist in the database, and I'm not seeing errors anywhere alerting me that the index was not created.
Since this index is missing, deletes on the model have no index to scan over the <MODEL_NAME>_TPL table and take forever. For the time being we are creating the index manually, but is there something we are doing wrong with the Jena Adapter API?
We are using the newest Jena Adapter and an 11.2.0.3 database instance.
    Thanks
    -Michael

    Hi Michael,
    Your code looks fine.
Hmmm. That is strange. The code path for rebuilding the application table index is quite straightforward. Can you please drop your own index and re-run the following by itself?
    graph.rebuildApplicationTableIndex();
    Thanks,
    Zhe Wu
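
A minimal standalone version of Zhe's suggestion, assuming the Jena Adapter's usual connection classes (oracle.spatial.rdf.client.jena.Oracle and GraphOracleSem); the URL, credentials and model name are placeholders:

import oracle.spatial.rdf.client.jena.GraphOracleSem;
import oracle.spatial.rdf.client.jena.Oracle;

public class RebuildIndexTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and model name.
        Oracle oracle = new Oracle(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "rdfuser", "secret");
        GraphOracleSem graph = new GraphOracleSem(oracle, "MY_MODEL");
        try {
            // Run the rebuild by itself, then check USER_INDEXES
            // for MY_MODEL_IDX to see whether it was created.
            graph.rebuildApplicationTableIndex();
        } finally {
            graph.close();
            oracle.dispose();
        }
    }
}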

  • Table schema not being updated after a SQL 'alter'

    Hi all,
    I have a problem to do with altering table columns and queries relating to that table thereafter.
I have a table which has been queried using OracleDataAdapter.Fill(DataSet). If I add a column, say using OracleCommand.ExecuteNonQuery() or a sqlplus session (doing a 'commit' after), the column does not show up in subsequent 'Fill' queries unless I reopen the DB connection.
    Just as an example, here is my test table which is defined as:
create table steveTest
(
     id numeric,
     name varchar(15),
     address varchar(25)
)
with a few rows of data in...
    If I query the table using ODP.NET (10 & 11)
    OracleDataAdapter oraDap = new OracleDataAdapter("select * from steveTest",oraCon);
    oraDap.Fill(ds);
Everything is fine until I add/remove a column, say with sqlplus or via OracleCommand.ExecuteNonQuery(),
e.g. "alter table steveTest add address2 varchar2(30)"
Subsequent Fill or data reader queries only show the unchanged table schema, without the newly added column. If I remove a column the symptoms are worse, and I receive an "ORA-01007: variable not in select list" error.
I can only think that ODP.NET is caching the schema for that table on that particular connection. Is there any way to forcefully clear this out?
    I have tried OracleConnection.PurgeStatementCache(); but this doesn't seem to help me.
    My connection string is defined as:
    Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.27)(PORT=1521)))
    (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=xe)));
    User Id=system;
    Password=mypass;
    HA Events=true;
    Validate Connection=true;
    Pooling=false;
    Enlist=false;
    Statement Cache Purge=true;
    Statement Cache Size=0;
The application I am writing is a middle-tier application which handles various DB drivers and maintains several 'connection queues' which multiple lightweight client applications can use concurrently. As you can imagine, a scenario may arise where a table is queried, altered and then queried again, which causes the above issue. The only way I can stop this from happening is to close/open the DB connection, but I don't want to do this after every alter statement.
I have tried this on Oracle Express 10g and Oracle 10g servers with ODP.NET clients (10.2.0.100/2.111.6.0). A point worth mentioning: this does not happen in sqlplus.
    Any help would be much appreciated.
    Regards,
    Steve

Maybe you can check by debugging the incoming IDoc in WE19.
1. Check the data you are sending from the source system.
2. Check the field before and after the change in the target system.
Maybe the field is being updated with the same content. Also check the status record of the IDoc.
    Message was edited by:
            Prabhu  S

  • How to find which table is not being used ?

    Hi,
I need to release space from the common schema we have. I have been permitted to drop the tables which have not been used for the last three months.
Can anyone please suggest how to find the tables that have not been used for a given amount of time?
    Thanks and Regards.
    Rajib

"i have been permitted to drop the tables which has not been used for the last three months."
Can I just chip in an observation on this premise? It's not unusual for systems to have processes that run quarterly or even annually. You need to be very careful about dropping "unused" tables - you might just kill your organisation's end-of-year reporting system.
    Is buying more disk space really not an option?
    Cheers, APC
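
If the decision still has to be made, one supporting data point is segment-level activity accumulated since instance startup. A hedged JDBC sketch over v$segment_statistics; note it only covers the period since the last restart, so it cannot prove three months of disuse on its own (the schema name and connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ColdTableScan {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "select object_name, value from v$segment_statistics " +
                     "where owner = ? and statistic_name = 'logical reads' " +
                     "and value = 0 order by object_name")) {
            ps.setString(1, "COMMON"); // placeholder schema name
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Zero logical reads since startup: a candidate, not proof.
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}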

  • Old recovery points seems to not being cleaned up

I'm running a Windows Server 2012 server with DPM 2012 SP1, acting as a secondary DPM server for a couple of primary servers. However, over the last 5-6 weeks it has begun to behave very strangely. Suddenly I get a lot of "Recovery Point volume threshold exceeded", "DPM does not have sufficient storage space available on the recovery point volume to create new recovery points" and "The used disk space on the computer running DPM for the recovery point volume of SQL Server 2008 database XXXXX\DB(servername.domain.com) has exceeded the threshold value of 90% (DPM accounts 600 MB for internal usage in addition to free space available). If you do not allocate more disk space, synchronization jobs may fail due to insufficient disk space. (ID 3169)."
All of these alerts have a common source - disk space, of course - but there is currently 8 TB free in the DPM disk pool. However, I have a feeling that all of this started when we added another DPM disk to the storage pool. Could it be that DPM no longer cleans up expired disk data correctly?
    /Amir

    Hi,
If the pruneshadowcopiesDpm2010.ps1 script is not completing, hangs, or crashes, then that needs to be addressed, as it will definitely cause storage usage problems.
In the meantime you can use the PowerShell script below to delete old recovery points and help free disk space. It will prompt you to select a data source, then a date, and will delete all recovery points made before that time.
    #Author : Ruud Baars
    #Date : 11/09/2008
    #Edited : 11/15/2012 By: Wilson S.
    #edited : 11:27:2012 By: Mike J.
    # NOTE: Update script to only remove recovery points on Disk. Recovery points removed will be from the oldest one up to the date
    # entered by the user while the script is running
    #deletes all recovery points before 'now' on selected data source.
    $version="V4.7"
    $ErrorActionPreference = "silentlycontinue"
    add-pssnapin sqlservercmdletsnapin100
    Add-PSSnapin -Name Microsoft.DataProtectionManager.PowerShell
    #display RP's to delete and ask to continue.
    #Check & wait data source to be idle else removal may fail (in Mojito filter on 'intent' to see the error)
    #Fixed prune default and logfile name and some logging lines (concatenate question + answer)
    #Check dependent recovery points do not pass BEFORE date and adjust selection to not select those ($reselect)
    #--- Fixed reselect logic to keep adjusting reselect for as long as older than BEFORE date
    #--- Fixed post removal rechecking logic to match what is done so far (was still geared to old logic)
    #--- Modified to remove making RP and ask for pruning, fixed logic for removal rechecking logic
    $MB=1024*1024
    $logfile="DPMdeleteRP.LOG"
    $wait=10 #seconds
    $confirmpreference = "None"
function Show_help {
    cls
    $l="=" * 79
    write-host $l -foregroundcolor magenta
    write-host -nonewline "`t<<<" -foregroundcolor white
    write-host -nonewline " DANGEROUS :: MAY DELETE MANY RECOVERY POINTS " -foregroundcolor red
    write-host ">>>" -foregroundcolor white
    write-host $l -foregroundcolor magenta
    write-host "Version: $version" -foregroundcolor cyan
    write-host "A: User Selects data source to remove recovery points for" -foregroundcolor green
    write-host "B: User enters date / time (using 24hr clock) to Delete recovery points" -foregroundcolor green
    write-host "C: User Confirms deletion after list of recovery points to be deleted is displayed." -foregroundcolor green
    write-host "Appending to log file $logfile`n" -foregroundcolor white
    write-host "User Accepts all responsibilities by entering a data source or just pressing [Enter] " -foregroundcolor white -backgroundcolor blue
    "**********************************" >> $logfile
    "Version $version" >> $logfile
    get-date >> $logfile
    show_help
    $DPMservername=&"hostname"
    "Selected DPM server = $DPMservername" >> $logfile
    write-host "`nConnnecting to DPM server retrieving data source list...`n" -foregroundcolor green
    $pglist = @(Get-ProtectionGroup $DPMservername) # WILSON - Created PGlist as array in case we have a single protection group.
    $ds=@()
    $tapes=$null
    $count = 0
    $dscount = 0
foreach ($count in 0..($pglist.count - 1))
{
# write-host $pglist[$count].friendlyname
$ds += @(get-datasource $pglist[$count]) # WILSON - Created DS as array in case we have a single protection group.
# write-host $ds
# write-host $count -foreground yellow
}
    if ( Get-Datasource $DPMservername -inactive) {$ds += Get-Datasource $DPMservername -inactive}
    $i=0
    write-host "Index Protection Group Computer Path"
    write-host "---------------------------------------------------------------------------------"
foreach ($l in $ds)
{
"[{0,3}] {1,-20} {2,-20} {3}" -f $i, $l.ProtectionGroupName, $l.psinfo.netbiosname, $l.logicalpath
$i++
}
    $DSname=read-host "`nEnter a data source index from list above - Note co-located datasources on same replica will be effected"
if (!$DSname)
{
write-host "No datasource selected `n" -foregroundcolor yellow
"Aborted on Datasource name" >> $logfile
exit 0
}
    $DSselected=$ds[$DSname]
if (!$DSselected)
{
write-host "No datasource selected `n" -foregroundcolor yellow
"Aborted on Datasource name" >> $logfile
exit 0
}
    $rp=get-recoverypoint $DS[$dsname]
    $rp
    # $DoTape=read-host "`nDo you want to remove when recovery points are on tape ? [y/N]"
    # "Remove tape recovery point = $DoTape" >> $logfile
    write-host "`nCollecting recoverypoint information for datasource $DSselected.name" -foregroundcolor green
    if ($DSselected.ShadowCopyUsedspace -gt 0)
    while ($DSSelected.TotalRecoveryPoints -eq 0)
    { # "still 0"
    #this is on disk
    $oldShadowUsage=[math]::round($DSselected.ShadowCopyUsedspace/$MB,1)
    $line=("Total recoverypoint usage {0} MB on DISK in {1} recovery points" -f $oldShadowUsage ,$DSselected.TotalRecoveryPoints )
    $line >> $logfile
    write-host $line`n -foregroundcolor white
    #this is on tape
    #$trptot=0
    #$tp= Get-RecoveryPoint($dsselected) | where {($_.Datalocation -eq "Media")}
    #foreach ($trp in $tp) {$trptot += $trp.size }
    #if ($trptot -gt 0 )
    # $line=("Total recoverypoint usage {0} MB on TAPE in {1} recovery points" -f ($trptot/$MB) ,$DSselected.TotalRecoveryPoints )
    # $line >> $logfile
    # write-host $line`n -foregroundcolor white
    [datetime]$afterdate="1/1/1980"
    #$answer=read-host "`nDo you want to delete recovery points from the beginning [Y/n]"
    #if ($answer -eq "n" )
    # [datetime]$afterdate=read-host "Delete recovery points AFTER date [MM/DD/YYYY hh:mm]"
    [datetime]$enddate=read-host "Delete ALL Disk based recovery points BEFORE and Including date/time entered [MM/DD/YYYY hh:mm]"
    "Deleting recovery points until $enddate" >>$logfile
    write-host "Deleting recovery points until and $enddate" -foregroundcolor yellow
    $rp=get-recoverypoint $DSselected
    if ($DoTape -ne "y" )
    $RPselected=$rp | where {($_.representedpointintime -le $enddate) -and ($_.Isincremental -eq $FALSE)-and ($_.DataLocation -eq "Disk")}
    else
    $RPselected=$rp | where {($_.representedpointintime -le $enddate) -and ($_.Isincremental -eq $FALSE)}
if (!$RPselected)
{
write-host "No recovery points found!" -foregroundcolor yellow
"No recovery points found, aborting...!" >> $logfile
exit 0
}
    $reselect = $enddate
    $adjustflag = $false
    foreach ($onerp in $RPselected)
    $rtime=[string]$onerp.representedpointintime
    $rsize=[math]::round(($onerp.size/$MB),1)
    $line= "Found {0}, RP size= {1} MB (If 0 MB, co-located datasource cannot be computed), Incremental={2} "-f $rtime, $rsize,$onerp.Isincremental
    $line >> $logfile
    write-host "$line" -foregroundcolor yellow
    #Get dependent rp's for data source
    $allRPtbd=$DSselected.GetAllRecoveryPointsToBeDeleted($onerp)
    foreach ($oneDrp in $allRPtbd)
    if ($oneDrp.IsIncremental -eq $FALSE) {continue}
    $rtime=[string]$oneDrp.representedpointintime
    $rsize=[math]::round(($oneDrp.size/$MB),1)
    $line= ("`t...is dependancy for {0} size {1} `tIncremental={2}" -f $rtime, $rsize, $oneDrp.Isincremental)
    $line >> $logfile
    if ($oneDrp.representedpointintime -ge $enddate)
    #stick to latest full ($oneDrp = dependents, $onerp = full)
    $adjustflag = $true
    $reselect = $onerp.representedpointintime
    "<< Dependents newer than BEFORE date >>>" >> $logfile
    Write-Host -nonewline "`t <<< later than BEFORE date >>>" -foregroundcolor white -backgroundcolor red
    write-host "$line" -foregroundcolor yellow
    else
    #Ok, include current latest incremental
    $reselect = $oneDrp.representedpointintime
    write-host "$line" -foregroundcolor yellow
    if ($reselect -lt $oneDrp.representedpointintime)
    #we adjusted further backward than latest incremental within selection
    $reselect = $rtime
    $line = "Adjusted BEFORE date to be $reselect to include dependents to $enddate"
    $line >> $logfile
    Write-Host $line -foregroundcolor white -backgroundcolor blue
    $line="`n<<< SECOND TO LAST CHANCE TO ABORT - ONE MORE PROMPT TO CONFIRM. >>>"
    write-host $line -foregroundcolor white -backgroundcolor blue
    $line >> $logfile
    $line="Above recovery points within adjusted range will be permanently deleted !!!"
    write-host $line -foregroundcolor red
    $line >> $logfile
    $line="These RP's include dependent recovery points and may contain co-located datasource(s)"
    write-host $line -foregroundcolor red
    $line >> $logfile
    $line="Data source activity = " + $DSselected.Activity
    $line >> $logfile
    write-host $line -foregroundcolor white
    $DoDelete=""
    while (($DoDelete -ne "N" ) -and ($DoDelete -ne "Y"))
    $line="Continue with deletion (must answer) Y/N? "
    write-host $line -foregroundcolor white
    $DoDelete=read-host
    $line = $line + $DoDelete
    $line >> $logfile
    if (!$DSselected.Activity -eq "Idle")
    $line="Data source not idle, do you want to wait Y/N ? "
    write-host $line -foregroundcolor yellow
    $Y=read-host
    $line = $line + $Y
    $line >> $logfile
    if ($Y -ieq "Y")
    Write-Host "Waiting for data source to become idle..." -foregroundcolor green
    while ($DSselected.Activity -ne "Idle")
    ("Waiting {0} seconds" -f $wait) >>$logfile
    Write-Host -NoNewline "..." -ForegroundColor blue
    start-sleep -s $wait
    if ($DoDelete -eq "Y")
    foreach ($onerp in $RPselected)
    #reselect is adjusted to safe range relative to what was requested
    #--- if adjustflag not set then all up to including else only older because we must keep the full
    if ((($onerp.representedpointintime -le $reselect) -and ($adjustflag -eq $false)) -or ($onerp.representedpointintime -lt $reselect))
    $rtime=[string]$onerp.representedpointintime
    write-host `n$line -foregroundcolor red
    $line >>$logfile
    if (($onerp ) -and ($onerp.IsIncremental -eq $FALSE)) { remove-recoverypoint -RecoveryPoint $onerp -confirm:$True} # >> $logfile}
    $line =("---`nDeleting recoverypoint -> " + $rtime)
    $line >>$logfile
    "All Done!" >> $logfile
    write-host "`nAll Done!`n`n" -foregroundcolor white
    $line="Do you want to View DPMdeleteRP.LOG file Y/N ? "
    write-host $line -foregroundcolor white
    $Y=read-host
    $line = $line + $Y
    $line >> $logfile
    if ($Y -ieq "Y")
    Notepad DPMdeleteRP.LOG
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.
