Cleanup of Pointcloud $$ Tables (BLK, SDO_PC)

I use SDO_PC with MDSYS.SDO_PC_BLK_TABLE. A lot of $$ tables are created and never go away:
mdpce_1_14c17$$$,
mdpce_1_17aea$$$,
mdpce_2_14c17$$$,
mdpce_2_17aea$$$,
mdpce_3_14c17$$$,
mdpcp_1_14c17$$$,
mdpcp_1_17aea$$$,
mdpcp_2_14c17$$$,
mdpcp_2_17aea$$$,
mdpcp_3_14c17$$$
Every failed loading process creates some $$-tables and views.
How do I clean up unused $$-tables?
How can I tell which of these $$-tables are no longer needed?
How do I repair accidentally deleted $$-tables?
My findings:
These are not the RDT$$ tables, which represent spatial indexes.
I searched in SDO_PC_PKG, SDO_UTIL, and in the user_sdo_* views. The RDT$$ tables are referenced, but not the point-cloud $$ tables.
SDO_UTIL.DROP_WORK_TABLES refers to them as "scratch tables".

Hi,
If there are scratch tables left over from an aborted previous run of point-cloud creation,
the new point cloud may not be created. Therefore, you need to clean up all scratch tables
using the following SQL statements (in your case):
SQL> exec sdo_util.drop_work_tables('14c17');
SQL> exec sdo_util.drop_work_tables('17aea');
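To see which suffixes still have scratch tables lying around, you could list them first (a minimal sketch; the MDPCE/MDPCP pattern is assumed from the table names you posted):
SQL> -- list remaining point-cloud scratch tables in the current schema
SQL> SELECT table_name FROM user_tables
     WHERE table_name LIKE 'MDPCE\_%$$$' ESCAPE '\'
        OR table_name LIKE 'MDPCP\_%$$$' ESCAPE '\';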
What error do you get after each failed loading process? Please let us know about any further problems.
Best regards
baris

Similar Messages

  • Need cleanup for audit table

    Hi,
    I have an audit table REQUESTS(REQUEST_ID, TIMESTAMP) which stores request IDs and timestamp values.
    The table is only ever written to, never selected from, and it is growing large.
    My concern is that, since nothing ever selects from this table, there may be no need for cleanup.
    Please advise: do I need to write a cleanup job for this table?
    Thanks.

    Well... you can write a cleanup procedure and schedule it as a job.
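    For example, on Oracle this could be a DBMS_SCHEDULER job (a minimal sketch; the 90-day retention, and the assumption that TIMESTAMP is a DATE column, are illustrative):
    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name        => 'PURGE_REQUESTS_JOB',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DELETE FROM requests WHERE timestamp < SYSDATE - 90; COMMIT; END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',  -- purge daily at 02:00
        enabled         => TRUE);
    END;
    /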

  • How to delete the records from custom table???

    My requirement is:
    I have a custom table, say ZABC, which is updated by my custom program. The data in this table is extracted by a BI extract program (say ZZZ). I am not writing any code for the extraction itself; that is done by the BI extract program. But I want to write the code for
    cleanup of the Z table: delete records 30 days after the BI data extractor has run.
    How is this possible? Please suggest a method for this.
    Thanks, Sanju

    Hello Sanjana,
    Ask your Basis people to do the cleanup after 30 days. If you want the cleanup to happen only 30 days after the BI extract has been done, then you need some sort of indicator that the extract has run, such as a flag and date in a custom table which is set as soon as the extract completes. Then, based on that information, you can delete the records.
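    A rough sketch of that idea in plain SQL (in practice this would be ABAP Open SQL in a scheduled batch job; the table and column names here are hypothetical):
    DELETE FROM zabc
     WHERE created_on < (SELECT extract_date - 30   -- hypothetical flag/date table
                           FROM zbi_extract_flag
                          WHERE extract_done = 'X');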
    Regards

  • SDO_PC, multiple SRIDs - best practise for data model?

    Hi,
    I'm using UTM and I am getting data covering two zones.
    All my existing data is from zone A.
    Tables:
    pointcloud
    pointcloud_blk
    Now I'm getting data with very few points from zone A and most points from zone B. It was agreed that the data delivery will be in the SRID for zone B.
    So I tested whether this would work. I had two point clouds, one with SRID A, another with SRID B. As soon as I put the SRID B point cloud inside, I could NO LONGER QUERY the point cloud with SRID A.
    So it seems to be necessary to use at least another pointcloud_blk table, e.g. pointcloud_blk_[srid].
    Question: does another pointcloud_blk per SRID suffice, or do I also need a pointcloud table per SRID? The pointcloud table seems interesting only because of its EXTENT column; on the other hand, that could be queried by function, since there are only 10 or so records (point clouds) inside.
    Please share your best practices: what works and what does not.

    It is necessary to have one pointcloud_blk table per SRID, since there is a spatial index on that table.
    As for the pointcloud table itself, it is up to you. You can have point clouds with different SRIDs in that table.
    But if you want to create a spatial index on it, you have to use a function-based index so that the index
    sees one SRID for the table.
    Since this table usually does not have many rows, one table for different SRIDs should work fine.
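    For the extra block table, a minimal sketch (the name is illustrative; an empty block table can be created from the MDSYS.SDO_PC_BLK_TABLE definition, just like the original one):
    -- empty block table for the zone-B SRID
    CREATE TABLE pointcloud_blk_zoneb AS
      SELECT * FROM mdsys.sdo_pc_blk_table WHERE 1 = 0;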
    siva

  • Encountered error while Upgrade Table

    When we use the upgrade table option in the OWB Deployment Manager, we encounter the following errors.
    Following are the deployment errors recorded from the Runtime Audit Browser.
    1 Informational Upgrade log file Start of main script Executing script in direction: Proceed Executing as user DWTARGET -- *** There are WARNINGS in the script. *** -- Review the Impact Report. -- -- Script Generation for OdbCMUpgradeAdapter_1116623286492 -- Plan was last modified: 20-MAY-05 -- Target destination db : DWTARGET -- Generation started at: 20-MAY-05 -- Generation finished at: 20-MAY-05 ALTER TABLE "DWTARGET"."TEST" MODIFY("COL2" VARCHAR2(12 byte)) Script execution complete. Tcl_AppInit failed: Execution exit status: 0. Execution succeeded
    2 Informational Upgrade log file Start of main script Executing script in direction: Clean Up Executing as user DWTARGET -- *** There are WARNINGS in the script. *** -- Review the Impact Report. -- -- Script Generation for OdbCMUpgradeAdapter_1116623286492 -- Plan was last modified: 20-MAY-05 -- Target destination db : DWTARGET -- Generation started at: 20-MAY-05 -- Generation finished at: 20-MAY-05 Starting cleanup of recovery tables... Completed cleanup of recovery tables. Script execution complete. Tcl_AppInit failed: Execution exit status: 0. Execution succeeded
    3 Recovery RPE-01008: Recovery of this request is in progress.
    4 Informational Upgrade log file Start of main script Executing script in direction: Clean Up Executing as user DWTARGET -- *** There are WARNINGS in the script. *** -- Review the Impact Report. -- -- Script Generation for OdbCMUpgradeAdapter_1116623286492 -- Plan was last modified: 20-MAY-05 -- Target destination db : DWTARGET -- Generation started at: 20-MAY-05 -- Generation finished at: 20-MAY-05 Starting cleanup of recovery tables... Completed cleanup of recovery tables. Script execution complete. Tcl_AppInit failed: Execution exit status: 0. Execution succeeded
    Please let us know of any ideas on how to recover from this OWB issue:
    "RPE-01008: Recovery of this request is in progress."

    Additional errors were found in the <OWB_HOME>/OWB/LOG directory:
    /***************Start of Log ***************/
    2005/05/20-17:24:40-EDT [1B8099A] DDLParserAdapter.updateStatusText: -- Script Generation for OdbCMUpgradeAdapter_1116624257836
    -- Plan was last modified: 20-MAY-05
    -- Target destination db : DWTARGET
    -- Generation started at: 20-MAY-05
    -- Generation finished at: 20-MAY-05
    2005/05/20-17:24:40-EDT [1B8099A] DDLParserAdapter.updateStatusText: Starting cleanup of recovery tables...
    2005/05/20-17:24:40-EDT [1B8099A] DDLParserAdapter.updateStatusText: Completed cleanup of recovery tables.
    2005/05/20-17:24:40-EDT [1B8099A] DDLParserAdapter.updateStatusText: Script execution complete.
    2005/05/20-17:24:41-EDT [1B8099A] DDLParserAdapter.updateStatusText: Tcl_AppInit failed:
    2005/05/20-17:24:41-EDT [1B8099A] DDLParserAdapter.updateStatusText: Execution exit status: 0
    2005/05/20-17:24:41-EDT [1B8099A] DDLParserAdapter.updateStatusText: Execution succeeded
    2005/05/20-17:24:41-EDT [1B8099A] DDLParserAdapter.updateCurrentStatus: 11
    2005/05/20-17:24:41-EDT [DEEEBD] java.lang.NullPointerException
    at oracle.wh.runtime.platform.adapter.odb.OdbCMUpgradeAdapter.getTargetConnection(OdbCMUpgradeAdapter.java:723)
    at oracle.wh.runtime.platform.adapter.odb.OdbCMUpgradeAdapter.deployUnparsedScripts(OdbCMUpgradeAdapter.java:363)
    at oracle.wh.runtime.platform.adapter.odb.OdbCMUpgradeAdapter.finalize(OdbCMUpgradeAdapter.java:325)
    at oracle.wh.runtime.platform.service.controller.DeploymentController.finalize(DeploymentController.java:325)
    at oracle.wh.runtime.platform.service.controller.DeploymentController.finalize(DeploymentController.java:55)
    at oracle.wh.runtime.platform.service.DeploymentManager.run(DeploymentManager.java:61)
    at java.lang.Thread.run(Thread.java:534)
    2005/05/20-17:24:41-EDT [DEEEBD] oracle.wh.runtime.platform.service.controller.RecoveryInProgress: RPE-01008: Recovery of this request is in progress.
    at oracle.wh.runtime.platform.service.controller.AdapterContextImpl.initialize(AdapterContextImpl.java:1307)
    at oracle.wh.runtime.platform.service.controller.DeploymentContextImpl.initialize(DeploymentContextImpl.java:439)
    at oracle.wh.runtime.platform.service.controller.DeploymentController.initialize(DeploymentController.java:69)
    at oracle.wh.runtime.platform.service.controller.DeploymentController.finalize(DeploymentController.java:319)
    at oracle.wh.runtime.platform.service.controller.DeploymentController.finalize(DeploymentController.java:338)
    at oracle.wh.runtime.platform.service.controller.DeploymentController.finalize(DeploymentController.java:55)
    at oracle.wh.runtime.platform.service.DeploymentManager.run(DeploymentManager.java:61)
    at java.lang.Thread.run(Thread.java:534)
    2005/05/20-17:24:41-EDT [DEEEBD] Attempting to create adapter 'class.Oracle Database.9.2.CMUpgrade'
    2005/05/20-17:24:41-EDT [DEEEBD] OdbCMUpgradeAdapter.finalize
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateStatusText: Start of main script
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateStatusText: Executing script in direction: Clean Up
    Executing as user DWTARGET
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateStatusText: -- Script Generation for OdbCMUpgradeAdapter_1116624257836
    -- Plan was last modified: 20-MAY-05
    -- Target destination db : DWTARGET
    -- Generation started at: 20-MAY-05
    -- Generation finished at: 20-MAY-05
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateStatusText: Starting cleanup of recovery tables...
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateStatusText: Completed cleanup of recovery tables.
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateStatusText: Script execution complete.
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateStatusText: Tcl_AppInit failed:
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateStatusText: Execution exit status: 0
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateStatusText: Execution succeeded
    2005/05/20-17:24:41-EDT [DBD794] DDLParserAdapter.updateCurrentStatus: 11
    2005/05/20-17:24:41-EDT [DEEEBD] java.lang.NullPointerException
    at oracle.wh.runtime.platform.adapter.odb.OdbCMUpgradeAdapter.getTargetConnection(OdbCMUpgradeAdapter.java:723)
    at oracle.wh.runtime.platform.adapter.odb.OdbCMUpgradeAdapter.deployUnparsedScripts(OdbCMUpgradeAdapter.java:363)
    at oracle.wh.runtime.platform.adapter.odb.OdbCMUpgradeAdapter.finalize(OdbCMUpgradeAdapter.java:325)
    at oracle.wh.runtime.platform.service.controller.DeploymentController.finalize(DeploymentController.java:325)
    at oracle.wh.runtime.platform.service.controller.DeploymentController.finalize(DeploymentController.java:338)
    at oracle.wh.runtime.platform.service.controller.DeploymentController.finalize(DeploymentController.java:55)
    at oracle.wh.runtime.platform.service.DeploymentManager.run(DeploymentManager.java:61)
    at java.lang.Thread.run(Thread.java:534)
    2005/05/20-17:24:41-EDT [DEEEBD] finalize_unit_done auditId=77611
    2005/05/20-17:24:42-EDT [1BC887B] Free Memory(bytes)=59807752 Total Memory(bytes)=64946176 Used Memory(bytes)=5138424
    2005/05/20-17:24:42-EDT [1BC887B] AuditId=77611: Request completed
    /***************END of Log ***************/
    Thanks in advance for any help

  • SDO_NET.SPATIAL_PARTITION creates empty table

    Hi all,
    I tried to partition my network using the function SDO_NET.SPATIAL_PARTITION. Everything works fine, except that the partition table is empty.
    This is the command I run:
    BEGIN
      SDO_NET.SPATIAL_PARTITION('BZ', 'BZ_PARTITION', 1000, 'LOG_DIR', 'part.log', 'w', 1);
    END;
    The tuples in the link table have the attribute link_level=1, except 2 tuples which, for testing purposes, I changed to 2, 3, and 4.
    From my understanding, the partitioning should be based on the geometry of the links and the nodes. Does the link_level attribute impact the partitioning?
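    If those test values turn out to matter, one thing to try is resetting them before re-running (a sketch; the BZ_LINKS table and link_level column are the ones named in the log below):
    UPDATE bz_links SET link_level = 1 WHERE link_level > 1;
    COMMIT;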
    Thanks for any clarification.
    The content of the log file is attached.
    ......@maps:/tmp$ more part.log
          Thu Jun 9 10:43:45 2011
    NDM spatial partitioning begins
    target network: BZ (max. no. of nodes per partition: 1000)
    target link_level >= 1
    set partition log to file: part.log in directory: LOG_DIR
    check node table: BZ_NODES
    check geometry column: GEOMETRY in node table: BZ_NODES
    node table: BZ_NODES checked
    link level information (link_level = [null or 0] will be treated as 1)
    *there are 8173 links at link_level = 1 in link table:BZ_LINKS
    there are 2 links at link_level = 2 in link table:BZ_LINKS
    there are 1 links at link_level = 3 in link table:BZ_LINKS
    there are 2 links at link_level = 4 in link table:BZ_LINKS
    link level information took .000 min.
    create temp. partition view: NDM_TEMP_PARTITION_V$1
    creating temp. partition view took .000 min.
    network: BZ has 3162 nodes at link_level >= 1
    temp. partition view: NDM_TEMP_PARTITION_V$1 created
    multi-level partitioning begins ...
    cleanup partitioning temporary tables
    temp. partition tables cleaned
    begin partitioning of NDM_TEMP_PARTITION_V$1
    partition level: 1 min. partition id: 0 max. partition id: 0
    generating 4 partitions from level: 0 to level: 1 ...
    begin partitioning level: 0...
    partitioning level: 0 with 2 partitions took .000 min.
    begin partitioning level: 1...
    partitioning level: 1 with 4 partitions took .000 min.
    completed partitioning of NDM_TEMP_PARTITION_V$1
    multi-level partitioning took .000 min.
    partition table: BZ_PARTITION renamed to NDM_TEMP_PARTITION_TAB
    creating partition table: BZ_PARTITION
    partition table: NDM_TEMP_PARTITION_TAB contains 0 link levels
    partition result inserted from table: NDM_TEMP_PARTITION_TAB
    inserting previous partition result took .000 min.
    inserting complete partition result took .000 min.
    temp. partition_table_name:NDM_TEMP_PARTITION_TAB dropped
    primary key constraint: BZ_PARTITION_PK on BZ_PARTITION(NODE_ID,LINK_LEVEL) added
    index: BZ_PARTITION_PL on BZ_PARTITION(PARTITION_ID,LINK_LEVEL) created
    index: BZ_PARTITION_P on BZ_PARTITION(PARTITION_ID) created
    *target link_level: 1 contains 3162 nodes in  partitions
    partition_table_name: BZ_PARTITION in network metadata updated
    partition table: BZ_PARTITION now contains 0 link levels
    partition table summary:
    partitioning summary took .000 min.
    partition result committed
    temp. partition view: NDM_TEMP_PARTITION_V$1 dropped
    NDM spatial partitioning completed.
    spatial partitioning took .017 min. ( .000 hr.)
    ----------------------------------------------------------------------
          Thu Jun 9 10:43:46 2011
    ----------------------------------------------------------------------

    I'm curious.
    Did you find the solution? What was the problem?

  • Database size versus table data size

    I ran the query below, which queries all tables in the database; the total size for reserved space is 17 GB. The database size is 294 GB. Why is there such a big difference in size? I would expect the database to be a little bigger, but not 277 GB bigger.
    DECLARE @TableName VARCHAR(100)    --For storing values in the cursor
    --Cursor to get the name of all user tables from the sysobjects listing
    DECLARE tableCursor CURSOR FOR
    select [name] from dbo.sysobjects where OBJECTPROPERTY(id, N'IsUserTable') = 1 FOR READ ONLY
    --A procedure level temp table to store the results
    CREATE TABLE #TempTable (
        tableName varchar(100),
        numberofRows varchar(100),
        reservedSize varchar(50),
        dataSize varchar(50),
        indexSize varchar(50),
        unusedSize varchar(50)
    )
    --Open the cursor
    OPEN tableCursor
    --Get the first table name from the cursor
    FETCH NEXT FROM tableCursor INTO @TableName
    --Loop until the cursor is not able to fetch
    WHILE (@@Fetch_Status >= 0)
    BEGIN
        --Dump the results of the sp_spaceused query to the temp table
        INSERT #TempTable
        EXEC sp_spaceused @TableName
        --Get the next table name
        FETCH NEXT FROM tableCursor INTO @TableName
    END
    --Get rid of the cursor
    CLOSE tableCursor
    DEALLOCATE tableCursor
    --Select all records so we can use the results
    SELECT * FROM #TempTable
    ORDER BY 2
    --Final cleanup!
    DROP TABLE #TempTable
    Alan

    Hi anaylor,
    According to your description, the database size is larger than the sum of the table sizes. There could be a number of reasons, for example:
    • There may have been a large transaction, or a lot of data in the database that has since been removed by some process.
    • Indexes/constraints are being stored in other files.
    • A past database cleanup (including table and record deletion) did not reclaim any disk space.
    • The initial size of the database is large.
    • Have you calculated unused space? Databases usually trade space for speed, allocating large amounts of disk space ahead of time to avoid allocation at transaction time. Space freed by deletes may or may not be reused, for speed reasons.
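    For the last point, a quick check of allocated versus used space per database file (a minimal sketch; both figures come back in 8 KB pages, hence the conversion to MB):
    SELECT name,
           size * 8 / 1024 AS allocated_mb,
           FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
    FROM sys.database_files;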
    Hope it can help.
    Regards,
    Sofiya Li
    TechNet Community Support

  • Can not request reports after cloning process

    Hello all,
    I am new to Oracle, please guide me.
    I tried cloning from the dev machine to the test machine, but after that the applications on the dev machine were not able to request reports anymore,
    with status: No Manager and phase: Inactive.
    we use:
    RDBMS : 10.2.0.2.0
    Oracle Application: 12.0.2(HRMS)
    os: AIX 5.3 (64 bit)
    I previously tried the approaches I found on the forums and the internet:
    1. Truncate FND_CONCURRENT_PROCESSES:
    truncate table FND_CONCURRENT_PROCESSES;
    2. Update FND_CONCURRENT_REQUESTS as follows:
    update fnd_concurrent_requests
    set status_code='X', phase_code='C'
    where status_code='T';
    3. Update FND_CONCURRENT_QUEUES.RUNNING_PROCESSES to zero:
    update fnd_concurrent_queues
    set running_processes = 0;
    4. Restart the concurrent managers. This did not succeed.
    Then I tried another approach that I found:
    1. Stop the Internal Concurrent Manager.
    2. Connect to the database via SQL*Plus as the APPS user.
    3. Execute the following to alter the FNDSM trigger on FND_NODES:
    CREATE OR REPLACE TRIGGER fndsm
    AFTER INSERT OR UPDATE ON FND_NODES   <-- I added the "OR UPDATE"
    FOR EACH ROW
    BEGIN
      if ( :new.NODE_NAME <> 'AUTHENTICATION' ) then
        if ( (:new.SUPPORT_CP='Y')
          or (:new.SUPPORT_FORMS='Y')
          or (:new.SUPPORT_WEB='Y') ) then
          fnd_cp_fndsm.register_fndsm_fcq(:new.NODE_NAME);
        end if;
        if (:new.SUPPORT_CP = 'Y') then
          fnd_cp_fndsm.register_fndim_fcq(:new.NODE_NAME);
        end if;
      end if;
    END;
    4. Clean up the FND_NODES table by executing the following:
    SQL> exec FND_CONC_CLONE.SETUP_CLEAN;
    5. Run AutoConfig on each node.
    6. Restart the Concurrent Managers. It still does not work.
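    One additional check worth doing after step 4: verify what FND_NODES actually contains (a sketch; the columns are the ones the FNDSM trigger above reads):
    SQL> select node_name, support_cp, support_forms, support_web
         from fnd_nodes;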
    Is something missing or wrong in what I am doing, or is there another, more appropriate way?
    Thanks,
    Batara

    Thanks, everyone, for your earlier concern and support.
    I've run the steps shown in MOS Doc 134007.1, but it still does not work.
    The clone we did is actually still a trial, and we have not concentrated much on the test instance as the target system; the problem is that the dev instance, as the source system, should not have ended up with a broken configuration, i.e. unable to request reports.
    Here is the next piece of the Internal Concurrent Manager log:
    ========================================================================
    Starting DEV_1111@DEV Internal Concurrent Manager -- shell process ID 725148
    logfile=/u01/oracle/DEV/inst/apps/DEV_hrmdev/logs/appl/conc/log/DEV_1111.mgr
    PRINTER=noprint
    mailto=devmgr
    restart=N
    diag=N
    sleep=30 (default)
    pmon=4 (default)
    quesiz=1 (default)
    Reviver is ENABLED
    Application Object Library: Concurrent Processing version 11.5
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    Internal Concurrent Manager started : 11-NOV-2009 11:43:06
    Process monitor session started : 11-NOV-2009 11:43:06
    Starting PODAMGR Concurrent Manager : 11-NOV-2009 11:43:07
    Could not initialize the Service Manager FNDSM_HRMDEV_DEV. Verify that HRMDEV has been registered for concurrent processing.
    Routine AFPEIM encountered an error while starting concurrent manager PODAMGR with library /u01/oracle/DEV/apps/apps_st/appl/po/12.0.0/bin/POXCON.
    Check that your system has enough resources to start a concurrent manager process. Contact y : 11-NOV-2009 11:43:07
    Starting INVTMRPM Concurrent Manager : 11-NOV-2009 11:43:07
    Could not initialize the Service Manager FNDSM_HRMDEV_DEV. Verify that HRMDEV has been registered for concurrent processing.
    Routine AFPEIM encountered an error while starting concurrent manager INVTMRPM with library /u01/oracle/DEV/apps/apps_st/appl/inv/12.0.0/bin/INCTM.
    Check that your system has enough resources to start a concurrent manager process. Contact : 11-NOV-2009 11:43:07
    Starting RCVOLTM Concurrent Manager : 11-NOV-2009 11:43:07
    Could not initialize the Service Manager FNDSM_HRMDEV_DEV. Verify that HRMDEV has been registered for concurrent processing.
    Routine AFPEIM encountered an error while starting concurrent manager RCVOLTM with library /u01/oracle/DEV/apps/apps_st/appl/po/12.0.0/bin/RCVOLTM.
    Check that your system has enough resources to start a concurrent manager process. Contact : 11-NOV-2009 11:43:08
    Starting FFTM Concurrent Manager : 11-NOV-2009 11:43:08
    Could not initialize the Service Manager FNDSM_HRMDEV_DEV. Verify that HRMDEV has been registered for concurrent processing.
    Routine AFPEIM encountered an error while starting concurrent manager FFTM with library /u01/oracle/DEV/apps/apps_st/appl/ff/12.0.0/bin/FFTM.
    Check that your system has enough resources to start a concurrent manager process. Contact your s : 11-NOV-2009 11:43:08
    Could not find service instance context for service instance number 804345408
    Could not find service instance context for service instance number 804345408
    Could not find service instance context for service instance number 804345408
    Could not find service instance context for service instance number 804345408
    Check that your system has enough resources to start a concurrent manager process. Contac : 11-NOV-2009 11:43:11
    Starting STANDARD Concurrent Manager : 11-NOV-2009 11:43:11
    Could not initialize the Service Manager FNDSM_HRMDEV_DEV. Verify that HRMDEV has been registered for concurrent processing.
    Routine AFPEIM encountered an error while starting concurrent manager STANDARD with library /u01/oracle/DEV/apps/apps_st/appl/fnd/12.0.0/bin/FNDLIBR.
    Check that your system has enough resources to start a concurrent manager process. Contac : 11-NOV-2009 11:43:11
    Starting STANDARD Concurrent Manager : 11-NOV-2009 11:43:11
    Could not initialize the Service Manager FNDSM_HRMDEV_DEV. Verify that HRMDEV has been registered for concurrent processing.
    Routine AFPEIM encountered an error while starting concurrent manager STANDARD with library /u01/oracle/DEV/apps/apps_st/appl/fnd/12.0.0/bin/FNDLIBR.
    Starting OAMCOLMGR Concurrent Manager : 11-NOV-2009 15:13:32
    Could not initialize the Service Manager FNDSM_HRMDEV_DEV. Verify that HRMDEV has been registered for concurrent processing.
    Routine AFPEIM encountered an error while starting concurrent manager OAMCOLMGR with library /u01/oracle/DEV/apps/apps_st/appl/fnd/12.0.0/bin/FNDLIBR.
    Check that your system has enough resources to start a concurrent manager process. Conta : 11-NOV-2009 15:13:33
    Starting INVMGR Concurrent Manager : 11-NOV-2009 15:13:33
    Could not initialize the Service Manager FNDSM_HRMDEV_DEV. Verify that HRMDEV has been registered for concurrent processing.
    Routine AFPEIM encountered an error while starting concurrent manager INVMGR with library /u01/oracle/DEV/apps/apps_st/appl/inv/12.0.0/bin/INVLIBR.
    Check that your system has enough resources to start a concurrent manager process. Contact : 11-NOV-2009 15:13:33
    Process monitor session ended : 11-NOV-2009 15:13:33
    Please help and guide me.
    Regards
    Batara

  • Logical Standby Problem

    My environment: the primary database is 11.1.0.7 64-bit on Windows 2003 Enterprise 64-bit. The logical standby is on the same platform and Oracle version, but a different server. I created a physical standby first and it applied the logs quickly without any issues. I received no errors when I changed it over to a logical standby database.
    The problem is that as soon as I issue the command "alter database start logical standby apply;" the CPU usage goes to 100% and SQL Apply takes a long time to apply a log. When I was doing this on 10g I never ran into this; as soon as a log was received, it was applied within a couple of minutes. I don't think it can be a memory issue, since there is plenty on the logical standby server. I just can't figure out why SQL Apply is so slow and the CPU usage skyrockets.
    I went through all of the steps in the Oracle guide "Managing a Logical Standby Database" and I don't see anything wrong. The only difference between the two databases is that on the primary I have Large Page support enabled, and on the logical standby I don't. Any help would be greatly appreciated; I need to use this logical standby for reporting.

    Thanks for the responses. I have found what is causing the problem. I kept noticing that the statements it was slowing down on were the ones where data was being written to the SYS.AUD$ table in the SYSTEM tablespace on the logical standby database. A quick count showed almost 6 million records in that table. After I truncated SYS.AUD$ on the logical standby, the archive logs started to apply normally. I wonder why the logical standby has a problem with this table and the primary doesn't. I didn't even know auditing was turned on on the primary database; it must be enabled by default. Now I know why my SYSTEM tablespace has grown from 1 GB to 2 GB since November.
    Now that I have fixed it for the moment, I am unsure what to do to keep this from happening. Can I turn off auditing on the logical standby and keep it on for the primary? Would this stop data from being written to the SYS.AUD$ table on the logical standby? It doesn't appear that Oracle offers any built-in cleanup for this table; I guess I can just clean it out occasionally, but that is another thing to add to the list of maintenance tasks. I notice that you can also write this audit data to a file on the OS. Has anyone here done that?
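    Depending on patch level, 11g also offers DBMS_AUDIT_MGMT for exactly this kind of housekeeping; a minimal sketch (the interval and options are illustrative, and availability on 11.1.0.7 should be verified first):
    -- one-time initialization, then a purge of the standard audit trail (SYS.AUD$)
    BEGIN
      DBMS_AUDIT_MGMT.init_cleanup(
        audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        default_cleanup_interval => 24);
      DBMS_AUDIT_MGMT.clean_audit_trail(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        use_last_arch_timestamp => FALSE);  -- FALSE purges all rows
    END;
    /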

  • A question about cache group error in TimesTen 7.0.5

    Hello Chris,
    We got some errors about a cache group:
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-ogTblGC00405: Failed calling OCI function: OCIStmtFetch()
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-raUtils00373: Oracle native error code = 1405, msg = ORA-01405: fetched column value is NULL
    2008-09-21 08:56:28.16 Err : ORA: 229576: ora-229576-2057-raStuff09837: Unexpected row count. Expecting 1. Got 0.
    The exact scenario is: our Oracle server was restarted for some reason, but we did not restart the cache group agent. Then those error messages started to appear.
    We want to know: if the Oracle server restarts, do we need to restart the cache agent? Thank you.

    Yes, the tracking table will track all changes to the associated base table. Only changes that meet the cache group WHERE clause predicate will be refreshed to TimesTen.
    The tracking table is managed automatically by the cache agent. As long as the cache agent is running and AUTOREFRESH is occurring the table will be space managed and old data will be purged.
    It is okay if very occasionally an AUTOREFRESH is unable to complete within its defined interval but if this happens with any regularity then this is a problem since this situation is unsustainable. To remedy this you need to try one or more of:
    1. Tune execution of AUTOREFRESH queries in Oracle. This may mean adding additional indexes to some of the cached Oracle tables. There is an article on this in MetaLink (doc note 473493.1).
    2. Increase the AUTOREFRESH interval so that a refresh can always complete within the defined interval.
    In any event it is important that you have enough space to cope with the 'steady state' size of the tracking table. If the cache agent will not be running for any significant length of time you need to manually cleanup the tracking table. In TimesTen 11g a script to do this is provided but it is not officially supported in TimesTen 7.0.
    If the rate of updates on the base table is such that you cannot arrive at a sustainable situation by tuning etc. then you will need to consider more radical options such as breaking the table into multiple separate tables :-(
    Chris

  • Strange error in SDO_ROUTER_PARTITION.PARTITION_ROUTER

    Hi,
    the statement
    exec SDO_ROUTER_PARTITION.PARTITION_ROUTER('PARTITION', 4000);
    gives the following output on Oracle 10.2.0.1.0 on Windows XP w/patch 5632711 applied.
    What can be wrong? Why does the partition procedure need to create a file or a directory?
    Please advise, we are stuck!
    error starting at line 1 in command:
    exec SDO_ROUTER_PARTITION.PARTITION_ROUTER('PARTITION', 4000);
    Error report:
    ORA-29280: invalid directory path
    ORA-06512: at "SYS.UTL_FILE", line 33
    ORA-06512: at "SYS.UTL_FILE", line 436
    ORA-06512: at "MDSYS.SDO_ROUTER_PARTITION", line 524
    ORA-06512: at line 1
    29280. 00000 - "invalid directory path"
    *Cause:    A corresponding directory object does not exist.
    *Action:   Correct the directory object parameter, or create a corresponding
    directory object with the CREATE DIRECTORY command.
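    For reference, the action would look something like this (a sketch only; the directory name and path are illustrative, and SDO_ROUTER_PARTITION may expect a specific name):
    CREATE OR REPLACE DIRECTORY work_dir AS 'C:\oracle\router_work';
    GRANT READ, WRITE ON DIRECTORY work_dir TO mdsys;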
    Thanks in advance.

    Thanks a lot Steven, your hint solved the problem.
    The procedure now starts and completes; however, the final tables are empty (!)
    The operation log follows:
    Mer Feb 21 17:59:53 2007
    ******* Beginning SDO Router partitioning
    Mer Feb 21 17:59:53 2007
    INFO: create and load node_part table
    Mer Feb 21 17:59:54 2007
    INFO: cleanup partitioning temporary tables
    Mer Feb 21 17:59:54 2007
    ERROR: exception processing partition of the NEW_PARTITION table
    Mer Feb 21 17:59:54 2007
    INFO: create index np_v_idx on node_part
    Mer Feb 21 17:59:55 2007
    INFO: create and load edge_part
    Mer Feb 21 18:0:3 2007
    INFO: create index edge_part_s_idx on edge_part
    Mer Feb 21 18:0:4 2007
    INFO: create index edge_part_t_idx on edge_part
    Mer Feb 21 18:0:4 2007
    INFO: create index edge_part_st_p_idx on edge_part
    Mer Feb 21 18:0:5 2007
    INFO: create and load outedge and inedge columns in node_part table
    Mer Feb 21 18:4:44 2007
    INFO: create index node_part_p_idx on node_part
    Mer Feb 21 18:4:44 2007
    INFO: recreating node table with partitioning information
    Mer Feb 21 18:4:46 2007
    INFO: updating edge table with partitioning information
    Mer Feb 21 18:5:4 2007
    INFO: creating and loading super_node_ids table
    Mer Feb 21 18:5:5 2007
    INFO: creating and loading super_edge_ids table
    Mer Feb 21 18:5:5 2007
    INFO: creating the final partition table
    Mer Feb 21 18:5:5 2007
    INFO: create index partition_p_idx on partition table
    Mer Feb 21 18:5:5 2007
    ******* Completed SDO Router partitioning
    Is it the type of trouble you are having too?
    Antonio

  • Problem with capturing of baselines

    Hi!
    I have a problem with the capturing of baselines when the SQL is called from PL/SQL code.
    For example, if I execute in a SQL*Plus session
    alter session set optimizer_capture_sql_plan_baselines = true;
    exec dbms_mview.refresh('VIEW_CLIENT_KONTO', 'C');
    exec dbms_mview.refresh('VIEW_CLIENT_KONTO', 'C');
    nothing happens. I do not find the baseline in dba_sql_plan_baselines.
    ========================================================
    optimizer_use_sql_plan_baselines = TRUE, of course, and I can capture baselines for plain SQL, only not if the SQL is invoked from PL/SQL.
    Now, nowhere in the documentation could I find that capturing does not work from PL/SQL. That would, in my opinion, be a serious disadvantage: so much code in the database runs as PL/SQL.
    We have Oracle 11.2.0.3 Enterprise Edition
    optimizer_features_enable = 11.2.0.3
    What could be wrong here? Did I forget a certain parameter or setting? Thanks in advance for your help.

    "Now, nowhere in the documentation could I find that capturing does not work from PL/SQL. That would, in my opinion, be a serious disadvantage: so much code in the database runs as PL/SQL."
    You're quite right. It would be a serious disadvantage.
    But it's not true that they are not captured from PL/SQL.
    Setup:
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL> select name, value from v$parameter where name like '%baseline%';
    NAME                                               VALUE
    optimizer_capture_sql_plan_baselines               FALSE
    optimizer_use_sql_plan_baselines                   TRUE
    SQL> create table t1
      2  (col1 number);
    Table created.
    SQL> INSERT /*+ domtest sql */ INTO t1 select 1 from dual;
    1 row created.
    SQL> begin
      2  INSERT /*+ domtest plsql */ INTO t1 select 1 from dual;
      3  end;
      4  /
    PL/SQL procedure successfully completed.
    SQL> select sql_id, substr(sql_text,1,30) sql_text, child_number c, to_char(force_matching_signature) sig, sql_plan_baseline
      2  from   v$sql
      3  where  sql_text like 'INSERT /*+ domtest%';
    SQL_ID        SQL_TEXT                                C SIG                            SQL_PLAN_BASELINE
    gmskus7sbgt5d INSERT /*+ domtest plsql */ IN          0 7407988653257810022
    64rzqgrt63wzu INSERT /*+ domtest sql */ INTO          0 17374141102446297863
    SQL> select to_char(b.signature) sig, b.created
      2  from   v$sql s
      3  ,      dba_sql_plan_baselines b
      4  where  s.sql_text like 'INSERT /*+ domtest%'
      5  and    b.signature = s.force_matching_signature;
    no rows selected
    SQL> alter session set optimizer_capture_sql_plan_baselines = true;
    Session altered.
    Baseline created for SQL statement:
    SQL> INSERT /*+ domtest sql */ INTO t1 select 1 from dual;
    1 row created.
    SQL> select sql_id
      2  ,      substr(sql_text,1,30) sql_text
      3  ,      child_number c
      4  ,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  from   v$sql
      7  where  sql_text like 'INSERT /*+ domtest%';
    SQL_ID        SQL_TEXT                                C SIG                            SQL_PLAN_BASELINE
    gmskus7sbgt5d INSERT /*+ domtest plsql */ IN          0 7407988653257810022
    64rzqgrt63wzu INSERT /*+ domtest sql */ INTO          0 17374141102446297863
    64rzqgrt63wzu INSERT /*+ domtest sql */ INTO          1 17374141102446297863           SQL_PLAN_1ayk9a0wnr
    Baseline created for PL/SQL statement:
    SQL> begin
      2  INSERT /*+ domtest plsql */ INTO t1 select 1 from dual;
      3  end;
      4  /
    PL/SQL procedure successfully completed.
    SQL> select sql_id
      2  ,      substr(sql_text,1,30) sql_text
      3  ,      child_number c
      4  ,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  from   v$sql
      7  where  sql_text like 'INSERT /*+ domtest%';
    SQL_ID        SQL_TEXT                                C SIG                            SQL_PLAN_BASELINE
    gmskus7sbgt5d INSERT /*+ domtest plsql */ IN          0 7407988653257810022
    gmskus7sbgt5d INSERT /*+ domtest plsql */ IN          1 7407988653257810022            SQL_PLAN_5s3v02k7yx9
    64rzqgrt63wzu INSERT /*+ domtest sql */ INTO          0 17374141102446297863
    64rzqgrt63wzu INSERT /*+ domtest sql */ INTO          1 17374141102446297863           SQL_PLAN_1ayk9a0wnr
    Cleanup baselines:
    SQL> declare
      2   l_spm_op pls_integer;
      3  begin
      4   for x in (select sql_handle from dba_sql_plan_baselines b where created >= trunc(sysdate))
      5   loop
      6       l_spm_op :=
      7       dbms_spm.drop_sql_plan_baseline(x.sql_handle);
      8   end loop;
      9  end;
    10  /
    PL/SQL procedure successfully completed.
    So, I would expect that this is related to DBMS_MVIEW and a restriction on recursive, internal statements.
    For example, if you capture sql plan baselines you don't capture baselines for sys and system statements, etc.
    Further investigation required.
    For example, let's build an MV:
    SQL> create materialized view mv1
      2  build immediate
      3  refresh on demand
      4  as
      5  select /*+ domtest mv */ col1 from t1;
    Materialized view created.
    SQL> exec dbms_mview.refresh('MV1');
    PL/SQL procedure successfully completed.
    We see two statements from the initial creation of the MV and the subsequent refresh (the latter is the one with the BYPASS_RECURSIVE_CHECK hint).
    SQL> select sql_id
      2  ,      substr(sql_text,1,50) sql_text
      3  ,      child_number c
      4  ,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  from   v$sql
      7  where  sql_text like 'INSERT %domtest%';
    SQL_ID        SQL_TEXT                                C SIG                            SQL_PLAN_BASELINE
    ctyufr5b5yzfm INSERT INTO "RIMS"."MV1" selec          0 12798978218302414227
                  t /*+ domtest mv */
    gfa550uufmr34 INSERT /*+ BYPASS_RECURSIVE_CH          0 12927173360082366872
                  ECK */ INTO "RIMS"."
    Even if we repeat the refresh, we can't seem to get a baseline:
    SQL> exec dbms_mview.refresh('MV1');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_mview.refresh('MV1');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_mview.refresh('MV1');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_mview.refresh('MV1');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_mview.refresh('MV1');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_mview.refresh('MV1');
    PL/SQL procedure successfully completed.
    SQL> select sql_id
      2  ,      substr(sql_text,1,50) sql_text
      3  ,      child_number c
      4  ,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  from   v$sql
      7  where  sql_text like 'INSERT %domtest%';
    SQL_ID        SQL_TEXT                                C SIG                            SQL_PLAN_BASELINE
    ctyufr5b5yzfm INSERT INTO "RIMS"."MV1" selec          0 12798978218302414227
                  t /*+ domtest mv */
    gfa550uufmr34 INSERT /*+ BYPASS_RECURSIVE_CH          0 12927173360082366872
                  ECK */ INTO "RIMS"."
    So.... might that BYPASS_RECURSIVE_CHECK have anything to do with it?
    It's not likely to be relevant but perhaps we should just check it?
    Let's see what happens if we go back to our original plsql statement because we can't insert into an MV directly.
    SQL> begin
      2   INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO t1 select 1 from dual;
      3  end;
      4  /
    PL/SQL procedure successfully completed.
    SQL> select sql_id
      2  ,      substr(sql_text,1,50) sql_text
      3  ,      child_number c
      4  --,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  from   v$sql
      7  where  sql_text like 'INSERT /*+ BYPASS_RECURSIVE_CHECK */%';
    SQL_ID        SQL_TEXT                                                    C SQL_PLAN_BASELINE
    6kjvr1gu6v2pq INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO T1 SELEC          0
    SQL> begin
      2   INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO t1 select 1 from dual;
      3  end;
      4  /
    PL/SQL procedure successfully completed.
    SQL> select sql_id
      2  ,      substr(sql_text,1,50) sql_text
      3  ,      child_number c
      4  --,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  from   v$sql
      7  where  sql_text like 'INSERT /*+ BYPASS_RECURSIVE_CHECK */%';
    SQL_ID        SQL_TEXT                                                    C SQL_PLAN_BASELINE
    6kjvr1gu6v2pq INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO T1 SELEC          0
    SQL> begin
      2   INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO t1 select 1 from dual;
      3  end;
      4  /
    PL/SQL procedure successfully completed.
    SQL> select sql_id
      2  ,      substr(sql_text,1,50) sql_text
      3  ,      child_number c
      4  --,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  from   v$sql
      7  where  sql_text like 'INSERT /*+ BYPASS_RECURSIVE_CHECK */%';
    SQL_ID        SQL_TEXT                                                    C SQL_PLAN_BASELINE
    6kjvr1gu6v2pq INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO T1 SELEC          0
    6kjvr1gu6v2pq INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO T1 SELEC          1 SQL_PLAN_aq62u7rqdfcs8125daea2
    So, nothing to do with that.
    I would suggest that it is because, being executed via DBMS_MVIEW, it is special and bypasses consideration for baselines.
    Can we somehow circumvent this?
    Perhaps, baselines being a pretty flexible vehicle that work off SIGNATURE (and PLAN_HASH_2).
    Let's double check the signature and plan hash we need to reproduce.
    SQL> select sql_id
      2  ,      substr(sql_text,1,50) sql_text
      3  ,      child_number c
      4  ,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  ,      plan_hash_value
      7  from   v$sql
      8  where  sql_text like 'INSERT %domtest%';
    SQL_ID        SQL_TEXT                                C SIG                            SQL_PLAN_BASELINE
    PLAN_HASH_VALUE
    ctyufr5b5yzfm INSERT INTO "RIMS"."MV1" selec          0 12798978218302414227
                  t /*+ domtest mv */
         3617692013
    gfa550uufmr34 INSERT /*+ BYPASS_RECURSIVE_CH          0 12927173360082366872
                  ECK */ INTO "RIMS"."
         3617692013
    And replace the materialized view with a table:
    SQL> drop materialized view mv1;
    Materialized view dropped.
    SQL> create table mv1
      2  (col1 number);
    Table created.
    And try to get a statement with the same signature and plan that does use a baseline:
    SQL> begin
      2   INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "RIMS"."MV1" select /*+ domtest mv */ col1 from t1 ;
      3  end;
      4  /
    PL/SQL procedure successfully completed.
    SQL> select sql_id
      2  ,      substr(sql_text,1,50) sql_text
      3  ,      child_number c
      4  ,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  ,      plan_hash_value
      7  from   v$sql
      8  where  sql_text like 'INSERT %domtest%';
    SQL_ID        SQL_TEXT                                C SIG                            SQL_PLAN_BASELINE
    PLAN_HASH_VALUE
    ctyufr5b5yzfm INSERT INTO "RIMS"."MV1" selec          0 12798978218302414227
                  t /*+ domtest mv */
         3617692013
    5n6auhnqpb258 INSERT /*+ BYPASS_RECURSIVE_CH          0 12927173360082366872           SQL_PLAN_b6tnbps6tn
                  ECK */ INTO "RIMS".
         3617692013
    gfa550uufmr34 INSERT /*+ BYPASS_RECURSIVE_CH          0 12927173360082366872
                  ECK */ INTO "RIMS"."
         3617692013
    Now if we drop and recreate the materialized view:
    SQL> create materialized view mv1
      2  build immediate
      3  refresh on demand
      4  as
      5  select /*+ domtest mv */ col1 from t1;
    Materialized view created.
    SQL> exec dbms_mview.refresh('MV1');
    PL/SQL procedure successfully completed.
    SQL> select sql_id
      2  ,      substr(sql_text,1,50) sql_text
      3  ,      child_number c
      4  ,      to_char(force_matching_signature) sig
      5  ,      sql_plan_baseline
      6  ,      plan_hash_value
      7  from   v$sql
      8  where  sql_text like 'INSERT %domtest%';
    SQL_ID        SQL_TEXT                                C SIG                            SQL_PLAN_BASELINE
    PLAN_HASH_VALUE
    dac4d22mf0m6k INSERT INTO "RIMS"."MV1" selec          0 12798978218302414227
                  t /*+ domtest mv */
         3617692013
    cn4syqz9cxp3y INSERT /*+ BYPASS_RECURSIVE_CH          0 12927173360082366872           SQL_PLAN_b6tnbps6tn
                  ECK */ INTO "RIMS"."
         3617692013
    And cleanup:
    SQL> drop table t1;
    Table dropped.
    SQL> drop materialized view mv1;
    Materialized view dropped.
    SQL> declare
      2   l_spm_op pls_integer;
      3  begin
      4   for x in (select sql_handle from dba_sql_plan_baselines b where created >= trunc(sysdate))
      5   loop
      6       l_spm_op :=
      7       dbms_spm.drop_sql_plan_baseline(x.sql_handle);
      8   end loop;
      9  end;
    10  /
    PL/SQL procedure successfully completed.
    So....
    Are baselines created for sql statements executed from PLSQL? Yes.
    Are baselines created for internal statements from DBMS_MVIEW? No.
    Why? Don't know. But I think it's expected behaviour.
    Is there a convoluted way of applying a baseline to the internal refresh statement? Yes.
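    For completeness, when automatic capture won't fire, a less convoluted alternative is a manual load from the cursor cache (a sketch; the sql_id is the one v$sql showed above for the refresh INSERT):
    DECLARE
      l_plans PLS_INTEGER;
    BEGIN
      l_plans := DBMS_SPM.load_plans_from_cursor_cache(sql_id => 'gfa550uufmr34');
      DBMS_OUTPUT.put_line(l_plans || ' plan(s) loaded');
    END;
    /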

  • How to purge Sales Data in Oracle Demantra

    Hi Demantra experts,
    I have a question about Demantra:
    Suppose the user has loaded historical data, say sales history, into the SALES_DATA table.
    After running the Analytical Engine, a forecast has been generated as well.
    Later he found that it was wrong data and now it should be removed.
    The user wants to remove these records from the SALES_DATA table.
    What is the process of purging the records?
    Thanks,
    Neeraj.

    Thanks a lot for your help.
    I checked the MetaLink note. It deals with removing the data from the temporary tables, not the base tables.
    The temporary tables act like interface tables: we populate the records in the respective temp table and run the .bat file.
    However, the records stay in the temp table after the .bat file has loaded the data into the base tables.
    The suggested MetaLink note provides a script to clean up the Demantra temp tables.
    My requirement is to remove the records from the base tables.
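    The naive route would be a direct DELETE against the base table, for example (a sketch only; the date range is illustrative, and Oracle Support should be consulted before deleting from Demantra base tables, since related tables such as MDP_MATRIX may need to stay in sync):
    DELETE FROM sales_data
     WHERE sales_date BETWEEN DATE '2011-01-01' AND DATE '2011-03-31';  -- illustrative range
    COMMIT;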
    Thanks,
    Neeraj.

  • Challenge with Pivoting data

    Hi Everyone,
    I have an interesting challenge which involves extracting data from two related databases, and pivoting part of the data from the second.
    Where I work we use SAP Business One (ERP) in concert with Accellos (WMS). Within our warehouses we store items in many bin locations. Bin locations, the items in those locations, quantities, etc. are stored in the Accellos database. Master data related
    to the items themselves, such as the item cost and preferred supplier, is stored in SAP Business One.
    Whilst I have been able to create reports which successfully bridge both SAP & Accellos, such as that shown below, I have not been able to present the data output in an ideal format.
    As can be seen above, given a single item code (e.g. DR1124), many bin labels (and corresponding quantities) are returned.
    I would like to show the bin labels 'horizontally' in the fashion illustrated below -
    I believe that using a Pivot is pivotal (excuse the pun!) to success in my endeavour, and due to this I have studied up on Pivots, both the Static type (which I am now comfortable with) and the Dynamic type (which I am still getting 'my head around').
    However there are a couple of challenges related to my specific pivot.
    The maximum number of Bins (and correspondingly Bin Labels) per Item changes
    There are over 10K Bin Labels
    I have written a basic Dynamic Pivot which shows all Bin Labels horizontally, like so...
    DECLARE @SQL nvarchar(max), @Columns nvarchar(max)
    SELECT @Columns =
    COALESCE(@Columns + ', ', '') + QUOTENAME(BINLABEL)
    FROM
    (
    SELECT DISTINCT
    BINLABEL
    FROM A1Warehouse..BINLOCAT
    ) AS B
    ORDER BY B.BINLABEL
    SET @SQL = '
    WITH PivotData AS
    (
    SELECT
    BINLABEL
    , PRODUCT
    , QUANTITY
    FROM A1Warehouse..BINLOCAT
    )
    SELECT
    PRODUCT,
    '+ @Columns +'
    FROM PivotData
    PIVOT
    (
    SUM(QUANTITY)
    FOR BINLABEL
    IN('+ @Columns +')
    ) AS PivotResult'
    EXEC(@SQL)
    The above technique gives me over 10K columns because there are that many Bin Labels in total.
    It occurred to me that I would need to count the maximum number of Bin Labels for the Item that had the most Bin Labels, and that this number would then need to be used to set the maximum number of columns.
    DECLARE @maxBins int
    DECLARE @loopCount int = 1
    SET @maxBins = (SELECT MAX([# of Bins]) AS 'Max Bins'
    FROM
    (
    SELECT
    COUNT(BINLABEL) '# of Bins'
    FROM A1Warehouse..BINLOCAT
    GROUP BY PRODUCT
    ) AS T0)
    PRINT @maxBins
    At this point in time one item occupies a total of 26 bin labels / locations. Every other item occupies less than 26 bin labels / locations, so I now know that I need to number my vertical columns as 'Bin 1', 'Bin 2', 'Bin 3', 'Bin...', 'Bin 26'.
    This is where the fun starts: I don't exactly need a Dynamic Pivot, but neither is a Static Pivot up to the task (at least as best I can tell).
    Here is the Static Pivot query that I have written -
    DECLARE @fromDate DATE = DATEADD(YY, -1, GETDATE())
    DECLARE @toDate DATE = GETDATE()
    DECLARE @maxBins int
    DECLARE @loopCount int = 1
    SET @maxBins = (SELECT MAX([# of Bins]) AS 'Max Bins'
    FROM (
    SELECT
    COUNT(BINLABEL) '# of Bins'
    FROM A1Warehouse..BINLOCAT
    GROUP BY PRODUCT
    ) AS T0)
    PRINT @maxBins
    SELECT *
    FROM (
    SELECT
    Tx.[Item Code]
    , Tx.Description
    , SUM(Tx.[Sales (last 12 Months)]) AS 'Sales (last 12 Months)'
    , ISNULL(Tx.[Supplier Code], '') AS 'Supplier Code'
    , ISNULL(Tx.[Supplier Name], '') AS 'Supplier Name'
    , Tx.OnOrder
    , Tx.IsCommited
    , Tx.OnHand
    , ISNULL(Tx.BINLABEL, '') AS 'Binlabel'
    , ISNULL(CAST(Tx.QUANTITY AS nvarchar), '') AS 'Quantity'
    FROM (
    SELECT
    T0.ItemCode AS 'Item Code'
    , T0.Dscription AS 'Description'
    , SUM(T0.Quantity) AS 'Sales (last 12 Months)'
    , T3.CardCode AS 'Supplier Code'
    , T3.CardName AS 'Supplier Name'
    , T2.OnOrder
    , T2.IsCommited
    , T2.OnHand
    , T4.BINLABEL
    , T4.QUANTITY
    FROM INV1 T0
    INNER JOIN OINV T1 ON T1.DocEntry = T0.DocEntry AND T1.CANCELED = 'N'
    INNER JOIN OITM T2 ON T2.ItemCode = T0.ItemCode
    LEFT JOIN OCRD T3 ON T3.CardCode = T2.CardCode
    LEFT JOIN A1Warehouse..BINLOCAT T4 ON T4.PRODUCT = T0.ItemCode collate SQL_Latin1_General_CP850_CI_AS
    WHERE T1.DocDate >= @fromDate AND T1.DocDate <= @toDate
    GROUP BY T0.ItemCode, T0.Dscription, T3.CardCode, T3.CardName, T2.OnOrder, T2.IsCommited, T2.OnHand, T4.BINLABEL, T4.QUANTITY
    UNION ALL
    SELECT
    T0.ItemCode AS 'Item Code'
    , T0.Dscription AS 'Description'
    , -SUM(T0.Quantity) AS 'Sales (last 12 Months)'
    , T3.CardCode AS 'Supplier Code'
    , T3.CardName AS 'Supplier Name'
    , T2.OnOrder
    , T2.IsCommited
    , T2.OnHand
    , T4.BINLABEL
    , T4.QUANTITY
    FROM RIN1 T0
    INNER JOIN ORIN T1 ON T1.DocEntry = T0.DocEntry
    INNER JOIN OITM T2 ON T2.ItemCode = T0.ItemCode
    LEFT JOIN OCRD T3 ON T3.CardCode = T2.CardCode
    LEFT JOIN A1Warehouse..BINLOCAT T4 ON T4.PRODUCT = T0.ItemCode collate SQL_Latin1_General_CP850_CI_AS
    WHERE T1.DocDate >= @fromDate AND T1.DocDate <= @toDate
    GROUP BY T0.ItemCode, T0.Dscription, T3.CardCode, T3.CardName, T2.OnOrder, T2.IsCommited, T2.OnHand, T4.BINLABEL, T4.QUANTITY
    )Tx
    GROUP BY Tx.[Item Code], Tx.Description, Tx.[Supplier Code], Tx.[Supplier Name], Tx.OnOrder, Tx.IsCommited, Tx.OnHand, Tx.BINLABEL, Tx.QUANTITY
    )Ty
    PIVOT (
    MAX(Ty.Quantity)
    FOR Ty.Binlabel IN ([0], [1], [2])
    )Tz
    Here is a screen shot of the results that I see -
    I understand why there are NULLs in my 0, 1, and 2 columns...there simply aren't Bin Labels called 0, 1 or 2!
    My challenge is that I do not know how to proceed from here. Firstly, how do I name each of the pivoted columns 'Bin 1', 'Bin 2', ..., 'Bin 26' when the actual Bin Labels are over 10 thousand different possible strings, e.g. #0005540, K1C0102, etc.?
    I have considered the possibility that a WHILE loop may be able to serve in populating the column names...
    DECLARE @maxBins int
    DECLARE @loopCount int = 1
    SET @maxBins = (SELECT MAX([# of Bins]) AS 'Max Bins'
    FROM (
    SELECT
    COUNT(BINLABEL) '# of Bins'
    FROM A1Warehouse..BINLOCAT
    GROUP BY PRODUCT
    ) AS T0)
    PRINT @maxBins
    WHILE @loopCount <= @maxBins
    BEGIN
    PRINT @loopCount
    SET @loopCount = @loopCount +1
    END
    ...of course the query above has no practical application at this stage, but I thought that it may be useful
    from a 'logic' point of view.
    I have tried to insert a WHILE clause into various locations within the Static Pivot query that I wrote; however, in each instance SSMS produced errors.
    If anybody can suggest a way to solve my data pivoting challenge it will be much appreciated.
    Kind Regards,
    David

    How you can 'assign' multiple values to the @SQL variable (if that is indeed what is happening)
    What 'FOR XML PATH('') actually does
    Dynamic SQL in general...
    if you could share some insights into how I can go about removing the NULLs it will be greatly appreciated.
    The FOR XML PATH('') method is one of several ways to concatenate the values from several rows into one column of one row.  There are other ways, but I believe the most commonly used one today (and certainly the one I always use) is the FOR XML method. 
    A good link for understanding the FOR XML method is
    http://bradsruminations.blogspot.com/2009/10/making-list-and-checking-it-twice.html.
    If you are not used to dynamic SQL, there is an excellent discussion at http://www.sommarskog.se/dynamic_sql.html. In that case, you definitely want to review the SQL Injection topic on that page before making extensive use of dynamic SQL.
    You can get rid of the NULLs, but only by converting the NULLs into a value. You can use the IsNull() function or, as diadmin noted, the Coalesce() function to do this. There is, however, the question of what value you want. Of course
    the obvious choice for converting varchar values (like BinLabel) is the empty string (''). But for numeric values (like BinQty) you need to either output a number (like 0.000000) or convert the numbers into a character type, and then you could
    use the empty string. Of course doing this makes an already complex piece of SQL more complex, but it certainly can be done. An example:
    -- Sample data
    Create Table Foo(ItemCode varchar(6), ItemDescription varchar(50), OrderQty decimal(12,6), BinLabel varchar(7), BinQty decimal(12,6));
    Insert Foo(ItemCode, ItemDescription, OrderQty, BinLabel, BinQty) Values
    ('DR1124', 'D6 Series 24 OD 3/8 1 Neck', 50, 'B1A1904', 9),
    ('DR1124', 'D6 Series 24 OD 3/8 1 Neck', 50, 'M1D0703', 66),
    ('DR1124', 'D6 Series 24 OD 3/8 1 Neck', 50, 'S1K0603', 24),
    ('H21', 'Rubber Mallot', 75, 'X1X0712', 100),
    ('H21', 'Rubber Mallot', 75, 'T3B4567', 92);
    Declare @SQL nvarchar(max);
    -- Number each item's bins, then generate one (BinLabelN, BinQtyN) column pair
    -- per bin number, wrapping each aggregate in IsNull() to replace NULL with ''
    ;With cteRN As
    (Select ItemCode, ItemDescription, BinLabel, BinQty,
            Row_Number() Over(Partition By ItemCode Order By BinLabel) As rn
     From Foo)
    Select @SQL = (Select Cast(N'' As nvarchar(max))
                   + N', IsNull(Max(Case When rn = ' + Cast(rn As nvarchar(5)) + N' Then BinLabel End), '''') As BinLabel' + Cast(rn As nvarchar(5))
                   + N', IsNull(Cast(Max(Case When rn = ' + Cast(rn As nvarchar(5)) + N' Then BinQty End) As varchar(19)), '''') As BinQty' + Cast(rn As nvarchar(5))
                   From (Select Distinct rn From cteRN) x
                   Order By rn
                   For XML Path(''));
    Select @SQL = Stuff(@SQL, 1, 2, '');  -- strip the leading ', '
    --Select @SQL
    -- Wrap the generated column list in the final grouped query
    Select @SQL = N'Select ItemCode, Max(ItemDescription) As ItemDescription, Max(OrderQty) As OrderQty,' + @SQL + N' From (Select ItemCode, ItemDescription, OrderQty, BinLabel, BinQty,
    Row_Number() Over(Partition By ItemCode Order By BinLabel) As rn
    From Foo) As x Group By ItemCode'
    --select @SQL
    Exec(@SQL);
    -- Cleanup
    go
    Drop Table Foo;
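    (For reference, with the sample data above the generated statement ends up with three column pairs, BinLabel1/BinQty1 through BinLabel3/BinQty3, and one result row per ItemCode; H21 sits in only two bins, so its BinLabel3 and BinQty3 come back as empty strings rather than NULLs.)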
    Tom

  • R/3 extraction (direct delta, queued delta, and V3)

    Hi,
    I am new to BW. Can anyone explain R/3 extraction to me in detail, with the steps to extract from R/3 to BW?
    I understood direct delta, but I couldn't understand queued delta. Can anyone please explain queued delta and also V3? Are queued delta and V3 the same?
    A detailed explanation or links would help a lot.
    Thanks,
    Sunny.

    Hi Sunny,
    Procurement - MM:
    MM is a part of SCM. There are several sections in this, such as procurement, manufacturing, vendor evaluation, etc. All can be found under the links below.
    Please find the procedure and important links for LO extraction.
    LO extraction procedure:
    1. Go to transaction RSA3 and see if any data is available for your DataSource. If there is data in RSA3, go to transaction LBWG (Delete Setup Data) and delete it by entering the application name.
    2. Go to transaction SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Filling the Setup Table --> Application-Specific Setup of Statistical Data --> perform the setup for the relevant application.
    3. In OLI*** (for example OLI7BW, the statistical setup of old documents: orders) give the run a name and execute. All the available records from R/3 are now loaded into the setup tables.
    4. Go to transaction RSA3 and check the data.
    5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is Serialized V3 Update.
    6. In the BW system, create an InfoPackage and, under the Update tab, select Initialize Delta Process; then schedule the package. All the data in the setup tables is now loaded into the data target.
    7. For the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to Direct or Queued Delta. Records then bypass SM13 and go directly to RSA7; in transaction RSA7 you can see the delta queue (green light), and new records appear there immediately.
    8. In the BW system, create a new InfoPackage for delta loads; under its Update tab, select the Delta Update radio button.
    9. Now you can go to your data target and see the delta records.
    Re: LO-Cockpit  V1 and V2 update
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Also refer to this link:
    http://www.sap-img.com/business/lo-cockpit-step-by-step.htm
    For inventory, download and go through this:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Steps:
    1. First delete the setup tables on the R/3 side using LBWG (for application 03).
    2. Fill the setup tables BX, BF and UM:
       - BX (opening stock): transaction MCNB
       - BF (material movements): transaction OLI1BW
       - UM (revaluations): transaction OLIZBW
    3. Once the data is ready in RSA3, pull it to BW. Because this is a non-cumulative key figure scenario, compression of the cube must handle the marker update correctly:
       - Load BX, then compress the cube normally (with marker update).
       - Load BF and UM (setup table data), then compress with the "No Marker Update" checkbox selected (zero marker update).
       - After the initialization loads are compressed, run the deltas; compress delta requests normally.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/29/79eb3cad744026e10000000a11405a/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/c5/a26f37eb997866e10000009b38f8cf/frameset.htm
    SD - Enterprise Sales and Distribution
    SD comes under CRM application component (ERP Analytics).
    SD flow tables:
    Sales order:
    VBAK - header
    VBAP - item
    VBEP - schedule lines
    Delivery:
    LIKP - header
    LIPS - item
    Billing:
    VBRK - header
    VBRP - item
    Shipment:
    VTTK - header
    VTTP - item
    Customer master data (SD linked to FI):
    sales area data: KNVP
    general data: KNA1
    company code data: KNB1
    Material master data (SD linked to MM):
    sales data: MARC, MVKE
    basic data: MARA, MAKT
    sales text data: STXH (text file header), STXL
    Process:
    1. A sales representative enters a sales order in the R/3 system; the transaction is stored in the three SD sales order tables.
    2. The delivery due list is executed for all sales orders and deliveries are created in tables LIKP and LIPS. A goods issue updates MKPF and MSEG.
    3. The invoice due list is then executed for all deliveries. An invoice is created, which creates an accounting document (BKPF and BSEG, FI tables) and sets up the receivable in table BSID (open items).
    4. The receivable is cleared when the payment check is received. Another accounting document is created (BKPF and BSEG), the open receivable is cleared, and table BSAD (closed line items) is updated.
    To enhance a DataSource:
    1. First select the application component (SD, MM, FI, PM, ...).
    2. Go to RSA5 and install the DataSource.
    3. Go to RSA6 (Display DataSources).
    4. Select the DataSource to be enhanced and click on Append Structure in the application toolbar. The append name starts with Z, e.g. ZAGTFIGL_4 (an FI example).
    5. Enter the component name and component type (here referring to an existing data element), then save and activate.
    6. Go to transaction CMOD, select the project, and go to the exit in enhancement RSAP0001. Alternatively, go to transaction SE37 and open function module EXIT_SAPLRSAP_001 (transactional data).
    7. Click Display, double-click the INCLUDE ZXRSAU01 (program), then switch to change mode and add your code there.
    So please follow the links below:
    http://help.sap.com/saphelp_nw70/helpdata/en/c5/bbe737a294946fe10000009b38f8cf/frameset.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/17/cd5e407aa4c44ce10000000a1550b0/frameset.htm
    Check these links:
    http://help.sap.com/saphelp_47x200/helpdata/en/dd/55f33e545a11d1a7020000e829fd11/frameset.htm
    http://www.sapbrain.com/TUTORIALS/FUNCTIONAL/SD_tutorial.html
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/MYSAP/SR_MM.pdf
    http://sap-img.com/materials/what-is-the-dataflow-of-mm.htm
    http://www.erpgenie.com/abap/tables_sd.htm
    http://www.erpgenie.com/abap/tables_mm.htm
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    /people/sap.user72/blog/2004/12/23/logistic-cockpit-delta-mechanism--episode-two-v3-update-when-some-problems-can-occur
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    /people/sap.user72/blog/2005/02/14/logistic-cockpit--when-you-need-more--first-option-enhance-it
    /people/sap.user72/blog/2005/04/19/logistic-cockpit-a-new-deal-overshadowed-by-the-old-fashioned-lis
    Regards
    CSM Reddy

Maybe you are looking for

  • Downloaded iOS 7.0.2 on my iPad 4. I am now having difficulty . With emails.

    After downloading IOS 7.0.2 on my iPad 4. When I go into my email it shows the number of emails to view but they are not there, I can't locate them. I have tried deleting the account and re-installing it but it is still the same. I read that a number

  • Table autoheightrows breaks with a nowrap column

    ADF JDEV 11.1.1.3.0 I have a table w/ autoheightrows set, e.g.,       <af:table value="#{bindings.PoRequisitionLinesAllVO11.collectionModel}"                 var="row" rows="#{bindings.PoRequisitionLinesAllVO11.rangeSize}"                 emptyText="

  • Memory Leak in Linux OS

    Using JNI1.2 for C++ and JAVA communication. And the Java application is a multithread application and monitoring JBoss application server using JMX. In each 5 minute interval the C++ application is invoking a method of java class and for each method

  • Firefox 29+: Strange toolbar background color

    Started in FF29, Roboform toolbar has a strange toolbar background color. It only happens if the Bookmarks toolbar is hidden. I've tried the following css rules, but to no avail. #rf-toolbar-container background-image: none !important; background-col

  • Project stock components in a production order

    Hi Gurus, can anyone help. We have the following situation. Raise Purchase orders for components with reference to a WBS. We then raise Production orders which contain this component without reference to the WBS element. So MRP tries to order twice a