Clean up AQ$_CT_capturename_P table

I have async distributed HotLog CDC set up on 9.2 databases. There are many changes captured daily on the source database, and I notice that the AQ$_CT_capturename_P table in the source CDC schema is growing extremely fast. It's at 8 GB with 5 million records and is growing 500 MB a day. What is this table for, and how can I clean it up? Thanks.

Thank you SOOOOO much for a reply!
I have a question regarding how to clean up the messages in a queue.
Within OEM, the queue table shows up in two places:
Under the user's schema->Tables->{queue_table_name}
and under
Distributed->Advanced Queues->Queue Tables->{queue_table_name}
Which of the two do you suggest I use to establish policies?
When right clicking under schema->tables->{queue_table_name} the choices are:
Create
Create Like
Create Using Wizard
View/Edit Details
Enable All Constraints
Disable All Constraints
Show Dependencies
Show Object DDL
Create Index On
Grant Privileges On
Create Synonym For
Data Management->Export, Import, Load
Analyze
Reorganize
Change Management->Clone ..., Compare..., Capture..
Find Database Object
And under Distributed->Advanced Queues->Queue Tables->{queue_table_name} the choices are:
Create
Create Like
View/Edit Details
Remove
Display Messages
Show Object DDL
Create Queue
Find Database Objects

Similar Messages

  • Command the execute sql query and does not clean up data in Table

    Hi Team
I have an SP which writes source and target data to temp tables and runs an EXCEPT query to get mismatch details.
I am using the EXEC command to execute the source query that writes data to a temp table. As soon as the EXEC completes, the #temp table is dropped, so I cannot use it for the comparison.
Is there any way to execute a SQL query in an SP, without using EXEC, that will hold the data in a temp table?

You need to create the temp tables before the EXEC statements and rewrite your dynamic queries as below:
declare @source_Sql nvarchar(1000)
       ,@target_Sql nvarchar(1000)
create table #TempTable1 (name nvarchar(10))
create table #TempTable2 (name nvarchar(10))
set @source_Sql = 'INSERT INTO #TempTable1 SELECT [Name] FROM Employee'
set @target_Sql = 'INSERT INTO #TempTable2 SELECT [Name] FROM Employee2'
EXEC (@source_Sql)
EXEC (@target_Sql)
select * from #TempTable1
except
select * from #TempTable2
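The reason the original attempt failed is temp-table scoping: a #temp table created inside EXEC() belongs to the dynamic batch's child scope and is dropped as soon as that batch ends, while a #temp table created in the calling scope is visible inside EXEC(). A minimal sketch of the difference (table and column names are illustrative):

```sql
-- Fails: #T is created in the child scope of the dynamic batch
-- and is dropped when EXEC() returns.
EXEC ('CREATE TABLE #T (name nvarchar(10)); INSERT INTO #T VALUES (''a'');');
SELECT * FROM #T;  -- Msg 208: Invalid object name '#T'

-- Works: #T created in the outer scope is visible to the child
-- scope, so the dynamically inserted rows survive the EXEC().
CREATE TABLE #T (name nvarchar(10));
EXEC ('INSERT INTO #T VALUES (''a'');');
SELECT * FROM #T;  -- one row
```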

  • Is there a way to clean up the Timesheet tables in MS Project 2007 before I migrate to 2013?

    Hi All
I'm attempting to migrate an instance of Project Server 2007 over to 2013. The problem is that it fails in the convert-to process when combining the four 2010 databases into a single 2013 database.
Reading through the logs, it appears to be an issue with an invalid foreign key in the MSP_TimesheetProjects table.
On investigation I find that the 2007 timesheet tables in the Reporting database have a number of invalid entries. We don't use the timesheeting system in that particular instance, so there should not be any entries at all. There are, however, a couple of projects in the TimesheetProjects table, as well as a few tasks and timesheet lines, even though there don't appear to be any timesheets. I have run a delete on all timesheets in the system and have rebuilt the Reporting database by restoring custom fields, but I can't seem to shift them.
    If anyone has any ideas, I'd appreciate the help.
    John

    Hi John,
What I'd suggest would be to refresh the 2007 Reporting DB. See the procedure below:
Log on to Project Web Access with administrator credentials.
Select Administrative Backup from the Database Administration section on the Server Settings page in Project Web Access.
In the Items for Backup section of the Backup page, select the checkbox for Enterprise Custom Fields, then click the Backup button and click the OK button when prompted by the system.
Select Administrative Restore from the Database Administration section on the Server Settings page in Project Web Access.
Choose Enterprise Custom Fields from the Choose Item selector on the Restore page.
Click the Restore button and then click the OK button when prompted by the system.
    See reference
    here.
    Hope this helps,
    Guillaume Rouyre, MBA, MVP, P-Seller |

  • How to clean tables in Ides database of Mobile Sales

    Hello experts,
    I'm working with Mobile Sales v 4 SP08, and SQL Server 2000 Standard Edition.
    I have created a new extract for a site.
To speed up loading the new extract when synchronizing the MSA client, how can I clean the Ides database in the MSA client?
    Best regards
    Juan

    Hi Pratik,
Thank you, but I'm not interested in the complete archiving process.
I only want to clean tables in the Ides database in the MSA client's SQL Server, with SQL tools, from SQL Enterprise Manager.
When I create a new extract for an existing user and synchronize it, the import operation is very slow because SQL Server needs to delete the old records in all tables and then import the new records.
As far as I know, by detaching the Ides database, saving it, ... some operations ..., and restoring it, we can clean all records in the tables. After that we will synchronize and load the new extract.
    Kind regards
    Juan

  • No more optical drive and windows 8 issue: "Windows cannot be installed on this disk as it has an MBR partition table."

I have an Early 2011 13" MBP going strong with an SSD, and the original HDD installed in the optical drive bay. Having had success with Windows 7 on my old HDD, I was planning to install Windows 8.1 (from an .iso) on my SSD. However, for the past day or so it's been a nightmare.
I tried the official procedure, editing the Boot Camp Info.plist so I could boot from USB (you can't install Windows from an external optical drive, apparently), partitioning as Boot Camp wanted to.
I then tried it a number of different ways, using Disk Utility etc. and with empty space.
However, I always seem to end up with the error: "Windows cannot be installed on this disk. The selected disk has an MBR partition table. On EFI systems, Windows can only be installed on GPT disks."
    I read some interesting stuff here http://www.royhochstenbach.com/installing-windows-8-1-on-a-2013-mac-pro/
    He points out that
    "Windows 7 and 8 in x64 support EFI. Normally if you install Windows on a Mac and use the installation DVD, it boots into regular BIOS mode, thus can be installed on an MBR partition. I tried the same, but since the Mac Pro doesn’t have an optical drive I had to use an external drive. And apparently the Mac boots external optical drives in EFI mode too. The Bootcamp wizard is aware of this, and creates a GPT partition on a non-superdrive Mac but an MBR partition on a superdrive Mac."
This means Boot Camp is essentially making the wrong type of partition?
My real question is: how do I install Windows 8? I'm really on my last legs!
I really don't want to have to open this thing up and reinsert my optical drive, as it's really difficult to get out of the enclosure (I would probably have to break it).
    A huge thanks in advance!

Hi,
Have you tried that suggestion?
You could also use these commands to check and install again:
Inside the Windows installer, hit Shift+F10 to get a command prompt, then run diskpart and type list disk to display a list of disks and information about them, such as their size, the amount of available free space, whether the disk is a basic or dynamic disk, and whether the disk uses the master boot record (MBR) or GUID partition table (GPT) partition style.
Then select the target disk. Zap the drive (with the clean command), convert it to GPT (convert gpt), and create the GPT/EFI special partitions.
Step-by-step instructions are here for reference:
HOW TO: Use the Diskpart.efi Utility to Create a GUID Partition Table Partition on a Raw Disk in Windows
http://support.microsoft.com/kb/297800?wa=wsignin1.0
Then reboot so the firmware finds those partitions and adds the disk to the EFI-native boot order (the Windows installer checks this).
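Put together, the diskpart session described above looks roughly like this (a sketch, not a verbatim transcript: the disk number is an assumption, so verify it in the list disk output first, because clean irreversibly wipes the selected disk):

```
X:\> diskpart
DISKPART> list disk
DISKPART> select disk 0
DISKPART> clean
DISKPART> convert gpt
DISKPART> exit
```

After the conversion, the Windows installer can usually create the EFI and MSR partitions itself when pointed at the unallocated space.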
    Karen Hu
    TechNet Community Support

  • Importing Large Sized MS Office Word 2007 Tables

    Hello,
    Once again I am stuck; hopefully some experienced and helpful FM people can give me some pointers.
    I am using FM 9 in WinXP and MS Office Word 2007. You know, the one that saves documents as .docx.
I get these documents from engineers in Word doc format; in some cases they have very large tables that I need to import into FM. I have tried to import these tables as a file and as a CSV, but nothing seems to work well. I keep having problems with each method I try, such that the tweaks required to make the import work are not worth it and I may as well start copying and pasting each cell.
I have also searched the forums to see if there was a post concerning this type of thing, but maybe I am not searching on the right terms, because I could not find a post that was similar to what I am going through.
    What is the best method to import a 200 row, 6 column table from MS Office Word 2007 into FM9? The end result will need to have a header row and be editable as an FM 9 table.
    Thanks for the help!
    Erin

Erin, as you've found, importing Word tables can be quite a challenge because the two applications' capabilities differ in so many ways that the import filter just can't handle them all. Plus, it often leaves hidden Word "junk codes" in the table in FM, which can severely mangle one's sanity later on. And the .docx file spec is newer than the FM9 filters, so ...
The very best way to get a clean import of Word tables is to use Rick Quatro's TableCleaner plug-in:
    http://www.frameexpert.com/plugins/index.htm
    It's absolutely worth its weight in gold.

  • Regarding the SAP big tables in ECC 6.0

    Hi,
We are running SAP ECC 6.0 on an Oracle 10.2 database. Can anyone give fine details on the big tables below? What are they? Where are they used? Do they need to be so big? Can we clean them up?
Table          Size
TST03          220 GB
COEP           125 GB
SOFFCONT1       92 GB
GLPCA           31 GB
EDI40           18 GB
    Thanks,
    Narendra

    Hello Narendra,
    TST03 merits special attention, certainly if it is the largest table in your database. TST03 contains the contents of spool requests and it often happens that at some time in the past there was a huge number of spool data in the system causing TST03 to inflate enormously. Even if this spool data was cleaned up later Oracle will not shrink the table automatically. It is perfectly possible that you have a 220 GB table containing virtually nothing.
    There are a lot of fancy scripts and procedures around to find out how much data is actually in the table, but personally I often use a quick-and-dirty check based on the current statistics.
    sqlplus /
    select (num_rows * avg_row_len)/(1024*1024) "MB IN USE" from dba_tables where table_name = 'TST03';
    This will produce a (rough) estimate of the amount of space actually taken up by rows in the table. If this is very far below 220 GB then the table is overinflated and you do best to reorganize it online with BRSPACE.
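To make the comparison concrete, the estimate above can be put next to the space actually allocated to the segment (the DBA_SEGMENTS query is an addition here, not part of the original reply; a large gap between the two numbers indicates an overinflated table):

```sql
-- Rough size of the live rows, from optimizer statistics:
SELECT (num_rows * avg_row_len) / (1024 * 1024) AS mb_in_use
  FROM dba_tables
 WHERE table_name = 'TST03';

-- Space actually allocated to the table segment on disk:
SELECT SUM(bytes) / (1024 * 1024) AS mb_allocated
  FROM dba_segments
 WHERE segment_name = 'TST03';
```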
    As to the other tables: there are procedures for prevention, archiving and/or deletion for all of them. The best advice was given in an earlier response to your post, namely to use the SAP Database Management Guide.
    Regards,
    Mark

  • Expired subscription clean up job failing

    the history message from the step:
    Date 19/11/2014 08:41:11
    Log Job History (Expired subscription clean up)
    Step ID 1
    Server TV-SQL1
    Job Name
    Expired subscription clean up
    Step Name Run agent.
    Duration 00:00:01
    Sql Severity 16
    Sql Message ID 1934
    Operator Emailed
    Operator Net sent
    Operator Paged
    Retries Attempted 0
    Message
    Executed as user: IBA\xxx. DELETE failed because the following SET options have incorrect settings: 'ANSI_NULLS, QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or
    query notifications and/or XML data type methods and/or spatial index operations. [SQLSTATE 42000] (Error 1934).  The step failed.
    when I run the step manually I get:
    Msg 1934, Level 16, State 1, Line 1
    DELETE failed because the following SET options have incorrect settings: 'ANSI_NULLS, QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or
    XML data type methods and/or spatial index operations.
    Msg 20709, Level 16, State 1, Procedure sp_MScleanup_conflict, Line 66
    The merge process could not clean up the conflict table "[MSmerge_conflict_recordings_items]" for publication "recordings".
I think I know which table it is (items, in the recordings DB). It has a computed field with a replicated text index on it.
The computed field definition is:
(ltrim(isnull([item_title],'')+' ')+isnull([item_desc],'')), where item_title and item_desc are two other nvarchar columns in the table that allow NULLs.
What do I need to do to stop the cleanup job from crashing?
    thanx
    david

    Which version of SQL are you using?
    Did you check this article -
    http://blogs.msdn.com/b/sqlserverfaq/archive/2014/11/13/merge-replication-expired-subscription-clean-up-job-sp-expired-subscription-cleanup-sp-mscleanup-conflict-fails-with-error-msg-1934-level-16-state-1-and-msg-20709-level-16-state-1-procedure-sp-mscleanup-conflict.aspx
    Regards, Ashwin Menon My Blog - http:\\sqllearnings.com
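For what it's worth, error 1934 generally means the DELETE ran in a session whose SET options don't match what an index on a computed column requires. A hedged sketch of the usual workaround, forcing the options the error names before running the cleanup procedure from the failing job step (procedure name per the linked article):

```sql
-- Force the SET options that indexes on computed columns require
-- (the error message names ANSI_NULLS and QUOTED_IDENTIFIER):
SET ANSI_NULLS ON;
SET QUOTED_IDENTIFIER ON;
EXEC sp_expired_subscription_cleanup;
```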

  • Slightly off topic: Read-only tables pre 11g

    Hi gang
    I'm just writing up a database quiz for a local user group and I was hoping I could get a bit of inspiration from the database experts.
    One of the questions will be "prior to 11g with the introduction of read-only tables, how could you make a table read-only?". The answers I've come up with:
1) Security privileges (schema + grant SELECT)
    2) Triggers
    3) Create a check constraint with disable validate
    4) Read-only tablespace
    5) Read-only database (standby)
    6) (Slightly crazy) Create view, and instead-of triggers that do nothing (similar to 2)
    7) Write the query results on a piece of paper and then turn the database off
    Anybody have any other answers, real or slightly off topic like mine please? ;)
    Cheers,
    CM.

Check constraint and trigger solutions may have problems with SQL*Loader direct path operations, so using them together with ALTER TABLE ... DISABLE TABLE LOCK may be mandatory depending on the needs, especially if DDL is also to be prevented.
This topic was once mentioned on Tom Kyte's blog or AskTom, but I couldn't find the source to link here.
    SQL> conn hr/hr
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.4.0
    Connected as hr
    -- cleaning objects
    SQL> drop table tong purge ;
    Table dropped
    SQL> drop view vw_tong ;
    View dropped
    -- creating the demo table
    SQL> create table tong ( col1 number ) ;
    Table created
    SQL> alter table tong add constraint cc_tong check ( 1=0 ) disable validate;
    Table altered
    SQL> alter table tong disable table lock;
    Table altered
    -- some DDL tests
    SQL> drop table tong ;
    drop table tong
    ORA-00069: cannot acquire lock -- table locks disabled for TONG
    SQL> truncate table tong ;
    truncate table tong
    ORA-25128: No insert/update/delete on table with constraint (HR.CC_TONG) disabled and validated
    SQL> alter table tong parallel ;
    alter table tong parallel
    ORA-00069: cannot acquire lock -- table locks disabled for TONG
    SQL> lock table tong in exclusive mode ;
    lock table tong in exclusive mode
    ORA-00069: cannot acquire lock -- table locks disabled for TONG
    -- some DML tests
    SQL> select * from tong ;
          COL1
    SQL> update tong set col1 = col1 + 1 ;
    update tong set col1 = col1 + 1
    ORA-25128: No insert/update/delete on table with constraint (HR.CC_TONG) disabled and validated
    -- creating dependent objects test
    SQL> create index nui_tong on tong(col1) nologging ;
    Index created
    SQL> create view vw_tong as select * from tong ;
    View created
    added comments to the code
    Message was edited by:
    TongucY
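For contrast, the 11g feature the quiz question alludes to reduces all of the above to a one-liner (shown here against the demo table from the transcript):

```sql
ALTER TABLE tong READ ONLY;   -- 11g and later
ALTER TABLE tong READ WRITE;  -- back to normal
```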

  • Entries in Dimension table (Dim Id's) which do not exist in Fact table

    Hello all,
We have a strange situation when we run the report SAP_INFOCUBE_DESIGN. I expected that a dimension table could have at most 100% of the entries of the fact tables. However, we have a dimension with 587% of the entries of the fact tables.
    ZOEEMRW            /BIC/DZOEEMRW3      rows:  2.416.567    ratio:        587  %
    ZOEEMRW            /BIC/DZOEEMRW5      rows:      2.464    ratio:          1  %
    ZOEEMRW            /BIC/DZOEEMRWP      rows:          4    ratio:          0  %
    ZOEEMRW            /BIC/DZOEEMRWT      rows:        520    ratio:          0  %
    ZOEEMRW            /BIC/DZOEEMRWU      rows:         18    ratio:          0  %
    ZOEEMRW            /BIC/EZOEEMRW       rows:    399.160    ratio:         97  %
    ZOEEMRW            /BIC/FZOEEMRW       rows:     12.520    ratio:          3  %
    Consider dimension /BIC/DZOEEMRW3.
For this dimension, we could not find entries in the tables /BIC/EZOEEMRW or /BIC/FZOEEMRW for the following dim id's.
There are many DIM ids which exist in the dimension table but do not exist in the fact tables.
Is it normal that this can happen? If so, in which cases, and is there any way to clean up these entries in the dimension table which do not exist in the fact table?
    Any help or insight on this issue will be appreciated.
    Best Regards,
    Nitin

    Hey,
there is a program with which you can clean up your dimension table; search the forum. But also try RSRV, which has an option especially for that.
The background is that when using process chains and deleting data from the cube, the dimensions are not deleted. So there can be data in the dimension which has no relation to the fact table. This has been discussed many times in this forum.
    Best regards,
    Peter

  • DHCP Client Table won't reset

    Hi,
I am rearranging my router settings, mostly my DHCP clients, but I noticed the DHCP Client Table won't reset itself, and therefore I can't add new static IPs anymore for a specific PC or print server.
    I have a Linksys wireless router WRT54GC.
    The error which i receive after trying is "Error: This entry already exists in the Reserved PC DHCP Client Table."
Is this a known bug, and is there a way to fix this / clean up the Client Table? Without resetting my modem, of course.
    Thanks in advance

This is a Linksys user forum. It is seldom you see a Linksys technician posting here, unless they post without acknowledging that they work for Linksys. (Which I believe they should have to do, personally. A special avatar to let us know they are Linksys personnel would be all it would take. Unless of course they wouldn't want to be held accountable for their advice.)
If you want direct tech support from Linksys, you're going to have to call them... good luck!!!
    Tomato 1.25vpn3.4 (SgtPepperKSU MOD) on a Buffalo WHR-HP-G54
    D-Link DSM-320 (Wired)
    Wii (Wireless) - PS3 (Wired), PSP (Wireless) - XBox360 (Wired)
    SonyBDP-S360 (Wired)
    Linksys NSLU2 Firmware Unslung 6.10 Beta unslung to a 2Gb thumb, w/1 Maxtor OneTouch III 200Gb
    IOmega StorCenter ix2 1TB NAS
    Linksys WVC54G w/FW V2.12EU
    and assorted wired and wireless PCs and laptops

  • BC4J cleaning up after session timeout

    Hi
Can anyone advise me on how to clean up a database table via BC4J when a session times out? When someone logs into my application, I enter an audit trail record into a table. When they log out, or when the session times out, I want to add a logout time.
    I have got as far as defining a bean that implements HttpSessionBindingListener. When the session times out valueUnbound is called.
    This works fine, but I need to be able to access a BC4JContext in order to update a table.
I cannot see how to obtain a BC4JContext, as BC4JContext.getContext requires an HttpServletRequest to be passed to it, which I do not have access to at this point.
Can anyone help?

    Hi,
    One solution is to use the oracle.jbo.client.Configuration.createRootApplicationModule, releaseRootApplicationModule APIs in a try...finally block in your binding listener. These will instantiate/reuse an ApplicationModule that is managed by a pool which is configured by the specified configuration.
    Another approach would be to extend oracle.jbo.server.ApplicationModuleImpl.reset (or resetState, please see the javadoc to confirm). This method is invoked by the pool whenever an ApplicationModule is released statelessly OR whenever an ApplicationModule is used between sessions. If it is not your intention for the table to be cleaned out when a managed state AM is recycled then this solution may not work for you.
    A third solution is to cleanup the table on the DB side using PL/SQL and DBMS_JOBS. This has the advantage that it does not depend upon the MT to clean up the audit table.
    Hope this helps,
    JR

  • Huge memory leaks in using PL/SQL tables and collections

    I have faced a very interesting problem recently.
I use PL/SQL tables (TYPE TTab IS TABLE OF ... INDEX BY BINARY_INTEGER;) and collections (TYPE TTab IS TABLE OF ...;) in my packages very widely, and I have noticed a very strange thing Oracle does. It seems to me that there are memory leaks in the PGA when I use PL/SQL tables or collections. Let me give a little example.
CREATE OR REPLACE PACKAGE rds_mdt_test IS
  TYPE TNumberList IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  PROCEDURE test_plsql_table(cnt INTEGER);
END rds_mdt_test;
CREATE OR REPLACE PACKAGE BODY rds_mdt_test IS
  PROCEDURE test_plsql_table(cnt INTEGER) IS
    x TNumberList;
  BEGIN
    FOR indx IN 1 .. cnt LOOP
      x(indx) := indx;
    END LOOP;
  END;
END rds_mdt_test;
    I run the following test code:
    BEGIN
    rds_mdt_test.test_plsql_table (1000000);
    END;
    and see that my session uses about 40M in PGA.
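(The PGA figures quoted in this thread can be checked from the session itself; this is a standard query against V$MYSTAT, added here for reference rather than taken from the original post:)

```sql
SELECT n.name, ROUND(s.value / 1024 / 1024) AS mb
  FROM v$mystat   s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name IN ('session pga memory', 'session pga memory max');
```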
    If I repeat this example in the same session creating the PL/SQL table of smaller size, for instance:
    BEGIN
    rds_mdt_test.test_plsql_table (1);
    END;
    I see again that the size of used memory in PGA by my session was not decreased and still be the same.
    The same result I get if I use not PL/SQL tables, but collections or varrays.
    I have tried some techniques to make Oracle to free the memory, for instance to rewrite my procedure in the following ways:
PROCEDURE test_plsql_table(cnt INTEGER) IS
  x TNumberList;
BEGIN
  FOR indx IN 1 .. cnt LOOP
    x(indx) := indx;
  END LOOP;
  x.DELETE;
END;
or
PROCEDURE test_plsql_table(cnt INTEGER) IS
  x TNumberList;
BEGIN
  FOR indx IN 1 .. cnt LOOP
    x(indx) := indx;
  END LOOP;
  FOR indx IN 1 .. cnt LOOP
    x.DELETE(indx);
  END LOOP;
END;
or
PROCEDURE test_plsql_table(cnt INTEGER) IS
  x     TNumberList;
  empty TNumberList;
BEGIN
  FOR indx IN 1 .. cnt LOOP
    x(indx) := indx;
  END LOOP;
  x := empty;
END;
    and so on, but result was the same.
This is a huge problem for me, as I have to manipulate collections and PL/SQL tables of very large size (from tens of thousands of rows to millions of rows), and just a few sessions running my procedure may bring the server down due to lack of memory.
I cannot understand what Oracle reserves so much memory for (I use local variables) -- is it a bug or a feature?
I would appreciate any help.
I use an Oracle 9.2.0.1.0 server under Windows 2000.
    Dmitriy.

    Thank you, William!
    Your advice about using DBMS_SESSION.FREE_UNUSED_USER_MEMORY was very useful. Indeed it is the tool I was looking for.
Now I write my code like this:
declare
  type TTab is table of ... index by binary_integer;
  res       TTab;
  empty_tab TTab;
begin
  res(1) := ...;
  res := empty_tab;
  DBMS_SESSION.FREE_UNUSED_USER_MEMORY;
end;
I use the construction "res := empty_tab;" to mark all memory allocated to the PL/SQL table as unused, according to Tom Kyte's advice. And I could live a happy life if everything were so easy. Unfortunately, some tests I have done showed that there are some troubles in cleaning complex nested PL/SQL tables indexed by VARCHAR2, which I use in my current project.
Let me give another example.
CREATE OR REPLACE PACKAGE rds_mdt_test IS
  TYPE TTab0 IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  TYPE TRec1 IS RECORD(
    NAME VARCHAR2(4000),
    rows TTab0);
  TYPE TTab1 IS TABLE OF TRec1 INDEX BY BINARY_INTEGER;
  TYPE TRec2 IS RECORD(
    NAME VARCHAR2(4000),
    rows TTab1);
  TYPE TTab2 IS TABLE OF TRec2 INDEX BY BINARY_INTEGER;
  TYPE TStrTab IS TABLE OF NUMBER INDEX BY VARCHAR2(256);
  PROCEDURE test_plsql_table(cnt INTEGER);
  PROCEDURE test_str_tab(cnt INTEGER);
  x             TTab2;
  empty_tab2    TTab2;
  empty_tab1    TTab1;
  empty_tab0    TTab0;
  str_tab       TStrTab;
  empty_str_tab TStrTab;
END rds_mdt_test;
CREATE OR REPLACE PACKAGE BODY rds_mdt_test IS
  PROCEDURE test_plsql_table(cnt INTEGER) IS
  BEGIN
    FOR indx1 IN 1 .. cnt LOOP
      FOR indx2 IN 1 .. cnt LOOP
        FOR indx3 IN 1 .. cnt LOOP
          x(indx1).rows(indx2).rows(indx3) := indx1;
        END LOOP;
      END LOOP;
    END LOOP;
    x := empty_tab2;
    dbms_session.free_unused_user_memory;
  END;
  PROCEDURE test_str_tab(cnt INTEGER) IS
  BEGIN
    FOR indx IN 1 .. cnt LOOP
      str_tab(indx) := indx;
    END LOOP;
    str_tab := empty_str_tab;
    dbms_session.free_unused_user_memory;
  END;
END rds_mdt_test;
    1. Running the script
    BEGIN
    rds_mdt_test.test_plsql_table ( 100 );
    END;
I see that usage of PGA memory in my session is close to zero. So I can judge that the nested PL/SQL table indexed by BINARY_INTEGER and the memory allocated to it were cleaned up successfully.
    2. Running the script
    BEGIN
    rds_mdt_test.test_str_tab ( 1000000 );
    END;
I can see that the plain PL/SQL table indexed by VARCHAR2 and the memory allocated to it were cleaned up as well.
    3. Changing the package's type
    TYPE TTab2 IS TABLE OF TRec2 INDEX BY VARCHAR2(256);
    and running the script
    BEGIN
    rds_mdt_test.test_plsql_table ( 100 );
    END;
I see that my session uses about 62 MB in the PGA. If I run this script twice, the memory usage doubles, and so on.
I get the same result if I rewrite not the highest but the middle PL/SQL type:
TYPE TTab1 IS TABLE OF TRec1 INDEX BY VARCHAR2(256);
And only if I change the third, most deeply nested type:
TYPE TTab0 IS TABLE OF NUMBER INDEX BY VARCHAR2(256);
do I get the desired result -- all memory is returned to the OS.
So, as far as I can judge, in some cases Oracle does not clean up complex PL/SQL tables indexed by VARCHAR2.
Is that true or not? Perhaps there are some peculiarities of tables indexed this way?

  • Table cleanup

I am working on a project where, for a long time, they forgot to clean up the TPVS tables. All the delivery and sales order tables have more than 15 million records. I am helping them to clean up these tables. SAP provides a standard program which cleans up all the tables, but the program doesn't have a selective deletion option, which is causing dumps due to the huge data volume. Though we increased the heap memory to the max, we are still getting dumps. We know the dumps are due to the huge data volume; is there any way to clean the tables? I am a functional consultant looking for some help in this group. Did anybody come across this kind of error? Any help is appreciated.
    thx
    Jeff

    Hello Jeff,
-> Are you using MaxDB for the APO database? What version?
Could you please shed more light on which TPVS tables you are planning to delete?
-> You could post this question on the "SCM - APO" forum.
I also recommend that you create an SAP ticket and get SAP support.
Start with the SCM-APO-VS-BF component.
The application team will check which tables you are going to clean.
You could attach the dump to the SAP ticket too.
If you have high memory consumption when deleting deliveries, for example, then there are the SAP application notes 1270515 and 1262990, which could help you.
So the application team has to check what the problem is and what the best solution is for you.
In case database experts are needed, the ticket will be forwarded to the database component.
    Thank you and best regards, Natalia Khlopina

  • Cleanup of Pointcloud $$ Tables (BLK, SDO_PC)

    I use SDO_PC with MDSYS.SDO_PC_BLK_TABLE. A lot of $$ tables are created and never go away:
    mdpce_1_14c17$$$,
    mdpce_1_17aea$$$,
    mdpce_2_14c17$$$,
    mdpce_2_17aea$$$,
    mdpce_3_14c17$$$,
    mdpcp_1_14c17$$$,
    mdpcp_1_17aea$$$,
    mdpcp_2_14c17$$$,
    mdpcp_2_17aea$$$,
    mdpcp_3_14c17$$$
Every failed loading process creates some $$ tables and views.
How do I clean up unused $$ tables?
How can I tell which of these $$ tables are no longer needed?
How can I repair accidentally deleted $$ tables?
My findings:
These are not the RDT$$ tables which represent spatial indexes.
I searched in SDO_PC_PKG, SDO_UTIL and in the USER_SDO_* views. The RDT$$ tables are referenced, but not the point-cloud $$ tables.
    SDO_UTIL.DROP_WORK_TABLES refers to them as "scratch tables".

Hi,
If there are scratch tables left over from an aborted previous run of point-cloud creation, the point cloud may not be created. Therefore, you need to clean up all scratch tables using the following SQL statements (in your case):
SQL> exec sdo_util.drop_work_tables('14c17');
SQL> exec sdo_util.drop_work_tables('17aea');
What error do you get after each failed loading process? Please let us know about further problems.
    Best regards
    baris
