Temp Space Issue

Hi there folks.
I've been using this forum as a reference point for the last couple of months, but this is the first time I've had to post anything, because I've run into a real pickle of an issue. In fact, it's so beefy that even my company's development team is stumped.
So, a quick bit of background. The current client I'm working for is a government agency related to Health & Public Services. The database we use acts as a go-between for Siebel CAM and Business Objects. As the two don't naturally speak to each other, we occasionally get inevitable data inconsistencies. We had an Informatica CDC issue a little while ago which left a lot of stale data, primarily relating to tasks that should have been closed or naturally expired. To clear up the backlog and return consistency to the DB, we've been working on updating a workflow which will refresh the tasks across the entire DB and bring everything into line.
Now, when the DB was designed, Task_Refresh was created and worked when tested as the DB was being loaded with data. The problem is, when it was tested initially, there were only a few hundred thousand records to filter through. Now that the DB is in full swing, there are 22 million records and we're encountering issues. Essentially, when we run the workflow in Informatica, it gets as far as the Source Qualifier and fails, having chewed through the available 16 GB of temp space in the space of an hour.
I've been working for weeks trying to tune the SQL code for the source qualifier and, along with the development team, I'm now well and truly stuck. Here is the query:
SELECT
    a.1,
    a.2,
    a.3,
    a.4,
    a.5,
    a.6,
    a.7,
    a.8,
    a.9,
    a.10,
    a.11,
    a.12,
    a.13,
    a.14,
    a.15,
    a.16,
    a.17,
    a.18,
    a.19,
    a.20,
    a.21,
    a.22,
    a.23,
    a.24,
    a.25,
    a.26,
    a.27,
    c.1,
    c.2,
    c.3,
    c.4,
    c.5,
    b.1,
    f.1,
    e.1,
    f.2,
    f.3,
    f.4,
    f.5,
    f.6,
    f.7,
    f.8,
    f.9,
    f.10,
    f.11,
    f.12
    FROM    PARENT_TABLE a, CHILD_TABLE2 c, CHILD_TABLE1 b, PARENT_TABLE2 e, PARENT_TABLE3 f
    WHERE   a.1 = b.PAR_ROW_ID(+)
    AND     a.1 = c.PAR_ROW_ID(+)
    AND     a.1 = e.PAR_ROW_ID(+)
    AND     a.TARGET_PER_ID = f.ROW_ID(+)
    AND     (
            (a.LAST_UPD <  TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
     AND     a.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
     OR     (c.LAST_UPD <  TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
     AND     c.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
     OR     (b.LAST_UPD <  TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
     AND     b.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
     OR     (e.LAST_UPD <  TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
     AND     e.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
            );

We are running this on Oracle 10g, 10.1.0.5.0 (64-bit).
So, obviously, I've replaced the table names and most of the column names, but a.1 is the primary key. What we have is the primary key compared to the PAR_ROW_ID column in each of the child tables and one of the other main tables, looking for matching IDs down the whole of PARENT_TABLE. In this database, tasks can be entered in any of the five tables mentioned, for various reasons and functions. The dates in the last section are just arbitrary dates we have been working with; once the query works and is running in production, the dates will advance a week at a time based on a parameter file in Informatica.
In troubleshooting, we found that if we remove the left outer joins in the WHERE clause, the query returns data, but not the right amount of data. Basically, PARENT_TABLE is just too massive for the temp space we have available. We can't increase the temp space, and we can't get rid of the joins because that changes the functionality significantly. We have also tried restructuring the query to filter on the dates first, but that doesn't help; I suspect the order of the WHERE clause is being ignored by Oracle due to its own performance-related decisions. I have tried restructuring the query into 5 smaller queries and using UNIONs to tie it all together at the end, but still nothing.
We have even tried setting the date range to just an hour's worth of entries, and that was still too big.
I'm stuck in a catch-22 here, because we need to get this working but the client is adamant that we should not be changing the functionality. Is there any possible solution any of you can see for this?
Thank you in advance for any help you can offer on this.
(Also, we are providing support for the applications the database uses [CAM, BO], so we don't actually run this DB per se; the client does. We've asked them to increase the temp space and they won't.
The error when we run the workflow is: ORA-01652: unable to extend temp segment by 128 in tablespace BATCH_TEMP)

938041 wrote:
I have tried restructuring the query into 5 smaller queries and using UNIONs to tie it all together at the end, but still nothing.
Technically you should be able to rewrite this query as a UNION ALL of 4 separate parts, provided you remember to include a predicate in the 2nd, 3rd and 4th query blocks that eliminates rows already reported by the earlier query blocks.
Each part will be driven by one of the four range-based predicates you have - and for the three parts that drive off the child tables, the join to the parent should NOT be an outer join.
Effectively you are trying to transform the query manually using the CONCATENATION operator. It's possible (but I suspect very unlikely) that the optimizer will do this for you if you add the hint /*+ use_concat */ to the query you currently have.
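For illustration only, a minimal sketch of that kind of rewrite, reusing the anonymized names from the original post (:start_date and :end_date stand in for the TO_DATE literals, and only the first two of the four blocks are written out):

SELECT /* block 1: driven by the parent's date range */
       a.1, /* ...same column list as the original... */ f.12
FROM   PARENT_TABLE a, CHILD_TABLE2 c, CHILD_TABLE1 b, PARENT_TABLE2 e, PARENT_TABLE3 f
WHERE  a.1 = b.PAR_ROW_ID(+)
AND    a.1 = c.PAR_ROW_ID(+)
AND    a.1 = e.PAR_ROW_ID(+)
AND    a.TARGET_PER_ID = f.ROW_ID(+)
AND    a.LAST_UPD >= :start_date AND a.LAST_UPD < :end_date
UNION ALL
SELECT /* block 2: driven by CHILD_TABLE1, so the join back to the parent is NOT outer */
       a.1, /* ... */ f.12
FROM   PARENT_TABLE a, CHILD_TABLE2 c, CHILD_TABLE1 b, PARENT_TABLE2 e, PARENT_TABLE3 f
WHERE  a.1 = b.PAR_ROW_ID
AND    a.1 = c.PAR_ROW_ID(+)
AND    a.1 = e.PAR_ROW_ID(+)
AND    a.TARGET_PER_ID = f.ROW_ID(+)
AND    b.LAST_UPD >= :start_date AND b.LAST_UPD < :end_date
AND    NOT (a.LAST_UPD >= :start_date AND a.LAST_UPD < :end_date)
/* blocks 3 and 4 repeat the pattern for c and e, each also excluding
   the rows already returned by the blocks before them */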
Regards
Jonathan Lewis

Similar Messages

  • V$temp_space_header showing temp space fully utilized but not true

    Hi,
    Does anyone have experience with v$temp_space_header?
    Currently we are experiencing this issue:
    when using this:
    SELECT tablespace_name, SUM(bytes_used), SUM(bytes_free) FROM v$temp_space_header GROUP BY tablespace_name;
    TABLESPACE_NAME   SUM(BYTES_USED)   SUM(BYTES_FREE)
    TEMP                 227632218112                 0
    but when using this:
    SELECT NVL(A.tablespace_name, D.name) tablespace,
           D.mb_total,
           SUM(NVL(A.used_blocks,0) * D.block_size) / 1024 / 1024 mb_used,
           D.mb_total - SUM(NVL(A.used_blocks,0) * D.block_size) / 1024 / 1024 mb_free
    FROM   v$sort_segment A,
           (SELECT B.name, C.block_size, SUM(C.bytes) / 1024 / 1024 mb_total
            FROM   v$tablespace B, v$tempfile C
            WHERE  B.ts# = C.ts#
            GROUP  BY B.name, C.block_size) D
    WHERE  A.tablespace_name (+) = D.name
    GROUP  BY NVL(A.tablespace_name, D.name), D.mb_total;
    TABLESPACE   MB_TOTAL   MB_USED   MB_FREE
    TEMP           217087       839    216248
    is this a bug??
    thanks.

    Hi,
    It may be the case that the operation you are doing needs more temp space than can actually be made available.
    Instead of looking at free and used temp space, you can look at the operation which needs that much temp space. Maybe there is some way to minimize that.
    Please share what you are actually doing when this problem occurs.
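    One way to see which sessions and statements are actually holding the temp space (a sketch, assuming 10g, where v$tempseg_usage and its sql_id column are available):
    SELECT s.sid, s.username, u.tablespace, u.segtype,
           u.blocks * t.block_size / 1024 / 1024 mb_used, u.sql_id
    FROM   v$tempseg_usage u, v$session s, dba_tablespaces t
    WHERE  s.saddr = u.session_addr
    AND    t.tablespace_name = u.tablespace
    ORDER  BY u.blocks DESC;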
    Regards,
    Dipali

  • Report consuming a lot of temp Space (BIP)

    Hi Experts,
    I am facing an issue.
    Some BIP reports are consuming a lot of temp space (5-30 GB in the temp directory), which is causing services to go down (BIP, RMS, ReIM and RPM). BIP, RMS, ReIM and RPM are installed on the same server.
    Please help to troubleshoot this issue.
    Thanks in Advance

    Please see:
    Troubleshooting Oracle BI Publisher Enterprise 11g [ID 1387643.1]
    Troubleshooting Oracle BI Publisher Enterprise 10g [ID 412232.1]

  • Unable to Extend TEMP space for CLOB

    Hi,
    I have a data extract process and I am writing the data to a CLOB variable. I'm getting the error "Unable to Extend TEMP space" for the larger extracts. I was wondering if writing the data to a CLOB column on a table and committing regularly would be better than using a CLOB variable, assuming the time taken is not an issue.

    You do not need to add more temp files. This is not a problem with your temp tablespace; it is a problem with temp segments in your permanent tablespace. You need to add another datafile to the EDWSTGDATA00 tablespace. This happens when you are creating tables and indexes: Oracle first does the processing in temporary segments (not the temp tablespace) and at the end converts those temporary segments into permanent segments.
    Also, post the result of the query below:
    SELECT file_name, SUM(bytes/1024/1024) in_mb, autoextensible, SUM(maxbytes/1024/1024) max_in_mb
    FROM   dba_data_files
    WHERE  tablespace_name = 'STAGING_TEST'
    GROUP  BY file_name, autoextensible
    ORDER  BY file_name;
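    For completeness, a quick way to check how much space those transient temporary segments are taking in permanent tablespaces (a sketch; requires access to the DBA views):
    SELECT tablespace_name, SUM(bytes)/1024/1024 mb
    FROM   dba_segments
    WHERE  segment_type = 'TEMPORARY'
    GROUP  BY tablespace_name;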

  • Temp space problem

    Hi all,
    I receive an error while executing a procedure:
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    Can anyone please explain what the problem is?
    thanks in advance
    baskar k

    Hi
    First, ORA-01652 may occur because there is simply no space available in the temp tablespace being used. The second cause of ORA-01652 may be that the local temp segment is unable to extend even though there is space in other instances.
    To troubleshoot ORA-01652 and find out which of the above scenarios is the cause, use this query offered by MetaLink:
    select sum(free_blocks)
    from gv$sort_segment
    where tablespace_name = '<TEMP TABLESPACE NAME>'
    You will know that the first scenario is causing ORA-01652 if the free blocks read '0', because that signifies there is no free space.
    If there is a good amount of space, you know that there is another cause, and it is probably the second scenario. It is important to note that a local instance cannot extend into temp segments held by other instances, so in a RAC environment ORA-01652 has to be handled differently. If you are experiencing ORA-01652 in a non-RAC environment, be aware that every SQL statement making use of the tablespace can fail.
    In RAC, more sort segment space can be used from other instances, which can help resolve ORA-01652 more easily. Try using the query below:
    select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
    from gv$sort_segment;
    Basically, you can then find out how much temp segment space can be used by each instance from total_blocks; used_blocks reveals the space which has been used so far, and free_blocks gives the amount of free space still held by that particular instance. To diagnose ORA-01652, check whether used_blocks = total_blocks and free_blocks = 0 for the instance; ORA-01652 will probably be shown multiple times within the alert log.
    This basically means that free space from other instances is being requested, and typically signifies that there is instance contention. Instance contention within the temporary space can make the instance take more time to process.
    In severe cases, a slowdown may occur, in which case you might want to try one of the following workarounds (see the sketch below):
    Increase the size of the temp tablespace
    Increase sort_area_size and/or pga_aggregate_target
    However, remember not to use the RAC feature of DEFAULT temp space.
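    For example (a sketch only; the tempfile path and the sizes are placeholders, and SCOPE = BOTH assumes an spfile is in use):
    ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/ORCL/temp02.dbf' SIZE 2G;
    ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;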
    If ORA-01652 is causing the slowdown, SMON will probably not be able to process the sort segment requests, so you should try to diagnose the contention:
    Capture the output from the following query periodically during the problem:
    select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
    from gv$sort_segment;
    Global hanganalyze and systemstate dumps
    Hope this helps
    Cheers

  • Problem with temp space allocation in parallel query

    Hello
    I've got a query which matches two large result sets (25m+ rows) against each other and does some basic filtering and aggregation. When I run this query in serial it takes about 30 mins and completes successfully. When I specify a parallel degree of 4 for each result set, it also completes successfully in about 20 minutes. However, when I specify that it should be run in parallel but don't specify a degree for each result set, it spawns 10 parallel servers and after a couple of minutes, bombs out from one of the parallel servers with:
    ORA-12801: error signaled in parallel query server P000
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    This appears to happen when it is about to perform a large hash join. The execution plan does not change whether the parallel degree is specified or not, and there are several GB of temp space available.
    I'm at a bit of a loss as to how to track down specifically what is causing this problem. I've looked at v$sesstat for all of the sessions involved and it hasn't really turned anything up. I've tried tracing the main session and that hasn't really turned up much either. From what I can tell, one of the sessions seems to try to allocate massive amounts of temp space that it just does not need, but I can't figure out why.
    Any ideas of how to approach finding the cause of the problem?
    David

    Hello
    I've finally resolved this and the resolution was relatively simple - and was also the main thing that Mark Rittman said he did in his article: reduce the size of the hash join.
    After querying v$sql_workarea_active I could see what was happening, which was that the sum of the temp space for all of the parallel slaves was exceeding the total amount of temp space available on the system. When run in serial, it was virtually at the limit. I guess the extra was just the overhead of each slave maintaining its own hash table.
    I also made the mistake of misreading the execution plan - assuming that the data being pushed to the hash join was filtered to eliminate the data that was not of interest. Upon reflection, this was a rather stupid assumption on my part. Anyway, I used subquery factoring with the materialize hint to ensure that the hash join was only working on the data it should have been. This significantly reduced the size of the hash table and therefore the amount of temp space required.
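    As an illustration of that approach, with invented table and column names rather than the real query:
    WITH filtered AS (
      SELECT /*+ materialize */ t.key_col, t.amount
      FROM   big_table t
      WHERE  t.status = 'OPEN'  -- discard the uninteresting rows before the join
    )
    SELECT f.key_col, SUM(f.amount)
    FROM   filtered f, other_big_table o
    WHERE  f.key_col = o.key_col
    GROUP  BY f.key_col;
    The materialize hint forces the factored subquery to be written out once as a temporary result, so the hash join is built only on the reduced row set.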
    I did speak to Oracle support and they suggested using pga_aggregate_target rather than the separate *_area_size parameters. I found that this had very little impact, as the problem was related to the volume of data rather than whether it was being processed in memory or not. That said, I did try upping the hash_area_size for the session with some initial success, but ultimately it didn't prove to be scalable. We are, however, now using pga_aggregate_target in prod.
    So that's that. Problem sorted. And as the title of Mark Rittman's article suggests, I was trying to be too clever! :-)
    HTH
    David

  • Regarding space issue

    There is a space issue in datatop. Could you please help me out in deleting any logs and unnecessary data in that location, as the mount which owns datatop is completely full.

    Hi,
    Do you have any archivelog files on this disk? If so, consider moving/deleting obsolete files. For datafiles/tablespaces, if it is possible to shrink it, please do so. Otherwise, you need to move it to some other disk.
    Also, if AUTOEXTEND is enabled, you need to disable it (at least for now) until you have more space on this disk.
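    For example (illustrative only; the datafile path and sizes are placeholders):
    ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf' RESIZE 500M;
    ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf' AUTOEXTEND OFF;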
    You may also refer to the following document, it should be helpful.
    Note: 274666.1 - Cleaning An 11i Apps Instance Of Redundant Files
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=274666.1
    Regards,
    Hussein

  • Usage of temp space in oracle

    I am using Informatica to load into one table in Oracle.
    The source and target tables contain one CLOB column.
    The source table size is 1 GB.
    The target is Oracle 10g with 30 GB of temp space.
    Whenever I run this job, TEMP space usage hits the full 30 GB and the job fails.
    Does anyone have any clue about this?

    Actually, the problem probably is that you are looking at the table but not at the CLOB storage. CLOBs are typically stored outside the table, so the table might be 1 GB but you might have a MUCH larger storage area for the CLOB data.
    Replace the owner and segment name with your owner and table_name in this query and see what gets reported.
    select segment_name, sum(bytes)
    from dba_extents
    where owner = 'OUTLN'
    and segment_name in
      (select 'OL$HINTS' from dual
       union
       select segment_name from dba_lobs where table_name = 'OL$HINTS' and owner = 'OUTLN')
    group by segment_name;

  • Consuming too much temp space

    Dear All
    A technical colleague of mine is executing a procedure which selects data from two databases and inserts it into a third one. The procedure is not new and he has been executing it for a year now. However, in the past two weeks the procedure either consumes too much time (3-4 hours as against 10-12 mins) or fails because it uses more temp space on the database the insertions are made into. I added about 10 GB to the temporary tablespace but it still does not suffice for the procedure to execute successfully. The SGA for the database the insertion is done into is 2560M and the PGA is 2G.
    Please suggest what is to be done as it is an extremely crucial procedure.
    Thanks in advance.
    Regards
    rdxdba

    If you have a Diagnostic Pack licence, try using AWR to compare instance activity across executions of this procedure. If not, try installing Statspack.
    I also recommend using SQL trace to capture trace data for a "good" execution and a "bad" execution, and comparing the TKPROF output for the related trace files.
    If you are using Oracle 10 or 11, try DBMS_MONITOR as described in http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php.
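    For example, tracing one run of the procedure could look like this (a sketch; the sid and serial# values are placeholders taken from v$session):
    EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);
    -- execute the procedure, then:
    EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 456);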

  • Oracle.sql.BLOB.freeTemporary() is not freeing TEMP space in the database

    Hi Folks,
    We are using oracle.sql.BLOB to store some file information into the database.
    Allocation of the temp space is done as below
    BLOB blob=BLOB.createTemporary(conn, false, BLOB.DURATION_SESSION); // this results in the usage of TEMP space from database
    And subsequent release is done as below
    blob.freeTemporary(); // this should have released the space in the database.
    This is on Oracle 10g, Java 1.6, ojdbc6.jar. There are no exceptions; even a simple program results in the same.
    Anybody faced something similar? Any pointers would be really appreciated.
    Thanks,
    Deva
    Edited by: user10728663 on Oct 11, 2011 5:33 AM

    Thanks a lot for the information.
    Memory is fine. And I am able to reproduce this within the scope of a simple example as well.
    Would you have any reference to the thread which earlier reported this as a bug? I tried a reasonable amount of searching in the forum, but had no success.
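    In the meantime, one way to watch whether the temporary LOB locators are actually being released per session (a sketch, assuming 10g's v$temporary_lobs view):
    SELECT sid, cache_lobs, nocache_lobs, abstract_lobs
    FROM   v$temporary_lobs;
    If the counts for your session keep climbing after freeTemporary(), the space is not being released.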
    Thanks very much for any pointers.

  • Mapping: Execute Mapping Utilization of All Temp Space

    Background:
    Creating a master table in a data warehouse, containing 16 tables (3 outer joins, the remainder inner; total target table size 8.2 million records). I've run this successfully when I write my own SELECT statement (it takes about 60-70 seconds).
    I may be trying to throw too much at OWB, but when I've created this mapping with joins via two joiners (one for a derived table; the other joining all remaining tables and the derived table) and deployed it, I have received an error that temp space cannot be extended. The thing is, and I've looked into it, our TEMP space is 32 GB, and further research via Explain Plan in Toad (based on the package that is created) shows some Cartesian joins and some other operations being attempted, i.e. sort joins, etc.
    Question: Am I throwing too much at OWB? Or are there better utilities within the tool that may make this easier? I was also wondering if I could run something like an explain plan in OWB, rather than having to run it via Toad.
    Thank you for your help

    I recently had a similar problem with OWB 10.2.0.1.
    The explain plan in our pre-prod environment had something like MERGE JOIN CARTESIAN, with nested loops etc.
    The same mapping in our prod environment had a few hash joins and a sort somewhere in there.
    Needless to say that in our prod env. the mapping took under 10 minutes, while at pre-prod it was taking over 15 hours with no end in sight.
    I finally had to insert an /*+ ORDERED */ hint right after the SELECT in my mapping and it solved everything. I had to do this manually in TOAD.
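    For anyone else hitting this, the placement looks like the following (a sketch with invented names; ORDERED asks the optimizer to join the tables in FROM-clause order):
    SELECT /*+ ORDERED */ d.dim_key, SUM(f.amount)
    FROM   small_dim d, big_fact f
    WHERE  d.dim_key = f.dim_key
    GROUP  BY d.dim_key;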
    Problem is that with OWB 10.2.0.1 inserting hints is buggy. This prompted us to switch over to OWB 10.2.0.4.
    Look at one of my other posts (there aren't many) to see how to properly switch over to a new version of OWB.
    Hope this helps you out a bit.
    regards....Mike

  • SQL query using lot of Temp space

    I have a SQL query which is using a lot of temp space; please suggest some ways to reduce this.
    SELECT A.POSITION_NBR, TO_CHAR(B.EFFDT,'YYYY-MM-DD'), RTRIM(A.SEQNO), A.EMPLID,
           B.REG_REGION, A.MANAGER_ID, A.REPORTS_TO,
           CASE WHEN A.POSITION_NBR = A.REPORTS_TO THEN 'POS reports to same position'
                ELSE 'Positions with multiple Emp' END Case
    FROM PS_Z_RPTTO_TBL A, PS_POSITION_DATA B, PS_POSTN_SRCH_QRY B1
    WHERE B.POSITION_NBR = B1.POSITION_NBR
    AND B1.OPRID = 'MP9621Q'
    AND ( A.POSITION_NBR = B.POSITION_NBR
          AND ( A.REPORTS_TO = A.POSITION_NBR
                AND B.EFFDT = (SELECT MAX(B_ED.EFFDT) FROM PS_POSITION_DATA B_ED
                               WHERE B.POSITION_NBR = B_ED.POSITION_NBR)
                AND A.POSITION_NBR <> '00203392')
           OR ( B.EFFDT = (SELECT MAX(B_ED.EFFDT) FROM PS_POSITION_DATA B_ED
                           WHERE B.POSITION_NBR = B_ED.POSITION_NBR AND B_ED.EFFDT <= SYSDATE)
                AND B.MAX_HEAD_COUNT <> (SELECT COUNT(C.EMPLID) FROM PS_Z_RPTTO_TBL C)) )
    UNION
    SELECT F.POSITION_NBR, TO_CHAR(F.EFFDT,'YYYY-MM-DD'), '', '', F.REG_REGION, '', F.REPORTS_TO, ''
    FROM PS_POSITION_DATA F, PS_POSTN_SRCH_QRY F1
    WHERE F.POSITION_NBR = F1.POSITION_NBR
    AND F1.OPRID = 'MP9621Q'
    AND ( F.EFFDT = (SELECT MAX(F_ED.EFFDT) FROM PS_POSITION_DATA F_ED
                     WHERE F.POSITION_NBR = F_ED.POSITION_NBR AND F_ED.EFFDT <= SYSDATE)
          AND F.EFF_STATUS = 'A'
          AND F.DEPTID IN (SELECT G.DEPTID FROM PS_DEPT_TBL G
                           WHERE G.EFFDT = (SELECT MAX(G_ED.EFFDT) FROM PS_DEPT_TBL G_ED
                                            WHERE G.SETID = G_ED.SETID AND G.DEPTID = G_ED.DEPTID
                                            AND G_ED.EFFDT <= SYSDATE)
                           AND F.REG_REGION = G.SETID
                           AND G.EFF_STATUS = 'I') );
    Thanks in Advance
    Rajan

    use {noformat}<your code here>{noformat} tags to format your code.
    I have sql query which is using lot of temp space , please suggest some ways to reduce this
    If your sort_area_size is not set sufficiently, Oracle uses temp space for sort operations. As your code is not readable, I can't say much more than this. Check with your DBA whether you have to increase the temp space.
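    For example, under the manual work-area policy a session can be given a larger in-memory sort area, which reduces spills to temp (a sketch; the value is just an example):
    ALTER SESSION SET workarea_size_policy = MANUAL;
    ALTER SESSION SET sort_area_size = 104857600;  -- 100 MB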

  • How to determine what's using data store temp space?

    How can one determine what's using data store temp space? We are interested to know what structures are occupying space in temp space and if possible what pid/process connected to TimesTen created them.
    Also, is there a procedure that will work if temp space is full?
    Recently one of our data stores ran out of space. We were unable to run commands like "monitor", "select * from monitor", "select count(*) from my_application_table", etc. These commands failed because they required temp space to run and temp space was full. We killed the application processes, which in turn freed up temp space, and then we were able to run these queries.
    Ideally, we'd like to have a procedure to figure out what's using temp space when temp space is full.
    The other thing we could do is periodically monitor temp space prior to it filling to determine what's using temp space.
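    For the periodic monitoring option, something like this could be sampled on a schedule (a sketch; the SYS.MONITOR column names are from memory and should be verified against your TimesTen release):
    SELECT temp_allocated_size, temp_in_use_size, temp_in_use_high_water
    FROM   sys.monitor;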


  • 11g 11.1.1.4.0 PermGen space issue

    I had 11.1.1.3.0 installed on an XP machine (2.8 GHz Core 2, 3 GB RAM) and it was running fine with
    -Xms512m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=512m
    Then I installed 11.1.1.4.0 and started getting 'could not allocate heap memory to JVM', so I changed the settings to
    -Xms512m -Xmx768m -XX:PermSize=128m -XX:MaxPermSize=512m
    Then I got the 'PermGen space' error repeatedly, so I changed to
    -Xms512m -Xmx768m -XX:PermSize=512m -XX:MaxPermSize=512m
    or -Xms512m -Xmx768m -XX:PermSize=256m -XX:MaxPermSize=512m
    But I am still getting this PermGen space error every 10-15 minutes, and hence I am unable to work.
    Gurus, please suggest the best JVM memory settings.
    Thanks,
    Biltu

    When configuring PermGen space you must be aware that this is taken from the heap size (physical RAM does not count). Things get difficult if the PermGen space equals the max heap size. A good setting is -Xmx2048m -XX:MaxPermSize=512m.
    Since you run the Sun JDK, you could give jvisualvm a try. My tip is to get the plugins and use the VisualGC one to monitor the memory.
    --olaf

  • IdM DB Space Issue

    Hi Experts,
    We are facing an IdM DB space issue in our PRD environment. Upon investigation, it was found that the tables below are taking up the most space in the IdM DB. Could anyone tell me what impact truncating the tables below to free up space would have on our PRD system? Please do help us out with this; if this problem is not solved, it will become a show-stopper.
    mc_exec_stat
    MC_LOGS
    MC_SYSLOG
    mxi_entry
    mxi_link_audit
    mxi_link
    MXI_OLD_VALUES
    MXI_VALUES
    MXP_AUDIT
    MXP_Audit_Variables
    MXP_Ext_Audit
    Kind Regards,
    Mohamed Fazil

    I don't know what you mean by too much disk space. Perhaps you can post the sizes here, and the number of entries in your system, so we can evaluate whether it's normal or not.
    Most of the tables you list should not be touched; if you truncate the link and entry tables you will delete all your assignments and entry data, and you would possibly be looking for a new job the next day.
    The log tables should be maintained by the housekeeping jobs as mentioned, but
    mc_exec_stat
    MC_LOGS
    MC_SYSLOG
    can be cleaned, though none of them should have that much data; large volumes of transactions do cause the logs to grow. Have you checked that your backup is cleaning the transaction logs? Also, reduce log levels to ERROR to reduce the amount logged here.
    MXP_Audit_Variables should not contain audits that are not in the provisioning queue, if you delete those workflows will fail.
    MXP_Ext_Audit can probably be cleaned without affecting operations (reports might be affected, though), but you'll lose the detailed execution history on the entries; perhaps you can do partial deletes, e.g. remove entries older than 3 years or similar.
    For future reference:
    List transaction log sizes (SQL Server):
    DBCC SQLPERF(LOGSPACE)
    List table sizes (SQL Server):
    DECLARE @SpaceUsed TABLE( TableName VARCHAR(100)
          ,No_Of_Rows BIGINT
          ,ReservedSpace VARCHAR(15)
          ,DataSpace VARCHAR(15)
          ,Index_Size VARCHAR(15)
          ,UnUsed_Space VARCHAR(15)
    )
    DECLARE @str VARCHAR(500)
    SET @str = 'exec sp_spaceused ''?'''
    INSERT INTO @SpaceUsed EXEC sp_msforeachtable @command1=@str
    SELECT * FROM @SpaceUsed ORDER BY CAST(REPLACE(ReservedSpace,' KB','') AS INT) DESC
    Oracle undo, user and redo files:
    select * from dba_data_files where tablespace_name LIKE 'UNDOTB%1' OR  tablespace_name LIKE 'USERS%'; 
    select l.group#, f.member, l.archived, l.bytes/1048576 mb, l.status, f.type
    from v$log l, v$logfile f
    where l.group# = f.group#;
    Message was edited by: Per Krabsetsve
