Need information related to temp space

Hi all,
I want to know whether there is any way in Oracle to create a logical table that ceases to exist once it is no longer in use.
For example, I have a procedure in which a query fetches some records, and I have to store that result set in a temp table that is not needed after the procedure ends.
I want to use the data for further processing, but at the end I don't need the space that is holding the result set.
I don't want to use a cursor, because then I would have to declare local variables to read the values from it.
Right now I am creating a table dynamically and dropping it at the end, but when two users run the procedure at the same time it fails with a "table name already in use" error.
So what I need is for a temporary table to be created for each user and dropped at the end, with no physical existence left behind.
I have used a GLOBAL TEMPORARY TABLE, but that also did not give the desired result.
Please suggest a way to do this.

Do you use your telephone to post to this forum?
Why isn't the GTT solving the problem for you? Is it for any reason other than the fact that it persists as an object after the procedure has finished?
You have said that if more than one person runs the procedure, an "on-the-fly" table causes "table already exists/does not exist" problems. The GTT solves that issue nicely.
And if more than one person is running it, then surely you have a longer-term need for it than just one run?
Just use the GTT and, unless you want all traces removed because you are up to no good, live with the fact that the table exists and your procedure/package stays valid. You want to get rid of them? Drop them afterwards, or make a call to DBMS_JOB as the very last line of your procedure to drop both the table and the procedure in n minutes' time.
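For reference, a minimal sketch of the GTT approach; the table and column names are made up for illustration, and the ON COMMIT clause depends on whether the rows need to survive a commit:
-- created once as a schema object, not inside the procedure
CREATE GLOBAL TEMPORARY TABLE gtt_work_set (
  id      NUMBER,
  payload VARCHAR2(128)
) ON COMMIT PRESERVE ROWS;   -- use DELETE ROWS to empty it at each commit instead

-- inside the procedure, just populate and read it; each session sees only
-- its own rows, so concurrent users no longer collide on a shared table name
INSERT INTO gtt_work_set (id, payload)
SELECT object_id, object_name
FROM   user_objects
WHERE  ROWNUM <= 10;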

Similar Messages

  • RE: Need Information related to Integrated Planning and I views

    Hi All,
    Good day,
    We are using Integrated Planning extensively in the project, along with FOX.
    We have come across a requirement to add the functions we are creating now into the Portal.
    Can anybody advise on how to do this?
    What I know, as per my understanding, is:
    The portal is made up of ROLES (first-level menu tab). Each role can itself be made up of WORKSETS (second-level menu tab), and each workset is made up of IVIEWS (links to access the input/calculation screens or analysis statuses).
    ROLE
        Workset1
            Iview1
            Iview2
            Iview3
        Workset2
    Can anybody help me with the procedure I need to follow?
    Regards
    RAM

    Alright Ram,
    The process is first you create a ROLE, then a WORKSET, then a PAGE and within the PAGE you have IVIEWS.
    Personally I don't bother with worksets and just create ROLES, PAGES and iVIEWS.
    It depends on your requirements.
    Cheers,
    Nick.

  • Need Information Related to Finance

    Hi Everyone,
    Can anyone explain WBS, WIP, AP, AR, GL and related things?
    With Regards
    Pavan

    Hi,
    WBS
    Go through the link below:
    http://www.sap-img.com/project/what-is-wbs-element.htm
    WIP
    http://help.sap.com/saphelp_470/helpdata/EN/90/ba6609446711d189420000e829fbbd/content.htm
    AP
    http://www.sap-basis-abap.com/sapfiar.htm
    AR
    http://www.sap-basis-abap.com/sapfiar.htm
    GL
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/FigL/FigL.pdf
    Regards,
    Marasa.

  • Need information related to ETL_PROC_WID Column

    Hi
    I am new to DAC as well as OBIA. I need to know about the ETL_PROC_WID column and its importance in DAC.
    Thanks in advance.

    Hi,
    For populating ETL_PROC_WID we have a mapplet named MPLT_GET_ETL_PROC_WID. You need to use this mapplet and pass Integration_id as the input for this mapplet, and it will generate ETL_PROC_WID.
    W_PARAM_G is populated from the file (file_dual), which is available in the srcFiles folder in Informatica. The mapping used for populating W_PARAM_G is SIL_Parameters_Update.
    Thanks,
    Navin Kumar Bolla

  • Need information related to Hookup of Portal System to ECC system

    Hi all,
    I need to hook up my Portal-NPX system to ECC PRD system.
    Could someone please let me know the procedure to follow in order to create the connectivity between them.
    My Portal Version is 7.01/SP3
    My Backend system is ECC6.0/EHP4/SP6
    Database is Oracle.
    Solaris OS.
    Thanks,
    Sandeep

    Hi Sandeep,
    Please follow the links below:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e09b3304-07d8-2b10-dbbf-81335825454f?QuickLink=index&overridelayout=true
    http://wiki.sdn.sap.com/wiki/display/EP/8EstablishSingleSignOnbetweenPortal%28frontend%29andESS6.0%28backend%29
    http://www.sap-press.de/download/dateien/1618/sappress_content_integration_sap_netweaver_portal.pdf (Go to Page 347)
    Thanks.
    Sushil

  • Need information related to oracle 10g standard / enterprise edition (diff)

    Hi,
    I developed my application using Oracle 10g Enterprise Edition and now I want to move the entire thing to Oracle 10g Standard Edition.
    I have a few queries on this:
    1) Is it safe to do so?
    2) What are the differences between the two editions?
    3) What restrictions will I face if I move from 10g Enterprise to 10g Standard?
    4) What steps do I have to follow to do this?
    5) Risk, efficiency, restrictions and all the major differences.
    Any suggestions would be very helpful.
    Thanks,
    Anit
    Edited by: Anit A. on Sep 5, 2008 12:24 AM

    Hi Anit,
    Is it safe to do so? ALWAYS check with your CSR for licensing deals; things change constantly.
    What are the differences between the two editions? EE has all the goodies! Here is my list of differences between SE and EE: http://www.dba-oracle.com/art_so_oracle_standard_enterprise_edition.htm
    What restrictions will I face moving from 10g Enterprise to 10g Standard? No replication, no Data Guard, no stored outlines, no MVs . . .
    What steps do I have to follow? Just install SE, point your $ORACLE_HOME to it and bounce.
    Hope this answers your question . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference":
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • How much TEMP space needed for datapump import?

    How does one accurately predict how much TEMP tablespace is needed prior to starting a Data Pump import (via impdp)? I need a way to predetermine this BEFORE starting the import. It is my understanding that in Data Pump imports the temp tablespace is used primarily for building indexes, among other operations.

    Yes, I could use autoextend, but that just shifts the problem from checking the logical tablespace size to checking that the physical space has enough room to extend.
    I was really hoping for a formula to calculate the amount of TEMP space it would take up. For example, RichardHarrison's post above suggests setting the TEMP tablespace size to be twice as large as the largest index, but I wasn't sure about the accuracy of that statement. I was hoping someone has encountered this kind of scenario before and found an accurate way to predict how much space is really needed, or at least a good estimate to go by.
    I will try out the idea of setting the TEMP space size to be twice the size of the largest index and see how that goes, as it doesn't seem there is a practical way of accurately determining how much space it really needs.
    Thanks everyone.
    Ben.
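    As a hedged starting point for that rule of thumb, the largest index in the schema being imported can be read from dba_segments on the source database; :owner is a placeholder for the schema name:
    SELECT MAX(bytes) / 1024 / 1024     AS largest_index_mb,
           2 * MAX(bytes) / 1024 / 1024 AS rough_temp_estimate_mb
    FROM   dba_segments
    WHERE  owner = :owner
    AND    segment_type LIKE 'INDEX%';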

  • Problem with temp space allocation in parallel query

    Hello
    I've got a query which matches two large result sets (25m+ rows) against each other and does some basic filtering and aggregation. When I run this query in serial it takes about 30 mins and completes successfully. When I specify a parallel degree of 4 for each result set, it also completes successfully in about 20 minutes. However, when I specify that it should be run in parallel but don't specify a degree for each result set, it spawns 10 parallel servers and after a couple of minutes, bombs out from one of the parallel servers with:
    ORA-12801: error signaled in parallel query server P000
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    This appears to happen when it is about to perform a large hash join. The execution plan does not change whether the parallel degree is specified or not, and there are several GB of temp space available.
    I'm at a bit of a loss as to how to track down specifically what is causing this problem. I've looked at v$sesstat for all of the sessions involved and it hasn't really turned anything up. I've tried tracing the main session and that hasn't really turned up much either. From what I can tell, one of the sessions seems to try to allocate massive amounts of temp space that it just does not need, but I can't figure out why.
    Any ideas of how to approach finding the cause of the problem?
    David

    Hello
    I've finally resolved this and the resolution was relatively simple - and was also the main thing that Mark Rittman said he did in his article: reduce the size of the hash join.
    After querying v$sql_workarea_active I could see what was happening: the sum of the temp space used by all of the parallel slaves was exceeding the total amount of temp space available on the system. When run in serial, it was virtually at the limit. I guess the extra was just the overhead of each slave maintaining its own hash table.
    I also made the mistake of misreading the execution plan, assuming that the data being pushed to the hash join was filtered to eliminate the data that was not of interest. Upon reflection, this was a rather stupid assumption on my part. Anyway, I used subquery factoring with the MATERIALIZE hint to ensure that the hash join was only working on the data it should have been. This significantly reduced the size of the hash table and therefore the amount of temp space required.
    I did speak to Oracle Support and they suggested using pga_aggregate_target rather than the separate *_area_size parameters. I found that this had very little impact, as the problem was related to the volume of data rather than whether it was being processed in memory or not. That said, I did try upping hash_area_size for the session with some initial success, but ultimately it didn't prove to be scalable. We are, however, now using pga_aggregate_target in prod.
    So that's that. Problem sorted. And as the title of Mark Rittman's article suggests, I was trying to be too clever! :-)
    HTH
    David
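    For anyone hitting the same thing, a minimal sketch of the v$sql_workarea_active check described above (the column choice is illustrative). It shows, per session or parallel slave, how much memory and temp space each active work area (hash join, sort, and so on) is using:
    SELECT sid,
           operation_type,
           ROUND(actual_mem_used / 1024 / 1024) AS mem_mb,
           ROUND(tempseg_size    / 1024 / 1024) AS temp_mb
    FROM   v$sql_workarea_active
    ORDER  BY tempseg_size DESC NULLS LAST;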

  • Relation between temp tablespace and index creation

    Hi,
    I have my Oracle database (11gR1) on Windows Server 2008 R1 64-bit.
    This is my development database. I have one table with more than 2 billion rows. The problem I'm facing is that while creating an index on this table I get a temp segment error, even though my temp tablespace size is 32 GB.
    My questions are:
    1. What happens in the temp tablespace when an index is created? What is the relation between temp and index creation?
    2. How do I create an index on a huge table?
    3. What is the meaning of LOGGING and NOLOGGING in index creation?
    4. How can we overcome this kind of problem and manage the temp tablespace?
    Thanks & Regards,
    Vikash Chauradia

    Add another tempfile?
    1. What happens in the temp tablespace when an index is created? What is the relation between temp and index creation?
    Index creation needs sort space; how much depends on the size of the index.
    2. How do I create an index on a huge table?
    Create an interim (temporary? :)) large temp tablespace for the very purpose.
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/indexes003.htm#i1006643
    3. What is the meaning of LOGGING and NOLOGGING in index creation?
    NOLOGGING means the index creation isn't written to the redo logs, so if you need to recover you can't get it back. In a prod environment you might use NOLOGGING for performance during a period of heavy DML, such as a large index creation, and then take a backup afterwards. Common enough.
    4. How can we overcome this kind of problem and manage the temp tablespace?
    Current temp space size = X. Is X big enough? If yes, cup of tea; if no, make X bigger. It doesn't matter what X is.
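    Hedged examples of both suggestions; the file path, sizes and index definition are placeholders, not the actual objects:
    -- grow TEMP by adding another tempfile
    ALTER TABLESPACE temp
      ADD TEMPFILE '/u01/oradata/DEV/temp02.dbf' SIZE 8G
      AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;
    -- build the index NOLOGGING (and optionally in parallel), then switch back
    CREATE INDEX big_tab_ix ON big_tab (some_col) NOLOGGING PARALLEL 8;
    ALTER INDEX big_tab_ix LOGGING;
    ALTER INDEX big_tab_ix NOPARALLEL;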

  • Maximum TEMP space ever used

    Hello, I would like to ask how to check the maximum space ever used for TEMP. I want to know because I need to resize the TEMP tablespace and I want to know how small it can be.
    As I can see from the documentation http://docs.oracle.com/cd/B14117_01/server.101/b10755/dynviews_2095.htm
    max_size is the maximum number of extents ever used in a segment,
    so I could multiply max_size by extent_size and that would give me the maximum size of temp ever used.
    SQL> select segment_file, extent_size, max_size from v$sort_segment;
    SEGMENT_FILE EXTENT_SIZE MAX_SIZE
    0 128 23625
    0 128 753
    Is that correct ?
    Edited by: Przemek P on Oct 2, 2012 2:19 AM

    Such dynamic performance views contain information only since the last instance startup, so this is not the all-time maximum.
    Check historical tablespace metrics (not collected for the TEMP tablespace by default in 10g, if you happen to have that version).
    Check the current allocated size of the TEMP tablespace: if it is made up of autoextensible tempfiles and they weren't resized manually, that's the maximum amount of temp ever used.
    None of the above will be exact, but they can at least give you a direction.
    Also keep in mind that the maximum temp usage in the past will not necessarily be the maximum temp usage in the future.
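    As a hedged sketch of the "since last startup" high-water mark, the block counts in v$sort_segment can be converted to MB directly (this assumes the TEMP tablespace uses the block size reported in dba_tablespaces, and it only covers activity since the last instance startup):
    SELECT ss.tablespace_name,
           ROUND(ss.max_blocks * ts.block_size / 1024 / 1024) AS max_used_mb
    FROM   v$sort_segment ss,
           dba_tablespaces ts
    WHERE  ts.tablespace_name = ss.tablespace_name;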

  • Temp Space Issue

    Hi there folks.
    I've been using this forum as a reference point for the last couple of months but this is the first time I've had to post anything because I happen to have a pickle of an issue. In fact, it's that beefy, even my company's development team is stumped... Yeah.
    So, a quick bit of background. The current client I'm working for is a Government agency related to Health & Public Services. The database we use works as a go-between with Siebel CAM and Business Objects. As the two don't naturally speak to each other, occasionally we get inevitable data inconsistencies. We had an Informatica CDC issue a little while ago which left a lot of stale data, primarily relating to tasks which should have been closed or naturally expired. To clear up the backlog and return consistency to the DB, we've been working on updating a workflow which will refresh the tasks across the entire DB and bring everything in line.
    Now, when the DB was designed, Task_Refresh was created and worked when tested as the DB was getting loaded with data. The problem is, when it was tested initially, there were only a few hundred thousand records to filter through. Now that the DB is in full swing, there are 22 million records and we're encountering issues. Essentially, when we run the workflow in Informatica it gets as far as the Source Qualifier and fails, having chewed through the available 16 GB of temp space in the space of an hour.
    I've been working for weeks trying to tune the SQL code for the source qualifier and, along with the development team, I'm now well and truly stuck. Here is the query:
    SELECT
        a.1,
        a.2,
        a.3,
        a.4,
        a.5,
        a.6,
        a.7,
        a.8,
        a.9,
        a.10,
        a.11,
        a.12,
        a.13,
        a.14,
        a.15,
        a.16,
        a.17,
        a.18,
        a.19,
        a.20,
        a.21,
        a.22,
        a.23,
        a.24,
        a.25,
        a.26,
        a.27,
        c.1,
        c.2,
        c.3,
        c.4,
        c.5,
        b.1,
        f.1,
        e.1,
        f.2,
        f.3,
        f.4,
        f.5,
        f.6,
        f.7,
        f.8,
        f.9,
        f.10,
        f.11,
        f.12
        FROM PARENT_TABLE a,  CHILD_TABLE2 c, CHILD_TABLE1 b, PARENT_TABLE2 e, PARENT_TABLE3 f
        WHERE   a.1 = b.PAR_ROW_ID(+)
        AND     a.1 = c.PAR_ROW_ID(+)
        AND     a.1 = e.PAR_ROW_ID(+)
        AND     a.TARGET_PER_ID = f.ROW_ID(+)
        AND (
        (a.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND a.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (c.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND c.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (b.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND b.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (e.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND e.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        );
    We are running this on 10g 10.1.0.5.0 - 64-bit.
    So, obviously I've replaced the table names and most of the column names, but a.1 is the primary key. What we have is the primary key compared to the PAR_ROW_ID column in each of the child tables and one of the other main tables, looking for matching IDs down the whole of the PARENT_TABLE. In this database, tasks can be entered in any of the five tables mentioned for various different reasons and functions. The dates in the last section are just arbitrary dates we have been working with, when the query works and is running into production, the dates will change, a week at a time based on a parameter file in Informatica.
    In troubleshooting, we found that if we remove the left outer joins in the WHERE clause, the query will return data, but not the right amount of data. Basically, PARENT_TABLE is just too massive for the temp space we have available. We can't increase the temp space and we can't get rid of the joins because that changes the functionality significantly. We have also tried restructuring the query by filtering on the dates first, but that doesn't help, though I suspect the order of the WHERE clause is being ignored by Oracle due to its own performance-related decisions. I have tried restructuring the query into 5 smaller queries and using Unions to tie it all together at the end, but still nothing.
    We have even tried setting the date and time for an hour's worth of entries and that was still too big.
    I'm stuck in a catch-22 here because we need to get this working, but the client is adamant that we should not be changing the functionality. Is there any possible solution any of you can see for this?
    Thank you in advance for any help you can offer on this.
    (Also, we are providing support for the applications the database uses [CAM, BO], so we don't actually run this DB per se; the client does. We've asked them to increase the temp space and they won't.
    The error when we run the workflow is: ORA-01652: unable to extend temp segment by 128 in tablespace BATCH_TEMP.)

    938041 wrote:
    I have tried restructuring the query into 5 smaller queries and using Unions to tie it all together at the end, but still nothing.
    Technically you should be able to rewrite this query as a UNION ALL of 4 separate parts, provided you remember to include a predicate in the 2nd, 3rd and 4th query blocks that eliminates rows already reported by the earlier query blocks.
    Each part will be driven by one of the four range-based predicates you have, and for the three parts that drive off child tables the join to the parent should NOT be an outer join.
    Effectively you are trying to transform the query manually using the CONCATENATION operator. It's possible (but I suspect very unlikely) that the optimizer will do this for you if you add the hint /*+ use_concat */ to the query you currently have.
    Regards
    Jonathan Lewis
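    To make that concrete, here is a very rough sketch of the rewrite for just two of the four branches, shown with only the parent and one child table for brevity. The select list, bind names (:from_dt, :to_dt), the row_id/par_row_id column names and the exclusion predicate are illustrative assumptions, not the poster's actual schema:
    SELECT a.*, b.par_row_id              -- branch 1: driven by the parent's date range
    FROM   parent_table a, child_table1 b
    WHERE  a.row_id = b.par_row_id(+)     -- the outer join is fine in this branch
    AND    a.last_upd >= :from_dt AND a.last_upd < :to_dt
    UNION ALL
    SELECT a.*, b.par_row_id              -- branch 2: driven by the child's date range
    FROM   parent_table a, child_table1 b
    WHERE  a.row_id = b.par_row_id        -- NOT an outer join when driving off the child
    AND    b.last_upd >= :from_dt AND b.last_upd < :to_dt
    AND    (a.last_upd < :from_dt OR a.last_upd >= :to_dt OR a.last_upd IS NULL)
                                          -- exclude rows branch 1 already returned
    -- branches 3 and 4 repeat the same pattern for the remaining tables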

  • The 2 year old fix for changing temp files is no longer valid, I need to change the temp file location in the latest firefox.

    In 3.6.3 I'm unable to find the cache parent setting to change where the temp files are located. I have an SSD drive, and I need to get the temp files off it and onto my platter drive. I have read the previous fix of going into about:config and changing the cache parent location, but it no longer seems to be there. I'm unable to locate anything relating to my SSD drive location in about:config.
    == User Agent ==
    Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.70 Safari/533.4

    Unfortunately, it appears that what I wrote about not being able to find the cache location in about:config (to change it from my C: drive to a different location) was misunderstood as my not knowing how to get into about:config.
    The preference browser.cache.disk.parent_directory does not currently exist in 3.6.3. I'm assuming it's under a different name. Also, searching for the simple string "cache" lists several options, but none of them appear to control where Firefox is currently dumping the temp files on my C: drive.
    I merely need to know how to switch the temp file location from one drive to another. Not downloads; the temp files.

  • My iPad 4 needs 4.1 GB of free space

    I tried to install iOS 7; the download itself is 1.4 GB, which is fine, but the device says it needs 4.1 GB of free space. Why is that? If anyone can give me clear information: could the reason be that I'm using a lot of applications?

    Space is always needed during an upgrade for temporary storage of the installation files.

  • Consuming too much temp space

    Dear All
    A technical colleague of mine is executing a procedure which selects data from two databases and inserts it into a third one. The procedure is not new, and he has been executing it for a year now. However, in the past two weeks the procedure either consumes far too much time (3-4 hours as against 10-12 mins) or it fails because it uses more temp space on the database into which the insertions are made. I added about 10 GB to the temporary tablespace, but it is still not sufficient for the procedure to execute successfully. The SGA for the database onto which the insertion is done is 2560M and the PGA for the same is 2G.
    Please suggest what is to be done as it is an extremely crucial procedure.
    Thanks in advance.
    Regards
    rdxdba

    If you have a Diagnostics Pack licence, try to use AWR to compare instance activity between procedure executions. If not, try to install Statspack.
    I also recommend using SQL trace to capture trace data for a "good" execution and a "bad" execution and comparing the TKPROF output for the related trace files.
    If you are using Oracle 10g or 11g, try DBMS_MONITOR as described in http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php.
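    A minimal sketch of the DBMS_MONITOR approach; the :sid and :serial values are placeholders you would look up in v$session for the session that runs the procedure:
    BEGIN
      DBMS_MONITOR.session_trace_enable(session_id => :sid,
                                        serial_num => :serial,
                                        waits      => TRUE,
                                        binds      => FALSE);
    END;
    /
    -- run the procedure in that session, then:
    BEGIN
      DBMS_MONITOR.session_trace_disable(session_id => :sid,
                                         serial_num => :serial);
    END;
    /
    -- process the resulting trace files with TKPROF and compare the "good" and "bad" runs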

  • Oracle.sql.BLOB.freeTemporary() is not freeing TEMP space in the database

    Hi Folks,
    We are using oracle.sql.BLOB to store some file information into the database.
    Allocation of the temp space is done as below:
    BLOB blob=BLOB.createTemporary(conn, false, BLOB.DURATION_SESSION); // this results in the usage of TEMP space in the database
    And the subsequent release is done as below:
    blob.freeTemporary(); // this should have released the space in the database
    This is on Oracle 10g, Java 1.6, ojdbc6.jar. There are no exceptions. Even a simple program shows the same behaviour.
    Anybody faced something similar? Any pointers would be really appreciated.
    Thanks,
    Deva
    Edited by: user10728663 on Oct 11, 2011 5:33 AM

    Thanks a lot for the information.
    Memory is fine. And I am able to reproduce this within the scope of a simple example as well.
    Would you have any reference to the thread that earlier reported this as a bug? I tried a reasonable amount of searching on the forum, but with no success.
    Thanks very much for any pointers.
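    If it helps to narrow things down, the database side can be checked as well. A hedged sketch using standard views (nothing here is specific to the suspected bug): the counts of temporary LOBs still held per session should drop once freeTemporary() really releases them.
    SELECT s.sid, s.username, tl.cache_lobs, tl.nocache_lobs, tl.abstract_lobs
    FROM   v$temporary_lobs tl, v$session s
    WHERE  s.sid = tl.sid;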
