Mapping Execution Utilizes All Temp Space

Background:
Creating a master table in a data warehouse, joining 16 tables (3 outer joins, the remainder inner; total target table size 8.2 million records). *** It runs successfully when I write my own SELECT statement (takes about 60-70 seconds).
I may be trying to throw too much at OWB, but when I create this mapping with two joiners (one builds a derived table; the other joins the derived table to all the remaining tables) and deploy it, I receive an error that temp space cannot be extended. The thing is, I've looked into it: our TEMP space is 32GB, and further research via Explain Plan in Toad (based on the package that is created) shows some Cartesian joins and other operations being used, e.g. sort joins, etc.
Question: Am I throwing too much at OWB, or are there better utilities within the tool that may make this easier? I was also wondering whether I could run something like an explain plan in OWB without having to run it via Toad.
Thank you for your help

I recently had a similar problem with OWB 10.2.0.1.
The explain plan in our pre-prod environment had something like MERGE JOIN CARTESIAN, with nested loops etc.
The same mapping in our prod environment had a few hash joins and a sort somewhere in there.
Needless to say, in our prod env. the mapping took under 10 minutes, while at pre-prod it was taking over 15 hours with no end in sight.
I finally had to insert an /*+ ORDERED */ hint right after the SELECT in my mapping and it solved everything. I had to do this manually in TOAD.
The problem is that inserting hints in OWB 10.2.0.1 is buggy. This prompted us to switch over to OWB 10.2.0.4.
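For anyone hitting the same thing, here's a rough sketch of the two manual steps (table names are made up; the real OWB-generated statement is far larger):
-- force the join order with an ORDERED hint right after SELECT
SELECT /*+ ORDERED */ d.cust_id, t.order_total
FROM   derived_table d, order_fact t
WHERE  t.cust_id = d.cust_id;
-- check the plan from SQL*Plus instead of Toad
EXPLAIN PLAN FOR
SELECT /*+ ORDERED */ d.cust_id, t.order_total
FROM   derived_table d, order_fact t
WHERE  t.cust_id = d.cust_id;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);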
Look at one of my other posts (there aren't many) to see how to properly switch over to a new version of OWB.
Hope this helps you out a bit.
regards....Mike

Similar Messages

  • Mapping and Monitoring all the User and the Field exits

    Hello Dears,
    Is it possible, with Solution Manager, to map and monitor all the user exits and field exits existing in my ECC6 production environment?
    Does anyone have any documentation?
    Regards to all.
    FS.

    Hello Gurus,
    Does anyone have any information about this question?
    Regards to all.
    FS.

  • In Photo 1.0, how does one access the map showing where all photos were taken, as could be done previously in iPhoto?


    Hi JohnDory,
    The information side-bar from iPhoto has been removed in the Photos app; it has been converted into a pop-up window showing the exposure, aperture and other technical photo parameters, as well as the comments, faces and LOCATION for that photo.
    This small floating window is shown whenever you click the Info button in the app title bar, right-click a specific photo, or press ⌘i.
    If you open the Albums view (by clicking the name of the album list, NOT an album name, in the left sidebar, which can be shown or hidden) and press ⌘i without selecting a specific photo, the Info pop-up will show the map for your whole library (as well as the total number of photos, videos, GB used, etc.).
    So, I'm afraid the "Locations" view (which I really loved) from iPhoto has been ripped out... and we can only get "some sort of locations view" by this method.
    As for locations... there is no option for manual geotagging (that is, setting the location on a specific photo which doesn't have it yet)... that really ****** me off
    Regards,
    braincasualties.

  • Temp files eating up all disk space on SCSM 2012 R2 install

    Hello Everyone,
    Problem
    Under this path, SCSM is creating a ton of temp files, using up all disk space: C:\Windows\System32\config\systemprofile\AppData\Local\Microsoft\System Center Service Manager 2010. All files are .bin and .hash files. I just cleaned out 120 GB of files which had accumulated over a period of 30 days.
    Does anyone know what could be causing this? This was not a problem when we were on Server 2008 R2. When we migrated the SCSM management server to Server 2012 R2 is when the problem started.
    Environment
    Management server (server 2012 R2)
    Data warehouse server (server 2008 R2)
    Data on remote SQL server.  SQL 2008 R2
    CU  5 was the last update

    Might be worth checking that the AutoRefresh/Cache policy is set to True via the SDK?
    https://msdn.microsoft.com/en-us/library/gg461136.aspx
    From PowerShell:
    # Load the Service Manager SDK and connect to the local management group
    Add-Type -Path "C:\Program Files\Microsoft System Center 2012 R2\Service Manager\SDK Binaries\Microsoft.EnterpriseManagement.Core.dll"
    $NS = "Microsoft.EnterpriseManagement"
    $EMGType = "$NS.EnterpriseManagementGroup"
    $EMG = new-object $EMGType localhost
    $EMG.AutoRefreshCache   # outputs True or False
    $EMG.CacheMode          # outputs the cache mode, e.g. Configuration
    This is the output from mine: [screenshot removed]
    Good luck!

  • I have a first generation iPod touch and want to only store music on it. How do I delete the built-in apps such as Stocks, Mail, Photos, Videos, Calendars, Maps, etc. to free up space?

    I have a first generation iPod touch and want to only store music on it. How do I delete the built-in apps such as Stocks, Mail, Photos, Videos, Calendars, Maps, etc. to free up space? Thanks.

    You can't.

  • [solved]xterm doesn't utilize all screen space in tiling wm's

    As can be seen in the screenshot, xterm leaves a bunch of pixels unused at the side and bottom. No other program does this. Do I have to use another *term that uses all the space, or can xterm/ratpoison be fixed in some way?
    This is with ratpoison, but if I recall correctly it was the same in Musca.
    Last edited by hatten (2010-02-17 14:16:30)

    hatten wrote:As can be seen in the screenshot, xterm leaves a bunch of pixels unused at the side and bottom. No other program does this. Do I have to use another *term that uses all the space, or can xterm/ratpoison be fixed in some way?
    I used to think this way; however, now I much prefer to set xmonad to honour size hints.
    In a dynamic tiling wm, ignoring size hints causes issues when the terms are shuffled around and resized on the display as you open and close windows. You often end up with a partial line of text at the bottom of the terminal that is visible but not cleared with Ctrl-L or clear. It looks ugly. I much prefer my terminals to have a gap around the edges rather than seeing this partial text on the bottom edge.

  • Temp space problem

    Hi all,
    I receive an error while executing a procedure:
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    Can anyone please explain what the problem is?
    thanks in advance
    baskar k

    Hi,
    First, ORA-01652 may occur because there is simply no space available in the temp tablespace being used. The second cause of ORA-01652 may have to do with the local temp segment not being able to extend even though there is space in other instances.
    To troubleshoot ORA-01652 and find out which of the above scenarios is causing it, use this query offered by MetaLink:
    select sum(free_blocks)
    from gv$sort_segment
    where tablespace_name = '<TEMP TABLESPACE NAME>'
    You will know that the first scenario is causing ORA-01652 if the free blocks figure reads '0', because that signifies there is no free space.
    If there is a good amount of space, you know that there is another cause for ORA-01652, and it is probably the second scenario. It is important to note that in a RAC environment, a local instance is not able to extend into the temp segments cached by other instances, so ORA-01652 has to be handled differently there. If you are experiencing ORA-01652 in a non-RAC environment, be aware that every SQL statement making use of the tablespace can fail.
    In RAC, more sort segment space can be requested from other instances, which can help resolve ORA-01652 more easily. Try using the query below:
    select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
    from gv$sort_segment;
    Basically, you can then find out how much temp segment space can be used by each instance by viewing total_blocks; used_blocks reveals the space which has been used so far, and free_blocks gives the amount of space allocated to that particular instance. When diagnosing ORA-01652 this way, you will probably find that used_blocks = total_blocks and free_blocks = 0 for the instance, and ORA-01652 will show up multiple times in the alert log.
    This basically means that free space from other instances is being requested, and typically signifies that there is instance contention. Instance contention within the temporary space can make the instance take more time to process.
    In severe cases a slowdown may occur, in which case you might want to try one of the following workarounds:
    Increase the size of the temp tablespace
    Increase sort_area_size and/or pga_aggregate_target
    However, remember not to use the RAC feature of a DEFAULT temp space.
    If ORA-01652 is causing the slowdown, SMON will probably not be able to process the sort segment requests, so you should try to diagnose the contention by capturing:
    Output from the following query periodically during the problem:
    select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
    from gv$sort_segment;
    Global hanganalyze and systemstate dumps
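    To make the first two workarounds concrete, here is a rough sketch (tablespace name, file path and sizes are hypothetical; adjust to your environment):
    -- add a tempfile to grow the temp tablespace
    alter tablespace temp
      add tempfile '/u01/oradata/ORCL/temp02.dbf'
      size 4g autoextend on next 512m maxsize 16g;
    -- or raise the PGA target so more sorting happens in memory
    alter system set pga_aggregate_target = 4g scope=both;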
    Hope this helps
    Cheers

  • Consuming too much temp space

    Dear All
    A technical colleague of mine is executing a procedure which selects data from two databases and inserts it into a third one. The procedure is not new and he has been executing it for a year now. However, in the past two weeks the procedure has either been consuming too much time (3-4 hours as against 10-12 mins) or failing because it uses more temp space than is available on the database into which the insertions are made. I added about 10GB more to the temporary tablespace, but it still does not suffice for the procedure to execute successfully. The SGA for the database into which the insertion is done is 2560M and the PGA for the same is 2G.
    Please suggest what is to be done, as it is an extremely crucial procedure.
    Thanks in advance.
    Regards
    rdxdba

    If you have a Diagnostic Pack licence, try using AWR to compare instance activity across executions of this procedure. If not, try installing Statspack.
    I also recommend using SQL trace to capture trace data for a "good" execution and a "bad" execution, and comparing the TKPROF output for the related trace files.
    If you are using Oracle 10 or 11, try DBMS_MONITOR as described in http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php.
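    A minimal sketch of that approach (the SID/serial# and username are hypothetical; look the real ones up in v$session first):
    -- find the session running the procedure
    select sid, serial# from v$session where username = 'ETL_USER';
    -- trace it with wait events and bind values
    exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);
    -- ... let the procedure run, then ...
    exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
    Then run TKPROF on the resulting trace file from user_dump_dest and compare the good and bad runs.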

  • Investigate extend PGA vs TEMP space

    I have the following request from one of my BI consultants. How would I go about troubleshooting this / finding a solution ?
    "Hi Dirk
    <user x> has to perform large data queries on <database y> and we keep running out of TEMP space.
    Can you please investigate which will be better:
    - To extend the PGA, or
    - To extend the TEMP tablespace."

    Dirk, before you go changing your system configuration in the hope the changes will fix your problem, you need to understand why your application is consuming all available TEMP space. There is a very good chance that even after switching to or increasing the PGA_AGGREGATE_TARGET, the same problem will exist unless you understand why the space is being used.
    It is possible that the total number of concurrent sorts/hash joins/lob operations simply needs more temp than is allocated.
    It is also possible that the query plan for a concurrently executed query is using a sort/merge or hash join that consumes a large amount of space, and that if the query were changed to use a Nested Loops plan, or an index were added to support the join, the requirement for sort space to support the sort/hash operations would go away.
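    One way to see who is actually consuming the temp space before deciding is a query along these lines (a sketch; v$tempseg_usage exists in 10g and later, and the block size is read from dba_tablespaces):
    select s.sid,
           s.username,
           u.segtype,                                     -- SORT, HASH, LOB_DATA, ...
           u.blocks * t.block_size / 1024 / 1024 as mb_used
    from   v$tempseg_usage u
    join   v$session s        on s.saddr = u.session_addr
    join   dba_tablespaces t  on t.tablespace_name = u.tablespace
    order  by u.blocks desc;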
    HTH -- Mark D Powell --

  • HT201656 In iTunes my iPhone shows up as having nearly 8GB of "Other"... what is taking up all this space?

    In iTunes my iPhone shows up as having nearly 8GB of "Other"... what is taking up all this space?
    On the iPhone itself, under Settings > General > Usage, it indicates 12.7GB used and only 840MB available. I have one app (Audible) that is taking up 1.6GB, Photos & Camera takes up 636MB, Spotify takes up 495MB, and everything else takes up less than 100MB each, totalling less than 2.5GB.
    So if Audible is taking up 1.6GB, Photos & Camera takes up 636MB, Spotify takes up 495MB, and all other apps take up 2.5GB, this combined totals around 5GB. I have a 16GB iPhone 4s.
    16GB minus 5GB (my calculated usage) equals 11GB remaining. Allow for some required space utilization (let's say 3GB) and that leaves 8GB remaining. Why does my phone indicate 840MB available? That seems to equate to the amount shown in iTunes as 8GB of "Other".
    What is this 8GB of "Other" that is taking up so much storage space on my iPhone?
    Thanks in advance.
    John

    johncdaly1955 wrote:
    In iTunes my iPhone shows up as having nearly 8GB of "Other"... what is taking up all this space?
    Other is usually around 1 GB...
    A  ' Large Other ' usually indicates Corrupt Data...
    First Try a Restore from Backup...
    But... if the Large Other Persists, that is an Indicator of Corrupt Data in the Backup...
    Then a Restore as New is the way to go...
    Details Here  >  http://support.apple.com/kb/HT1414
    More Info Here...
    maclife.com/how_remove_other_data_your_iphone
    More Info about ‘Other’ in this Discussion
    https://discussions.apple.com/message/19958116

  • Problem with temp space allocation in parallel query

    Hello
    I've got a query which matches two large result sets (25m+ rows) against each other and does some basic filtering and aggregation. When I run this query in serial it takes about 30 mins and completes successfully. When I specify a parallel degree of 4 for each result set, it also completes successfully in about 20 minutes. However, when I specify that it should be run in parallel but don't specify a degree for each result set, it spawns 10 parallel servers and after a couple of minutes, bombs out from one of the parallel servers with:
    ORA-12801: error signaled in parallel query server P000
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    This appears to be when it is about to perform a large hash join. The execution plan does not change whether the parallel degree is specified or not, and there are several GB of temp space available.
    I'm at a bit of a loss as to how to track down specifically what is causing this problem. I've looked at v$sesstat for all of the sessions involved and it hasn't really turned anything up. I've tried tracing the main session and that hasn't turned up much either. From what I can tell, one of the sessions seems to try to allocate massive amounts of temp space that it just does not need, but I can't figure out why.
    Any ideas of how to approach finding the cause of the problem?
    David

    Hello
    I've finally resolved this and the resolution was relatively simple - and was also the main thing that Mark Rittman said he did in his article: reduce the size of the hash join.
    After querying v$sql_workarea_active I could see what was happening, which was that the sum of the temp space for all of the parallel slaves was exceeding the total amount of temp space available on the system. When run in serial, it was virtually at the limit. I guess the extra was just the overhead of each slave maintaining its own hash table.
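    For reference, a sketch of the kind of query that shows this (tempseg_size is null for work areas still operating purely in memory):
    select sid,
           operation_type,                        -- e.g. HASH-JOIN, SORT
           actual_mem_used / 1024 / 1024 as mem_mb,
           tempseg_size    / 1024 / 1024 as temp_mb
    from   v$sql_workarea_active
    order  by tempseg_size desc nulls last;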
    I also made the mistake of misreading the execution plan - assuming that the data being pushed to the hash join was filtered to eliminate the data that was not of interest. Upon reflection, this was a rather stupid assumption on my part. Anyway, I used subquery factoring with the materialize hint to ensure that the hash join was only working on the data it should have been. This significantly reduced the size of the hash table and therefore the amount of temp space required.
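    In outline, the rewrite looked something like this (a sketch with made-up table and column names; MATERIALIZE is an undocumented but widely used hint):
    with filtered as (
      select /*+ materialize */ f.id, f.amount
      from   fact_table f
      where  f.load_date >= date '2011-01-01'   -- apply the filter before the join
    )
    select d.id, sum(f2.amount)
    from   filtered f2
    join   dim_table d on d.id = f2.id
    group by d.id;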
    I did speak to oracle support and they suggested using pga_aggregate_target rather than the separate *area_size parameters.  I found that this had very little impact as the problem was related to the volume of data rather than whether it was being processed in memory or not.  That said, I did try upping the hash_area_size for the session with some initial success, but ultimately it didn't prove to be scalable.  We are however now using pga_aggregate_target in prod.
    So that's that. Problem sorted. And as the title of Mark Rittman's article suggests, I was trying to be too clever! :-)
    HTH
    David

  • Temp Space Issue

    Hi there folks.
    I've been using this forum as a reference point for the last couple of months but this is the first time I've had to post anything because I happen to have a pickle of an issue. In fact, it's that beefy, even my company's development team is stumped... Yeah.
    So, a quick bit of background. The current client I'm working for is a Government agency related to Health & Public Services. The database we use works as a go-between for Siebel CAM and Business Objects. As the two don't naturally speak to each other, occasionally we get inevitable data inconsistencies. We had an Informatica CDC issue a little while ago which left a lot of stale data, primarily relating to tasks which should have been closed or naturally expired. To clear up the backlog and return consistency to the DB, we've been working on updating a workflow which will refresh the tasks across the entire DB and bring everything in line.
    Now, when the DB was designed, Task_Refresh was created and worked when tested as the DB was being loaded with data. The problem is, when it was tested initially, there were only a few hundred thousand records to filter through. Now that the DB is in full swing, there are 22 million records and we're encountering issues. Essentially, when we run the workflow in Informatica, it gets as far as the Source Qualifier and fails, having chewed through the available 16GB of temp space in the space of an hour.
    I've been working for weeks trying to tune the SQL code for the source qualifier and, along with the development team, I'm now well and truly stuck. Here is the query:
    SELECT
        a.1,
        a.2,
        a.3,
        a.4,
        a.5,
        a.6,
        a.7,
        a.8,
        a.9,
        a.10,
        a.11,
        a.12,
        a.13,
        a.14,
        a.15,
        a.16,
        a.17,
        a.18,
        a.19,
        a.20,
        a.21,
        a.22,
        a.23,
        a.24,
        a.25,
        a.26,
        a.27,
        c.1,
        c.2,
        c.3,
        c.4,
        c.5,
        b.1,
        f.1,
        e.1,
        f.2,
        f.3,
        f.4,
        f.5,
        f.6,
        f.7,
        f.8,
        f.9,
        f.10,
        f.11,
        f.12
        FROM PARENT_TABLE a,  CHILD_TABLE2 c, CHILD_TABLE1 b, PARENT_TABLE2 e, PARENT_TABLE3 f
        WHERE   a.1 = b.PAR_ROW_ID(+)
        AND     a.1 = c.PAR_ROW_ID(+)
        AND     a.1 = e.PAR_ROW_ID(+)
        AND     a.TARGET_PER_ID = f.ROW_ID(+)
        AND (
        (a.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND a.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (c.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND c.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (b.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND b.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (e.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND e.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        );
    We are running this on Oracle 10g (10.1.0.5.0, 64-bit).
    So, obviously I've replaced the table names and most of the column names, but a.1 is the primary key. What we have is the primary key compared to the PAR_ROW_ID column in each of the child tables and one of the other main tables, looking for matching IDs down the whole of PARENT_TABLE. In this database, tasks can be entered in any of the five tables mentioned, for various different reasons and functions. The dates in the last section are just arbitrary dates we have been working with; when the query works and is running in production, the dates will change, a week at a time, based on a parameter file in Informatica.
    In troubleshooting, we found that if we remove the left outer joins in the WHERE clause, the query will return data, but not the right amount of data. Basically, PARENT_TABLE is just too massive for the temp space we have available. We can't increase the temp space and we can't get rid of the joins because that changes the functionality significantly. We have also tried restructuring the query by filtering on the dates first, but that doesn't help, though I suspect that the order of the WHERE clause is being ignored by Oracle due to its own performance-related decisions. I have tried restructuring the query into 5 smaller queries and using Unions to tie it all together at the end, but still nothing.
    We have even tried setting the date and time for an hour's worth of entries and that was still too big.
    I'm stuck in a catch-22 here because we need to get this working, but the client is adamant that we should not be changing the functionality. Is there any possible solution any of you can see for this?
    Thank you in advance for any help you can offer on this.
    (Also, we are providing support for the applications the database uses [CAM, BO], so we don't actually run this DB per se, the client does. We've asked them to increase the temp space and they won't.
    The error when we run the workflow is: ORA-01652: unable to extend temp segment by 128 in tablespace BATCH_TEMP)

    938041 wrote:
    I have tried restructuring the query into 5 smaller queries and using Unions to tie it all together at the end, but still nothing.
    Technically you should be able to rewrite this query as a UNION ALL of 4 separate parts, provided you remember to include a predicate in the 2nd, 3rd and 4th query blocks that eliminates rows that have already been reported by the earlier query blocks.
    Each part will be driven by one of the four range-based predicates you have - and for the three parts that drive off child tables, the join to the parent should NOT be an outer join.
    Effectively you are trying to transform the query manually using the CONCATENATION operator. It's possible (but I suspect very unlikely) that the optimizer will do this for you if you add the hint /*+ use_concat */ to the query you currently have.
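    As a sketch of the shape Jonathan describes, with just two of the four branches and hypothetical column names (a.row_id stands in for the anonymized a.1 key; the real rewrite needs all four date predicates and the full select list):
    -- branch 1: driven by the parent date range; the outer join stays
    select a.*
    from   parent_table a, child_table1 b
    where  a.row_id = b.par_row_id(+)
    and    a.last_upd >= to_date('9/23/2011', 'mm/dd/yyyy')
    and    a.last_upd <  to_date('9/30/2011', 'mm/dd/yyyy')
    union all
    -- branch 2: driven by the child date range; inner join back to the
    -- parent, excluding rows branch 1 already returned
    select a.*
    from   parent_table a, child_table1 b
    where  a.row_id = b.par_row_id
    and    b.last_upd >= to_date('9/23/2011', 'mm/dd/yyyy')
    and    b.last_upd <  to_date('9/30/2011', 'mm/dd/yyyy')
    and    not (a.last_upd >= to_date('9/23/2011', 'mm/dd/yyyy')
            and a.last_upd <  to_date('9/30/2011', 'mm/dd/yyyy'));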
    Regards
    Jonathan Lewis

  • How can I utilize/use the extended space in system.img?

    Hello Gurus,
    In Oracle VM, how can I utilize/use the extended space in System.img?
    a) Increased the system.img size using the following command:
    # dd if=/dev/zero bs=1M count=12960 >> /OVS/running_pool/18_test1/System.img
    b) Verified the added size of the System.img file on the OVS server:
    # ls -lh System.img
    -rw-r--r-- 1 root root 19G Feb 9 20:58 System.img
    c) Started the GVM; the additional size/space is not shown:
    # df -m
    Filesystem 1M-blocks Used Available Use% Mounted on
    /dev/xvda2 3984 2292 1652 59% /
    /dev/xvda1 92 12 75 14% /boot
    tmpfs 512 0 512 0% /dev/shm
    + Tried working with resizefs and resize2fs; the commands did not work.
    (The GVM was created using an Oracle-provided template.)
    + Am I missing anything?
    Oracle VM Setup Detail:
    oracle-logos-4.9.17-7ovs
    enterprise-linux-ovs-5-0.17
    ovs-release-2.2-0.17
    ovs-utils-1.0-33
    kernel-ovs-2.6.18-128.2.1.4.9.el5
    ovs-agent-2.3-29
    Thanks in advance for your help.
    Best Regards
    Basu

    I am not positive what you did is going to work, but it seems like you did the equivalent of imaging a small disk onto a bigger disk. In that case, the first thing to do is update the partition table with fdisk. Start with fdisk -l in the VM for general information about the disk. Hopefully, the additional space will show. Then, work with fdisk to use the extra space. The easiest is to add a partition, then a file system on it. It is also possible to expand the last partition (you might have to delete it first), then expand what is on it (may be RAID, LVM, or the file system), layer by layer. As usual with fdisk, you run the risk of trashing it all, so you may want to practice on a copy first. Obviously, the extra space can only be added to the last partition of the small disk.
    If the situation is more complex, you may have to boot a VM with both the small disk and the big disk at the same time. If you boot the VM off of an iso DVD image, you can then shell out, then run fdisk to partition the big disk the way you want, then use dd to copy the partitions you want in the order you want from the small disk. I am pretty sure not all combinations will work, but you get the idea. You can then take the small disk out, and boot off of the big disk.
    Come to think of it, you might just be better off adding the extra space as a second virtual disk. You would then be free to partition/format it the way you want without messing with the first disk. Linux is so good with disk management, so many options.
    As a general statement, though, I like to put some distance between the high level file systems and the low level disk partitions, so I use LVM (Logical Volume Manager).
    Best of luck, keep us posted.

  • Flash Disks for TEMP space

    Hi,
    I have noticed that whenever a SQL statement needs to make heavy use of TEMP (GROUP BY, SORT, etc.) with large amounts of data (2GB up to 50GB), execution time slows considerably. A large part of that time (as reported by OEM SQL Monitor) is spent waiting on reads/writes to TEMP. It occurred to me that I could cut off a portion of my cell flash, configure it into grid disks, and create a small disk group for flash temp space. This wouldn't need to be mirrored, as it is TEMP space. Has anybody tried this? (This is a data warehouse with a large amount of data, so it's not unusual for queries to use 5GB to 10GB of TEMP.)
    Thanks,
    Doug

    All,
    Well, I had a chance to test this and it actually works very well. The first thing I found was that it is very fast and easy to configure some of the flash into grid disks, and there is no need to even take the databases down or cycle them. On each cell (using dcli lets it all be scripted up) you simply drop the flash cache, recreate it smaller, and configure the rest into grid disks. (Reversing it is just as easy.) Then, since TEMP does not need to be mirrored, you simply create a disk group specifying external redundancy, create a TEMP tablespace in this disk group, and you are off and running.
    I created a simple SQL statement that did a GROUP BY and SORT on 10 columns of a 500M-row table. That SQL ran in 1 min 5 secs using hard disk TEMP and 1 minute flat using flash TEMP. I ran these repeatedly, and the timings were consistent. This wrote 4GB to TEMP. Then I doubled the amount of data by doing a UNION ALL of the table with itself (in an inline view). This wrote 8GB to TEMP, and the difference between hard disk and flash TEMP times grew to about 15 secs. I doubled this a few more times, and the more data written to TEMP, the larger the improvement in run times. (I don't think writes to flash are slower than writes to hard disk... they seem to be faster. The Oracle engineers would have to answer that one, but the statistics from OEM show the writes to flash TEMP as being faster than to hard disk, particularly as the volume of data gets larger.)
    In any event, we have some processes that rebuild a large number of MVs at a time (requiring about 400GB of TEMP). I'm looking at putting something in place to build a flash TEMP to support this load, and then drop it when the load is complete. We'll try it manually first... I think we may be looking at cutting significant time off the process.
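    For the record, the database side of that setup is only a few statements; a sketch (grid disk prefix, sizes and user are hypothetical, and the CellCLI work to carve the flash into grid disks is not shown):
    -- disk group on the flash grid disks; external redundancy is fine because TEMP contents can be recreated
    create diskgroup flashtemp external redundancy disk 'o/*/FLASHTEMP*';
    -- temp tablespace on the new disk group
    create temporary tablespace temp_flash tempfile '+FLASHTEMP' size 50g;
    -- point a user (or the database default) at it
    alter user dw_user temporary tablespace temp_flash;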
    Thanks,
    Doug

  • Temp space during import

    I am doing a schema refresh... running out of temp space on development... it has 500 MB of space... no time to add space because the storage team has no space (they have requested more)... the import is running for now; please suggest options.
    Oracle 10.2.0.3
    Using the IMP utility
    Loading around 8 GB of data...

    abhishek gera wrote:
    By default, import commits at the end of each table, therefore it is very likely your rollback segments will run out of space. To work around this problem, without adding space to the rollback segment tablespace, you can specify 'COMMIT=Y' on import. This overrides the default and commits at the end of each buffer (also an import parameter), rather than at the end of the table.
    No, it's not at all likely, I think. The OP is running out of temp space, not undo. I don't think this is relevant.
