Usage of temp space in Oracle

I am using Informatica to load one table in Oracle.
The source and target tables each contain one CLOB column.
The source table size is 1GB.
The target is Oracle 10g with 30GB of temp space.
Whenever I run this job, it consumes the entire 30GB of TEMP space and fails.
Does anyone have a clue about this?

Actually, the problem probably is that you are looking at the table but not at the CLOB storage. CLOBs are typically stored outside the table, so the table might be 1GB, but the storage for the CLOB data might be MUCH larger.
Replace the owner and segment name with your own owner and table_name in this query and see what gets reported.
select segment_name, sum(bytes)
from dba_extents
where owner = 'OUTLN'
and segment_name in
  (select 'OL$HINTS' from dual
   union
   select segment_name from dba_lobs where table_name = 'OL$HINTS' and owner = 'OUTLN')
group by segment_name;
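
While the job runs, it can also help to see which session is actually holding TEMP. A minimal sketch against the standard dictionary views (v$sort_usage is the 10g name for v$tempseg_usage; adjust as needed):

select s.sid, s.username, u.tablespace, u.segtype,
       u.blocks * ts.block_size / 1024 / 1024 as mb_used
from v$sort_usage u, v$session s, dba_tablespaces ts
where s.saddr = u.session_addr
and ts.tablespace_name = u.tablespace
order by mb_used desc;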

Similar Messages

  • oracle.sql.BLOB.freeTemporary() is not freeing TEMP space in the database

    Hi Folks,
    We are using oracle.sql.BLOB to store some file information into the database.
    Allocation of the temp space is done as below:
    BLOB blob = BLOB.createTemporary(conn, false, BLOB.DURATION_SESSION); // this allocates TEMP space in the database
    And the subsequent release is done as below:
    blob.freeTemporary(); // this should have released the space in the database
    This is on Oracle 10g, Java 1.6, ojdbc6.jar. There are no exceptions. Even a simple program reproduces the same behavior.
    Anybody faced something similar? Any pointers would be really appreciated.
    Thanks,
    Deva
    Edited by: user10728663 on Oct 11, 2011 5:33 AM

    Thanks a lot for the information.
    Memory is fine. And I am able to reproduce this within the scope of a simple example as well.
    Would you have any reference to the thread that earlier reported this as a bug? I did a reasonable amount of searching in the forum, but with no success.
    Thanks very much for any pointers.
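
    One way to confirm whether freeTemporary() is actually releasing the locators is to watch the per-session temporary LOB counters; a minimal sketch against the standard views:
    select s.sid, s.username, l.cache_lobs, l.nocache_lobs
    from v$temporary_lobs l, v$session s
    where s.sid = l.sid;
    If cache_lobs/nocache_lobs stay high after the free call, the temp space is not being returned.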

  • Never release the temp space

    It seems my Oracle database never releases temp space.
    The usage of temp space stays at 99.8%, but there is no alarm in alert.log.
    Shutting the machine down is not acceptable (it runs 24x7).
    It is said that Oracle does not use temp space if the memory allocated for sorting can hold the statement's processing.
    But there is no large workload on the machine, and whenever I run
    select * from v$sort_usage
    it returns no rows.
    What's wrong?
    How can I release temp space without shutting down the server?
    Sun Fire V880, 4 CPUs at 1050MHz, 8GB memory
    Solaris 8 2/02
    Oracle 9.2

    You do not need to worry about it unless you are getting "unable to allocate ... in tablespace TEMP" messages.
    I believe the space will be released if you bounce the database.
    The 99% you are seeing is like a high-water mark: it says that at some point 99% of your TEMP space was used. However, no rows in v$sort_usage means that no space is currently being used.
    So, relax!
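
    For the record, allocated versus genuinely free temp space can be read from v$temp_space_header, which is what makes the high-water-mark effect visible; a minimal sketch:
    select tablespace_name,
           sum(bytes_used) / 1024 / 1024 as mb_allocated,
           sum(bytes_free) / 1024 / 1024 as mb_free
    from v$temp_space_header
    group by tablespace_name;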

  • Fail to create temp file in Oracle 10g on CentOS

    My disk was full, so I deleted six temp files and created only one. But when I execute this SQL, it gives me the error below. I know I have more than 4GB of free space, but I don't know what the problem is.
    ALTER TABLESPACE temp ADD TEMPFILE '/oracle/oradata/ral/temp01.dbf' SIZE 512m AUTOEXTEND ON NEXT 250m MAXSIZE 2048m;
    ERROR
    ALTER TABLESPACE temp ADD TEMPFILE '/oracle/oradata/ral/temp01.dbf' SIZE 512m AUTOEXTEND ON NEXT 250m MAXSIZE 2048m
    Error report:
    SQL Error: ORA-01119: error in creating database file '/oracle/oradata/ral/temp01.dbf'
    ORA-27044: unable to write the header block of the file
    Linux-x86_64 Error: 28: No space left on device
    Additional information: 3
    01119. 00000 - "error in creating database file '%s'"
    *Cause:    Usually due to not having enough space on the device.
    *Action:
    Could you help me, please?
    Thanks in advance

    993296 wrote:
    My disk was full, so I deleted six temp files and created only one, but when I execute this SQL it gives me this error...
    Hi,
    please check the space under the /oracle/oradata/ral filesystem:
    Linux-x86_64 Error: 28: No space left on device
    Regards
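
    Before adding a new tempfile, it is also worth checking what the database still believes is attached to TEMP; a quick sketch (the tablespace name TEMP is assumed):
    select file_name, bytes / 1024 / 1024 as mb, autoextensible, maxbytes / 1024 / 1024 as max_mb
    from dba_temp_files
    where tablespace_name = 'TEMP';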

  • Problem with temp space allocation in parallel query

    Hello
    I've got a query which matches two large result sets (25m+ rows) against each other and does some basic filtering and aggregation. When I run this query in serial it takes about 30 mins and completes successfully. When I specify a parallel degree of 4 for each result set, it also completes successfully in about 20 minutes. However, when I specify that it should be run in parallel but don't specify a degree for each result set, it spawns 10 parallel servers and after a couple of minutes, bombs out from one of the parallel servers with:
    ORA-12801: error signaled in parallel query server P000
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    This appears to happen when it is about to perform a large hash join. The execution plan does not change whether the parallel degree is specified or not, and there are several GB of temp space available.
    I'm at a bit of a loss as to how to track down specifically what is causing this problem. I've looked at v$sesstat for all of the sessions involved and it hasn't really turned anything up. I've tried tracing the main session and that hasn't turned up much either. From what I can tell, one of the sessions seems to try to allocate massive amounts of temp space that it just does not need, but I can't figure out why.
    Any ideas of how to approach finding the cause of the problem?
    David

    Hello
    I've finally resolved this and the resolution was relatively simple - and was also the main thing that Mark Rittman said he did in his article: reduce the size of the hash join.
    After querying v$sql_workarea_active I could see what was happening: the sum of the temp space for all of the parallel slaves was exceeding the total amount of temp space available on the system. When run in serial, it was virtually at the limit. I guess the extra was just the overhead of each slave maintaining its own hash table.
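    For reference, a sketch of that check (run it while the parallel query is active; columns are from the standard view):
    select sid, operation_type, policy,
           actual_mem_used / 1024 / 1024 as mem_mb,
           tempseg_size / 1024 / 1024 as temp_mb
    from v$sql_workarea_active
    order by tempseg_size desc nulls last;
    Summing temp_mb across all the slaves shows whether their combined demand exceeds the temp tablespace.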
    I also made the mistake of misreading the execution plan, assuming that the data being pushed to the hash join had been filtered to eliminate the data that was not of interest. Upon reflection, this was a rather stupid assumption on my part. Anyway, I used subquery factoring with the materialize hint to ensure that the hash join was only working on the data it should have been. This significantly reduced the size of the hash table and therefore the amount of temp space required.
    I did speak to Oracle support and they suggested using pga_aggregate_target rather than the separate *_area_size parameters. I found that this had very little impact, as the problem was related to the volume of data rather than whether it was being processed in memory or not. That said, I did try upping the hash_area_size for the session, with some initial success, but ultimately it didn't prove to be scalable. We are, however, now using pga_aggregate_target in prod.
    So that's that. Problem sorted. And as the title of Mark Rittman's article suggests, I was trying to be too clever! :-)
    HTH
    David

  • Consuming too much temp space

    Dear All
    A technical colleague of mine is executing a procedure which selects data from two databases and inserts it into a third one. The procedure is not new, and he has been executing it for a year now. However, in the past two weeks the procedure has either been consuming too much time (3-4 hours as against 10-12 mins) or failing because it uses more temp space than is available on the database into which the insertions are made. I added about 10GB to the temporary tablespace, but it is still not sufficient for the procedure to execute successfully. The SGA for the database into which the insertion is done is 2560M and the PGA is 2G.
    Please suggest what is to be done as it is an extremely crucial procedure.
    Thanks in advance.
    Regards
    rdxdba

    If you have a Diagnostic Pack licence, try using AWR to compare instance activity across executions of this procedure. If not, try installing Statspack.
    I also recommend using SQL trace to capture trace data for a "good" execution and a "bad" execution, and comparing the TKPROF output for the related trace files.
    If you are using Oracle 10 or 11, try DBMS_MONITOR as described in http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php.
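
    A minimal DBMS_MONITOR sketch for that tracing (the SID and SERIAL# are placeholders; take real values from v$session):
    begin
      dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456,
                                        waits => true, binds => true);
    end;
    /
    -- run the procedure, then:
    begin
      dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
    end;
    /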

  • SQL query using lot of Temp space

    I have a SQL query which is using a lot of temp space; please suggest some ways to reduce this.
    SELECT A.POSITION_NBR,
           TO_CHAR(B.EFFDT,'YYYY-MM-DD'),
           RTRIM(A.SEQNO),
           A.EMPLID,
           B.REG_REGION,
           A.MANAGER_ID,
           A.REPORTS_TO,
           CASE WHEN A.POSITION_NBR = A.REPORTS_TO THEN 'POS reports to same position'
                ELSE 'Positions with multiple Emp' END Case
    FROM PS_Z_RPTTO_TBL A, PS_POSITION_DATA B, PS_POSTN_SRCH_QRY B1
    WHERE B.POSITION_NBR = B1.POSITION_NBR
      AND B1.OPRID = 'MP9621Q'
      AND ( ( A.POSITION_NBR = B.POSITION_NBR
              AND ( A.REPORTS_TO = A.POSITION_NBR
                    AND B.EFFDT = (SELECT MAX(B_ED.EFFDT)
                                   FROM PS_POSITION_DATA B_ED
                                   WHERE B.POSITION_NBR = B_ED.POSITION_NBR)
                    AND A.POSITION_NBR <> '00203392' ) )
            OR ( B.EFFDT = (SELECT MAX(B_ED.EFFDT)
                            FROM PS_POSITION_DATA B_ED
                            WHERE B.POSITION_NBR = B_ED.POSITION_NBR
                              AND B_ED.EFFDT <= SYSDATE)
                 AND B.MAX_HEAD_COUNT <> (SELECT COUNT(C.EMPLID)
                                          FROM PS_Z_RPTTO_TBL C) ) )
    UNION
    SELECT F.POSITION_NBR,
           TO_CHAR(F.EFFDT,'YYYY-MM-DD'),
           '', '', F.REG_REGION, '', F.REPORTS_TO, ''
    FROM PS_POSITION_DATA F, PS_POSTN_SRCH_QRY F1
    WHERE F.POSITION_NBR = F1.POSITION_NBR
      AND F1.OPRID = 'MP9621Q'
      AND ( F.EFFDT = (SELECT MAX(F_ED.EFFDT)
                       FROM PS_POSITION_DATA F_ED
                       WHERE F.POSITION_NBR = F_ED.POSITION_NBR
                         AND F_ED.EFFDT <= SYSDATE)
            AND F.EFF_STATUS = 'A'
            AND F.DEPTID IN (SELECT G.DEPTID
                             FROM PS_DEPT_TBL G
                             WHERE G.EFFDT = (SELECT MAX(G_ED.EFFDT)
                                              FROM PS_DEPT_TBL G_ED
                                              WHERE G.SETID = G_ED.SETID
                                                AND G.DEPTID = G_ED.DEPTID
                                                AND G_ED.EFFDT <= SYSDATE)
                               AND F.REG_REGION = G.SETID
                               AND G.EFF_STATUS = 'I') )
    Thanks in Advance
    Rajan

    use {noformat}<your code here>{noformat} tags to format your code.
    I have a SQL query which is using a lot of temp space, please suggest some ways to reduce this.
    If your sort_area_size is not set sufficiently large, Oracle uses temp space for sort operations. As your code is not readable, I can't say much more than this. Check with your DBA whether you need to increase the temp space.
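
    One concrete thing to check, since the query uses UNION: UNION performs a duplicate-eliminating sort that can spill heavily to temp, whereas UNION ALL does not. A minimal illustration with hypothetical tables, valid only if the two branches cannot return overlapping rows (or duplicates are acceptable):
    select position_nbr from positions_a
    union all
    select position_nbr from positions_b;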

  • Estimating how much temp space a query will take?

    I have a query that is "SELECT * FROM some_table ORDER BY field_name DESC". some_table has an avg_row_len of 458 bytes and stats are current. There are just about 6 million rows in some_table.
    TEMP is set to 500MB and the query fails for lack of TEMP space. I show about 176MB of TEMP is presently in use, so worst case I should have 324MB free.
    So which calculation is correct for how much TEMP space is needed:
    (a) 458 avg_row_len * 6,000,000 = about 3GB of space (and DBA_SEGMENTS agrees with this rough math). That's assuming it puts the whole row into the sort.
    (b) 6,000,000 rows * 4 bytes for a ROWID (I think they're 4 bytes) = 22MB. That's assuming it sorts just a bunch of pointers to rows (which is how I thought it would work).

    Don't forget to add the length of the column being sorted to the rowid length before you multiply. A rowid has four pieces, not four bytes (see http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/logical.htm#CNCPT89008). Also check your plan, in case there is more than just the sort going on for you.
    With an appropriate sort_area_size or PGA target, you may reduce the need for temp. See PGA memory management in the docs (http://download.oracle.com/docs/cd/E11882_01/server.112/e16638/memory.htm#i49320) to start.
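
    Putting rough numbers on that advice, both estimates can be computed from the optimizer statistics; the 10-byte extended rowid and the 20-byte sort key length below are assumptions, not measurements:
    select num_rows * avg_row_len / 1024 / 1024 as est_full_row_mb,     -- case (a): whole rows sorted
           num_rows * (20 + 10) / 1024 / 1024 as est_key_plus_rowid_mb  -- case (b): key + rowid only
    from user_tables
    where table_name = 'SOME_TABLE';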

  • Report consuming a lot of temp Space (BIP)

    Hi Experts,
    I am facing an issue.
    Some BIP reports are consuming a lot of temp space (a 5-30 GB temp directory), which is causing services to go down (BIP, RMS, ReIM and RPM). BIP, RMS, ReIM and RPM are installed on the same server.
    Please help to troubleshoot this issue.
    Thanks in Advance

    Please see:
    Troubleshooting Oracle BI Publisher Enterprise 11g [ID 1387643.1]
    Troubleshooting Oracle BI Publisher Enterprise 10g [ID 412232.1]

  • Temp Space Issue

    Hi there folks.
    I've been using this forum as a reference point for the last couple of months, but this is the first time I've had to post anything, because I happen to have a pickle of an issue. In fact, it's so beefy that even my company's development team is stumped... Yeah.
    So, a quick bit of background. The current client I'm working for is a Government agency related to Health & Public Services. The database we use works as a go-between for Siebel CAM and Business Objects. As the two don't naturally speak to each other, we occasionally get inevitable data inconsistencies. We had an Informatica CDC issue a little while ago which left a lot of stale data, primarily relating to tasks which should have been closed or naturally expired. To clear up the backlog and return consistency to the DB, we've been working on updating a workflow which will refresh the tasks across the entire DB and bring everything in line.
    Now, when the DB was designed, Task_Refresh was created and worked when tested as the DB was getting loaded with data. The problem is, when it was tested initially, there were only a few hundred thousand records to filter through. Now that the DB is in full swing, there are 22 million records and we're encountering issues. Essentially, when we run the workflow in Informatica, it gets as far as the Source Qualifier and fails, having chewed through the available 16GB of temp space in the space of an hour.
    I've been working for weeks trying to tune the SQL code for the source qualifier and, along with the development team, I'm now well and truly stuck. Here is the query:
    SELECT
        a.1,
        a.2,
        a.3,
        a.4,
        a.5,
        a.6,
        a.7,
        a.8,
        a.9,
        a.10,
        a.11,
        a.12,
        a.13,
        a.14,
        a.15,
        a.16,
        a.17,
        a.18,
        a.19,
        a.20,
        a.21,
        a.22,
        a.23,
        a.24,
        a.25,
        a.26,
        a.27,
        c.1,
        c.2,
        c.3,
        c.4,
        c.5,
        b.1,
        f.1,
        e.1,
        f.2,
        f.3,
        f.4,
        f.5,
        f.6,
        f.7,
        f.8,
        f.9,
        f.10,
        f.11,
        f.12
        FROM PARENT_TABLE a,  CHILD_TABLE2 c, CHILD_TABLE1 b, PARENT_TABLE2 e, PARENT_TABLE3 f
        WHERE   a.1 = b.PAR_ROW_ID(+)
        AND     a.1 = c.PAR_ROW_ID(+)
        AND     a.1 = e.PAR_ROW_ID(+)
        AND     a.TARGET_PER_ID = f.ROW_ID(+)
        AND (
        (a.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND a.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (c.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND c.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (b.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND b.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (e.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND e.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        );
    We are running this on 10g 10.1.0.5.0 - 64bit.
    So, obviously I've replaced the table names and most of the column names, but a.1 is the primary key. What we have is the primary key compared to the PAR_ROW_ID column in each of the child tables and one of the other main tables, looking for matching IDs down the whole of PARENT_TABLE. In this database, tasks can be entered in any of the five tables mentioned, for various different reasons and functions. The dates in the last section are just arbitrary dates we have been working with; when the query works and is running in production, the dates will change, a week at a time, based on a parameter file in Informatica.
    In troubleshooting, we found that if we remove the left outer joins in the WHERE clause, the query will return data, but not the right amount of data. Basically, PARENT_TABLE is just too massive for the temp space we have available. We can't increase the temp space, and we can't get rid of the joins because that changes the functionality significantly. We have also tried restructuring the query by filtering on the dates first, but that doesn't help, though I suspect that the order of the WHERE clause is being ignored by Oracle due to its own performance-related decisions. I have tried restructuring the query into 5 smaller queries and using UNIONs to tie it all together at the end, but still nothing.
    We have even tried setting the date and time for an hour's worth of entries, and that was still too big.
    I'm stuck in a catch-22 here because we need to get this working, but the client is adamant that we should not be changing the functionality. Is there any possible solution any of you can see for this?
    Thank you in advance for any help you can offer on this.
    (Also, we are providing support for the applications the database uses [CAM, BO], so we don't actually run this DB per se; the client does. We've asked them to increase the temp space and they won't.
    The error when we run the workflow is: ORA-01652: unable to extend temp segment by 128 in tablespace BATCH_TEMP)

    938041 wrote:
    I have tried restructuring the query into 5 smaller queries and using Unions to tie it all together at the end, but still nothing.
    Technically you should be able to rewrite this query as a UNION ALL of 4 separate parts, provided you remember to include a predicate in the 2nd, 3rd and 4th query blocks that eliminates rows already reported by the earlier query blocks. (A sketch follows below.)
    Each part will be driven by one of the four range-based predicates you have, and for the three parts that drive off child tables, the join to the parent should NOT be an outer join.
    Effectively you are trying to transform the query manually using the CONCATENATION operator. It's possible (but I suspect very unlikely) that the optimizer will do this for you if you add the hint /*+ use_concat */ to the query you currently have.
    Regards
    Jonathan Lewis
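
    A minimal sketch of that decomposition (hypothetical column names, bind variables for the date range; LNNVL is the predicate the CONCATENATION transform itself uses to exclude rows an earlier branch already returned):
    select a.row_id
    from parent_table a
    where a.last_upd >= :d1 and a.last_upd < :d2
    union all
    select a.row_id
    from parent_table a, child_table1 b
    where a.row_id = b.par_row_id                               -- inner join here, not (+)
    and b.last_upd >= :d1 and b.last_upd < :d2
    and (lnnvl(a.last_upd >= :d1) or lnnvl(a.last_upd < :d2));  -- skip rows branch 1 returned
    -- the branches for CHILD_TABLE2 and PARENT_TABLE2 follow the same pattern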

  • Unable to Extend TEMP space for CLOB

    Hi,
    I have a data extract process and I am writing the data to a CLOB variable. I'm getting the error "unable to extend TEMP space" for the larger extracts. I was wondering whether writing the data to a CLOB column on a table and committing regularly would be better than using a CLOB variable, assuming the time taken is not an issue.

    You do not need to add more temp files. This is not a problem with your temp tablespace; it is a problem with temp segments in your permanent tablespace. You need to add another datafile to the EDWSTGDATA00 tablespace. This happens when you are creating tables and indexes: Oracle first does the processing in temp segments (not the temp tablespace) and at the end converts those temp segments into permanent segments.
    Also, post the result of the query below:
    select file_name, sum(bytes/1024/1024) in_mb, autoextensible, sum(maxbytes/1024/1024) max_in_mb
    from dba_data_files
    where tablespace_name = 'STAGING_TEST'
    group by file_name, autoextensible
    order by file_name;
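
    Temporary segments sitting in permanent tablespaces can be confirmed from dba_segments; a quick sketch:
    select tablespace_name, sum(bytes) / 1024 / 1024 as mb
    from dba_segments
    where segment_type = 'TEMPORARY'
    group by tablespace_name;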

  • Flash Disks for TEMP space

    Hi,
    I have noticed that whenever a SQL statement needs to make heavy use of TEMP (GROUP BY, SORT, etc.) with large amounts of data (2GB up to 50GB), execution time slows considerably. A large part of that time (as reported by OEM SQL Monitor) is spent waiting on reads/writes to TEMP. It occurred to me that I could cut off a portion of my cell flash, configure it into grid disks, and create a small disk group for flash temp space. This wouldn't need to be mirrored, as it is TEMP space. Has anybody tried this? (This is a data warehouse with a large amount of data, so it's not unusual for queries to use 5GB to 10GB of TEMP.)
    Thanks,
    Doug

    All,
    Well, I had a chance to test this and it actually works very well. The first thing I found was that it is very fast and easy to configure some of the flash into grid disks, and there is no need to even take the databases down or cycle them. On each cell (using dcli lets it all be scripted up) you simply drop the flash cache, recreate it smaller, and configure the rest into grid disks. (Reversing it is just as easy.) Then, since TEMP does not need to be mirrored, you simply create a disk group specifying external redundancy, create a TEMP tablespace in this disk group, and you are off and running.
    I created a simple SQL that did a GROUP BY and SORT on 10 columns of a 500M-row table. That SQL ran in 1 min 5 secs using hard-disk TEMP and 1 minute flat using flash TEMP. I ran these repeatedly, and the timings were consistent. This wrote 4GB to TEMP. Then I doubled the amount of data by doing a UNION ALL of the table with itself (in an inline view). This wrote 8GB to TEMP, and the difference between hard-disk and flash TEMP times grew to about 15 secs. I doubled this a few more times, and the more data written to TEMP, the larger the improvement in run times. (I don't think writes to flash are slower than writes to hard disk... they seem to be faster. The Oracle engineers would have to answer that one, but the statistics from OEM show the writes to flash TEMP as being faster than to hard disk, particularly as the volume of data gets larger.)
    In any event, we have some processes that rebuild a large number of MVs at a time (requiring about 400GB of TEMP). I'm looking at putting something in place to build a flash TEMP to support this load, and then drop it when the load is complete. We'll try it manually first... I think we may be looking at cutting significant time off the process.
    Thanks,
    Doug

  • Hacking around temp space SS enqueues

    Metalink Notes 465840 (Configuring Temporary Tablespaces for RAC Databases for Optimal Performance) and 459036 (Temporary tablespace SS Contention In RAC) refer.
    I have the following bugs occurring with a vengeance on 10.1.0.3:
    Bug 4882834 - EXCESSIVE SS AND TS CONTENTION ON NEW 10G CLUSTERS
    Bug 6331283 - LONG WAITS ON 'DFS LOCK HANDLE'
    I'm trying to find a workaround for many background processes running 24x7 and processing a lot of data. These run in a single schema (VPD applies, and this schema is the owner and exempt from FGAC).
    What would be nice is to have something similar to SET TRANSACTION for selecting a specific dedicated undo space, but for temp space. If I can get each of my major job queues (home-rolled FIFO and LIFO processing queues using DBMS_JOB) to use a different temp space, SS contention should hopefully be reduced... or not?
    Is anyone else sitting with this problem, or did in the past? And if in the past, exactly which Oracle patchset resolved it?
    Edited:
    Fixed the spelling error in subject title

    > How big are the per-transaction sort size and PGA size?
    Fairly large. A typical transaction can select up to 50+ million rows and aggregate them into a summarised table. There are a couple of these running in parallel. Given the nature of the volumes we process, there's very little flexibility in this regard (besides growing data volumes and increasing complexity of processing)...
    Though, when a process does get through without butting its head against an Oracle bug (fortunately more often than not), performance is pretty impressive.
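
    There is no SET TRANSACTION-style switch for temp space, but per-user assignment, or a temporary tablespace group (which lets several temp tablespaces serve one schema), comes close. A sketch with hypothetical names and paths:
    create temporary tablespace temp_q1
      tempfile '/u01/oradata/db/temp_q1_01.dbf' size 4g
      tablespace group temp_grp;
    create temporary tablespace temp_q2
      tempfile '/u01/oradata/db/temp_q2_01.dbf' size 4g
      tablespace group temp_grp;
    -- point the processing schema at the group so concurrent jobs can be
    -- spread across its member tablespaces
    alter user queue_owner temporary tablespace temp_grp;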

  • Temp space during import

    I am doing a schema refresh and running out of temp space on development. It has 500 MB of space. There is no time to add more because the storage team has no space (they have requested it). The import is running for now; please suggest options.
    Oracle 10.2.0.3
    Using the IMP utility
    Loading around 8GB of data

    abhishek gera wrote:
    By default, import commits at the end of each table, therefore it is very likely your rollback segments will run out of space. To work around this problem, without adding space to the rollback segment tablespace, you can specify 'COMMIT=Y' on import. This overrides the default and commits at the end of each buffer (also an import parameter), rather than at the end of the table.
    No, it's not at all likely, I think. The OP is running out of temp space, not undo. I don't think this is relevant.

  • RunInstaller fails to find temp space

    I am installing Oracle 11.1.0 on my HPUX 11.31 Itanium system. I have followed all of the preinstallation tasks and have downloaded the software.
    I have logged into the oracle account and have cd'd to the directory that contains my expanded Oracle installation kit. I have defined my DISPLAY for the GUI, and have TMPDIR and TEMP set to /tmp, which has almost 1GB of free space. The directory containing the Oracle installation kit is owned by the oracle user account.
    When I invoke ./runInstaller, I receive the following text:
    Error in GetCurrentDir(): 2
    Error in GetCurrentDir(): 2
    Error in GetCurrentDir(): 2
    Starting Oracle Universal Installer…
    Error Returned::: No such file or directory
    Checking temp space: 0 MB available, 415 MB required. Failed <<<<
    Checking swap space: must be greater than 150 MB. Actual 8192 MB. Passed
    Checking monitor: must be configured to display at least 256 colors. Actual 256. Passed
    Some requirement checks failed. You must fulfill these requirements before continuing with the installation, at which time they will be rechecked.
    I found out that GetCurrentDir() is a binary function in the install/.oui application. Where is the installation when the GetCurrentDir() function is called? Why is the installer indicating that there is 0 MB available for temp space when I know that I have plenty of available temp space? Has anyone seen something like this? I'm stumped!
    Regards, Rob

    I have created a new temporary location for the installation on /opt1/tmp
    echo $TMP
    /opt1/tmp
    echo $TMPDIR
    /opt1/tmp
    echo $TEMP
    /opt1/tmp
    I originally tried this with $TMP, $TMPDIR and $TEMP pointing to /tmp, and received the same errors.
    df -k /opt1
    /opt1 (/dev/vg01/lvol1): 20170896 total allocated Kb
    12010598 free allocated Kb
    8160298 used allocated Kb
    41% allocation used
    - Rob
