Temp space problem

Hi all,
I receive an error while executing a procedure:
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
Can anyone please explain what the problem is?
Thanks in advance
baskar k

Hi,
ORA-01652 has two common causes. The first is simply that there is no space left in the temp tablespace being used. The second is RAC-specific: the local instance's temp segment cannot extend even though there is free space in the sort segments of other instances.
To troubleshoot ORA-01652 and find out which of these scenarios is causing it, use this query offered by MetaLink:
select sum(free_blocks)
from gv$sort_segment
where tablespace_name = '<TEMP TABLESPACE NAME>'
If free_blocks comes back as 0, the first scenario is causing ORA-01652: there is simply no free space.
If there is a good amount of free space, there is another cause, and it is probably the second scenario. It is important to note that in a non-RAC environment an instance cannot borrow temp segment space from anywhere else, so ORA-01652 has to be handled differently than in RAC. If you are experiencing ORA-01652 in a non-RAC environment, be aware that every SQL statement making use of the tablespace can fail.
In RAC, more sort segment space can be used from other instances, which can help resolve ORA-01652 more easily. Try using the query below:
select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
from gv$sort_segment;
Basically, total_blocks shows how much sort segment space each instance can use, used_blocks reveals the space which has been used so far, and free_blocks gives the amount of space allocated to that particular instance that is still free. To confirm ORA-01652, check for the affected instance: you will probably see used_blocks = total_blocks and free_blocks = 0, and ORA-01652 will appear multiple times in the alert log.
This basically means that free space from other instances is being requested, and typically signifies instance contention. Contention on the temporary space can make the instance take more time to process.
In severe cases, a slowdown may occur, in which case you might want to try one of the following workarounds:
Increase size of the temp tablespace
Increase sort_area_size and/or pga_aggregate_target
However, remember not to use the RAC feature of DEFAULT temp space.
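As a rough sketch of those first two workarounds (the tempfile path, sizes and target value below are hypothetical; adjust them to your environment):
alter tablespace TEMP
  add tempfile '/u01/oradata/mydb/temp02.dbf' size 2g
  autoextend on next 128m maxsize 8g;
-- with automatic PGA management (9i onwards; scope=both assumes an spfile):
alter system set pga_aggregate_target = 1g scope=both;
-- or, with manual workarea sizing, per session:
alter session set sort_area_size = 104857600;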
If ORA-01652 is causing the slowdown, SMON will probably not be able to keep up with the sort segment requests, so you should try to diagnose the contention by collecting:
Output from the following query, run periodically during the problem:
select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
from gv$sort_segment;
Global hanganalyze and systemstate dumps
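For the dumps, something along these lines from a SYSDBA session is the usual approach (the levels shown are the commonly suggested ones; confirm with support before running this on production):
SQL> oradebug setmypid
SQL> oradebug -g all hanganalyze 3
SQL> oradebug -g all dump systemstate 266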
Hope this helps
Cheers

Similar Messages

  • Problem with temp space allocation in parallel query

    Hello
    I've got a query which matches two large result sets (25m+ rows) against each other and does some basic filtering and aggregation. When I run this query in serial it takes about 30 mins and completes successfully. When I specify a parallel degree of 4 for each result set, it also completes successfully in about 20 minutes. However, when I specify that it should be run in parallel but don't specify a degree for each result set, it spawns 10 parallel servers and after a couple of minutes, bombs out from one of the parallel servers with:
    ORA-12801: error signaled in parallel query server P000
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    This appears to happen when it is about to perform a large hash join. The execution plan does not change whether the parallel degree is specified or not, and there are several GB of temp space available.
    I'm at a bit of a loss as to how to track down specifically what is causing this problem. I've looked at v$sesstat for all of the sessions involved and it hasn't really turned anything up. I've tried tracing the main session and that hasn't turned up much either. From what I can tell, one of the sessions seems to try to allocate massive amounts of temp space that it just does not need, but I can't figure out why.
    Any ideas of how to approach finding the cause of the problem?
    David

    Hello
    I've finally resolved this and the resolution was relatively simple - and was also the main thing that Mark Rittman said he did in his article: reduce the size of the hash join.
    After querying v$sql_workarea_active I could see what was happening: the sum of the temp space for all of the parallel slaves was exceeding the total amount of temp space available on the system. When run in serial, it was virtually at the limit. I guess the extra was just the overhead of each slave maintaining its own hash table.
    I also made the mistake of misreading the execution plan - assuming that the data being pushed to the hash join was filtered to eliminate the data that was not of interest. Upon reflection, this was a rather stupid assumption on my part. Anyway, I used subquery factoring with the materialize hint to ensure that the hash join was only working on the data it should have been, as sketched below. This significantly reduced the size of the hash table and therefore the amount of temp space required.
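    The shape of the rewrite was roughly as follows (table and column names here are invented; the materialize hint forces the filtered set to be written to a temporary segment instead of being merged into the main plan, so only the relevant rows feed the hash join):
    with filtered_a as (
      select /*+ materialize */ id, val
      from   big_table_a
      where  val_date >= date '2011-01-01'
    ),
    filtered_b as (
      select /*+ materialize */ id, val
      from   big_table_b
      where  val_date >= date '2011-01-01'
    )
    select a.id, a.val, b.val
    from   filtered_a a, filtered_b b
    where  a.id = b.id;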
    I did speak to oracle support and they suggested using pga_aggregate_target rather than the separate *area_size parameters.  I found that this had very little impact as the problem was related to the volume of data rather than whether it was being processed in memory or not.  That said, I did try upping the hash_area_size for the session with some initial success, but ultimately it didn't prove to be scalable.  We are however now using pga_aggregate_target in prod.
    So that's that. Problem sorted. And as the title of Mark Rittman's article suggests, I was trying to be too clever! :-)
    HTH
    David

  • Space Problem in G: - after NW2004s SR1 installation on Windows 2003 server

    Hi Friends,
    I have installed NW2004s SR1 on Windows 2003 server with an Oracle 10g database successfully. My server has sufficient hard disk space as follows.
    C: 9.6GB
    G: 24GB
    H:64GB & I: 73GB.
    RAM is 6GB
    While installing NW2004s I defined a paging file on C: of 400MB-1600MB and
    set the option to System Managed on the other drives G, H & I. I did not get any swap space problem while installing.
    During installation I chose the G: drive to store the MirrorlogA, MirrorlogB, Oraarch, OriglogA and OriglogB folders of Oracle for redo log archives. The total size of all these folders is 17.4 GB.
    The G: drive has one more folder, Program Files, which is 155KB, and no other files exist on it.
    But my system says that I am running out of space on the G: drive, and the free space available now is only 44MB.
    My question is: where has the remaining 5GB of space gone? Is it consumed by the paging file, which I set on this drive as System Managed?
    I have set the same option on the other drives too, i.e. H & I, but I am not facing any space issues on those drives.
    What could be going wrong? Please suggest how to correct this.
    Thanks
    mv_d

    898118 wrote:
    Hi All,
    I am facing a problem installing the oracle 10g database on windows 2003 server with service pack 2. When I start the installation it gives me the error "Error in writing to directory 'c:\documents and settings\administrator\local settings\temp\orainstall2004xxx'. Please ensure that this directory is writable and has at least 45MB of disk space. Installation cannot continue".
    I double-checked that there is enough disk space and that I am in the Administrators group.
    Please guide me on the above problem; I have done an oracle 9i database installation with the same configuration before.
    Regards
    Muhammad Shoaib

    Oracle really doesn't like to be installed into directories with spaces in their names.

  • Usage of temp space in oracle

    I am using Informatica to load into one table in Oracle.
    Source and target table contain one CLOB column.
    Source table size is 1GB.
    Target is oracle 10g with 30GB of temp space.
    Whenever I run this job, TEMP space usage hits the full 30GB and the job fails.
    Does anyone have any clue on this?

    Actually, the problem probably is that you are looking at the table but not at the CLOB storage. CLOBs are typically stored outside the table... so the table might be 1GB, but you might have a MUCH larger storage segment for the CLOB data.
    Replace the owner and segment name with your owner and table_name in this query and see what gets reported.
    select segment_name, sum(bytes)
    from dba_extents
    where owner = 'OUTLN'
    and segment_name in
      (select 'OL$HINTS' from dual
       union
       select segment_name from dba_lobs where table_name = 'OL$HINTS' and owner = 'OUTLN')
    group by segment_name;

  • Mapping - Execute Mapping Utilization of All Temp Space

    Background:
    Creating a Master Table in a Data Warehouse, containing 16 tables (3 outer joins; the remainder are inner; total target table size 8.2 million records). I've run this successfully when I write my own SELECT statement (takes about 60-70 seconds).
    I may be trying to throw too much at OWB, but when I created this mapping with its joins via two joiners (one for a derived table; the other joining all the other tables and the derived table) and deployed it, I received an error that temp space could not be extended. The thing is, and I've looked into it, our TEMP space is 32GB, and further research via Explain Plan in Toad (based on the package that is created) shows there are some Cartesian joins and other operations in use, i.e. sort joins, etc.
    Question: Am I throwing too much at OWB? Or are there better utilities within the tool that may make this easier? I was also wondering if I could run something like an explain plan in OWB rather than having to run it via Toad?
    Thank you for your help

    I recently had a similar problem with OWB 10.2.0.1.
    The explain plan in our pre-prod environment had something like MERGE JOIN CARTESIAN, with nested loops etc.
    The same mapping in our prod environment had a few hash joins and a sort somewhere in there.
    Needless to say, in our prod env the mapping took under 10 minutes, while in pre-prod it was taking over 15 hours with no end in sight.
    I finally had to insert an /*+ ORDERED */ hint right after the SELECT in my mapping and it solved everything. I had to do this manually in TOAD.
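    For what it's worth, the hint just pins the join order to the order of the tables in the FROM clause; a sketch with made-up names:
    select /*+ ordered */ s.id, d.descr
    from   small_driving_table s, derived_table d
    where  s.id = d.id;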
    The problem is that with OWB 10.2.0.1 inserting hints is buggy. This prompted us to switch over to OWB 10.2.0.4.
    Look at one of my other posts (there aren't many) to see how to properly switch over to a new version of OWB.
    Hope this helps you out a bit.
    regards....Mike

  • V$temp_space_header showing temp space fully utilized but not true

    Hi,
    any experience regarding v$temp_space_header?
    currently we are experiencing this issue:
    when using this:
    SELECT tablespace_name, SUM(bytes_used), SUM(bytes_free) FROM v$temp_space_header GROUP BY tablespace_name;
    TABLESPACE_NAME SUM(BYTES_USED) SUM(BYTES_FREE)
    TEMP 227632218112 0
    but when using this:
    SELECT NVL(A.tablespace_name, D.name) tablespace,
           D.mb_total,
           SUM(NVL(A.used_blocks,0) * D.block_size) / 1024 / 1024 mb_used,
           D.mb_total - SUM(NVL(A.used_blocks,0) * D.block_size) / 1024 / 1024 mb_free
    FROM   v$sort_segment A,
           (SELECT B.name, C.block_size, SUM(C.bytes) / 1024 / 1024 mb_total
            FROM   v$tablespace B, v$tempfile C
            WHERE  B.ts# = C.ts#
            GROUP BY B.name, C.block_size) D
    WHERE  A.tablespace_name (+) = D.name
    GROUP BY NVL(A.tablespace_name, D.name), D.mb_total;
    TABLESPACE MB_TOTAL MB_USED MB_FREE
    TEMP 217087 839 216248
    is this a bug??
    thanks.

    Hi,
    It may be the case that the operation you are running needs more temp space than can actually be made available.
    Instead of looking at free and used temp space, look at the operation which needs that much temp space. Maybe there is some way to minimize that.
    Please share what you are actually doing when this problem occurs.
    Regards,
    Dipali

  • Temp Space Issue

    Hi there folks.
    I've been using this forum as a reference point for the last couple of months but this is the first time I've had to post anything because I happen to have a pickle of an issue. In fact, it's that beefy, even my company's development team is stumped... Yeah.
    So, a quick bit of background. The current client I'm working for is a Government agency related to Health & Public Services. The database we use works as a go-between for Siebel CAM and Business Objects. As the two don't naturally speak to each other, occasionally we get inevitable data inconsistencies. We had an Informatica CDC issue a little while ago which left a lot of stale data, primarily relating to tasks which should have been closed or naturally expired. To clear up the backlog and return consistency to the DB, we've been working on updating a workflow which will refresh the tasks across the entire DB and bring everything in line.
    Now, when the DB was designed, Task_Refresh was created and worked when tested as the DB was getting loaded with data. The problem is, when it was tested initially, there were only a few hundred thousand records to filter through. Now that the DB is in full swing, there are 22 million records and we're encountering issues. Essentially, when we run the workflow in Informatica, it gets as far as the Source Qualifier and fails, having chewed through the available 16GB of temp space in the space of an hour.
    I've been working for weeks trying to tune the SQL code for the source qualifier and, along with the development team, I'm now well and truly stuck. Here is the query:
    SELECT
        a.1,
        a.2,
        a.3,
        a.4,
        a.5,
        a.6,
        a.7,
        a.8,
        a.9,
        a.10,
        a.11,
        a.12,
        a.13,
        a.14,
        a.15,
        a.16,
        a.17,
        a.18,
        a.19,
        a.20,
        a.21,
        a.22,
        a.23,
        a.24,
        a.25,
        a.26,
        a.27,
        c.1,
        c.2,
        c.3,
        c.4,
        c.5,
        b.1,
        f.1,
        e.1,
        f.2,
        f.3,
        f.4,
        f.5,
        f.6,
        f.7,
        f.8,
        f.9,
        f.10,
        f.11,
        f.12
        FROM PARENT_TABLE a,  CHILD_TABLE2 c, CHILD_TABLE1 b, PARENT_TABLE2 e, PARENT_TABLE3 f
        WHERE   a.1 = b.PAR_ROW_ID(+)
        AND     a.1 = c.PAR_ROW_ID(+)
        AND     a.1 = e.PAR_ROW_ID(+)
        AND     a.TARGET_PER_ID = f.ROW_ID(+)
        AND (
        (a.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND a.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (c.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND c.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (b.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND b.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        or
        (e.LAST_UPD < TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
        AND e.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
        );
    We are running this on 10g 10.1.0.5.0 - 64bit.
    So, obviously I've replaced the table names and most of the column names, but a.1 is the primary key. What we have is the primary key compared to the PAR_ROW_ID column in each of the child tables and one of the other main tables, looking for matching IDs down the whole of the PARENT_TABLE. In this database, tasks can be entered in any of the five tables mentioned for various different reasons and functions. The dates in the last section are just arbitrary dates we have been working with, when the query works and is running into production, the dates will change, a week at a time based on a parameter file in Informatica.
    In troubleshooting, we found that if we remove the left outer joins in the WHERE clause, the query will return data, but not the right amount of data. Basically, PARENT_TABLE is just too massive for the temp space we have available. We can't increase the temp space and we can't get rid of the joins because that changes the functionality significantly. We have also tried restructuring the query by filtering on the dates first, but that doesn't help, though I suspect the order of the WHERE clause is being ignored by Oracle due to its own performance-related decisions. I have tried restructuring the query into 5 smaller queries and using Unions to tie it all together at the end, but still nothing.
    We have even tried setting the date and time for an hour's worth of entries and that was still too big.
    I'm stuck in a catch-22 here because we need to get this working, but the client is adamant that we should not be changing the functionality. Is there any possible solution any of you can see for this?
    Thank you in advance for any help you can offer on this.
    (also, we are providing support for the applications the database uses [CAM, BO], so we don't actually run this db per se, the client does. We've asked them to increase the temp space and they won't.
    The error when we run the workflow is: ORA-01652: unable to extend temp segment by 128 in tablespace BATCH_TEMP)

    938041 wrote:
    I have tried restructuring the query into 5 smaller queries and using Unions to tie it all together at the end, but still nothing.
    Technically you should be able to rewrite this query as a UNION ALL of 4 separate parts, provided you remember to include a predicate in the 2nd, 3rd and 4th query blocks that eliminates rows already reported by the earlier query blocks.
    Each part will be driven by one of the four range-based predicates you have - and for the three parts that drive off child tables the join to the parent should NOT be an outer join.
    Effectively you are trying to transform the query manually using the CONCATENATION operator. It's possible (but I suspect very unlikely) that the optimizer will do this for you if you add the hint /*+ use_concat */ to the query you currently have.
    Regards
    Jonathan Lewis
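    To make the shape concrete, here is a sketch of the first two of the four UNION ALL branches, keeping the anonymized names from the post (so this is structural pseudocode rather than runnable SQL, and only two of the five tables are shown). LNNVL is one way to express the "not already reported" predicate without losing rows where the parent date is null:
    SELECT a.1, c.1 /* ...rest of the select list... */
    FROM   PARENT_TABLE a, CHILD_TABLE2 c
    WHERE  a.1 = c.PAR_ROW_ID(+)
    AND    a.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss')
    AND    a.LAST_UPD <  TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
    UNION ALL
    SELECT a.1, c.1 /* ...rest of the select list... */
    FROM   PARENT_TABLE a, CHILD_TABLE2 c
    WHERE  a.1 = c.PAR_ROW_ID   -- inner join: this branch drives off the child table
    AND    c.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss')
    AND    c.LAST_UPD <  TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')
    AND    (   LNNVL(a.LAST_UPD >= TO_DATE('9/23/2011 00:00:01','mm/dd/yyyy hh24:mi:ss'))
            OR LNNVL(a.LAST_UPD <  TO_DATE('9/30/2011 00:00:00','mm/dd/yyyy hh24:mi:ss')))
    -- i.e. keep the row unless branch 1's parent-date range already reported it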

  • How much TEMP space needed for datapump import?

    How does one accurately predict how much TEMP tablespace is needed prior to starting a Data Pump import (via impdp)? I need a way to predetermine this BEFORE starting the import. It is my understanding that in Data Pump imports, the temp tablespace is primarily used for the building of indexes, among other operations.

    Yes, I could use autoextend, but that just shifts the problem from checking the logical tablespace size to checking that the physical storage has enough room to extend.
    I was really hoping for a formula to calculate the amount of TEMP space it would take up. For example, RichardHarrison's post above suggests setting the TEMP tablespace to be twice as large as the largest index, but I wasn't sure about the accuracy of that statement. I was hoping someone had encountered this kind of scenario before and found an accurate way to predict how much space is really needed, or a good estimate to go by.
    I will try out the idea of setting the TEMP space size to be twice the size of the largest index and see how that goes, as it doesn't seem there is a practical way of accurately determining how much space it really needs.
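    For reference, finding the largest index to base that estimate on can be done with something like this (run as a DBA; filter on OWNER if you only care about the schema being imported):
    SELECT owner, segment_name, ROUND(bytes/1024/1024/1024, 2) AS size_gb
    FROM   dba_segments
    WHERE  segment_type LIKE 'INDEX%'
    ORDER  BY bytes DESC;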
    Thanks everyone.
    Ben.

  • Unable to Extend TEMP space for CLOB

    Hi,
    I have a data extract process and I am writing the data to a CLOB variable. I'm getting the error "Unable to Extend TEMP space" for the larger extracts. I was wondering if writing the data to a CLOB column on a table and then regularly committing would be better than using a CLOB variable assuming time taken is not an issue.
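    For what it's worth, a PL/SQL CLOB variable is a temporary LOB and holds its temp space until it is explicitly freed or the session ends, so freeing it as soon as each extract is written out can help. A minimal sketch (the variable name is invented):
    DECLARE
      l_doc CLOB;
    BEGIN
      DBMS_LOB.CREATETEMPORARY(l_doc, cache => TRUE, dur => DBMS_LOB.CALL);
      -- ... build the extract in l_doc and write it out ...
      DBMS_LOB.FREETEMPORARY(l_doc);  -- releases the temp space immediately
    END;
    /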

    You do not need to add more temp files. This is not a problem of your temp tablespace; it is a problem of temporary segments in your permanent tablespace. You need to add another datafile to the EDWSTGDATA00 tablespace. This happens when you are creating tables and indexes: Oracle first does the processing in temporary segments (not the temp tablespace) and at the end converts those temporary segments into permanent segments.
    Also, post the result of below query
    select file_name, sum(bytes/1024/1024) in_mb, autoextensible, sum(maxbytes/1024/1024) max_in_mb
    from dba_data_files
    where tablespace_name = 'STAGING_TEST'
    group by file_name, autoextensible
    order by file_name;

  • When is temp space freed

    I have a problem between 2 executions of my application. The sequence is:
    Stop application.
    Start the 2nd run.
    One second into the 2nd run, I get a TEMP space full error. When is temp space freed?
    Thanks

    Yes, that's essentially correct. Note that there are different uses of space within Temp. Many uses are transaction, cursor or connection specific (locks, sort space, materialised results etc.) and are freed when the associated transaction/cursor/connection is closed. Some uses (prepared command cache) are not connection specific and will not be freed until the datastore is unloaded from memory (though there are limits on how large the prepared command cache can grow so it should never result in Temp space exhaustion).
    If you suspect that Temp space is not being freed when it should be then you should log an SR so that support can investigate in case there is some obscure bug at work.
    Regards,
    Chris

  • Investigate extend PGA vs TEMP space

    I have the following request from one of my BI consultants. How would I go about troubleshooting this / finding a solution?
    "Hi Dirk
    <user x> has to perform large data queries on <database y> and we keep running out of TEMP space.
    Can you please investigate which will be better:
    - To extend the PGA, or
    - To extend the TEMP table space."

    Dirk, before you go changing your system configuration in the hope that the changes will fix your problem, you need to understand why your application is consuming all available TEMP space. There is a very good chance that even after switching to or increasing PGA_AGGREGATE_TARGET the same problem will exist, unless you understand why the space is being used.
    It is possible that the total concurrent number of sorts/hash joins/lob operations just need more temp than is allocated.
    It is also possible that the query plan for a query executed concurrently is using a sort/merge or hash join that consumes a large amount of space, and that if the query were changed to use a Nested Loops plan, or an index were added to support the join, the requirement for temp space to support the sort/hash operations would go away.
    HTH -- Mark D Powell --
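    To see which operations are actually consuming the space while the queries run, v$sql_workarea_active (used elsewhere in this thread) is useful; a sketch, with the column list from memory, so verify it against your version:
    SELECT sql_id, operation_type, work_area_size, actual_mem_used, tempseg_size
    FROM   v$sql_workarea_active
    ORDER  BY tempseg_size DESC NULLS LAST;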

  • Hacking around temp space SS enqueues

    Metalink Notes 465840 (Configuring Temporary Tablespaces for RAC Databases for Optimal Performance) and 459036 (Temporary tablespace SS Contention In RAC) refer.
    I have the following bugs occurring with a vengeance on 10.1.0.3:
    Bug 4882834 - EXCESSIVE SS AND TS CONTENTION ON NEW 10G CLUSTERS
    Bug 6331283 - LONG WAITS ON 'DFS LOCK HANDLE'
    I'm trying to find a workaround for many background processes running 24x7 processing a lot of data. These run in a single schema (VPD applies and this schema is the owner, exempt from FGAC).
    What would be nice is to have something similar to SET TRANSACTION to select a specific dedicated undo space.. but for temp space. If I can get each of my major job queues (home rolled FIFO and LIFO processing queues using DBMS_JOB) to use a different temp space, SS contention should be hopefully reduced.. or not?
    Is anyone else sitting with this problem, or did in the past? And if in the past, exactly which Oracle patchset resolved the problem?
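    One angle that might be worth testing (a sketch only; the tempfile details are invented, and whether it actually dilutes the SS enqueue on 10.1 would need verification): a temporary tablespace group lets sessions of the same user be spread across several temp tablespaces, even though there is no SET TRANSACTION-style way to pick a temp space per job:
    CREATE TEMPORARY TABLESPACE temp_q1
      TEMPFILE '/u02/oradata/db/temp_q1_01.dbf' SIZE 4G
      TABLESPACE GROUP temp_grp;
    CREATE TEMPORARY TABLESPACE temp_q2
      TEMPFILE '/u02/oradata/db/temp_q2_01.dbf' SIZE 4G
      TABLESPACE GROUP temp_grp;
    ALTER USER batch_owner TEMPORARY TABLESPACE temp_grp;  -- batch_owner is hypothetical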

    > How big is the per-transaction sort size and PGA size?
    Fairly large. A typical transaction can select up to 50+ million rows and aggregate them into a summarised table. There are a couple of these running in parallel. Given the nature of the volumes we process, there's very little flexibility in this regard (besides growing data volumes and the increasing complexity of processing)...
    Though, when a process does go through without butting its head against an Oracle bug (fortunately more often than not), performance is pretty impressive.

  • Temp space during import

    I am doing a schema refresh... running out of temp space on development... it has 500 MB of space... no time to add space because the storage team has no space; they have requested more... the import is running for now, please suggest options.
    Oracle--10.2.0.3
    Using IMP utility
    Loading around 8 Gb of data...

    abhishek gera wrote:
    By default, import commits at the end of each table, therefore it is very
    likely your rollback segments will run out of space.
    To workaround this problem, without adding space to the rollback segment
    tablespace, you can specify 'COMMIT=Y' on import. This overrides the
    default and commits at the end of each buffer (also an import parameter),
    rather than at the end of the table.

    No, it's not at all likely, I think. The OP is running out of temp space, not undo. I don't think this is relevant.

  • HD space problem - startup disk shouldn't be full...

    Hello everyone.
    I'm having a really weird problem with my hard drive. All of a sudden, a message popped up saying that my Startup Disk was almost full, and the system profiler read it as having "zero k". This is extremely odd because I had about 19 Gb free just a moment before, and after checking folders, apps and documents there really shouldn't be a space problem. The first message came up right after importing a CD, but that shouldn't have caused this.
    I tried deleting some items, but space increased only by that tiny amount. After a restart, the indicated hard drive space randomly jumped up to 1 Gb. When I woke it back up right now it was down to 200 Mb, and after another restart it's just come up to 650Mb. It just seems to fluctuate...
    If anyone knows what the issue might be or what I could do about it, please help! Thanks,
    silvia

    Hi. Thanks for the tip, it was really helpful. Everything was as expected except for some log files in my Library. The sizes are mostly up to 100 K, except for one huge 15.9 Gb file called console.log.2, in a strange folder called 501.
    It's located in
    Library > Logs > Console > 501 >
    I'm really not sure what these files are and what they do, or what these folders are. Can I trash them or is that a problem?
    Also, I've noticed that my hard drive space is actually continuously decreasing little by little... Do you think this is related?
    Thanks,
    silvia

  • Consuming too much temp space

    Dear All
    A technical colleague of mine is executing a procedure which selects data from two databases and inserts it into a third one. The procedure is not new and he has been executing it for a year now. However, in the past two weeks the procedure either consumes too much time (3-4 hours as against 10-12 mins) or it fails because it uses more temp space than is available on the database on which the insertions are made. I added about 10GB more to the temporary tablespace but it still does not suffice for the procedure to execute successfully. The SGA for the database into which the insertion is done is 2560M and the PGA for the same is 2G.
    Please suggest what is to be done as it is an extremely crucial procedure.
    Thanks in advance.
    Regards
    rdxdba

    If you have the Diagnostic Pack licence, try to use AWR to compare instance activity across executions of this procedure. If not, try to install Statspack.
    I also recommend using SQL trace to get trace data for a "good" execution and a "bad" execution, and comparing the TKPROF output for the related trace files.
    If you are using Oracle 10 or 11, try to use DBMS_MONITOR as described in http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php.
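    A minimal sketch of tracing the session running the procedure with DBMS_MONITOR (the sid/serial# values are placeholders; look them up in v$session first):
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);
    -- let the procedure run, then:
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);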
