Performance problem querying multiple CLOBs

We are running Oracle 8.1.6 Standard Edition on a Sun E420r (2 x 450 MHz processors, 2 GB memory) under Solaris 7. I have created Oracle Text indexes on several columns of a large table, including VARCHAR2 and CLOB columns. I am simulating search-engine queries where the user chooses to match the exact phrase, all of the words (AND), or any of the words (OR). I am hitting performance problems when querying multiple CLOBs with OR, e.g.
select count(*) from articles
where contains (abstract , 'matter OR dark OR detection') > 0
or contains (subject , 'matter OR dark OR detection') > 0
Columns abstract and subject are CLOBs. However, this query works fine with AND:
select count(*) from articles
where contains (abstract , 'matter AND dark AND detection') > 0
or contains (subject , 'matter AND dark AND detection') > 0
The explain plan gives a cost of 2157 for the OR version and 14.3 for the AND version.
I realise that multiple CONTAINS clauses are not a good thing, but the AND query returns in under a second while the OR query takes minutes! The indexes are created like this:
create index article_abstract_search on articles(abstract)
INDEXTYPE IS ctxsys.context parameters ('STORAGE mystore memory 52428800');
The data and index tables are on separate tablespaces.
Can anyone suggest what is going on here, and any alternatives?
Many thanks,
Geoff Robinson

Thanks for your reply, Omar.
I have already read the performance FAQ, and it points out that single CONTAINS clauses are preferred, but I need to check 2 columns. Also, I don't just want a count(*): I will need to select field values. As you can see from my 2 queries, the first uses OR across multiple CLOB columns and the second uses AND, and it is the OR version that takes that much longer. Even with only a single CONTAINS, the cost estimate is 5 times higher for OR than for AND.
Add an extra CONTAINS and it becomes 300 times more costly!
The root table has 3 million rows, and the 2 token tables have 6.5 and 3 million rows respectively. All tables have been fully analyzed.
Regards
Geoff
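
One rewrite worth testing is to replace the OR of two CONTAINS clauses with a UNION of two single-CONTAINS queries, so each CONTAINS can drive its own Text index independently; a minimal sketch against the articles table above (untested on 8.1.6, adapt as needed):

select count(*) from (
  select rowid from articles
   where contains (abstract, 'matter OR dark OR detection') > 0
  union
  select rowid from articles
   where contains (subject, 'matter OR dark OR detection') > 0
);

For the real queries that need field values, join the rowids from the inline view back to articles rather than counting them.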

Similar Messages

  • Performance problems - query on the copy of a table is faster than the orig

    Hi all,
I have SQL (select) performance problems with a specific table (cost 7800) (Oracle 10.2.0.4.0, Linux).
So I copied the table 1:1 (same structure, same data, same indexes, etc.) in the same environment (database, user, tablespace, etc.) and gathered the table stats with the same parameters. The same query on this copied table is faster, and the cost is just 3600.
Why on earth is the query on this new table faster??
    I appreciate any idea.
    Thank you!
    Fibo

Could you please share more information or a link elaborating the significance of the SHRINK clause.
If it is so useful and can reclaim unused space, why hasn't my database architect suggested it :). Any disadvantages?
It moves the high-water mark back to the lowest position, so full table scans work faster in some cases. It can also reduce the number of migrated rows and the number of used blocks.
The disadvantage is that it involves row movement, so operations that depend on rowids are not safe while the shrink is running.
I think it is even better to stop all operations on the table when shrinking and disable all triggers. Another problem is that this process can take a long time.
Gurus, please correct me if I'm mistaken.
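A minimal sketch of the shrink being discussed (table name hypothetical; 10g onwards, and the segment must be in an ASSM tablespace):
      alter table big_table enable row movement;
      alter table big_table shrink space compact; -- compact rows, high-water mark untouched
      alter table big_table shrink space;         -- complete the shrink and lower the high-water mark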

  • cm:select performance problem with multiple like clauses in the query

    I have a query like:
      listItem like '*abc.xml*' && serviceId like '*xyz.xml*'
    Can we have the two like clauses mentioned above in cm:select? The query executes successfully but takes too much time to process.
    Can we simplify the above query, or is there another solution? Please help me with this issue.
    Thanks & Regards,
    Murthy Nalluri

    A few notes:
    1. You seem to have either a VPD policy active or you're using views that add some more predicates to the query, according to the plan posted (the access on the PK_OPERATOR_GROUP index). Could this make any difference?
    2. The estimates of the optimizer are very accurate (actually astonishing) compared to the tkprof output, so the optimizer seems to have a very good picture of the cardinalities, and therefore the plan should be reasonable.
    3. Did you gather index statistics as well (using COMPUTE STATISTICS when creating the index, or the "cascade=>true" option when gathering statistics)? I assume you're on 9i, not 10g, judging by the plan and tkprof output.
    4. Looking at the amount of data that needs to be processed, it is unlikely that this query takes only 3 seconds; the 20 seconds seem to be OK.
    If you are sure that the query took only 3 seconds in the past for a similar amount of underlying data, it would be very useful if you, by any chance, still have an execution plan of that "3 seconds" execution at hand.
    One thing I could imagine is that, due to the monthly data growth you've mentioned, one or more of the tables have exceeded the "2% of the buffer cache" threshold and are therefore no longer treated as "small tables" in the buffer cache. This could explain why you now have more physical reads than in the past, and therefore the query takes longer to execute than before.
    I think that this query could only be executed in 3 seconds if it is somewhere using a predicate that is more selective and could benefit from an indexed access path.
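    For point 3, a minimal sketch of gathering table and index statistics in one pass (owner and table name hypothetical):
      begin
        dbms_stats.gather_table_stats(
          ownname => user,
          tabname => 'MY_TABLE',
          cascade => true -- gather statistics on the table's indexes as well
        );
      end;
      /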
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Performance problem: Query explain plan changes in pl/sql vs. literal args

    I have a complex query with 5+ table joins on large (million+ row) tables. In its most simplified form, it's essentially:
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between 123 and 456;
    Its performance was excellent with literal arguments (1 sec per execution).
    But when I used PL/SQL bind variables instead of the literals 123 and 456, the explain plan changed drastically and the query ran for 10+ minutes.
    Ex:
    CREATE PROCEDURE runQuery(param1 INTEGER, param2 INTEGER) IS
    CURSOR LT_CURSOR IS
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between param1 AND param2;
    BEGIN
    FOR aRecord IN LT_CURSOR
    LOOP
    (print timestamp...)
    END LOOP;
    END runQuery;
    Rewriting the query 5 different ways was unfruitful. DB hints were also unfruitful in this particular case. LargeTable.pk_id is an indexed field, as are all the other join fields.
    Solution:
    Lacking other options, I wrote a literal query that concatenates the variable args, and opened a cursor for that literal query.
    Upside: It changed the explain plan to the only really fast option and performed at 1 second instead of 10mins.
    Downside: Query not cached for future use. Perfectly fine for this query's purpose.
    Other suggestions are welcome.
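    A minimal sketch of the literal-SQL workaround described above, using a ref cursor (table and column names as in the pseudocode, otherwise hypothetical):
      create or replace procedure runQuery(param1 integer, param2 integer) is
        c_results sys_refcursor;
      begin
        -- concatenating the numeric parameters as literals forces a fresh hard
        -- parse, so the optimizer sees the actual range instead of peeked binds
        open c_results for
          'select * from largeTable large ' ||
          'join anotherLargeTable anothr on (anothr.id_2 = large.pk_id) ' ||
          'where large.pk_id between ' || param1 || ' and ' || param2;
        -- fetch from c_results here ...
        close c_results;
      end runQuery;
      /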

    AmandaSoosai wrote: (full post quoted above)
    Best wild guess based on what you've posted is a bind variable mismatch (your column is declared as a NUMBER data type and your bind variable is declared as a VARCHAR, for example). Unless you have histograms on the columns in question... which, if you're using bind variables, is usually a really bad idea.
    A basic illustration of my guess
    http://blogs.oracle.com/optimizer/entry/how_do_i_get_sql_executed_from_an_application_to_uses_the_same_execution_plan_i_get_from_sqlplus
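    A minimal sketch of that kind of mismatch (hypothetical table; here the column is VARCHAR2 while the bind is a NUMBER):
      -- pk_id is varchar2 and indexed; :id is bound as a NUMBER
      select * from largeTable where pk_id = :id;
      -- Oracle evaluates this as TO_NUMBER(pk_id) = :id, so the index on
      -- pk_id cannot be used and the plan changes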

  • Performance Problems on Faces Navigation Diagram and Hyperthreading query

    Am I the only one having performance problems when dealing with faces-config diagrams of about 35 JSPs displayed on the sheet, using JDeveloper 10.1.3? It takes my workstation about a full minute and a half to update the name of an arrow. The most stressed component during this task seems to be the CPU.
    And just another question: has anybody investigated how the performance of JDeveloper is affected by enabling or disabling hyperthreading? In my case, CPU usage only manages to reach 50%, so I'm tempted to switch HT off to let JDev use all the CPU power, if that would help.

    Hello Diego,
    you mentioned that you compared a BEx query with the Web Intelligence report. Could you provide more details here?
    - What are the elements in the rows, columns and free characteristics in the BEx query?
    - Was the query executed as designed in the BEx Query Designer with BEx web reporting?
    - What are the elements in the Web Intelligence query panel?
    thanks
    Ingo

  • Performance Problem in Select query

    Hi,
    I have a performance problem with the following SELECT query:
    SELECT VBELN POSNR LFIMG VRKME VGBEL VGPOS
      FROM LIPS INTO CORRESPONDING FIELDS OF TABLE GT_LIPS
       FOR ALL ENTRIES IN GT_EKPO1
       WHERE VGBEL = GT_EKPO1-EBELN
         AND VGPOS = GT_EKPO1-EBELP.
    As per the trace, I have analysed that it performs a complete table scan of the LIPS table, which contains almost 3 lakh (300,000) records.
    Kindly suggest what we can do to optimize this query.
    Regards,
    Harsh

    types: begin of line,
             vbeln type lips-vbeln,
             posnr type lips-posnr,
             lfimg type lips-lfimg,
             vrkme type lips-vrkme,
             vgbel type lips-vgbel,
             vgpos type lips-vgpos,
           end of line.
    data: itab type standard table of line,
          wa type line.
    " The IS NOT INITIAL check is essential: if GT_EKPO1 were empty,
    " FOR ALL ENTRIES would drop the WHERE clause and read the whole table.
    IF GT_EKPO1[] IS NOT INITIAL.
      SELECT VBELN POSNR LFIMG VRKME VGBEL VGPOS
        FROM LIPS INTO TABLE ITAB
        FOR ALL ENTRIES IN GT_EKPO1
        WHERE VGBEL = GT_EKPO1-EBELN
          AND VGPOS = GT_EKPO1-EBELP.
    ENDIF.

  • Query performance problem

    I am having performance problems executing a query.
    System:
    Windows 2003 EE
    Oracle 9i version 9.2.0.6
    DETAIL table with 120Million rows partitioned in 19 partitions by SD_DATEKEY field
    We are trying to retrieve the info for an account (SD_KEY) ordered by date (SD_DATEKEY). This account has about 7000 rows, and it takes about 1 minute to return the first 100 rows ordered by SD_DATEKEY. Around 5 seconds would be an acceptable time.
    There is a partitioned index by SD_KEY and SD_DATEKEY.
    This is the query:
    SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' AND ROWNUM < 101 ORDER BY SD_DATEKEY
    The problem is that all 7000 rows are read before being ordered. I think it should not be necessary for the optimizer to access all the partitions to read all the rows, because only the first 100 are needed and the partitions are bounded by SD_DATEKEY.
    Any idea how to accelerate this query? I know that including a WHERE clause on SD_DATEKEY would increase the performance, but I need the first 100 rows and I don't know which date to use to limit the query.
    Does anybody know if this is a normal response time for this query, or should it be improved?
    Thanks in advance to all for your help.

    Thanks to all for the replies.
    - We have computed statistics, and there is no change in the response time.
    - We are discussing restricting the query to some partitions, but for the moment this is not the best solution because we don't know where the latest 100 rows are.
    - The query from Maurice had more or less the same response time:
    select * from
    (SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' ORDER BY SD_DATEKEY)
    where ROWNUM < 101
    - We have a local index on SD_DATEKEY. Do we need another one on SD_KEY? Should it be created as a BITMAP index?
    I can't test your suggestions immediately because this is a problem at one of our customers. In our test system (which has only 10 million records) the indexes accelerate the query, but this is not the case in the customer's system. I think the problem is the total number of records in the table.
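    One option to test (a sketch only, not verified on this system): a composite index on (SD_KEY, SD_DATEKEY) combined with the inline-view form of the query may let the optimizer read the rows already in order and stop after 100 (COUNT STOPKEY) instead of fetching and sorting all 7000. A normal B-tree index is the right choice here; BITMAP indexes are meant for low-cardinality columns.
      create index detail_key_date_ix on detail (sd_key, sd_datekey);
      select *
        from (select /*+ first_rows(100) */ *
                from detail
               where sd_key = 'xxxxxxxx'
               order by sd_datekey)
       where rownum < 101;
    Since the table is partitioned by SD_DATEKEY, the index may need to be global for the ordered scan to avoid a sort across partitions.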

  • Query Performance Problem!! Oracle 25 minutes || SQLServer 3 minutes

    Hi all,
    I'm having a performance problem with the query below. It runs in 3 minutes on SQL Server and 25 minutes in Oracle.
    SELECT
    CASE WHEN (GROUPING(a.estado) = 1) THEN 'TOTAL'
    ELSE ISNULL(a.estado, 'UNKNOWN')
    END AS estado,
    CASE WHEN (GROUPING(m.id_plano) = 1) THEN 'GERAL'
    ELSE ISNULL(m.id_plano, 'UNKNOWN')
    END AS id_plano,
    sum(m.valor_2s_parcelas) valor_2s_parcelas,
    convert(decimal(15,2),convert(int,sum(convert(int,(m.valor_2s_parcelas+.0000000001)*100)*
    isnull(e.percentual,0.0))/100.0+.0000000001))/100 BB_Educar
    FROM
    movimento_dco m ,
    evento_plano e,
    agencia_tb a
    WHERE
    m.id_plano = e.id_plano
    AND m.agencia *= a.prefixo
    --AND  m.id_plano LIKE     'pm60%'
    AND m.data_pagamento >= '20070501'
    AND m.data_pagamento <= '20070531'
    AND m.codigo_retorno = '00'
    AND m.id_parcela > 1
    AND m.valor_2s_parcelas > 0.
    AND e.id_evento = 'BB-Educar'
    AND a.banco_id = '001'
    AND a.ordem = '00'
    group by m.id_plano, a.estado WITH ROLLUP
    order by a.estado, m.id_plano DESC
    Can anyone help me with this query?

    What version of Oracle, and what version of SQL Server? Are the tables exactly the same size? Are they both indexed the same? Are you running on the same or similar hardware? Are the Oracle parameters similar, like SGA size and PGA_AGGREGATE_TARGET? Did you gather statistics in Oracle?
    Did you compare execution plans in SQL Server vs. Oracle to see if SQL Server's execution plan is superior to the one Oracle is trying to use (most likely stale statistics)?
    There are many variables, and we need more information than just the query : ).
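    As a side note, the query as posted uses SQL Server/Sybase constructs that Oracle either rejects or interprets differently; a partial sketch of the Oracle equivalents (not the full query):
      -- m.agencia *= a.prefixo     ->   LEFT OUTER JOIN
      -- isnull(x, y)               ->   nvl(x, y)
      -- convert(decimal(15,2), x)  ->   cast(x as number(15,2))
      -- GROUP BY ... WITH ROLLUP   ->   GROUP BY ROLLUP (...)
      select ...
        from movimento_dco m
        join evento_plano e on m.id_plano = e.id_plano
        left join agencia_tb a on m.agencia = a.prefixo
       where ...
       group by rollup (m.id_plano, a.estado);
    If the Oracle version was already run with these constructs translated, comparing the actual execution plans (as suggested above) is still the first step.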

  • Performance problem with query on bkpf table

    hi, good morning all,
    I have a performance problem with the below query on the BKPF table.
    SELECT bukrs
               belnr
               gjahr
          FROM bkpf
          INTO TABLE ist_bkpf_temp 
         WHERE budat IN s_budat.
    Is there any possibility to improve the performance by using an index?
    Please help me.
    thanks in advance,
    regards,
    srinivas

    hi,
    if you can add bukrs as an input field, or if you have bukrs in any other internal table you can use to filter the data, you can use, for example:
    SELECT bukrs
           belnr
           gjahr
      FROM bkpf
      INTO TABLE ist_bkpf_temp
      WHERE budat IN s_budat
        AND bukrs IN s_bukrs.
    or
    SELECT bukrs
           belnr
           gjahr
      FROM bkpf
      INTO TABLE ist_bkpf_temp
      FOR ALL ENTRIES IN itab
      WHERE budat IN s_budat
        AND bukrs = itab-bukrs.
    The primary key of BKPF starts with BUKRS, so restricting on BUKRS lets the database use the primary index instead of scanning. Just see if it is possible to do one of the above; it has to be verified against your requirement.

  • Query performance problem - events 2505-read cache and 2510-write cache

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11,000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s) and OLAP user exit (250 s) are the other events with noticeable times. All other events are very quick.
    The query settings (RSRT) are:
    - persistent cache across each app server -> cluster table,
    - update cache in delta process is checked -> group on InfoProvider type,
    - use cache despite virtual characteristics/key figures is checked (one InfoCube has 1 virtual key figure, which should have a static result for a day).
    => Do you know how I can get more details than what's in 0TCT_C02 to break down the read and write cache event times, or do you have any recommendation?
    I have checked, and no data loads were in progress on the InfoProviders and no master data loads (change run). Overall system performance was acceptable for other queries.
    Thanks

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • [svn] 3363: Fix performance problem when changing multiple DisplayObject-dependent properties

    Revision: 3363
    Author: [email protected]
    Date: 2008-09-25 11:58:56 -0700 (Thu, 25 Sep 2008)
    Log Message:
    Fix performance problem when changing multiple DisplayObject-dependent properties. The call to assignDisplayObjects() is now batched up into the next commitProperties() call.
    Bugs: SDK-17033
    Reviewer: Deepa
    Ticket Links:
    http://bugs.adobe.com/jira/browse/SDK-17033
    Modified Paths:
    flex/sdk/trunk/frameworks/projects/flex4/src/flex/core/Group.as

    hello sir,
    I need your help.
    I installed a fresh copy of Windows 7 from CD and then installed all the software.
    Now, one day later, the customer complained that the CD-ROM drive does not read any CD. I checked as well: when I insert a CD it is not read, and when I double-click the CD-ROM icon it ejects it. What should I do? Please reply to my email address.
    [text removed for privacy]
    VIMAL

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for anything more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for a batch of 10 records (terrible), or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
      ssn varchar2(20);
      xmlrec xmltype;
      i integer;
    BEGIN
      xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
      for i IN 1..100000 loop
        insert into records(ssn, xmlrec) values (i, xmlrec);
      end loop;
      commit;
    END;
    /
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
      <Id>123456789</Id>
      {for $e in $r/Element
       return
       <Element>
         <Subelement1>
           {$e/Subelement1/Code}
           <Description>
             {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
           </Description>
         </Subelement1>
         <Subelement2>
           {$e/Subelement2/Code}
           <Description>
             {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
           </Description>
         </Subelement2>
         <Subelement3>
           {$e/Subelement3/Code}
           <Description>
             {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
           </Description>
         </Subelement3>
       </Element>}
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can by constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too? That will make the query much more complicated, but right now we’re more concerned about performance than complexity.
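    One further idea (a sketch only, untested against the schema above): exposing the codes relationally with XMLTABLE and joining in plain SQL gives the optimizer a chance to use a hash join instead of the nested-loop lookups inside the XMLQuery:
      select x.code1, c.description
        from records r,
             xmltable('/Root/Element' passing r.xmlrec
                      columns code1 varchar2(4) path 'Subelement1/Code') x,
             codes c
       where r.ssn = '10000'
         and c.code = x.code1;
    The trade-off is that the XML then has to be reassembled (e.g. with XMLELEMENT/XMLAGG) after the lookups.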

  • OBIEE 11g performance problem

    Hi,
    I am facing a performance problem in OBIEE 11g. When I run the query taken from nqquery.log directly against the database, it returns the result within a few seconds. But in OBIEE Answers the query runs forever, never showing any data.
    Attaching the query below.
    Please help to solve.
    Thanks
    Titas
    [2012-10-16T18:07:34.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-23] [] [ecid: 3a39339b45a46ab4:-70b1919f:13a1f282668:-8000-00000000000769b2] [tid: 44475940] [requestid: 26e1001e] [sessionid: 26e10000] [username: weblogic] -------------------- General Query Info: [[
    Repository: Star, Subject Area: BM_BG Pascua Lama, Presentation: BG PL Project Analysis
    [2012-10-16T18:07:34.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-18] [] [ecid: 3a39339b45a46ab4:-70b1919f:13a1f282668:-8000-00000000000769b2] [tid: 44475940] [requestid: 26e1001e] [sessionid: 26e10000] [username: weblogic] -------------------- Sending query to database named XXBG Pascua Lama (id: <<26911>>), connection pool named Connection Pool, logical request hash e3feca59, physical request hash 5ab00db6: [[
    WITH
    SAWITH0 AS (select sum(T6051.COST_AMT_PROJ_RATE) as c1,
    sum(T6051.COST_AMOUNT) as c2,
    T6051.AFE_NUMBER as c3,
    T6051.BUDGET_OWNER as c4,
    T6051.COMMENTS as c5,
    T6051.COMMODITY as c6,
    T6051.COST_PERIOD as c7,
    T6051.COST_SOURCE as c8,
    T6051.COST_TYPE as c9,
    T6051.DATA_SEL as c10,
    T6051.FACILITY as c11,
    T6051.HISTORICAL as c12,
    T6051.OPERATING_UNIT as c13,
    T5633.project_number as c14,
    T5637.task_number as c15
    from
    (SELECT project_id proj_id
    ,segment1 project_number
    ,org_id
    FROM pa.pa_projects_all
    WHERE org_id IN (825, 865, 962, 2161)) T5633,
    (SELECT project_id proj_id
    ,task_id
    ,task_number
    ,task_name
    FROM pa.pa_tasks) T5637,
    (SELECT xxbg_pl_proj_analysis_cost_v.AFE_NUMBER,
    xxbg_pl_proj_analysis_cost_v.BUDGET_OWNER,
    xxbg_pl_proj_analysis_cost_v.COMMENTS,
    xxbg_pl_proj_analysis_cost_v.COMMODITY,
    xxbg_pl_proj_analysis_cost_v.COST_PERIOD,
    xxbg_pl_proj_analysis_cost_v.COST_SOURCE,
    xxbg_pl_proj_analysis_cost_v.COST_TYPE,
    xxbg_pl_proj_analysis_cost_v.FACILITY,
    xxbg_pl_proj_analysis_cost_v.HISTORICAL,
    xxbg_pl_proj_analysis_cost_v.PO_NUMBER_COST_CONTROL,
    xxbg_pl_proj_analysis_cost_v.PREVIOUS_PROJECT,
    xxbg_pl_proj_analysis_cost_v.PREV_AFE_NUMBER,
    xxbg_pl_proj_analysis_cost_v.PREV_COST_CONTROL_ACC_CODE,
    xxbg_pl_proj_analysis_cost_v.PREV_COST_TYPE,
    xxbg_pl_proj_analysis_cost_v.PROJECT_NUMBER,
    xxbg_pl_proj_analysis_cost_v.SUPPLIER_NAME,
    xxbg_pl_proj_analysis_cost_v.TASK_DESCRIPTION,
    xxbg_pl_proj_analysis_cost_v.TASK_NUMBER,
    xxbg_pl_proj_analysis_cost_v.TRANSACTION_NUMBER,
    xxbg_pl_proj_analysis_cost_v.WORK_PACKAGE,
    xxbg_pl_proj_analysis_cost_v.WP_OWNER,
    xxbg_pl_proj_analysis_cost_v.OPERATING_UNIT,
    xxbg_pl_proj_analysis_cost_v.DATA_SEL,
    pa_periods_all.PERIOD_NAME,
    xxbg_pl_proj_analysis_cost_v.ORG_ID,
    xxbg_pl_proj_analysis_cost_v.COST_AMT_PROJ_RATE COST_AMT_PROJ_RATE,
    xxbg_pl_proj_analysis_cost_v.COST_AMOUNT COST_AMOUNT,
    xxbg_pl_proj_analysis_cost_v.project_id,
    xxbg_pl_proj_analysis_cost_v.task_id
    FROM (select xpac.*,
    decode(xpac.historical, 'Y', 'Historical', 'N', 'Current') data_sel
    from apps.xxbg_pl_proj_analysis_cost_v xpac
    union
    select xpac.*, 'All' data_sel
    from apps.xxbg_pl_proj_analysis_cost_v xpac) xxbg_pl_proj_analysis_cost_v,
    (select period_name, org_id from apps.pa_periods_all) pa_periods_all
    WHERE ((xxbg_pl_proj_analysis_cost_v.ORG_ID = pa_periods_all.ORG_ID))
    AND (xxbg_pl_proj_analysis_cost_v.ORG_ID IN (825,865,962,2161))
    AND (APPS.XXBG_PL_PA_COMMITMENT_PKG.GET_LAST_DAY(xxbg_pl_proj_analysis_cost_v.COST_PERIOD) <=
    APPS.XXBG_PL_PA_COMMITMENT_PKG.GET_LAST_DAY(pa_periods_all.PERIOD_NAME))) T6051
    where ( T5633.proj_id = T5637.proj_id and T5633.project_number = 'SUDPALAPAS11' and T5637.proj_id = T6051.PROJECT_ID and T5637.task_id = T6051.TASK_ID and T5637.task_number = '2100.2000.01.BC0100' and T6051.DATA_SEL = 'All' and T6051.OPERATING_UNIT = 'Compañía Minera Nevada SpA' and T6051.PERIOD_NAME = 'JUL-12' )
    group by T5633.project_number, T5637.task_number, T6051.AFE_NUMBER, T6051.BUDGET_OWNER, T6051.COMMENTS, T6051.COMMODITY, T6051.COST_PERIOD, T6051.COST_SOURCE, T6051.COST_TYPE, T6051.DATA_SEL, T6051.FACILITY, T6051.HISTORICAL, T6051.OPERATING_UNIT)
    select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6, D1.c7 as c7, D1.c8 as c8, D1.c9 as c9, D1.c10 as c10, D1.c11 as c11, D1.c12 as c12, D1.c13 as c13, D1.c14 as c14, D1.c15 as c15, D1.c16 as c16 from ( select distinct 0 as c1,
    D1.c3 as c2,
    D1.c4 as c3,
    D1.c5 as c4,
    D1.c6 as c5,
    D1.c7 as c6,
    D1.c8 as c7,
    D1.c9 as c8,
    D1.c10 as c9,
    D1.c11 as c10,
    D1.c12 as c11,
    D1.c13 as c12,
    D1.c14 as c13,
    D1.c15 as c14,
    D1.c2 as c15,
    D1.c1 as c16
    from
    SAWITH0 D1
    order by c13, c14, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12 ) D1 where rownum <= 65001

    Hi Titas,
    with such problems, typically (at least for me) the cause turns out to be something simple and embarrassing, like:
    - I am connected to another database,
    - The database is right but I have made some manual adjustments without committing them,
    - I have got the wrong query from the query log,
    - I have got the right query but my request is based on multiple queries.
    Do other OBIEE reports work fine?
    Have you tried removing columns one by one to see whether it makes a difference?
    -JR

  • OC4J CMP Performance Problems

    Hello,
    Can anyone help me solve the following OC4J (9.0.4.0.0) CMP performance problems, and thus save me from having to ditch OC4J in favour of a better-performing CMP implementation?
    Problem 1:
    Multiple SELECTs per row, waste of round-trip-times.
    For reading a single entity, OC4J issues multiple SQL queries to the database.
    The first query selects all attribute columns and all foreign-key columns for one-to-one relationships. So far, that's OK, but OC4J does not yet select the foreign-key columns for the many-to-one relationships. These foreign-key columns are then fetched one by one (!!!) in subsequent SQL queries.
    So, depending on the relationships my CMP bean class has, it takes several SQL queries instead of one.
    This behaviour results in a high number of round trips.
    Question:
    How can I make OC4J read an entity with a single SQL query?
    Problem 2:
    Checking multiplicity at read-time.
    OC4J seems to always check for the correct multiplicity of CMP beans even when just reading.
    When navigating a One-to-One relationship between two beans "A" and "B" (foreign-key is stored on the "A"-side) from "A" to "B", OC4J reads bean "B" and then issues a SQL query to find the primary-key of all "A"s that refer to the retrieved "B" entity.
    If this query yields more than one entity, then the multiplicity is wrong and OC4J "fixes" it by NULLing some foreign keys in order to re-establish a one-to-one multiplicity.
    While this behaviour is a must (per the EJB spec) when modifying relationships, I do not expect the container to modify relationships at read time. Furthermore, these additional checks at read time again result in a higher number of round trips.
    Question:
    How do I turn off checking multiplicity at read-time?
    Problem 3:
    OC4J doesn't cache and re-use prepared statements even while in the same transaction.
    Looking at a tcpdump, I can see many identical SQL queries. In some places OC4J seems to re-use prepared statements for multiple executions, but I did not investigate further.
    Question:
    How do I turn on caching (and re-using) prepared statements in OC4J?
    Problem 4:
    Read-Ahead and Deep-Read-Ahead of entities.
    Question:
    Is this supported?
    Just in case you want to have a look at my datasource, here it is (slightly modified to protect the innocent):
    <data-source
    class="com.evermind.sql.DriverManagerDataSource"
    name="MyAppDS"
    location="java:jdbc/MyAppDS"
    xa-location="java:jdbc/xa/MyAppXADS"
    ejb-location="java:jdbc/MyAppDS"
    connection-driver="oracle.jdbc.driver.OracleDriver"
    username="mylogin"
    password="mypassw"
    url="jdbc:oracle:thin:@dbserver:1521:INSTNAME"
    inactivity-timeout="300"
    />
    And my orion-ejb-jar.xml defines "java:jdbc/MyAppDS" for each entity bean.
    Note that I did not include direct comparison to other application servers because I wouldn't ask for something in OC4J that wasn't available in other containers.
    In case someone wants to suggest trying the latest OC4J with CMPs based on TopLink: how can we easily transform orion-ejb-jar.xml into the TopLink deployment descriptor? (Using Orion's CMP engine, there's no difference...)
    Suggestions welcome.
    Stefan Schmidt

    Steve, thanks for forwarding my questions and for pointing to some additional documentation.
    Regarding problem 3 (caching and re-using prepared statements): this is actually a FAQ that I shouldn't have asked. Sorry, I somehow missed adding an appropriate stmt-cache-size parameter to the datasource. OC4J now nicely re-uses prepared statements (verified with a tcpdump).
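    For reference, a minimal sketch of that fix (cache size hypothetical; all other attributes stay as in the data-source above):
    <data-source
    class="com.evermind.sql.DriverManagerDataSource"
    name="MyAppDS"
    ...
    stmt-cache-size="200"
    />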
    With this simple change, performance improved, but it is still not on par with other CMP containers. Though the overall number of (network) round trips has been reduced by re-using prepared statements, the overall number of SQL queries issued is still the same, due to problems 1 and 2 described in my initial posting.
    Note that the additional SQL queries from problems 1 and 2 may only have a big impact in certain scenarios; in most scenarios you might not even notice a performance degradation.
    In our case, entity beans have lots of relationships (usually more than 5, and up to 20..30), so problems 1 and 2 have a huge impact. In this particular SPEC benchmark there are just a few relationships, so you might not even notice the additional SQL queries performance-wise. In addition (without having studied the benchmark in detail), the primary goal of the benchmark is to achieve a high number of transactions per time frame. In our case, we also need short transaction processing times, as this is the performance perceived by the user.
    Suggestions welcome.

  • XMLAGG performance problems

    Hello,
    I've been having performance problems with one query using XMLAGG. The explain plan shows that all tables are joined using the correct indexes; there is nothing wrong with it. After a series of experiments with the query, I found out that the problem is the use of the XMLAGG function: it aggregates the data before submitting it to the result XML document.
    Is there any way to generate something like
    <Main attr1="...">
    <el1>...</el1>
    <el2>...</el2>
    <SubElement>
    <row>row1 from the database</row>
    <row>row2 from the database</row>
    <row>rowN from the database</row>
    </SubElement>
    </Main>
    without using XMLAGG in the subquery, or somehow switch the aggregation off? I don't really need the records to be aggregated; all I need is to be able to generate multi-record XML in the subquery, which would normally have failed with a "single-row subquery returns more than one row" error without XMLAGG.
    Thanks in advance
    Alexey
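    For what it's worth, the usual pattern is to keep XMLAGG but isolate it in a scalar subquery per parent row, so only the repeating rows are aggregated; a minimal sketch with hypothetical tables main_t and sub_t:
      select xmlelement("Main",
               xmlattributes(m.attr1 as "attr1"),
               xmlelement("el1", m.el1),
               xmlelement("el2", m.el2),
               (select xmlelement("SubElement",
                         xmlagg(xmlelement("row", s.val)))
                  from sub_t s
                 where s.main_id = m.id))
        from main_t m;
    Collapsing many rows into one node is by definition an aggregation, so some aggregate is unavoidable; the scalar subquery at least confines XMLAGG to the child rows of one parent.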

    TKPROF: Release 9.2.0.1.0 - Production on Wed Jan 31 15:21:34 2007
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Trace file: lnsqd1_ora_7350_perfoamance_test_01.trc
    Sort options: default
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 1'
    call count cpu elapsed disk query current rows
    Parse 0 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 1 0.00 0.00 0 0 0 0
    Misses in library cache during parse: 0
    Optimizer goal: ALL_ROWS
    Parsing user id: 113
    create table tmp_003
    as
    select a.*, 'NEW ' as eventtype
    from v_deal_xml a
    where a.deal_id in
    (select a.pk_value
    from da_history a, gx_type b
    where a.evtime <= to_date('1/22/2007 12:59:26', 'MM/DD/RRRR HH24:MI:SS')
    and a.type_id = b.type_id
    and b.xmltype = 'GDP/DEAL'
    and a.eventtype = 'NEW')
    call count cpu elapsed disk query current rows
    Parse 1 0.12 0.12 0 60 1 0
    Execute 1 108.30 105.87 0 6343 1387 172
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 108.42 106.00 0 6403 1388 172
    Misses in library cache during parse: 1
    Optimizer goal: ALL_ROWS
    Parsing user id: 113
    insert into tmp_003
    select a.*, 'UPDATE' as eventtype
    from v_deal_xml a
    where a.deal_id in
    (select a.pk_value --b.xmltype, a.*
    from da_history a, gx_type b
    where a.evtime <= to_date('1/22/2007 12:59:26', 'MM/DD/RRRR HH24:MI:SS')
    and a.type_id = b.type_id
    and b.xmltype = 'GDP/DEAL'
    and a.eventtype = 'UPDATE')
    call count cpu elapsed disk query current rows
    Parse 1 0.13 0.13 0 60 0 0
    Execute 1 256.74 250.96 0 18752 3815 381
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 256.87 251.09 0 18812 3815 381
    Misses in library cache during parse: 1
    Optimizer goal: ALL_ROWS
    Parsing user id: 113
    Rows Row Source Operation
    381 NESTED LOOPS (cr=1194 pr=0 pw=0 time=26458 us)
    381 VIEW VW_NSO_1 (cr=430 pr=0 pw=0 time=6665 us)
    381 SORT UNIQUE (cr=430 pr=0 pw=0 time=6285 us)
    381 MERGE JOIN (cr=430 pr=0 pw=0 time=3761 us)
    1 TABLE ACCESS BY INDEX ROWID GX_TYPE (cr=2 pr=0 pw=0 time=67 us)
    5 INDEX FULL SCAN XPKGX_TYPE (cr=1 pr=0 pw=0 time=24 us)(object id 71282)
    381 SORT JOIN (cr=428 pr=0 pw=0 time=3698 us)
    603 TABLE ACCESS FULL DA_HISTORY (cr=428 pr=0 pw=0 time=1339 us)
    381 TABLE ACCESS BY INDEX ROWID DEAL (cr=764 pr=0 pw=0 time=16692 us)
    381 INDEX UNIQUE SCAN DEAL_PK (cr=383 pr=0 pw=0 time=8926 us)(object id 71236)
    commit
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 1 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 0 1 0
    Misses in library cache during parse: 0
    Parsing user id: 113
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 3 0.25 0.25 0 120 1 0
    Execute 4 365.04 356.84 0 25095 5203 553
    Fetch 0 0.00 0.00 0 0 0 0
    total 7 365.29 357.10 0 25215 5204 553
    Misses in library cache during parse: 2
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 83 0.01 0.01 0 0 0 0
    Execute 125 0.03 0.03 0 108 201 46
    Fetch 114 0.01 0.00 0 221 0 72
    total 322 0.05 0.04 0 329 201 118
    Misses in library cache during parse: 3
    Misses in library cache during execute: 3
    4 user SQL statements in session.
    125 internal SQL statements in session.
    129 SQL statements in session.
    Trace file: lnsqd1_ora_7350_perfoamance_test_01.trc
    Trace file compatibility: 9.00.01
    Sort options: default
    1 session in tracefile.
    4 user SQL statements in trace file.
    125 internal SQL statements in trace file.
    129 SQL statements in trace file.
    45 unique SQL statements in trace file.
    11196 lines in trace file.
