LIKE clause and index access

Hi All,
I've gone through the link below on how to use an index with the LIKE operator in SQL:
[http://laurentschneider.com/wordpress/2009/07/how-to-tune-where-name-likebc.html]
I tried the same approach on the EMPLOYEES table, but the index is not being used.
-- there is a normal index on JOB_ID named EMP_JOB_IX
SQL10.2> select /*+ INDEX(employees EMP_JOB_IX) */ e.job_id, e.first_name
from employees e where e.job_id like '%ABC%';
---Explain plan
SELECT STATEMENT, GOAL = ALL_ROWS               3     5     80
TABLE ACCESS FULL     HR     EMPLOYEES     3     5     80
Please let me know why it is doing a full table scan (using version 10.2).
Thanks,

SQL> EXEC dbms_stats.gather_table_stats(user, 'EMPLOYEES', cascade => true);
SQL> select /*+ INDEX(employees EMP_JOB_IX) */ e.job_id, e.first_name
from employees e where e.job_id like '%ABC%';
SELECT STATEMENT, GOAL = ALL_ROWS               3     5     80
TABLE ACCESS FULL     HR     EMPLOYEES     3     5     80
It still shows a full table scan.
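The usual explanation: a pattern with a leading wildcard ('%ABC%') can never be resolved by an index range scan, and since FIRST_NAME is not in EMP_JOB_IX the index cannot cover this query either, so the optimizer prices the full table scan as cheaper and the hint cannot be honored usefully. As a minimal sketch (assuming the standard HR schema, where JOB_ID is NOT NULL so every row appears in the index): if the query selects only the indexed column, the whole statement can be answered from the index with a fast full scan.
select /*+ INDEX_FFS(e EMP_JOB_IX) */ e.job_id   -- only the indexed column selected
from employees e where e.job_id like '%ABC%';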

Similar Messages

  • LIKE, LIKEC and Index usage

I have a table that contains about 20 million rows, and I've created an index on a VARCHAR2(100) column. It works well if I do
    SELECT * FROM MY_TABLE WHERE MY_COL LIKE 'FOO%';
    But if I change query to use LIKEC (to make unicode escaped strings work):
    SELECT * FROM MY_TABLE WHERE MY_COL LIKEC 'FOO%';
    I always get full table scan in explain plan.
I tried using NVARCHAR2, and an index created with TO_NCHAR, but I always end up hitting a full table scan.
Should I create the index in some special way, or do something else, to get the index working?

    Just a gut feeling : is the database using character semantics or byte semantics?
    My gut feeling, after looking up the documentation, is it should be character semantics.
    BTW: Not posting version info decreases the chance you get an adequate reply.
    Sybrand Bakker
    Senior Oracle DBA
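A quick way to check that gut feeling (a minimal sketch, reusing MY_TABLE/MY_COL from the post) is to compare the session length semantics with the column's declared semantics:
select parameter, value
from nls_session_parameters
where parameter = 'NLS_LENGTH_SEMANTICS';

select column_name, char_used   -- 'B' = byte semantics, 'C' = character semantics
from user_tab_columns
where table_name = 'MY_TABLE' and column_name = 'MY_COL';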

  • Create Index to use Like Clause

    Hi All,
I want one of my queries to use an index when it runs with a LIKE clause. I haven't done that before, but I've seen on the forums that it's possible to make a LIKE query use an index by creating a function-based index. Let me list what I have done so far.
Function:
CREATE OR REPLACE FUNCTION RND_LIKE(P_NO IN VARCHAR2)
RETURN VARCHAR2 IS
BEGIN
  RETURN P_NO || '%';
END RND_LIKE;
/
SELECT ENAME FROM EMP WHERE ENAME LIKE RND_LIKE('A');
Based on this function I want to create a function-based index and force my query to use it. I'd appreciate the forum's help with this.
    Thanks
    Edited by: ramarun on Dec 18, 2009 9:26 PM

In the case you had there, Oracle would use an index on ENAME if you were to type A% in the ENAME item on a form. You wouldn't need a function-based index for that.
    Here's the link to the documentation to create a function based index http://download-uk.oracle.com/docs/cd/B28359_01/server.111/b28310/indexes003.htm#i1006674
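To make the point concrete, a minimal sketch against the classic SCOTT.EMP table: a pattern with no leading wildcard is just a bounded range predicate, so a plain B-tree index already supports it and no function-based index is needed.
CREATE INDEX EMP_ENAME_IX ON EMP (ENAME);

-- 'A%' is bounded below by 'A' and above by the next prefix, so a range scan applies
SELECT ENAME FROM EMP WHERE ENAME LIKE 'A%';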

  • How to identify frequently accessed tables and indexes

    Hi,
    Could some one give me the exact queries to identify the frequently accessed Tabels and Indexes
    Regards
    Naveen

    Hi,
it depends on your definition of "frequently accessed", but I think you can use a query like this:
select owner, object_name, sum(value)
from V$SEGMENT_STATISTICS
where OBJECT_TYPE = 'TABLE'
and owner = 'NAVEEN_4_EX'
and STATISTIC_NAME in ('logical reads', 'db block changes')  -- note: 'logical writes' is not a valid segment statistic name
group by owner, object_name
    h.h.
    Sam
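Since the question asked about indexes as well, here is a variant along the same lines (a sketch in the same spirit, untested) that covers both object types and ranks the busiest segments first:
select owner, object_name, object_type, sum(value) as total_activity
from V$SEGMENT_STATISTICS
where OBJECT_TYPE in ('TABLE', 'INDEX')
and owner = 'NAVEEN_4_EX'
and STATISTIC_NAME in ('logical reads', 'physical reads', 'physical writes')
group by owner, object_name, object_type
order by total_activity desc;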

  • How can my laptop access the downloaded files like mp3 and pdf documents which are in my iphone 4G?

    how can my laptop access the downloaded files like mp3 and pdf documents which are in my iphone?

    Yes you will need to NAT at some point to go from private to public address space. Here is a basic configuration if you are interested:
interface F8
 ip nat inside
interface G0
 ip nat outside
ip access-list standard NAT
 permit 192.168.11.0 0.0.0.255
ip nat inside source list NAT interface G0 overload

I have a problem signing in: I don't like the "read and accept terms" step, I cannot get past it and I don't know about these terms. This is poor; I just like Android's simple sign-in. How do I use an Apple ID?

I have a problem signing in: I don't like the "read and accept terms" step, I cannot get past it and I don't know about these terms. This is poor; I just like Android's simple sign-in. How do I use an Apple ID?

    Your computer is called an iMac.
    Did you turn proxies on?  If so, you should turn them off.

  • Generate the tablespace clause for tables and indexes

    How can I make Designer generate the tablespace clause for the create table and create index statements?
    I assigned tables and indexes to Tablespaces objects in designer but they don't seem to have any effect on the generator.
    I am using the latest version of Oracle Designer.
    Thanks
    Message was edited by:
    bikerc

I guess I'm not really clear on what you want.
In the DB Admin tab you will need to create the tablespace with the data file.
Then you need to assign the tablespace to the table.
You will need to generate from the DB Admin tab.
Hope this helps.
    Michael

HT1527 I have Windows 7 and have tried the troubleshooting steps but am still unable to connect to the iTunes Store even though my PC is connected to the internet; it just keeps looking like it's trying to access the store

I have Windows 7 and have tried the troubleshooting steps, but I am still unable to connect to the iTunes Store even though my PC is connected to the internet; it just keeps looking like it's trying to access the store.

I have the same problem, but the solution Wendyz99 posted has not been successful for me. Part of the iTunes Store loads on the right side, where the tabs for Quick Links and Top Charts are, but the center section of the page is white. Also, if I click on a song that is in the top chart area it will take me to the page, but I'm not able to purchase or preview the song.
I feel I have tried all the solutions I can find online and have even had conversations with Apple tech support, with no help.
I have downloaded the Autoruns program, and in the Winsock providers I have Bonjour and Microsoft Windows Live ID Namespace Provider. I removed Windows Live and this still does not resolve the problem. I am at a complete loss. I am running Windows 7, 32-bit. Someone please help!

I am running MS Office for Mac 2008 on my MacBook Pro.  I do not have the Standard and Formatting toolbars fixed into position, as I would expect and would like.  I can access them via the View menu, where I have to select and de-select them before I can get them to display

    I am running MS Office for Mac 2008 on my MacBook Pro.  I do not have the Standard and Formatting toolbar fixed into position, as I would expect and would like.  I can access them via the View menu, where I have to select and de-select them before I can get them to display, but lose them when I log off. Can anyone help?  I have tried all the usual and supplied help facilities to no avail.  Thanks

    Post the question in Microsoft's own Office for Mac forum.

  • PreparedStatements and LIKE clauses

    Hi,
I've come across a slight problem with PreparedStatements and LIKE clauses. Say I want my query to look something like this:
SELECT * FROM CONTENT_TABLE WHERE CONTENT LIKE '%test%'
If I were to use a PreparedStatement I'd do something like this:
PreparedStatement searchContent = con.prepareStatement("SELECT * FROM CONTENT_TABLE WHERE CONTENT LIKE ?");
searchContent.setString(1, queryString);
This isn't correct because it matches where CONTENT LIKE 'queryString'.
I tried this:
PreparedStatement searchContent = con.prepareStatement("SELECT * FROM CONTENT_TABLE WHERE CONTENT LIKE '%?%'");
searchContent.setString(1, queryString);
But I get a syntax error in my SQL.
Is there another way around this?
    Thanks

just guessing, try this:
PreparedStatement searchContent = con.prepareStatement("SELECT * FROM CONTENT_TABLE WHERE CONTENT LIKE ?");
searchContent.setString(1, "%" + queryString + "%");
This should work: a ? inside a quoted literal is treated as literal text, not a placeholder (which is why binding against '%?%' fails), so keep a plain ? in the SQL and put the % wildcards into the bound value instead.
Give it a shot, I might be wrong.

  • Doubt regarding FOR ALL ENTRIES and INDEXES

Hi, I am Aslam, and I have a few doubts:
1) What are the disadvantages of using FOR ALL ENTRIES?
2) What are the disadvantages of using INDEXES?
3) What are the disadvantages of using binary search?
4) How can you do performance tuning if you have more than one SELECT statement between LOOP and ENDLOOP?
Please answer these questions.
Thanks in advance.
Bye

Hi
1) What are the disadvantages of using FOR ALL ENTRIES?
If the driver table named in the FOR ALL ENTRIES condition is empty, the WHERE condition is ignored and all the data is retrieved from the database table, which we don't want; but we can guard against that easily by checking that the driver table is not empty first.
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
   • Select Queries
   • SQL Interface
   • Aggregate Functions
   • For All Entries
   • Select over more than one internal table
Selection Criteria
1. Restrict the data in the selection criteria itself, rather than filtering it out in ABAP code using a CHECK statement.
2. Select with a selection list.
    Points # 1/2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH'
    AND CONNID = '0400'.
Select Statements: Select Queries
1. Avoid nested selects.
2. Select all the records in a single shot using the INTO TABLE clause of the SELECT statement, rather than using APPEND statements.
3. When a base table has multiple indexes, the WHERE clause should follow the order of an index, either the primary or a secondary index.
4. For testing existence, use SELECT ... UP TO 1 ROWS instead of a SELECT-ENDSELECT loop with an EXIT.
5. Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
    Point # 1
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    Point # 2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH'
    AND CONNID = '0400'.
    Point # 3
To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields are in the same order. In certain scenarios it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables.
    Point # 4
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Point # 5
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
Select Statements (contd.): SQL Interface
1. Use column updates instead of single-row updates to update your database tables.
2. For all frequently used SELECT statements, try to use an index.
3. Using buffered tables improves the performance considerably.
    Point # 1
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    Point # 2
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    Point # 3
Bypassing the buffer increases network traffic considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
Select Statements (contd.): Aggregate Functions
• If you want to find the maximum, minimum, sum, average value, or count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * ).
Consider the following extract:
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
  Check zflight-fligh > maxno.
  Maxno = zflight-fligh.
Endselect.
The above code can be much more optimized by using the following code:
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements (contd.): For All Entries
• FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus:
• Handles large amounts of data
• Allows mixing processing and reading of data
• Fast internal reprocessing of data
• Fast
The minus:
• Difficult to program/understand
• Memory could be critical (use FREE or PACKAGE SIZE)
Points that must be considered when using FOR ALL ENTRIES:
• Check that data is present in the driver table
• Sort the driver table
• Remove duplicates from the driver table
Consider the following piece of code:
Loop at int_cntry.
  Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
  Append int_fligh.
Endloop.
The above can be made more optimal with the following code:
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry comparing cntry.
If NOT int_cntry[] is INITIAL.
  Select * from zfligh appending table int_fligh
    For all entries in int_cntry
    Where cntry = int_cntry-cntry.
Endif.
Select Statements (contd.): Select over more than one internal table
1. It's better to use a view instead of nested SELECT statements.
2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary keys are available in the WHERE clause for the joined tables; if the primary keys are not provided, the join itself takes time.
3. Instead of using nested SELECT loops, it is often better to use subqueries.
    Point # 1
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
The above code can be more optimized by reading all the data from the view DD01V instead:
SELECT * FROM DD01V INTO DD01V_WA
  WHERE DOMNAME LIKE 'CHAR%'
        AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
    Point # 2
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
    Point # 3
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
Internal Tables
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search, but don't forget to sort your internal table first.
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4. A binary search using a secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
6. Modifying selected components using "MODIFY itab ... TRANSPORTING f1 f2 ..." accelerates the task of updating a line of an internal table.
    Point # 2
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    Point # 3
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
7. Accessing table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably.
8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending one table to another considerably, compared to LOOP-APPEND-ENDLOOP.
10. "DELETE ADJACENT DUPLICATES" accelerates the task of deleting duplicate entries considerably, compared to READ-LOOP-DELETE-ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably, compared to DO-DELETE-ENDDO.
    Point # 7
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries (i.e. O(1)).
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
12. Copying internal tables using "ITAB2[] = ITAB1[]" is much faster than LOOP-APPEND-ENDLOOP.
13. Specify the sort key as restrictively as possible to make the program run faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
"SORT ITAB BY K." makes the program run faster than "SORT ITAB."
Internal Tables (contd.): Hashed and Sorted Tables
1. For single read access, hashed tables are more optimized than sorted tables.
2. For partial sequential access, sorted tables are more optimized than hashed tables.
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
This runs faster for single read access than the same code against the sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
Similarly, for partial sequential access, STAB runs faster than HTAB:
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Conforming and Indexing Errors, Media Pending, Audio won't play in timeline

    I'm working on a desktop PC which is running Windows 7 Professional 64-bit and Adobe Premiere Pro (version CS5.5). It's currently utilizing a second gen. 3.4Ghz i7 2600 processor, 16GB of 1600Mhz RAM, 64GB solid-state drive and a ASUS P8Z68-V Intel Z68 Motherboard with onboard audio (Realtek ALC892 chipset) and onboard video. My problem is this:
    The conforming and indexing of all of my imported media never seems to finish regardless of how many times I reopen the project file and wait for it. On the lower right-hand portion of the screen, next to the conforming/indexing progress bar, is a little red "X". When clicked, it pops up with a list of errors that read: "An unexpected error occurred while performing a conform action on the following file...". As a result, my audio channels have no waveform and during playback there are no audible tones or levels. On some video clips there's just text that reads "Media Pending". This only appears to happen with project files that I saved on external hard drives, and I suspect it has something to do with the Media Cache Files folder and how Premiere Pro locates these conform/index files. I've also encountered this problem in CS3 and CS4.
    I have a few questions:
    1) How do I avoid error messages in regards to indexing and conforming
    2) How do you know when indexing/conforming has completed itself? (there doesn't seem to be a progress log or a list of commands/executions)
    3) Indexing and conforming appears to be an automatic process, but is there a way to do it manually?
    4) What's the best way to setup your media cache files when you click EDIT > PREFERENCES > MEDIA?
    5) If I have approximately 1 hour of footage, what's an average wait time for conforming/indexing? What about 5 hours of footage? 10?
    6) Adobe recommends not editing until the conforming and indexing has completed itself-- how important is this?
    7) Sometimes it appears as though the conforming and indexing has finished, but then I still have problems with playback. Do I have to reopen the project for it to continue with the conforming/indexing progress? I've already determined that the video file I'm working with is intact and free of any corruption.
    I'm fine with having to wait for a project to conform and index, but it never seems to complete itself! Any help regarding this matter would be greatly appreciated.

    Harm filled in pretty much all the salient details, but I'll do another pass here.
    1) How do I avoid error messages in regards to indexing and conforming
    Two parts here.  One, conforming only happens for certain media files, ie the ones where performance is critical and we can't depend on extracting the audio fast enough for realtime playback.  That's basically anything in an .mpeg wrapper, or AVCHD material.  So if you edit XDCAM HD/EX or P2, or RED, or even AVIs or QT, those formats don't require audio conforming.
    If you're stuck editing AVCHD or MPEG2, then it needs to conform.  But, that being said, you shouldn't be getting errors in the first place. I think it's related to your external drives.  More below...
    2) How do you know when indexing/conforming has completed itself? (there doesn't seem to be a progress log or a list of commands/executions)
    Nope, you have a progress status bar indicating which file it's working on.  If there's an error, it shows up in the events panel.
    3) Indexing and conforming appears to be an automatic process, but is there a way to do it manually?
    No.
    4) What's the best way to setup your media cache files when you click EDIT > PREFERENCES > MEDIA?
While some people like having the check box for keeping the conform files beside the media, I hate it.  Yes, it means that if you move the project to a different system & reopen, you can potentially avoid recreating CFA files, but I find the drive littering not worth it.  I much prefer setting the Media prefs to point to a specific media drive.  Usually a raid, if available.  Definitely not an external drive that you disconnect & walk away with.  If you don't have a permanent raid on your system, then preferably a dedicated internal drive for media (think along the lines of your Photoshop 'scratch disk').  Failing that, leave it on your C: drive, although with a 64 GB SSD, you probably don't have much room for transient temporaries.
    5) If I have approximately 1 hour of footage, what's an average wait time for conforming/indexing? What about 5 hours of footage? 10?
Like Harm said.  Totally dependent on the media container & the speed of your drive I/O.  The conforming is iterating through the entire file & pulling audio data, so it's not CPU intensive, it's all I/O.
    6) Adobe recommends not editing until the conforming and indexing has completed itself-- how important is this?
If you're trying to play/scrub while conforming, it's going to be pokey.  Esp. if you're trying to access the file that's actively being conformed.  As I just said, we're hitting the files for all the audio.  The I/O is already saturated, so unless you have a stellar raid, you don't have much headroom.
    7) Sometimes it appears as though the conforming and indexing has finished, but then I still have problems with playback. Do I have to reopen the project for it to continue with the conforming/indexing progress? I've already determined that the video file I'm working with is intact and free of any corruption.
    You should be good to go.  Sounds like there's something else at play here.
    Okay, back to what I think is wrong:  you don't mention what kind of external drives you're using.  You're making a bad assumption that blowing away conformed files & doing a reconform is buggy - I doubt it, as that's the same process that happened when you initially brought in the files.  I've blown away my media cache folder multiple times and have never seen failures on reconform.  So it's got to be one of two things:  either a read error from the source when attempting to pull the audio, or a write error to the destination.  Now I don't know where you currently are pointing the media cache directory, or what your source drive is, so I can only speculate.
    My suggestion is to do some elimination.   Copy one of the files that failed on you to your C drive, & target your media cache directory also to C:.  Pick a new project, import your copied file, confirm that it conforms correctly & behaves.   Then, try to use the same clip from your external drive, keeping the media cache to C:.  If that's still good, then try targeting another (local/internal) drive as your media cache target; close/restart, then import the clip from C:, and then import the clip from your external drive.  This troubleshooting should give us something.
PS: if you're trying to edit from external USB drives, good luck.  I find it a major PITA and avoid it as much as possible.  FireWire isn't much better.  I know some people do it successfully, but I think it's a road fraught with peril.  These devices are generally not designed for heavy-duty I/O, and a flaky connection or drive is nothing but pain.
    Cheers

  • Distinct clause and query performance

    Friends,
I have a query which returns results in 40 seconds without the DISTINCT clause; when I add DISTINCT it takes over 2 hours.
    I have verified following -
1. Indexes/table statistics are up to date.
2. Columns that are used in the WHERE clause but are not indexed have up-to-date column statistics.
Any idea what could be the reason? The explain plan shows that the DISTINCT step has a very expensive cost.
    Thanks
    Query and explain plan is below
    SELECT
    DISTINCT -- with distinct 2hrs + and without 40 seconds
    quote_by_dst.qte_hdr_stat_cd, quote_by_dst.qte_ln_cond_cd,
                    product.prod_nm, product.prod_id,
                    cs_ship_by_dst.bto_ds_cac_ownr_ud,
                    quote_by_dst.qte_csup_csup_am, cs_ship_by_dst.bto_ds_cac_nm,
                    product.spl_sht_nm,
                       product.prod_blg_un_fac_um
                    || ' '
                    || product.prod_blg_um
                    || ' '
                    || product.prod_stck_um,
                    product.prod_blg_um, quote_by_dst.qte_ln_brk_1_blg_uom_am,
                    quote_by_dst.qte_csup_avg_cst_am,
                    quote_by_dst.qte_csup_rev_gm_pct_am,
                    quote_by_dst.qte_csup_avg_cst_am, cs_ship_by_dst.bto_id,
                    cs_ship_by_dst.bto_ds_cac_cd,
                       cs_ship_by_dst.bto_ds_cac_cd
                    || product.prod_id
                    || cs_ship_by_dst.bto_id
               FROM infowhse.quote_by_dst4 quote_by_dst,
                    infowhse.product,
                    infowhse.cs_ship_by_dst4 cs_ship_by_dst,
                    infowhse.department
              WHERE (quote_by_dst.dpt_cd = department.dpt_cd)
                AND (quote_by_dst.cus_dpt_id = cs_ship_by_dst.cus_dpt_id)
                AND (product.prod_id = quote_by_dst.prod_id)
            AND (    (   quote_by_dst.qte_ln_cond_cd = 'E'
                      OR quote_by_dst.qte_ln_cond_cd = 'C')
                 AND quote_by_dst.qte_hdr_stat_cd = 'A'
                 AND ((cs_ship_by_dst.bto_cust_type_cd) = '01')
                 AND cs_ship_by_dst.bto_ds_cac_ownr_ud = 'EHOC'
                 AND department.dpt_cd > '0.00'
                )
Explain plan
    Plan
    SELECT STATEMENT  CHOOSECost: 911,832,256  Bytes: 433,941,639,459  Cardinality: 2,729,192,701                                               
         15 SORT UNIQUE  Cost: 911,832,256  Bytes: 433,941,639,459  Cardinality: 2,729,192,701                                          
              14 NESTED LOOPS  Cost: 68,705  Bytes: 433,941,639,459  Cardinality: 2,729,192,701                                     
                   12 HASH JOIN  Cost: 68,705  Bytes: 425,754,061,356  Cardinality: 2,729,192,701                                
                        1 INDEX FAST FULL SCAN NON-UNIQUE INFOWHSE.DST_SEC_DST_SEC_DST_CD_IX Cost: 25  Bytes: 922,700  Cardinality: 184,540                           
                        11 HASH JOIN  Cost: 16,179  Bytes: 1,199,209,082  Cardinality: 7,941,782                           
                             2 INDEX FAST FULL SCAN NON-UNIQUE INFOWHSE.DST_SEC_DST_SEC_DST_CD_IX Cost: 25  Bytes: 922,700  Cardinality: 184,540                      
                             10 HASH JOIN  Cost: 15,879  Bytes: 3,374,060  Cardinality: 23,110                      
                                  8 HASH JOIN  Cost: 15,200  Bytes: 2,981,190  Cardinality: 23,110                 
                                       6 HASH JOIN  Cost: 13,113  Bytes: 1,779,470  Cardinality: 23,110            
                                            3 TABLE ACCESS FULL INFOWHSE.CUSTOMER_SHIP Cost: 5,640  Bytes: 42,372  Cardinality: 1,177       
                                            5 PARTITION RANGE ALL  Partition #: 11  Partitions accessed #1 - #12     
                                                 4 TABLE ACCESS FULL INFOWHSE.QUOTE Cost: 7,328  Bytes: 38,826,590  Cardinality: 946,990  Partition #: 11  Partitions accessed #1 - #12
                                       7 TABLE ACCESS FULL INFOWHSE.PRODUCT Cost: 1,542  Bytes: 9,246,640  Cardinality: 177,820            
                                  9 INDEX FAST FULL SCAN NON-UNIQUE INFOWHSE.CUST_SHIP_SLSDST_DTP_SICALL_IX Cost: 185  Bytes: 9,878,411  Cardinality: 581,083                 
                   13 INDEX UNIQUE SCAN UNIQUE INFOWHSE.DEPARTMENT_PK Bytes: 3  Cardinality: 1                                

    This might be more useful.
    Query is still running.
    There is heavy wait time for scattered file read.
    Results from
    SELECT * FROM V$SESSION_WAIT WHERE SID = 48;
    SID   SEQ#  EVENT                           P1TEXT                          P1    P1RAW            P2TEXT                          P2    P2RAW            P3TEXT                          P3    P3RAW            WAIT_TIME                              SECONDS_IN_WAIT                        STATE              
    48    6865  db file scattered read          file#                           108   000000000000006C block#                          1593370000000000026E69 blocks                          32    0000000000000020 2                                      30                                      WAITED KNOWN TIME  
    SELECT * FROM V$SESSION_EVENT WHERE SID = 48;
    SID                                    EVENT                                                            TOTAL_WAITS                            TOTAL_TIMEOUTS                         TIME_WAITED                            AVERAGE_WAIT                           MAX_WAIT                               TIME_WAITED_MICRO                     
    48                                     log file sync                                                    1                                      0                                      0                                      0                                      0                                      563                                   
    48                                     db file sequential read                                          11                                     0                                      0                                      0                                      0                                      243                                   
    48                                     db file scattered read                                           6820                                   0                                      330                                    0                                      7                                      3296557                               
    48                                     SQL*Net message to client                                        19                                     0                                      0                                      0                                      0                                      23                                    
48                                     SQL*Net message from client                                      18                                     0                                      128                                    7                                      127                                    1281912
Sorry for the long post.

  • WHERE clauses and Merge Join Cartesian?

    For some reason, Siebel is generating queries like this:
                        AND CONCAT (CONCAT (t41828.lvl8anc_postn, '-'),
                                    t41828.lvl8_emp_full_name
                                   ) = 'GEO-SMITH, BILL'
                        AND CONCAT (CONCAT (t41828.lvl6anc_postn, '-'),
                                    t41828.lvl6_emp_full_name
                                ) = 'GROUP1-DOE, JOHN'
and this ends up with 3 merge join cartesians in autotrace, and the query takes several hours.
    However, by rewriting just those 2 clauses to:
                        AND (t41828.lvl8anc_postn = 'GEO' and
                                    t41828.lvl8_emp_full_name = 'SMITH, BILL')
                        AND (t41828.lvl6anc_postn = 'GROUP1' and
                                t41828.lvl6_emp_full_name = 'DOE, JOHN')
the merge join cartesians go away, and it runs in 7 seconds.
However, since Siebel is generating the query and we are having issues, we decided to add test columns equivalent to the concatenations above. After doing this, and creating indexes on the new columns, it is back to the merge join cartesian and I cannot get rid of it.
    So:
    1. How can rewriting the WHERE (not adding or deleting anything here, just rewriting it in a different way) clause eliminate the merge join cartesian?
    I'm guessing by understanding that single question, I will be able to come up with a better solution to this.

    Note we made two additional columns:
CONCAT (CONCAT (t41828.lvl8anc_postn, '-'), t41828.lvl8_emp_full_name) => col1
CONCAT (CONCAT (t41828.lvl6anc_postn, '-'), t41828.lvl6_emp_full_name) => col2
    When I created indexes on col1 and col2 and used them in the query, the cartesians returned, and I cannot figure out why. So, I'm still confused why:
    In the original query:
    1. Using CONCAT, and thus no indexes => cartesian
    2. No CONCAT, and indexes => no cartesian
    With the new columns, col1 and col2:
    1. No CONCAT needed, full table scan or indexes => cartesian
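One thing worth trying (a hedged sketch only; "t41828" is just the alias Siebel generated, so substitute the real base table, and the index name is made up): a function-based index on the exact CONCAT expression gives the optimizer both an access path and, once statistics exist for the hidden expression column, a realistic cardinality estimate for that predicate, and a sane estimate is usually what makes merge join cartesians go away.
CREATE INDEX t41828_lvl8_fbi
  ON t41828 (CONCAT(CONCAT(lvl8anc_postn, '-'), lvl8_emp_full_name));

EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T41828', method_opt => 'FOR ALL HIDDEN COLUMNS SIZE AUTO')
Note that the expression in the query must match the indexed expression exactly for the index and its statistics to be considered.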

  • How to partition tables and indexes in this scenario?

    So our situation is pretty simple. We have 3 tables.
    A, B and C
    the model is A->>B->>C
Currently A, B and C are range partitioned on a key created_date; however, it's typical that only C is ever qualified with created_date. There is a foreign key from B -> A and from C -> B.
    we have many queries where the data is identified by state that is indexed currently non partitioned on columns in A ... there are also indexes on the foreign keys that get from C -> B -> A. Again these are non partitioned indexes at this time.
    It is typical that we qualify A on either account or user or both. There are indexes (non partitioned) on these
    We have a problem now because many of the queries use leading wildcards ie. account like '%ACCOUNT' etc. This often results in large full table scans. Our solution has been to remove the leading wildcard but this isn't always possible.
    We are wondering how we can benefit from partitioning and or sub partitioning table A. since it's partitioned on created_date but rarely qualify by that.
    We are also wondering where and how we can benefit from either global partitioned index or local partitioned indexes on tables A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
    I am also wondering what impact pushing the state from A that's used to qualify A down to C would have any advantage.
C is the table that we currently qualify with the partition key, so I figure we could also push down the state from A that's used to qualify the set of C's we want, based on the set of B's we want, based on the set of A's, through qualifying on columns within A.
    If we push down some of those values to C and simply use C when filtering I'm wondering what the plans will look like compared to having to work all the way up from the bottom to the top before it begins qualifying A.
    Edited by: steffi on Jan 14, 2011 11:36 PM

> We are wondering how we can benefit from partitioning and/or subpartitioning table A, since it's partitioned on created_date but we rarely qualify by that.
Very good question. Why did you partition on it? You will never have INSERTs on these partitions, but maybe deletes and updates? The only advantage (I can think of) would be to set these partitions in a read-only tablespace to ease backup... but that's a weird reason.
> we have many queries where the data is identified by state, which is indexed (currently non-partitioned) on columns in A ... there are also indexes on the foreign keys that get from C -> B -> A. Again these are non-partitioned indexes at this time.
Of course. Why should they be partitioned by created_date?
> It is typical that we qualify A on either account or user or both. There are indexes (non-partitioned) on these. We have a problem now because many of the queries use leading wildcards, i.e. account LIKE '%ACCOUNT' etc. This often results in large full table scans. Our solution has been to remove the leading wildcard, but this isn't always possible.
I would suspect a full index scan rather than a full table scan. Isn't it?
> We are also wondering where and how we can benefit from either a global partitioned index or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
As A is not accessed by any partition key, why should C and B profit? You should look to partition by the key you use for access. But you are looking to tune SQLs where the access on A is LIKE '%ACCOUNT'. Then, when there is a match, Oracle joins via your index and a nested loop (right?) to B and C.
> I am also wondering what impact pushing the state from A that's used to qualify A down to C would have.
Why should it help? It just makes the table and indexes larger => more IO.
> C is the table that we currently qualify with the partition key, so I figure you could also push down the state from A that's used to qualify the set of C's we want, based on the set of B's we want, based on the set of A's.
If the access from A to C were ... AND A.CREATE_DATE = C.CREATE_DATE AND c.key LIKE '%what I want%' (which does not qualify for a FK ;-) ), then, as that could result in a partition scan, you could "profit". But I'm sure that's not your model.
> If we push down some of those values to C and simply use C when filtering, I'm wondering what the plans will look like compared to having to work all the way up from the bottom to the top before it begins qualifying A.
So you want to denormalize A, B and C into one table? With the same LIKE '%ACCOUNT' access you would get a full scan on an index as before, just on larger objects due to the redundancy, and they would be harder to maintain. In the end you would have a worse and slower design.
Maybe you can explain what the underlying problem is.
A full index scan cannot be avoided, but it can be made faster by e.g. parallel query, and then the join to B and C should be a "snip" if you identify only a small subset of rows in these tables.
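For the leading-wildcard predicates themselves, one option worth evaluating is an Oracle Text index (a hedged sketch only: it assumes Oracle Text/CTXSYS is installed, and reuses table A and its account column from the post as placeholder names). A CONTAINS query can find token matches like 'ACCOUNT' without a full scan:
CREATE INDEX a_account_txt ON a (account) INDEXTYPE IS CTXSYS.CONTEXT;

SELECT * FROM a WHERE CONTAINS(account, 'ACCOUNT') > 0;
Note that a CONTEXT index matches parsed tokens rather than arbitrary substrings, and it must be kept in sync (e.g. with CTX_DDL.SYNC_INDEX) as the data changes, so it is a trade-off rather than a free win.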
