Stories: Too much time between chapters in playlist.

Hi,
Does anyone know how to eliminate the pause that occurs between chapters in the playlist of a Story? I've been making a long movie DVD with lots of chapters and have created several Stories from it. At the moment it jumps between chapters (as I laid out in the playlist) it hangs for about half a second; it's not fluid at all. Should I place the chapter divisions only on black frames with no sound?
Thank you

Is the problem happening on just one player or on every one you try? If it's just the one, it's probably a very specific player incompatibility - some of the cheaper players are pretty loose in their adherence to the DVD spec.
The only workaround I can suggest is to write scripts for your end jumps and target the script as the Story's end jump. Sounds a bit weird, I know. I recently wrote a series of scripts for a play-all function - it jumped from my play-all button, through 12 tracks, and back to the button with barely a flicker, so it might be worth a try.
Good luck
B

Similar Messages

  • Bought iPhone 6 64 GB just 3 days ago; performance is very bad. Can't even open the App Store, and it takes too much time to open a web page.

    I bought an iPhone 6 64 GB just 3 days ago and performance is very bad. I can't even open the App Store, and it takes too much time to open a web page.

    You may need to reset your network settings, making sure the network you're accessing is stable: tap Settings > General > Reset > Reset Network Settings.
    If your iPhone still can't connect to the App Store, tap Settings > iTunes & App Store, tap your Apple ID and sign out, then sign in again with your Apple ID.

  • My high-school-aged child is spending too much time on Facebook and Tumblr, to the detriment of homework. Is there any way I can limit access to these sites to between 8pm and 10pm?

    My high-school-aged child is spending too much time on Facebook and Tumblr. Is there any way I can limit the access time on these sites to 8pm to 10pm?

    System Preferences>Parental Controls has time limits - check out this intro to Parental Controls from Cult of Mac on YouTube.
    Clinton

  • Taking too much time to load application

    Hi,
    I have deployed a J2EE application on Oracle 10g version 10.1.2.0.2, but the application is taking too much time to load. After loading, everything works fast.
    I have another 10g server (same version) on which the same application loads very fast.
    When I checked the Apache error logs, I found this:
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
    [Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
    [Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    Please HELP ME...

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections have been idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option
    java -Dajp.keepalive=true -jar oc4j.jar
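    For what it's worth, in OracleAS 10g the OC4J JVM options are normally set in opmn.xml rather than in httpd.conf; a minimal sketch, assuming the default "home" instance (the path and the existing -server option are illustrative):
    <!-- $ORACLE_HOME/opmn/conf/opmn.xml, inside the OC4J ias-component -->
    <process-type id="home" module-id="OC4J">
      <module-data>
        <category id="start-parameters">
          <!-- append -Dajp.keepalive=true to whatever java-options already exist -->
          <data id="java-options" value="-server -Dajp.keepalive=true"/>
        </category>
      </module-data>
    </process-type>
    Then reload and restart, e.g. opmnctl reloadconfig followed by opmnctl restartproc process-type=home.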

  • Too much time to mount data file in Time Capsule, then error -8085

    It's taking too much time to open the folder, and sometimes it gives this error code -8085. Any idea?
    Regards

    Use Ethernet and see if you can delete the folders, as long as you have them stored elsewhere.
    The TC is not a NAS; it is not designed as a file store, and even attempting to use it that way can lead to issues.
    Read the info from our main guru, Pondini, on why you don't do this, and ways around it.
    Q3 here: http://pondini.org/TM/Time_Capsule.html

  • Why is it taking too much time to kill the process?

    Hi All,
    Today one of my users ran a calc script and the process was taking too much time, so I killed it. I am wondering about one thing here: it is taking too long to kill the process, when generally it does not take more than 2 seconds. I did this through EAS.
    After that I ran the MaxL statement
    alter system kill request 552599515;
    but it was of no use at all.
    Please reply if you have any solutions to kill this process.
    Thanks in advance.
    Ram.

    Hi Ram,
    1. Firstly, how long do your calculation scripts normally run?
    2. While it is running, you can go to the logs and monitor where exactly the script is taking time.
    3. Sometimes it does take time to cancel a transaction (as it might be in the middle of one).
    4. MaxL is always a good way to kill it, as you did. It should be successful. Check the logs for what they say, and also the "sessions" view, which might say "terminating"; finish it off there.
    5. If nothing works, and in the worst-case scenario it is taking time without doing anything, then log off all the users, stop the database, and start it again.
    6. Do log off all the users, so that you don't corrupt any filter-related .sec file.
    Be very careful if it's production (and I assume you have recent backups).
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/
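    If it helps, here is a minimal MaxL sketch for finding and force-logging-out the offending session; the application/database name Sample.Basic is a placeholder:
    /* list active sessions and requests to find the one running the calc */
    display session on database Sample.Basic;
    /* kill the request by id (as above), or force the whole session out */
    alter system kill request 552599515;
    alter system logout session on database Sample.Basic force;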

  • BPC application is taking too much time to load

    Hi experts!
    I'm facing a very weird problem...
    We've developed a BPC application (app name: USM).
    This application is taking too much time to load on some computers (around 8 minutes). Yes, on SOME computers.
    There are around 100,000 records in the database, most coming from material master data.
    If I try to load this USM application on another computer, the process loads smoothly. The computers' hardware is all the same, the server is more than powerful enough, and everyone is on the same network.
    I talked to the infrastructure department and we ran several tests. We ran BPC on the server (it loaded quickly) and on several computers (some load quickly, others don't), used wireless and cable connections (same result either way), and checked communication between BW and BPC, which is OK.
    After all that, I tried to load the APSHELL application in the same environment and it loaded instantly. So I guess something is wrong with my application. But if that were it, I suppose it would happen on all computers, not only some of them.
    Have anybody ever seen something like this?
    Thank you in advance.
    Rubens

    Hi Rubens,
    I would try a couple of tests:
    1. Install the client on a machine located in the same network segment, or try using a VPN that communicates with the server bypassing all security devices, just to see whether the network is the problem.
    2. Run a full optimize of one application to see if the problem is related to the segmentation of the cubes (I don't think this is the problem, but give it a try).
    It is very weird that it happens on some computers and not on others... Also try cleaning up the local cache of the applications on the computers that are giving you bad performance, and retry.
    Hope it helps,

  • Page sometimes leads to proxy error or takes too much time to load

    Hi All,
    APEX4.0
    Web server: Apache 1.3.9 (Oracle 9iAS 10.0.1.2.2)
    I am getting the below error the first time I try to open a page, or the page takes 3 to 5 minutes to load. From the second time onwards, it takes 5 to 8 seconds to open, as normal. I debugged the page and checked the log; the logs and execution times look normal. Why does the page take so much time to load the first time, or lead to a proxy error? Has anybody had the same experience before?
    Proxy Error
    The proxy server received an invalid response from an upstream server.
    The proxy server could not handle the request GET /pls/apex/f.
    Reason : Document contains no data
    Please guide me in finding and resolving this issue...
    Thanks in Advance
    Lakshmi

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections have been idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option
    java -Dajp.keepalive=true -jar oc4j.jar

  • The queries take too much time

    Hi,
    I want to compare query times between Oracle Spatial and PostGIS, but Spatial takes too much time. I created spatial indexes in both databases, so I have no idea where the problem could be :-(. I'm actually a student and I'm doing this for my essay, but I started with Oracle on my own, so I'd appreciate some help.
    I have a table with about 7000 polygons, and I want to create a new table that dissolves these polygons by the attribute katuze.
    The query in Spatial takes 8506 s:
    CREATE TABLE dp_diss_p AS
    SELECT KATUZE_KOD,
           SDO_AGGR_UNION(MDSYS.SDOAGGRTYPE(a.geom, 0.005)) GEOM
    FROM dp_plochy a
    GROUP BY a.KATUZE_KOD;
    The query in PostGIS takes 10.149 s:
    CREATE TABLE plochy_diss AS
    SELECT st_union(geom) AS geom
    FROM plochy
    GROUP BY katuze_kod;
    I really don't know what to do; please can somebody give me some advice? :-( Please excuse my English.
    Thx Eva

    I tried to use sdo_aggr_set_union like this:
    CREATE OR REPLACE FUNCTION get_geom_set (table_name  VARCHAR2,
                                             column_name VARCHAR2,
                                             predicate   VARCHAR2 := NULL)
      RETURN SDO_GEOMETRY_ARRAY DETERMINISTIC
    AS
      TYPE cursor_type IS REF CURSOR;
      query_crs    cursor_type;
      g            SDO_GEOMETRY;
      GeometryArr  SDO_GEOMETRY_ARRAY;
      where_clause VARCHAR2(2000);
    BEGIN
      IF predicate IS NULL THEN
        where_clause := NULL;
      ELSE
        where_clause := ' WHERE ';
      END IF;
      GeometryArr := SDO_GEOMETRY_ARRAY();
      OPEN query_crs FOR ' SELECT ' || column_name ||
                         ' FROM '   || table_name  ||
                         where_clause || predicate;
      LOOP
        FETCH query_crs INTO g;
        EXIT WHEN query_crs%NOTFOUND;
        GeometryArr.extend;
        GeometryArr(GeometryArr.count) := g;
      END LOOP;
      CLOSE query_crs;  -- release the cursor
      RETURN GeometryArr;
    END;
    /
    CREATE TABLE dp_diss_p AS
    SELECT KATUZE_KOD,
           sdo_aggr_set_union(get_geom_set('dp_plochy', 'geom', 'CTVUK_KOD <> ''1'''), .0005) GEOM
    FROM dp_plochy a
    GROUP BY a.KATUZE_KOD;
    It takes 21.721 s, but it returns a collection type and I can't export it to *.shp because GeoRaptor exports it as 'unknown'. Does anybody have an idea what I can do with it to export it as polygons?
    Eva

  • SELECT query takes too much time! Why?

    Please find my SELECT query below:
    select w~mandt
           w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
           w~kwmeng w~vrkme w~matwa w~charg w~pstyv
           w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
           w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
           w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
           w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
           w~bedae w~cuobj w~mtvfp
           x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
           x~edatu
           x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
           x~ezeit
      into table t_vbap
      from vbap as w
      inner join vbep as x on x~vbeln = w~vbeln and
                              x~posnr = w~posnr and
                              x~mandt = w~mandt
      where
        ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
        ( ( ( erdat > pre_dat and erdat < p_syndt ) or
            ( erdat = p_syndt and erzet <= p_syntm ) ) ) and
        w~matnr in s_matnr and
        w~pstyv in s_itmcat and
        w~lfrel in s_lfrel and
        w~abgru = ' ' and
        w~kwmeng > 0 and
        w~mtvfp in w_mtvfp and
        x~ettyp in w_ettyp and
        x~bdart in s_req_tp and
        x~plart in s_pln_tp and
        x~etart in s_etart and
        x~abart in s_abart and
        ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    The problem: it takes too much time to execute this statement.
    Could anybody modify this statement and help me reduce the DB access time?
    Thx

    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For All Entries
    •     Select over more than one internal table
    Selection Criteria
    1.     Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
    2.     Select with a selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized by the code below, which avoids CHECK and selects with a selection list:
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements: Select Queries
    1.     Avoid nested selects.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be optimized by the code written below:
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2.     Select all the records in a single shot using the INTO TABLE clause of the SELECT statement rather than using APPEND statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized by the code below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    3.     When a base table has multiple indexes, the WHERE clause should be in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields are in the same order. In certain scenarios it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables. For example:
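    A minimal sketch, using SBOOK, whose primary index is (MANDT, CARRID, CONNID, FLDATE, BOOKID); the WHERE fields are listed in the same order as the index:
    SELECT CARRID CONNID FLDATE BOOKID
      FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'          " 1st index field after the client
        AND CONNID = '0400'        " 2nd index field
        AND FLDATE = '19990101'.   " 3rd index field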
    4.     For testing existence, use SELECT ... UP TO 1 ROWS instead of a SELECT-ENDSELECT loop with an EXIT.
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized than the code below for testing the existence of a record:
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5.     Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
    SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two. For example:
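    A minimal sketch, again using SBOOK; all primary key fields are supplied, so SELECT SINGLE applies (the key values are illustrative):
    SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'
        AND FLDATE = '19990101'
        AND BOOKID = '00000001'.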
    Select Statements: SQL Interface
    1.     Use column updates instead of single-row updates to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above code can be optimized by using the following code:
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used SELECT statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized by using the following code:
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3.     Using buffered tables improves performance considerably.
    Bypassing the buffer increases network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above code can be optimized by using the following code:
    SELECT SINGLE * FROM T100 INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements: Aggregate Functions
    •     If you want to find the maximum, minimum, sum, average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT and COUNT( * ).
    Consider the following extract:
    MAXNO = 0.
    SELECT * FROM ZFLIGHT WHERE AIRLN = 'LF' AND CNTRY = 'IN'.
      CHECK ZFLIGHT-FLIGH > MAXNO.
      MAXNO = ZFLIGHT-FLIGH.
    ENDSELECT.
    The above code can be optimized by using the following code:
    SELECT MAX( FLIGH ) FROM ZFLIGHT INTO MAXNO WHERE AIRLN = 'LF' AND CNTRY = 'IN'.
    Select Statements: For All Entries
    •     FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus:
    •     Handles large amounts of data
    •     Allows mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The minus:
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE SIZE)
    Points that must be considered for FOR ALL ENTRIES:
    •     Check that data is present in the driver table
    •     Sort the driver table
    •     Remove duplicates from the driver table
    Consider the following extract:
    LOOP AT INT_CNTRY.
      SELECT SINGLE * FROM ZFLIGH INTO INT_FLIGH
        WHERE CNTRY = INT_CNTRY-CNTRY.
      APPEND INT_FLIGH.
    ENDLOOP.
    The above can be optimized by using the following code:
    SORT INT_CNTRY BY CNTRY.
    DELETE ADJACENT DUPLICATES FROM INT_CNTRY.
    IF NOT INT_CNTRY[] IS INITIAL.
      SELECT * FROM ZFLIGH APPENDING TABLE INT_FLIGH
        FOR ALL ENTRIES IN INT_CNTRY
        WHERE CNTRY = INT_CNTRY-CNTRY.
    ENDIF.
    Select Statements: Select over more than one internal table
    1.     It's better to use a view instead of nested SELECT statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be optimized by extracting all the data from the view DD01V:
    SELECT * FROM DD01V INTO DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    2.     To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary keys are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be optimized by the code written below:
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search, but don't forget to sort your internal table before that.
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    is much faster than
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime:
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    is faster than
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    4.     A binary search using a secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6.     Modifying selected components using "MODIFY itab ... TRANSPORTING f1 f2 .." accelerates the task of updating a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably.
    Modifying selected components only makes the program faster, compared to modifying all lines completely.
    E.g.:
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics; READ ... BINARY SEARCH runs in O( log2( n ) ) time. The above piece of code can be optimized by:
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10.   "DELETE ADJACENT DUPLICATES" accelerates the task of deleting duplicate entries considerably, compared to "READ-LOOP-DELETE-ENDLOOP".
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables using "ITAB2[] = ITAB1[]" is much faster than "LOOP-APPEND-ENDLOOP".
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13.   Specify the sort key as restrictively as possible to make the program run faster.
    "SORT ITAB BY K." makes the program run faster than "SORT ITAB."
    Hashed and Sorted Tables
    1.     For single read access, hashed tables are more optimized than sorted tables.
    2.     For partial sequential access, sorted tables are more optimized than hashed tables.
    Point # 1
    Consider the following example, where HTAB is a hashed table and STAB is a sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access than the same code for a sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly, for partial sequential access, STAB runs faster than HTAB:
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster than:
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Matview refresh taking too much time.....

    Hello All,
    I am trying to create a matview using a DB link; the source table contains 30 crores of rows.
    I am picking only 2.5 crores of rows from the source, and I have used the where condition "WHERE ALLOCATION_DATE BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MM'),-1) AND ADD_MONTHS(LAST_DAY(TRUNC(SYSDATE)),6)".
    But it is taking too much time to refresh, and I am using atomic_refresh => false.
    Source table contains following columns
    ASSIGNMENT#
    PROJECT#
    ALLOCATION_DATE
    EFFORTS
    WEEKEND_LEAVE_FLAG
    ANU_YEAR
    LAST_UPDATE
    ALLOCATION_EFFORTS
    and source table is partitioned on ANU_YEAR.
    Can anyone please tell me how to create a fast refresh matview.

    Please read {message:id=9360003} and follow the advice there.
    Also, please use international English. Crore is not part of International English - use thousands, millions etc.
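    For reference, a minimal sketch of the usual fast-refresh setup over a DB link; all names are placeholders, and note that a defining query referencing SYSDATE (like the WHERE clause above) is not fast-refreshable, so the date filter would have to be handled differently:
    -- on the master (source) site: a materialized view log lets Oracle ship only changed rows
    CREATE MATERIALIZED VIEW LOG ON source_table WITH PRIMARY KEY;
    -- on the local site:
    CREATE MATERIALIZED VIEW mv_alloc
      REFRESH FAST ON DEMAND
      AS SELECT assignment#, project#, allocation_date, efforts,
                weekend_leave_flag, anu_year, last_update, allocation_efforts
         FROM   source_table@source_link;
    -- refresh incrementally ('F' = fast) instead of rebuilding the whole MV
    EXEC DBMS_MVIEW.REFRESH('MV_ALLOC', method => 'F');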

  • Query Consuming too much time.

    Hi,
    I am using release 10.2.0.4.0 of Oracle. I have a query that is taking too much time (~7 minutes) for an indexed read. Please help me understand the reason, and find a workaround for the same.
    SELECT *
      FROM a, b
     WHERE a.xdt_docownerpaypk = b.paypk
       AND a.xdt_doctype = 'PURCHASEORDER'
       AND b.companypk = 1202829117
       AND a.xdt_createdt BETWEEN TO_DATE('07/01/2009', 'MM/DD/YYYY')
                              AND TO_DATE('01/01/2010', 'MM/DD/YYYY')
     ORDER BY a.xdt_createdt DESC;
    | Id  | Operation                      | Name                    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |   1 |  SORT ORDER BY                 |                         |      1 |      1 |    907 |00:06:45.83 |   66716 |  60047 |   478K|   448K|  424K (0)|
    |*  2 |   TABLE ACCESS BY INDEX ROWID  | a                       |      1 |      1 |    907 |00:06:45.82 |   66716 |  60047 |       |       |          |
    |   3 |    NESTED LOOPS                |                         |      1 |      1 |   6977 |00:06:45.64 |   60045 |  60030 |       |       |          |
    |   4 |     TABLE ACCESS BY INDEX ROWID| b                       |      1 |      1 |      1 |00:00:00.01 |       4 |      0 |       |       |          |
    |*  5 |      INDEX RANGE SCAN          | IDX_PAYIDENTITYCOMPANY  |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |       |       |          |
    |*  6 |     INDEX RANGE SCAN           | IDX_XDT_N7              |      1 |   3438 |   6975 |00:06:45.64 |   60041 |  60030 |       |       |          |
    Predicate Information (identified by operation id):
       2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
       5 - access("b"."COMPANYPK"=1202829117)
       6 - access("XDT_DOCTYPE"='PURCHASEORDER' AND "a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")
           filter("a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")
    32 rows selected.
    Index IDX_XDT_N7 is on (xdt_doctype, action_date, xdt_docownerpaypk). Its details are as below:
    blevel   distinct_keys   avg_leaf_blocks_per_key   avg_data_blocks_per_key   clustering_factor       num_rows
    3         868840             1                         47                     24020933               69871000
    But when I derive the exact value of paypk from table b and apply it to the query, it uses another index (IDX_XDT_N4), which is on (month, year, xdt_docownerpaypk, xdt_doctype, action_date), and completes within ~17 seconds. Below are the query and plan details:
    SELECT *
      FROM a
     WHERE a.xdt_docownerpaypk = 1202829132
       AND xdt_doctype = 'PURCHASEORDER'
       AND a.xdt_createdt BETWEEN TO_DATE('07/01/2009', 'MM/DD/YYYY')
                              AND TO_DATE('01/01/2010', 'MM/DD/YYYY')
     ORDER BY xdt_createdt DESC;
    | Id  | Operation                    | Name                    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |   1 |  SORT ORDER BY               |                         |      1 |   3224 |    907 |00:00:02.19 |    7001 |    339 |   337K|   337K|  299K (0)|
    |*  2 |   TABLE ACCESS BY INDEX ROWID| a                       |      1 |   3224 |    907 |00:00:02.19 |    7001 |    339 |       |       |          |
    |*  3 |    INDEX SKIP SCAN           | IDX_XDT_N4              |      1 |  38329 |   6975 |00:00:02.08 |     330 |    321 |       |       |          |
    Predicate Information (identified by operation id):
       2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
       3 - access("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER')
           filter(("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER'))
    Index IDX_XDT_N4 details are as below:
    blevel   distinct_keys   avg_leaf_blocks_per_key   avg_data_blocks_per_key   clustering_factor       num_rows
    3         868840             1                         47                     23942833              70224133

    The first query uses the predicate "XDT_DOCTYPE"='PURCHASEORDER' to determine the range of the index IDX_XDT_N7 that has to be scanned, and uses the other predicates to filter out most of the index blocks. The second query uses an INDEX SKIP SCAN, ignoring the first column of the index IDX_XDT_N4 and using the predicates on the following columns ("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER') to get much more selective access (reading only 330 blocks instead of > 60K).
    I think there are two possible options to improve the performance:
    1. If creating a new index is an option you could define an index on table A(xdt_doctype, xdt_docownerpaypk, xdt_createdt)
    2. If creating a new index is not an option, you could use an INDEX SKIP SCAN hint (INDEX_SS(A IDX_XDT_N4)) to direct the CBO to use the second index (without a hint the CBO tends to ignore the option of using a SKIP SCAN in an NL join). But using hints in production is rarely a good idea... In 11g you could use SQL plan baselines to avoid such hints in the code.
    Regards
    Martin
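    A sketch of Martin's two options against the first query (the index name in option 1 is made up):
    -- option 1: a new index matching the equality predicates plus the date range
    CREATE INDEX idx_xdt_doctype_owner_dt
      ON a (xdt_doctype, xdt_docownerpaypk, xdt_createdt);
    -- option 2: hint the skip scan on the existing index
    SELECT /*+ INDEX_SS(a IDX_XDT_N4) */ *
      FROM a, b
     WHERE a.xdt_docownerpaypk = b.paypk
       AND a.xdt_doctype = 'PURCHASEORDER'
       AND b.companypk = 1202829117
       AND a.xdt_createdt BETWEEN TO_DATE('07/01/2009', 'MM/DD/YYYY')
                              AND TO_DATE('01/01/2010', 'MM/DD/YYYY')
     ORDER BY a.xdt_createdt DESC;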

  • SMQ1 deletion problem: too much time for larger queues

    Hi Experts
    We have to delete queues from SMQ1; the problem is that the data volume in the queues is huge, and the system is taking too much time to delete them.
    To delete 122,000 records it took 4 hours.
    The problem is we have to delete a total of 5,148,000 records, and one queue alone holds 2,610,000.
    Is there any solution to speed up the deletion, or any workaround for SMQ1 to delete the queues faster?
    We are deleting queues in a live system; I mean the production system is up and running.
    Thanks
    Navneet

    Hi Navneet,
    Use the report RSTRFCQDS to delete a particular queue from SMQ1.
    The report checks the inconsistencies between the tables
    TRFCQOUT, ARFCSSTATE, QREFTID and ARFCSDATA, therefore it will take time.
    The fastest way to delete them is to use an SQL tool to delete all the entries in TRFCQOUT, ARFCSSTATE, QREFTID and ARFCSDATA. In this case, you also delete the entries in SM58 (tRFC).
    See also SAP Note 760113.
    Rgds,
    Colum

  • How much time is on that Playlist-Burning CD

    Is there a better, even slightly more sophisticated, way of determining just how much time is on a playlist, for burning to a CD?
    I'm trying to burn a CD that holds 80 minutes, or 700 MB. I know that what that really means is that the CD can accommodate about 74 minutes of recorded time.
    In iTunes, the particular playlist says I've used about 1.3 hours.
    What does that mean?
    How does iTunes calculate the "." something? Do the .1, .2, .3, .4, etc. translate into a defined number of minutes?
    Does the memory, or size, of the particular tune factor in? I didn't think so; I've just been looking at "time." (By the way, all of the tunes on this playlist are AIFF. I didn't bother to convert them to MP3 or AAC, but I would if I had to. Just a lot more time and work.)
    There must be a better way of measuring just how much time one has included in a Playlist for burning.
    Any thoughts, advice, suggestions would be very much appreciated.
    P.S. As always, iTunes "Help" has been of no help. Unless they're keeping it a secret somewhere.
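    For what it's worth, the decimal iTunes shows is just a fraction of an hour, so the conversion is plain arithmetic:
    1.3 hours x 60 minutes/hour = 78 minutes
    which is already more than the ~74 minutes of recorded time mentioned above.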

    HangTime,
    You're here in iTunes too? You're everywhere. What the heck would I do without you?
    As always, you are right. Problem solved.
    But one more thing: when I went to print out the song list for the CD, the order was all changed, in an order I don't comprehend. How do I get it to print out the songs in the order I have them on my playlist? I need that to put into the CD sleeve. I have no idea why it came out all jumbled up. Totally random. I tried to show the lists according to Artist, Album, Song, etc., to no avail.

  • ***ERROR *** Too much drift between GetTickCount and Hi Performance (In WEC7)

    Hi 
    I'm facing an issue on WEC7 when running the CTK test "Compare All Three Timers Drift - Busy Sleep and OS Sleep". The errors are as below:
    Compare All Three Timers Drift - Busy Sleep and OS Sleep:       *** ERROR *** Too much drift between GetTickCount and Hi Performance.
    Compare All Three Timers Drift - Busy Sleep and OS Sleep:       *** ERROR ***  GetTickCount is at 120001 ticks.
    Compare All Three Timers Drift - Busy Sleep and OS Sleep:       *** ERROR ***  Hi Performance should be between 120000 and 120003 ticks inclusive. Instead, it is at 120401. These numbers are using GetTickCount's units.
    Compare All Three Timers Drift - Busy Sleep and OS Sleep:       *** ERROR *** Too much drift between Hi Performance and Time of day (RTC).
    Compare All Three Timers Drift - Busy Sleep and OS Sleep:       *** ERROR ***  Hi Performance is at 229966072999 ticks.
    Compare All Three Timers Drift - Busy Sleep and OS Sleep:       *** ERROR ***  Time of day (RTC) should be between 229964000000 and 229969730000 ticks inclusive. Instead, it is at 229201910000. These numbers are using Hi Performance's 
    I was using the August 2014 WEC7 release build. Is this a known issue on WEC7?
    However, I can get everything to pass with the same platform setup on WEC2013. Can anyone give me some advice?
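    In case it helps to narrow things down outside the CTK, here is a minimal Win32 sketch that measures the same drift between GetTickCount and the high-performance counter; on WEC7 both come from the BSP's OAL timer code, so a drift of this size usually points at the OEM's QueryPerformanceCounter implementation (an assumption worth checking against your BSP):
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        DWORD tick0, tick1;
        double qpcMs;

        QueryPerformanceFrequency(&freq);  /* hi-perf counter ticks per second */

        tick0 = GetTickCount();
        QueryPerformanceCounter(&t0);
        Sleep(120000);                     /* the same 120000-tick window the CTK test uses */
        tick1 = GetTickCount();
        QueryPerformanceCounter(&t1);

        qpcMs = (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;
        printf("GetTickCount: %lu ms, hi-perf: %.0f ms, drift: %.0f ms\n",
               (unsigned long)(tick1 - tick0), qpcMs, qpcMs - (double)(tick1 - tick0));
        return 0;
    }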

