Select selecting too much

I guess I have managed to tap a keyboard shortcut by mistake, because now every time I select something from within a tag, Dreamweaver tries to help and selects about another 250 words and lines of code that I don't want. This is really annoying. I have been through all the menus but can't find any way of turning this off permanently. If I click "Select Parent" just before selecting, it does select just what I want, but on the next select Dreamweaver is back to having a mind of its own.
There was a thread here in October 2009, but the user could not make the expert understand; the expert said "turn off your PC, walk round the neighbourhood, come back and learn the program properly". The user was rude and the expert logged out of the thread.
This is a paste from that thread.
Select it, go to the page and paste; oops, wrong stuff; go back and repeat, sometimes 2-3 times, until you get the Alt key tap right. Select a text word or phrase to paste over and it turns blue; tap the Alt key until it is grey, sometimes once, sometimes 3 times, before it works.
I hope I have better luck. I consider myself a Dreamweaver expert wannabe and find it easy to learn and use, except for this, and it's driving my hair mad.
Please help
Rebecka
Sweden

PC. Dw4.
Selecting in code, but as said, it selects too much. I guess there is a hidden keyboard shortcut I have managed to toggle on; now I need to toggle it off.
Example
I have PHP code between opening and closing PHP tags;
it opens at line 16 and closes at line 49.
If I select lines 34-35 with the mouse, they and only they remain highlighted.
I copy in one of three ways: from the menu, from the right mouse button menu, or with Ctrl+C. Regardless of the method, the result of the copy is that Dreamweaver has now selected the entire PHP code from line 16 to 49 and placed it on the clipboard; if I paste somewhere, all 34 lines of code are pasted. Lines 16-49 are not selected prior to the copy; they are selected by the copy command. (Something to do with Select Parent, I think, because if I click once on Select Parent after marking lines 34-35 and prior to copying, then just the correct lines 34-35 are copied.)
I need to turn this very annoying bug off.
Please help
Please..... this is no good for my hair

Similar Messages

  • I selected manually manage music for my iTouch yet it keeps trying to sync to my entire music library.  I do not want that as it is taking up too much space.  How can I stop my iTouch from continuing to try and sync so I can manually manage my music?

    How do I get my iTouch to manually manage my music?  I selected manually manage, yet every time I hook my iTouch to my computer, it attempts to sync and load my entire library.  I do not want the entire library as it takes up too much space and I don't want all of those songs on my iTouch.  What am I missing?  What else can I do? 

    Oops, sorry, I think you probably already did what I suggested (turning off auto sync).
    I have the same question: how to sync, say, just one playlist.
    There is an option to sync only things that are checked, but everything (in every playlist and item) is checked, there doesn't appear to be a box to "uncheck all", and you cannot check each playlist individually.

  • SELECT query takes too much time! Y?

    Plz find my SELECT query below:
    select w~mandt
           w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
           w~kwmeng w~vrkme w~matwa w~charg w~pstyv
           w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
           w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
           w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
           w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
           w~bedae w~cuobj w~mtvfp
           x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
           x~edatu
           x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
           x~ezeit
      into table t_vbap
      from vbap as w
      inner join vbep as x on x~vbeln = w~vbeln and
                              x~posnr = w~posnr and
                              x~mandt = w~mandt
      where
        ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
        ( ( ( erdat > pre_dat and erdat < p_syndt ) or
            ( erdat = p_syndt and erzet <= p_syntm ) ) ) and
        w~matnr in s_matnr and
        w~pstyv in s_itmcat and
        w~lfrel in s_lfrel and
        w~abgru = ' ' and
        w~kwmeng > 0 and
        w~mtvfp in w_mtvfp and
        x~ettyp in w_ettyp and
        x~bdart in s_req_tp and
        x~plart in s_pln_tp and
        x~etart in s_etart and
        x~abart in s_abart and
        ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    The problem: it takes too much time to execute this statement.
    Could anybody change this statement and help me reduce the DB access time?
    Thx

    Ways of Performance Tuning
    1. Selection Criteria
    2. Select Statements
       • Select Queries
       • SQL Interface
       • Aggregate Functions
       • For All Entries
       • Select Over More than One Internal Table
    Selection Criteria
    1. Restrict the data in the selection criteria itself, rather than filtering rows out afterwards in the ABAP code with the CHECK statement.
    2. Select with a selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized by the code below, which avoids CHECK and selects with a selection list:
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    Select Statements: Select Queries
    1. Avoid nested selects.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
      FROM EKKO AS P INNER JOIN EKAN AS F
        ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2. Select all the records in a single shot using the INTO TABLE clause of the SELECT statement, rather than using APPEND statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized by the code below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    3. When a base table has multiple indices, the WHERE clause should list fields in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields appear in the same order. In certain scenarios it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables.
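    For illustration only (the index here is an assumption, not a shipped SBOOK index): if a secondary index existed on SBOOK over CARRID and CONNID, the WHERE clause should name those fields in the index's order:
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'      " first field of the assumed index
        AND CONNID = '0400'.   " second field of the assumed index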
    4. For testing existence, use SELECT ... UP TO 1 ROWS instead of a SELECT-ENDSELECT loop with an EXIT.
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more efficient than the code below for testing the existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5. Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
    SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two.
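    A minimal sketch (assuming MANDT, CARRID, CONNID, FLDATE and BOOKID make up SBOOK's full primary key; the literal values are placeholders):
    SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'
        AND FLDATE = '19990101'
        AND BOOKID = '00000001'.
    The client field MANDT is supplied implicitly unless CLIENT SPECIFIED is used.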
    Select Statements: SQL Interface
    1. Use column updates instead of single-row updates to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above code can be optimized by using the following code:
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used Select statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized by using the following code:
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3. Using buffered tables improves performance considerably.
    Bypassing the buffer increases network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above code can be optimized by using the following code:
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements: Aggregate Functions
    • If you want to find the maximum, minimum, sum, or average value, or the count of a database column, use a selection list with aggregate functions instead of computing the aggregates yourself.
    Some of the aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT and COUNT( * ).
    Consider the following extract.
    MAXNO = 0.
    SELECT * FROM ZFLIGHT WHERE AIRLN = 'LF' AND CNTRY = 'IN'.
      CHECK ZFLIGHT-FLIGH > MAXNO.
      MAXNO = ZFLIGHT-FLIGH.
    ENDSELECT.
    The above code can be optimized by using the following code:
    SELECT MAX( FLIGH ) FROM ZFLIGHT INTO MAXNO
      WHERE AIRLN = 'LF' AND CNTRY = 'IN'.
    Select Statements: For All Entries
    • FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause (see the sketch after the checklist below).
    The plus:
    • Handles large amounts of data
    • Mixes processing and reading of data
    • Fast internal reprocessing of data
    • Fast
    The minus:
    • Difficult to program/understand
    • Memory could be critical (use FREE or PACKAGE SIZE)
    Points that must be considered for FOR ALL ENTRIES:
    • Check that data is present in the driver table
    • Sort the driver table
    • Remove duplicates from the driver table
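    To picture the splitting mentioned above (a sketch only; the statements actually generated depend on the database interface and profile parameters): with driver-table keys K1 to K5 and rsdb/max_blocking_factor = 3, the database receives roughly
    SELECT ... FROM tab WHERE f = 'K1' OR f = 'K2' OR f = 'K3'.
    SELECT ... FROM tab WHERE f = 'K4' OR f = 'K5'.
    The partial results are then merged, with duplicate rows removed, before being passed back to the ABAP program.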
    Consider the following extract:
    LOOP AT int_cntry.
      SELECT SINGLE * FROM zfligh INTO int_fligh
        WHERE cntry = int_cntry-cntry.
      APPEND int_fligh.
    ENDLOOP.
    The above can be optimized by using the following code:
    SORT int_cntry BY cntry.
    DELETE ADJACENT DUPLICATES FROM int_cntry.
    IF NOT int_cntry[] IS INITIAL.
      SELECT * FROM zfligh APPENDING TABLE int_fligh
        FOR ALL ENTRIES IN int_cntry
        WHERE cntry = int_cntry-cntry.
    ENDIF.
    Select Statements: Select Over More than One Internal Table
    1. It is better to use a view instead of nested SELECT statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be optimized by extracting all the data from the view DD01V:
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
      FROM EKKO AS P INNER JOIN EKAN AS F
        ON P~EBELN = F~EBELN.
    3. Instead of using nested SELECT loops, it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above code can be optimized even further by using a subquery instead of FOR ALL ENTRIES.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1. Table operations should be done using explicit work areas rather than via header lines.
    2. Always try to use binary search instead of linear search, but don't forget to sort your internal table first.
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    is much faster than
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ); for a million entries that is about 20 comparisons instead of up to a million.
    3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime:
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    is faster than
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    4. A binary search using a secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6. Modifying selected components using "MODIFY itab ... TRANSPORTING f1 f2 ..." accelerates the task of updating a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7. Accessing the table entries directly via "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably.
    Modifying selected components only makes the program faster compared to modifying all lines completely.
    e.g.:
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending one table to another considerably, compared to LOOP-APPEND-ENDLOOP.
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10. "DELETE ADJACENT DUPLICATES" accelerates the task of deleting duplicate entries considerably, compared to READ-LOOP-DELETE-ENDLOOP.
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably, compared to DO-DELETE-ENDDO.
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12. Copying internal tables with "ITAB2[] = ITAB1[]" is much faster than LOOP-APPEND-ENDLOOP.
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13. Specify the sort key as restrictively as possible to make the program run faster.
    "SORT ITAB BY K." runs faster than "SORT ITAB."
    Internal Tables (contd.)
    Hashed and Sorted Tables
    1. For single read access, hashed tables are more optimized than sorted tables.
    2. For partial sequential access, sorted tables are more optimized than hashed tables.
    Point 1
    Consider the following example, where HTAB is a hashed table and STAB is a sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    For single read access, this runs faster than the same code on the sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point 2
    Similarly, for partial sequential access, STAB runs faster than HTAB:
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster than:
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Select query taking Much time

    Dear all ,
    I am fetching data from pool table A005. The select query is below:
    SELECT * FROM a005 INTO TABLE i_a005 FOR ALL ENTRIES IN it_table
      WHERE kappl =  'V'
        AND kschl IN s_kschl
        AND vkorg IN s_vkorg
        AND vtweg IN s_vtgew
        AND matnr IN s_matnr
        AND knumh =  it_table-knumh.
    Here every field is a primary key field except KNUMH, which is compared against table it_table. Because of this field the query takes too much time, as KNUMH is not a primary key. And A005 is a pool table, so I can't create an index for it. If there is an alternative solution, please let me know.
    Thank you.
    Also, in the technical settings the table is marked as fully buffered and the size category is 0, but there are around 9,000,000 records. Is that the issue? Can somebody give a genuine reason, or an improvement to my select query?
    Edited by: TVC6784 on Jun 30, 2011 3:31 PM

    TVC6784 wrote:
    Hi Yuri ,
    >
    > Thanks for your reply....I will check as per your comment...
    > But if I remove the field KNUMH from the selection condition, and also the FOR ALL ENTRIES IN it_table, then the data is fetched very fast, as KNUMH is not a primary key.
    > The example is below:
    >
    > select * from a005 into table i_a005
    > where kappl = 'V'
    > and kschl IN s_kschl
    > and vkorg in s_vkorg
    > and vtweg in s_vtgew
    > and matnr in s_matnr.
    >
    > Can you comment anything about it ?
    >
    > And can you please say how can i check its size as you mention that is  2-3 Mb More   ?
    >
    > Edited by: TVC6784 on Jun 30, 2011 7:37 PM
    I cannot see the trace and other information about the table, so I cannot judge why the select without KNUMH is faster.
    Basically, if the table is buffered and its contents are in the SAP application server memory, the access should be really fast, with KNUMH or without.
    I would really like to see at least an ST05 trace of your report that is doing this select. This would clarify many things.
    You can check the size by multiplying the entries in the A005 table by 138, which is (in my test system) the ABAP width of the structure.
    If you have 9,000,000 records in A005, it would take 1.24 GB in the buffer (which is a clear sign to unbuffer).
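    One possible restructuring along the lines the poster already observed (a sketch, not from the thread; wa_a005 is an assumed work area with the row type of i_a005): select on the key fields only, then filter by KNUMH in ABAP against a sorted it_table.
    SORT it_table BY knumh.
    SELECT * FROM a005 INTO TABLE i_a005
      WHERE kappl =  'V'
        AND kschl IN s_kschl
        AND vkorg IN s_vkorg
        AND vtweg IN s_vtgew
        AND matnr IN s_matnr.
    LOOP AT i_a005 INTO wa_a005.
      READ TABLE it_table TRANSPORTING NO FIELDS
           WITH KEY knumh = wa_a005-knumh BINARY SEARCH.
      IF sy-subrc <> 0.
        DELETE i_a005.   " drop rows whose KNUMH is not in it_table
      ENDIF.
    ENDLOOP.
    Whether this beats the FOR ALL ENTRIES version depends on how selective the other conditions are; an ST05 trace, as suggested above, would show which variant wins.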

  • Select from (too many) tables

    Hi all,
    I'm a proud Oracle Apex developer. We have developed an Interactive Report that is generated from many joined tables in a remote system. I've read that to improve performance we can do the following:
    1) Create a temporary table on our system that stores the app_user id and the columns resulting from the query.
    2) Create a procedure that does:
    declare
      param1 varchar2(100) := :PXX_item;
      param2 varchar2(100) := :PXY_item;
      param3 varchar2(100) := v('APP_USER');
    begin
      insert into <our_table>
        select param3, <query from remote system>;
      commit;
    end;
    3) Redirect to a query page where the IR reads from this temp table.
    On the "Exit" button there is a procedure that purges that user's data (delete from temp where user = v('APP_USER')), so the temp table is only filled with the necessary data.
    Do you see any inconvenience? The application will be used by about 500 users, with about 50 concurrent users at a time.
    Thank you!
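    For the cleanup behind the "Exit" button, a minimal sketch (the table and column names ir_temp_results and app_user are placeholders; v('APP_USER') is the standard APEX session function):
    begin
      delete from ir_temp_results
       where app_user = v('APP_USER');
      commit;
    end;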

    1) "We don't have control of the source system, we can only perform queries on it." I was referring to a materialized view on the system where Apex is installed, not on the source database.
    2) "There are many tables involved." I don't understand why this is a problem. Too much data I can see, but too many tables... not so much.
    3) "Data has to be in real time, with no delay." This would be a problem for a MV or for collections. The collections would store the data as of the initial query; any IRs using the collection after the fact would be using stale data. If you absolutely have to have the data as of right now every time, then the full query must run on the remote system every time. Tuning that query is the only option to make it faster.
    4) "There are many transactions on the source tables (they are the core of the source system), so the MV could not be refreshed fast enough." It probably could be with fast refresh enabled, though that is not necessarily practical. As I indicated in 3, you have painted yourself into a corner here. You have stated a need for a real-time query, and that eliminates a number of possibilities for query-once use-many performance solutions.

  • Conditional format with large data fails and show error as "Selection is too large" in Excel 2007

    I am facing an issue with a paste special operation using conditional formats on large data in Excel 2007.
    I have uploaded a file at the location below:
    http://sdrv.ms/1fYC9qE
    The file contains two sheets: sheet "Data" contains the data on which the formats are to be applied, and sheet "FormatTables" contains the format tables with conditional formatting.
    There are two tables in the "FormatTables" sheet; both have some conditional formats applied.
    Case 1:
    1. Select the table range of Table1, i.e. $A$2:$AV$2.
    2. Copy it.
    3. Go to sheet "Data".
    4. Select the data area, i.e. $A$1:$AV$20664.
    5. Perform a paste special operation on the full range and select the "Formats" option while performing the paste special.
    Result:
    It throws the error "Selection is too large".
    Case 2:
    1. Select the table range of Table2, i.e. $A$5:$AV$5.
    2. Copy it.
    3. Go to sheet "Data".
    4. Select the data area, i.e. $A$1:$AV$20664.
    5. Perform a paste special operation on the full range and select the "Formats" option while performing the paste special.
    Result:
    The formats are applied successfully.
    Both format tables are alike, with the same number of columns, and are applied to the same data range ($A$1:$AV$20664), yet one case works and the other fails.
    The only difference is that Table1's appliesTo range ($A$2:$T$2) covers only part of its total table range ($A$2:$AV$2), whereas Table2's appliesTo range ($A$5:$AV$5) is the same as its total table range ($A$5:$AV$5).
    NOTE: This issue occurs only in Excel 2007.

    Excel 2007 does not support taking the formatting from another table if the source table has more than 16,000 rows to cover. If you want to apply it to more rows than that, you have to insert one more row in your format table so that it has 3 rows, like A1:AV3, then try to copy that formatting and apply it.
    Solution for Case 1:
    1. Select the table range of Table1, i.e. AV2, and drag it down one row.
    2. Select the table range of Table1, i.e. $A$2:$AV$3.
    3. Copy it.
    4. Go to sheet "Data".
    5. Select the data area, i.e. $A$1:$AV$20664.
    6. Perform a paste special operation on the full range and select the "Formats" option while performing the paste special.

  • BPC 7M SP4 EVDRE missing rows - Error is "1004-Selection is too large"

    Hello,
    At a customer site with BPC 7 MS SP4 installed, the client Exception log shows the error:
    ===================[System Error Tracing]=====================
    [System Name]   : BPC_ExcelAddin
    [Job Name]         : clsExpand::applyDataRangeFormula
    [DateTime]          : 2009-07-17 09:44:13
    [Exception]
           Detail Msg      : {1004-Selection is too large}
    ===================[System Error Tracing End ]=====================
    When I see this error, there is an EVDRE input schedule expansion where some of the row descriptions are missing.
    That is, there is a gap of missing row header formulas starting from the second row and extending to somewhere in the middle of the report.
    If I reduce the number of resulting rows for the same EVDRE input schedule, the results are OK.
    Do you know some setting to fix this?
    I tried increasing the Maximum Expansion Limit for rows and columns in Workbook Options without success.
    Thank you.

    Hi all,
    I have the same problem with 7.0 MS SP07. With 59 expanded members it works; with 60 expanded members it fails.
    Mihaela, could you explain to us the purpose of the parameters you mention?
    thanks,
    Romuald

  • White MacBook takes too much time to boot up

    Hi there.
    I got the top case (keyboard) of my MacBook repaired (white MacBook, 2006), and since then it takes nearly 7 to 10 seconds after pressing the power button before I first see the Apple logo and it boots up.
    I'm not sure, but it seems like detecting the hardware takes too much time.
    Any solution?
    PS: I'm running Snow Leopard(10.6.2)

    macbig wrote:
    OK, 32 seconds is not good. I don't think it has anything to do with a keyboard repair. You need to try a little maintenance:
    Reset PRAM - http://docs.info.apple.com/article.html?artnum=2238
    Repair Permissions - http://support.apple.com/kb/HT1452
    Run Disk Utility: Applications > Utilities > Disk Utility > Verify Disk
    How much free space do you have on your HD? Right-click on the HD icon and select "Get Info" in the drop-down menu (if you don't have right-click capabilities, highlight your HD icon, go to File in the Apple menu, and select "Get Info"). Let us know your results.
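    For reference, the same checks can be run from Terminal (these commands existed on Snow Leopard; shown as a convenience, they are not part of the original reply):
    diskutil verifyVolume /            # same as Disk Utility > Verify Disk
    sudo diskutil repairPermissions /  # same as Repair Permissions
    df -h /                            # free space on the startup volume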
    Yes, PRAM was the problem; I had already tried it from this thread: http://discussions.apple.com/thread.jspa?threadID=2303507&tstart=0
    By the way, thank you.

  • Taking too much time in Rules(DTP Schedule run)

    Hi,
    I am scheduling a DTP which has filters to minimize the loaded data.
    When I run the DTP, it takes too much time in the "rules" step (I can see the DTP monitor status package by package and step by step, like "Start Routine", "Rules" and "End Routine").
    It is consuming too much time in the rules mapping.
    What is the problem, and are there any solutions?
    regards,
    sree

    Hi,
    Time taken at "rules" depends on the complexity involved in your routine. If it is a complex calculation, it will take time.
    Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP, and the job class.
    You can find these: go to the DTP, open the Goto menu and select "Settings for Batch Manager".
    In that screen, increase the number of processes from 3 to a higher number (max 9), and change the job class to 'A'.
    If your DTP is still running, cancel it (i.e. kill the DTP), delete the request from the cube, change these settings, and run your DTP one more time.
    You can observe the difference.
    Reddy

  • Taking too much time incollecting in business content activation

    Hi all,
    I am collecting Business Content objects for activation. I have selected the 0fiAA_cha object, but while collecting for activation it takes too much time, then asks for source system authorisation and then throws the error "maximum run time exceeded". I had selected "data flow before" there.
    What can be the reason for it?
    Please help..

    Hi ,
    You should also always try to have the latest BI Content patch installed, but I don't think this is the problem. It seems that there are a lot of objects to collect. Under 'Grouping' you can select the option 'Only necessary objects'; please check whether you can use this option to install the objects that you need from the Content.
    Best Regards,
    Des.

  • [Solved] Dolphin taking too much time to respond

    Dolphin, when used by an ordinary user, takes too much time to respond, especially when Firefox is running. The time it takes to initialize is also very long. At times it does not even open, and even when active it hangs for quite some time before becoming responsive again while trying to select a file or two.
    At the same time, it works rather faster when called as superuser through the command line using $ sudo dolphin.
    But then the console displays a lot of error messages, as follows.
    sebinaj ~
    $ sudo dolphin
    Error: "/var/tmp/kdecache-sebinaj1Jz46I" is owned by uid 1002 instead of uid 0.
    Error: "/tmp/kde-sebinaj" is owned by uid 1002 instead of uid 0.
    sebinaj ~
    $ Error: "/tmp/ksocket-sebinaj" is owned by uid 1002 instead of uid 0.
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
    Error: alias title requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#title, http://www.semanticdesktop.org/ontologies/2007/03/22/nco#title
    Error: alias comment requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#comment, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#comment
    Error: alias count requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#count, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#count
    Error: alias created requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#created, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#created
    Error: alias description requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#description, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#description
    Error: alias duration requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#duration, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#duration
    Error: alias encoding requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#encoding, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#encoding
    Error: alias role requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#role, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#role
    Error: alias url requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#url, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#url
    Error: alias version requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#version, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#version
    Error: alias bitsPerSample requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#bitsPerSample, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#bitsPerSample
    Error: alias copyright requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#copyright, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#copyright
    Error: alias date requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#date, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#date
    Error: alias dateTime requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#dateTime, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#dateTime
    Error: alias geo requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#geo, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#geo
    Error: alias height requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#height, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#height
    Error: alias width requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#width, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#width
    Error: alias date requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#date, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#date
    Error: alias fileOwner requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#fileOwner, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#fileOwner
    Error: alias language requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#language, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#language
    Error: alias length requested by several properties: http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#length, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#length
    Error: alias publisher requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#publisher, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#publisher
    Error: alias title requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#title, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#title
    Error: alias contributor requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#contributor, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#contributor
    Error: alias created requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#created, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#created
    Error: alias creator requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#creator, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#creator
    Error: alias description requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#description, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#description
    Error: alias identifier requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#identifier, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#identifier
    Error: alias lastModified requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#lastModified, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#lastModified
    Error: alias version requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#version, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#version
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#fileExtension' is not defined in any rdfs ontology database.
    WARNING: field 'http://strigi.sf.net/ontologies/0.9#debugParseError' is not defined in any rdfs ontology database.
    /usr/lib/strigi/strigiea_ics.so
    /usr/lib/strigi/strigiea_jpeg.so
    /usr/lib/strigi/strigiea_vcf.so
    /usr/lib/strigi/strigila_cpp.so
    /usr/lib/strigi/strigila_deb.so
    /usr/lib/strigi/strigila_diff.so
    /usr/lib/strigi/strigila_mobi.so
    /usr/lib/strigi/strigila_namespaceharvester.so
    /usr/lib/strigi/strigila_po.so
    /usr/lib/strigi/strigila_txt.so
    /usr/lib/strigi/strigila_xpm.so
    /usr/lib/strigi/strigita_au.so
    /usr/lib/strigi/strigita_audible.so
    /usr/lib/strigi/strigita_avi.so
    /usr/lib/strigi/strigita_dds.so
    /usr/lib/strigi/strigita_dvi.so
    /usr/lib/strigi/strigita_font.so
    /usr/lib/strigi/strigita_gif.so
    /usr/lib/strigi/strigita_ico.so
    /usr/lib/strigi/strigita_mp4.so
    /usr/lib/strigi/strigita_pcx.so
    /usr/lib/strigi/strigita_rgb.so
    /usr/lib/strigi/strigita_sid.so
    /usr/lib/strigi/strigita_ts.so
    /usr/lib/strigi/strigita_wav.so
    /usr/lib/strigi/strigita_xbm.so
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#usesNamespace' is not defined in any rdfs ontology database.
    WARNING: field 'translation.total' is not defined in any rdfs ontology database.
    WARNING: field 'translation.translated' is not defined in any rdfs ontology database.
    WARNING: field 'translation.untranslated' is not defined in any rdfs ontology database.
    WARNING: field 'translation.obsolete' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.modify_file_count' is not defined in any rdfs ontology database.
    WARNING: field 'diff.first_modify_file' is not defined in any rdfs ontology database.
    WARNING: field 'content.format_subtype' is not defined in any rdfs ontology database.
    WARNING: field 'content.generator' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.hunk_count' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.insert_line_count' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.modify_line_count' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.delete_line_count' is not defined in any rdfs ontology database.
    WARNING: field 'translation.fuzzy' is not defined in any rdfs ontology database.
    WARNING: field 'translation.last_translator' is not defined in any rdfs ontology database.
    WARNING: field 'translation.translation_date' is not defined in any rdfs ontology database.
    WARNING: field 'translation.source_date' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#colorCount' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#formatSubtype' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nfo#bitsPerSample' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#audioSampleDataType' is not defined in any rdfs ontology database.
    WARNING: field 'content.mime_type' is not defined in any rdfs ontology database.
    WARNING: field 'audio.title' is not defined in any rdfs ontology database.
    WARNING: field 'audio.artist' is not defined in any rdfs ontology database.
    WARNING: field 'todo.audio.narrator' is not defined in any rdfs ontology database.
    WARNING: field 'media.codec' is not defined in any rdfs ontology database.
    WARNING: field 'todo.audible.user_id' is not defined in any rdfs ontology database.
    WARNING: field 'todo.audible.user_alias' is not defined in any rdfs ontology database.
    WARNING: field 'audio.duration' is not defined in any rdfs ontology database.
    WARNING: field 'content.description' is not defined in any rdfs ontology database.
    WARNING: field 'content.copyright' is not defined in any rdfs ontology database.
    WARNING: field 'content.keyword' is not defined in any rdfs ontology database.
    WARNING: field 'content.creation_time' is not defined in any rdfs ontology database.
    WARNING: field 'content.maintainer' is not defined in any rdfs ontology database.
    WARNING: field 'content.ID' is not defined in any rdfs ontology database.
    WARNING: field 'audio.channel_count' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nfo#colorDepth' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#colorSpace' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#compressionAlgorithm' is not defined in any rdfs ontology database.
    WARNING: field 'font.family' is not defined in any rdfs ontology database.
    WARNING: field 'font.weight' is not defined in any rdfs ontology database.
    WARNING: field 'font.slant' is not defined in any rdfs ontology database.
    WARNING: field 'font.width' is not defined in any rdfs ontology database.
    WARNING: field 'font.spacing' is not defined in any rdfs ontology database.
    WARNING: field 'font.foundry' is not defined in any rdfs ontology database.
    WARNING: field 'content.version' is not defined in any rdfs ontology database.
    WARNING: field 'content.genre' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_trackNumber' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_discNumber' is not defined in any rdfs ontology database.
    WARNING: field 'content.author' is not defined in any rdfs ontology database.
    WARNING: field 'content.comment' is not defined in any rdfs ontology database.
    WARNING: field 'audio.album' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_audio.albumartist' is not defined in any rdfs ontology database.
    WARNING: field 'content.links' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_content.purchaser' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_content.purchasedate' is not defined in any rdfs ontology database.
    WARNING: field 'media.duration' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_video.duration' is not defined in any rdfs ontology database.
    WARNING: field 'av.audio_codec' is not defined in any rdfs ontology database.
    WARNING: field 'av.video_codec' is not defined in any rdfs ontology database.
    WARNING: field 'content.thumbnail' is not defined in any rdfs ontology database.
    WARNING: field 'user.rating' is not defined in any rdfs ontology database.
    WARNING: field 'image.width' is not defined in any rdfs ontology database.
    WARNING: field 'image.height' is not defined in any rdfs ontology database.
    WARNING: field 'media.sample_rate' is not defined in any rdfs ontology database.
    WARNING: field 'media.sample_format' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#artist' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#albumTrackCount' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#musicAlbum' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#genre' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#composer' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#trackNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#setNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#performer' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#internationalStandardRecordingCode' is not defined in any rdfs ontology database.
    WARNING: field 'Product Id' is not defined in any rdfs ontology database.
    WARNING: field 'Events' is not defined in any rdfs ontology database.
    WARNING: field 'Journals' is not defined in any rdfs ontology database.
    WARNING: field 'Todos' is not defined in any rdfs ontology database.
    WARNING: field 'Todos Completed' is not defined in any rdfs ontology database.
    WARNING: field 'Todos Overdue' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#ccdWidth' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#focusDistance' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#targetQuality' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#givenName' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#familyName' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#emailAddress' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#homepageContactURL' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#contentComment' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#cellPhoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#homePhoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#workPhoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#faxPhoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#phoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#homePostalAddress' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#workPostalAddress' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#postalAddress' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#honorificPrefix' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#honorificSuffix' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#subject' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#title' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#author' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#description' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#copyright' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#isContentEncrypted' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#contentKeyword' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#paragraphCount' is not defined in any rdfs ontology database.
    WARNING: field 'http://rdf.openmolecules.net/0.9#moleculeCount' is not defined in any rdfs ontology database.
    kDebugStream called after destruction (from void KDirWatchPrivate::removeEntry(KDirWatch*, KDirWatchPrivate::Entry*, KDirWatchPrivate::Entry*) file /home/phil/kdemod/core/kdelibs/src/kdelibs-4.3.3/kio/kio/kdirwatch.cpp line 901)
    Cancelled INotify (fd 9, 1) for "/home/sebinaj/.local/share"
    ^C
    I am using KDEmod + Arch
    Last edited by absolutevoid (2009-11-05 17:23:59)

    There's a large thread around about this Dolphin problem.
    Disabling Nepomuk in System Settings has proved to be the
    cure in many cases.
    Deej

  • Code  taking too much time to output

    The following code is taking too much time to execute (sometimes giving a TIME_OUT):
    ind = sy-tabix.
    SELECT SINGLE * FROM mseg INTO mseg
      WHERE bwart = '102' AND
            lfbnr = itab-mblnr AND
            ebeln = itab-ebeln AND
            ebelp = itab-ebelp.
    IF sy-subrc = 0.
      DELETE itab INDEX ind.
      CONTINUE.
    ENDIF.
    Is there any other way to write this code to reduce the runtime?
    Thanks

    Hi,
    I think you are executing this code in a loop, which is causing the problem. The rule is: never put SELECT statements inside a loop.
    Try to rewrite the code as follows:
    * Outside the loop
    SELECT * FROM mseg
      INTO TABLE lt_mseg
      FOR ALL ENTRIES IN itab
      WHERE bwart = '102' AND
            lfbnr = itab-mblnr AND
            ebeln = itab-ebeln AND
            ebelp = itab-ebelp.
    * Then inside the loop, do a READ on the internal table
    LOOP AT itab.
      READ TABLE lt_mseg TRANSPORTING NO FIELDS
           WITH KEY lfbnr = itab-mblnr
                    ebeln = itab-ebeln
                    ebelp = itab-ebelp.
      IF sy-subrc = 0.   "a matching MSEG record exists, as in the original logic
        DELETE itab.     "deletes the current loop line
      ENDIF.
    ENDLOOP.
    I think this should optimise performance. You can check your code's performance using SE30 or ST05.
    Hope this helps! Please revert if you need anything else!!
    Cheers,
    Shailesh.
    Always provide feedback for helpful answers!

  • Report taking too much time in the portal

    Hi friends,
    we have developed a report on an ODS and published it on the portal.
    The problem is that when several users execute the report at the same time, it takes too much time, so the performance is very poor.
    Is there any way to sort out this issue? For instance, can we send the report to the individual users' mail IDs so that they do not have to log in to the portal? Or can we create the same report on the cube?
    What would be the main difference if the report is built on the cube instead of the ODS?
    Please help me.
    Thanks in advance,
    sridath

    Hi
    Try this to improve the performance of the query.
    Find the query run-time. Where to find the query run-time? See SAP Notes 557870 'FAQ BW Query Performance' and 130696 'Performance trace in BW'.
    This info may be helpful.
    General tips
    Use aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in the rows.
    Using T-codes ST03 or ST03N: go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run reporting agent at night and sending results to email. This will ensure use of OLAP cache. So later report execution will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important parameters aggregation ratio and records transferred to F/E to DB selected.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates, see the VALUATION and USAGE columns.
    Open the aggregates and observe the VALUATION and USAGE columns. The valuation is shown as plus or minus signs: the more plus signs, the better the evaluation of the aggregate and the more queries it satisfies; "+++++" means the aggregate is potentially very useful, its compression is good, and it is accessed often. The more minus signs, the worse the evaluation; "-----" means the aggregate is just overhead and can potentially be deleted.
    In the USAGE column you can see how often the aggregate has been used by queries.
    Thus you can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you whether it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Your query performance can also depend on the selection criteria; check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Administration Cockpit to work.
    Implement the BW Statistics Business Content: install it, feed it with data, and analyze through the ready-made reports it provides.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can also go to transaction DB20, which gives you performance-related information on partitions, databases, schemas, buffer pools, tablespaces, etc.
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can also find out whether an aggregate is useful or useless by checking the RSDDSTATAGGRDEF* tables.
    Run the query in RSRT with statistics execution and you will get a STATUID; copy this and look it up in the table, as in the sketch below.
    This shows exactly which InfoObjects the query is hitting; if any one of those objects is missing from the aggregate, it is a useless aggregate.
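    A minimal sketch of that lookup, assuming the run lands in a table of the RSDDSTATAGGRDEF* family keyed by STATUID (the exact table and column names vary by release; verify them in SE11 first):
    -- Hedged sketch: inspect the statistics rows written for one query run.
    -- :statuid is the STATUID copied from the RSRT statistics output;
    -- table and column names are assumptions to verify in SE11.
    SELECT *
      FROM rsddstataggrdef
     WHERE statuid = :statuid;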
    6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of each aggregate in this table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • I don't want to write too much code; is there a different way of doing this?

    I am writing a procedure to check the max test scores for different codes ('S01', 'S02', 'S03' -- there are more), and I need to insert into the table if the record with the best score does not exist, for example for b.sortest_tesc_code = 'BSV'. I am writing a cursor
    for each code (a.sortest_tesc_code = 'S01'); is there a way to do this differently, so I can do something like a.sortest_tesc_code IN ('S01','S02','S03'), store the result in a
    variable and then insert? The problem is that one student may have only one test while another has two, etc.; it is not consistent. Also, if b.sortest_tesc_code = 'BSV' is already in the table, I don't do an insert; I have to do an update if the new sortest_test_score is greater, since a student can submit scores more than once. In other words: check if the record (b.sortest_tesc_code = 'BSV') exists; if it is there, compare it with the new max score and update if the new max is greater; if the score (by code) is not in the table, insert it.
    I hope this is clear. This is what I have below; I know it will work, but it will be too much code, checking for EXISTS and NOT EXISTS in two different procedures (a single-cursor alternative is sketched after this block).
    Thank you
    CURSOR get_the_max_scores_S01_cur IS
                SELECT
                 sortest_pidm, a.sortest_test_score, a.sortest_tesc_code,
                 a.sortest_test_date,a.sortest_equiv_ind
                FROM
                saturn.spriden, saturn.sortest a, saturn.stvtesc
               WHERE 
               a.sortest_pidm = spriden_pidm
              AND stvtesc_code = a.sortest_tesc_code
              AND spriden_change_ind IS NULL
           -----and   a.sortest_tesc_code IN ('S01','S02','S03')
           AND a.sortest_tesc_code = 'S01'
           --and spriden_id = p_student_id  --
           ---for test purposes
           AND sortest_pidm = 133999 ---- THIS WILL BE A PARAMETER
           AND a.sortest_test_score =
                  (SELECT MAX (b.sortest_test_score)
                     FROM saturn.sortest b
                    WHERE a.sortest_tesc_code = b.sortest_tesc_code
                          AND a.sortest_pidm = b.sortest_pidm)
                                AND NOT EXISTS
                  (SELECT 1   FROM    saturn.sortest b
                  WHERE    A.sortest_tesc_code = b.sortest_tesc_code
                          AND a.sortest_pidm = b.sortest_pidm     
                          and   b.sortest_tesc_code = 'BSV');
         BEGIN
                     UTL_FILE.fclose_all;
                     v_file_handle := UTL_FILE.fopen (v_out_path, v_out_file, 'a');
                    UTL_FILE.put_line (v_file_handle,
                          CHR (10) || TO_CHAR (SYSDATE, 'DD-MON-YYYY HH:MI:SS'));
                   UTL_FILE.put_line (v_file_handle, 'sortest_best_sct_scorest');
                   --check for an open cursor before opening
                   IF get_the_max_scores_S01_cur%ISOPEN
                       THEN
                        CLOSE get_the_max_scores_S01_cur;
                   END IF;
                OPEN get_the_max_scores_S01_cur;
                 LOOP
                       FETCH get_the_max_scores_S01_cur
                       INTO v_pidm, v_tscore, v_testcode,
                               v_test_date, v_equiv_ind;
                       EXIT WHEN get_the_max_scores_S01_cur%NOTFOUND;
                   IF  get_the_max_scores_S01_cur%FOUND 
                    THEN
                       INSERT INTO saturn.sortest
                       (sortest_pidm,sortest_tesc_code,sortest_test_date,sortest_test_score,
                        sortest_activity_date,sortest_equiv_ind,sortest_user_id,sortest_data_origin)
                         VALUES
                         (v_pidm,
                          'BSV',
                          v_test_date,
                          v_tscore,
                          SYSDATE,
                          v_equiv_ind,
                          p_user,
                          'best_test_scores process');
                   END IF;    
                   END LOOP;
                   COMMIT;
                   ---Initialize variables
                    v_pidm := NULL;
                    v_test_date := NULL; 
                    v_tscore  := NULL; 
                    v_equiv_ind :=  NULL;
                    v_testcode  :=  NULL;
                 CLOSE get_the_max_scores_S01_cur;
    ---- then another cursor does the same for S02 ... S03, and so on (see the single-cursor sketch below)
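    A minimal sketch of one alternative, so there is no need for one cursor per code: a single GROUP BY query returns the best score for all codes at once. The DECODE mapping to the BS* codes (listed further down) and the TO_NUMBER comparison are assumptions, since the scores are stored as VARCHAR2:
    -- Hedged sketch: one cursor covering all source codes.
    CURSOR get_all_max_scores_cur IS
       SELECT a.sortest_pidm,
              DECODE (a.sortest_tesc_code,
                      'S01', 'BSM',
                      'S02', 'BSE',
                      'S03', 'BSC',
                      'S04', 'BSW')                                  AS best_code,
              MAX (TO_NUMBER (a.sortest_test_score))                 AS best_score,
              MAX (a.sortest_test_date)
                 KEEP (DENSE_RANK LAST
                       ORDER BY TO_NUMBER (a.sortest_test_score))    AS best_date,
              MAX (a.sortest_equiv_ind)
                 KEEP (DENSE_RANK LAST
                       ORDER BY TO_NUMBER (a.sortest_test_score))    AS best_equiv_ind
         FROM saturn.sortest a
        WHERE a.sortest_tesc_code IN ('S01', 'S02', 'S03', 'S04')
          AND a.sortest_pidm = 133999   ---- parameter, as above
        GROUP BY a.sortest_pidm, a.sortest_tesc_code;
    One loop over this cursor can then insert or update the matching BS* row per code, instead of repeating the whole block four times.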

    Thank you. Here is the code; I changed the names of the tables, but it is the same concept. What I need is to extract the max score for each code (S01, S02, S03, S04) and then insert a record with a different code into the same table:
    BSM - Best Math SAT (S01)
    BSW - Best Writing SAT (S04)
    BSC - Best Reading SAT (S03)
    BSE - Best Reading SAT (S02)
    I need to be able to check whether the BS codes are already in the table (BSM ... BSC ...). If they are not, do an insert; if they are, get the maximum score again (students can submit more than one score for the same code, on any date) and, if the new maximum is greater than what is already in the database under the BS codes, do an update; if it is the same or less, don't update.
    I need the PERSON table because I use the ID as a parameter: the user can run the process for one ID or for all the records in the TEST table. (A single MERGE along these lines is sketched after the setup scripts below.)
    Thank you, I hope this is clear.
    CREATE TABLE TEST
    (
      TEST_PIDM             NUMBER(8)            NOT NULL,
      TEST_TESC_CODE        VARCHAR2(4 CHAR)     NOT NULL,
      TEST_TEST_DATE        DATE                 NOT NULL,
      TEST_TEST_SCORE       VARCHAR2(5 CHAR)     NOT NULL,
      TEST_ACTIVITY_DATE    DATE                 NOT NULL,
      TEST_EQUIV_IND        VARCHAR2(1 CHAR)     NOT NULL
    );
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'EB' ,TO_DATE( '01-JUN-2004', 'DD-MON-YYYY'),'710',SYSDATE,'N'
    FROM DUAL; 
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'M2' ,TO_DATE( '01-JUN-2005', 'DD-MON-YYYY'),'710',SYSDATE,'N'
    FROM DUAL; 
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'S01' ,TO_DATE( '01-JUN-2005', 'DD-MON-YYYY'),'750',SYSDATE,'N'
    FROM DUAL; 
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'S01' ,TO_DATE( '01-JUN-2005', 'DD-MON-YYYY'),'720',SYSDATE,'N'
    FROM DUAL; 
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'S02' ,TO_DATE( '01-JUN-2005', 'DD-MON-YYYY'),'740',SYSDATE,'N'
    FROM DUAL; 
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'S02' ,TO_DATE( '05-JUL-2005', 'DD-MON-YYYY'),'730',SYSDATE,'N'
    FROM DUAL ;
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'S03' ,TO_DATE( '01-JUN-2005', 'DD-MON-YYYY'),'780',SYSDATE,'N'
    FROM DUAL; 
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'S03' ,TO_DATE( '05-JUL-2005', 'DD-MON-YYYY'),'740',SYSDATE,'N'
    FROM DUAL; 
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'S04' ,TO_DATE( '01-JUN-2005', 'DD-MON-YYYY'),'770',SYSDATE,'N'
    FROM DUAL; 
    INSERT INTO TEST
    ( TEST_PIDM, TEST_TESC_CODE,TEST_TEST_DATE, TEST_TEST_SCORE, TEST_ACTIVITY_DATE,TEST_EQUIV_IND)
    SELECT
    128019,'S04' ,TO_DATE( '05-JUL-2005', 'DD-MON-YYYY'),'740',SYSDATE,'N'
    FROM DUAL; 
    CREATE TABLE PERSON
    (
      PERSON_PIDM                NUMBER(8)         NOT NULL,
      PERSON_ID                  VARCHAR2(9 CHAR)  NOT NULL
    );
    INSERT INTO PERSON
    ( PERSON_PIDM, PERSON_ID )
    SELECT
    128019, '003334556'
    FROM DUAL;
    CREATE TABLE VALTSC
    (
      VALTSC_CODE             VARCHAR2(4 CHAR)     NOT NULL,
      VALTSC_DESC             VARCHAR2(30 CHAR)
    );
    INSERT INTO VALTSC
    ( VALTSC_CODE, VALTSC_DESC )
    SELECT 'S01', 'XXS01' FROM DUAL;
    INSERT INTO VALTSC
    ( VALTSC_CODE, VALTSC_DESC )
    SELECT 'S02', 'XXS02' FROM DUAL;
    INSERT INTO VALTSC
    ( VALTSC_CODE, VALTSC_DESC )
    SELECT 'S03', 'XXS03' FROM DUAL;
    INSERT INTO VALTSC
    ( VALTSC_CODE, VALTSC_DESC )
    SELECT 'S04', 'XXS04' FROM DUAL;
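    With these sample tables in place, a minimal sketch of a single MERGE that covers both cases: it inserts the BS* row when it is missing and updates it only when the new maximum beats the stored score. The DECODE mapping, the TO_NUMBER comparisons (scores are stored as VARCHAR2), and the 'N' default for TEST_EQUIV_IND are assumptions to adjust:
    -- Hedged sketch: insert-or-update of the best score per code in one statement.
    MERGE INTO test t
    USING (SELECT test_pidm,
                  DECODE (test_tesc_code,
                          'S01', 'BSM',
                          'S02', 'BSE',
                          'S03', 'BSC',
                          'S04', 'BSW')                             AS best_code,
                  MAX (TO_NUMBER (test_test_score))                 AS best_score,
                  MAX (test_test_date)
                     KEEP (DENSE_RANK LAST
                           ORDER BY TO_NUMBER (test_test_score))    AS best_date
             FROM test
            WHERE test_tesc_code IN ('S01', 'S02', 'S03', 'S04')
            -- to run for a single student, join PERSON here and
            -- filter on person_id = p_student_id
            GROUP BY test_pidm, test_tesc_code) s
    ON (t.test_pidm = s.test_pidm AND t.test_tesc_code = s.best_code)
    WHEN MATCHED THEN
       UPDATE SET t.test_test_score    = TO_CHAR (s.best_score),
                  t.test_test_date     = s.best_date,
                  t.test_activity_date = SYSDATE
        WHERE TO_NUMBER (t.test_test_score) < s.best_score
    WHEN NOT MATCHED THEN
       INSERT (test_pidm, test_tesc_code, test_test_date,
               test_test_score, test_activity_date, test_equiv_ind)
       VALUES (s.test_pidm, s.best_code, s.best_date,
               TO_CHAR (s.best_score), SYSDATE, 'N');
    Run once per load, this replaces the separate EXISTS / NOT EXISTS procedures; only rows whose stored score is actually beaten are updated.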

  • HT1386 The first time I synced my iphone with my mac, I didn't realize that all of my photos from iphoto would transfer over to the phone. Now, I need to remove some, as they are taking up too much space. I cannot figure out how to remove them from the phone.

    The first time I synced my iphone 4 with my mac, I didn't realize that all of my photos from the iphoto library would transfer over to the phone (more than 3,000).   Now, I need to remove some, as they are taking up too much space.  I cannot figure out how to remove them from the phone.  I tried to uncheck boxes and sync again, but I get a message that there is no room on the iphone.  I've read as many articles as I can find, but still cannot manage this.  Thanks for any help.

    Open iTunes, connect the iPhone, select only what you want, and sync.
