Performance improvement possible by replacing the DELETE command?

Dear Specialists,
Does anybody have an idea how I could further improve the performance of the following part of a program?
I was wondering whether it would help to somehow avoid the deletion at the end of the pasted code...
  DATA gt_knb1 TYPE HASHED TABLE OF t_knb1 WITH UNIQUE KEY bukrs akont.
  IF s_bukrs[] IS INITIAL.
    SELECT akont COUNT(*) AS count FROM knb1
      INTO CORRESPONDING FIELDS OF TABLE gt_knb1        "#EC CI_NOFIRST
      GROUP BY akont.                                   "#EC CI_NOWHERE
  ELSE.
    SELECT bukrs akont COUNT(*) FROM knb1 INTO TABLE gt_knb1 "#EC CI_NOFIRST
      WHERE bukrs IN s_bukrs
      GROUP BY bukrs akont.                             "#EC CI_NOWHERE
  ENDIF.
  DELETE gt_knb1 WHERE bukrs IN s_bukrs AND akont = space.
Thanks a lot in advance
Best regards
Carsten

Manu D'Haeyer wrote:
Hi,
...The ugliest part for me anyway is this:
INTO CORRESPONDING FIELDS OF
... I think a dynamic internal table could help, so that gt_knb1 fits exactly to the fetched rows.
kr,
m.
Hello all,
please note that "CORRESPONDING FIELDS" is not so relevant nowadays. A few milliseconds are not going to change the picture.
Refer to this thread:
into corresponding fields of table VERSUS into table
Please stop recommending it in EACH AND EVERY thread appearing in this forum.
The correct answer is given by Volker above.
And by the way, are you really sure that your problem is in the DELETE statement? I would actually suspect the SELECT to be the time consumer.
Regards,
  Yuri
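
Volker's answer is not quoted in this excerpt, but the usual way to make the trailing DELETE unnecessary is to push its condition into the SELECT itself, so the database never returns the unwanted rows in the first place. A minimal sketch, assuming the DELETE's only purpose is to drop rows with an initial akont (note that when s_bukrs is empty, the condition bukrs IN s_bukrs matches every row anyway):

  DATA gt_knb1 TYPE HASHED TABLE OF t_knb1 WITH UNIQUE KEY bukrs akont.
  IF s_bukrs[] IS INITIAL.
    SELECT akont COUNT(*) AS count FROM knb1
      INTO CORRESPONDING FIELDS OF TABLE gt_knb1
      WHERE akont <> space               " filter in the database instead of a DELETE afterwards
      GROUP BY akont.
  ELSE.
    SELECT bukrs akont COUNT(*) FROM knb1 INTO TABLE gt_knb1
      WHERE bukrs IN s_bukrs
        AND akont <> space               " same filter in the restricted case
      GROUP BY bukrs akont.
  ENDIF.
  " No DELETE needed any more - the unwanted rows are never transferred from the database.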

Similar Messages

  • Performance issue possibly due to wrong parameters??

    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit.
    We have a program that runs every two weeks to process 3 million records in our Oracle 10g database. Processing normally takes about 6 hours. With no change to the program (which is a Java client program), the processing time has gotten longer and longer over the last few weeks.
    The processor on the database server was upgraded to Itanium, and during this upgrade the databases were striped to fix a read/write issue caused by poor configuration (it wasn't using multiple channels for processing, so everything ran super slow) when the server hardware was upgraded. Since the last upgrade to the processor, we've noticed many errors being generated on the server when our program runs (Java null reference errors). These errors never occurred before the upgrade.
    The upgrade was in July. In August we noticed the beginning of a degradation in performance - the process went from 6 hours to 10 hours. This month, it is taking 20 hours. Next month I fear it will be 40 hours.
    The program launches multiple sessions that work at once doing an update against the same table. The last time it ran it started at 6AM and by 1PM it was only 11% done. I looked at the session stats and saw the top 5 wait events:
    SQL*Net message from client-->
    totalwaits=11,997
    timewaited=2,070,070
    avgwait=172
    enq: TX - row lock contention-->
    totalwaits=587
    timewaited 65,614
    avgwait=111
    timeouts=180
    latch:cache buffers chains-->
    totalwaits=933
    timewaited=1,815
    avgwait=2
    db file sequential read-->
    totalwaits=1,426
    timewaited=1,519
    avgwait=1
    log file sync-->
    totalwaits=1,422
    timewaited=2,594
    avgwait=2
    It looks like all of these values are way too high and I'm wondering what parameters we could change on the database/server side that might improve performance in these areas.
    I read that increasing the INITRANS value to something greater than one would help with the row lock contention and timeouts.
    I also read that setting DB_CACHE_ADVICE to OFF would help with the cache buffers chains issue.
    Are these viable solutions? Changing the program is not an option right now. Any help is greatly appreciated.
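    For reference, INITRANS is an attribute of each table and index segment rather than an instance parameter, and raising it only affects blocks formatted after the change; existing blocks keep their old setting unless the segment is rebuilt. A hedged sketch with made-up object names:
    ALTER TABLE big_table INITRANS 8;               -- hypothetical table; applies to newly formatted blocks only
    ALTER TABLE big_table MOVE INITRANS 8;          -- rebuilds the segment so existing blocks pick up the new value
    ALTER INDEX big_table_pk REBUILD INITRANS 8;    -- indexes become unusable after a MOVE and must be rebuilt
    Whether that buys much here is unclear - the enq: TX time above is small compared to the other waits - so treat it as background information rather than a recommendation.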

    Rakesh jayappa wrote:
    Hi,
    Sorry, I am not quite getting your question, so I am guessing: you can reduce log file sync waits by
    COMMIT WRITE BATCH;
    COMMIT WRITE IMMEDIATE;
    or
    The disks or I/O subsystems where the redologs are placed may be too busy.
    - Reduce other I/O activity on the disks containing the redo logs, or use dedicated disks.
    - Move the redo logs to faster disks or a faster I/O subsystem.
    - Move the redolog files from RAID 5 devices. RAID 5 is not efficient for writes.
    - Alternate redo logs on different disks to minimize the effect of the archiver on the log writer.
    Kind Regards,
    Rakesh Jayappa

    My point is that even if you eliminate it completely, you have only eliminated a tiny fraction of the total wait time -- it may look like low-hanging fruit, but it's a very small piece of fruit indeed.

  • Performance Degraded, Possibly Due to CPU Time

    Hi Gurus,
    There is a utility in our application with which we can upload an Excel sheet containing data and schedule the timing of the job. When the job is executed, each row in the Excel sheet leads to DML operations on multiple tables, finally leading to the generation of a transaction number. At the start, around 100-120 transaction numbers were generated, which goes down drastically to around 30-35 after 6-7 hours. The AWR reports at the two instances show that CPU time has decreased considerably in the second case.
    I would like you experts to check the AWR reports and suggest the probable reason for the decrease in performance.
    Brief AWR Report When Performance was OK
    Snap Id Snap Time Sessions Curs/Sess
    Begin Snap: 2151 14-Dec-10 16:32:57 26 3.7
    End Snap: 2152 14-Dec-10 17:31:04 40 16.7
    Elapsed: 58.13 (mins)
    DB Time: 55.37 (mins)
    Cache Sizes
    ~~~~~~~~~~~ Begin End
    Buffer Cache: 436M 444M Std Block Size: 8K
    Shared Pool Size: 120M 120M Log Buffer: 6,968K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 27,541.56 1,747.07
    Logical reads: 49,830.97 3,160.97
    Block changes: 181.79 11.53
    Physical reads: 1,270.12 80.57
    Physical writes: 2.81 0.18
    User calls: 119.95 7.61
    Parses: 200.94 12.75
    Hard parses: 29.29 1.86
    Sorts: 91.80 5.82
    Logons: 0.03 0.00
    Executes: 457.16 29.00
    Transactions: 15.76
    % Blocks changed per Read: 0.36 Recursive Call %: 96.36
    Rollback per transaction %: 0.01 Rows per Sort: 270.64
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 100.00 Redo NoWait %: 100.00
    Buffer Hit %: 97.45 In-memory Sort %: 100.00
    Library Hit %: 90.18 Soft Parse %: 85.42
    Execute to Parse %: 56.05 Latch Hit %: 100.00
    Parse CPU to Parse Elapsd %: 98.04 % Non-Parse CPU: 94.98
    Shared Pool Statistics Begin End
    Memory Usage %: 72.65 84.55
    % SQL with executions>1: 71.49 75.08
    % Memory for SQL w/exec>1: 84.79 85.25
    Top 5 Timed Events Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time Wait Class
    CPU time 2,541 76.5
    db file scattered read 284,992 410 1 12.3 User I/O
    log file parallel write 31,188 145 5 4.4 System I/O
    TCP Socket (KGAS) 24 131 5459 3.9 Network
    log file sync 8,617 46 5 1.4 Commit
    Time Model Statistics DB/Inst: ABCTEST/abctest Snaps: 2151-2152
    -> Total time in database user-calls (DB Time): 3322.4s
    -> Statistics including the word "background" measure background process
    time, and so do not contribute to the DB time statistic
    -> Ordered by % or DB time desc, Statistic name
    Statistic Name Time (s) % of DB Time
    sql execute elapsed time 3,176.8 95.6
    DB CPU 2,541.1 76.5
    PL/SQL execution elapsed time 288.5 8.7
    parse time elapsed 278.7 8.4
    hard parse elapsed time 254.6 7.7
    PL/SQL compilation elapsed time 28.9 .9
    failed parse elapsed time 4.9 .1
    hard parse (sharing criteria) elapsed time 1.3 .0
    sequence load elapsed time 1.1 .0
    repeated bind elapsed time 1.1 .0
    connection management call elapsed time 0.7 .0
    hard parse (bind mismatch) elapsed time 0.3 .0
    DB time 3,322.4 N/A
    background elapsed time 197.1 N/A
    background cpu time 5.6 N/A
    Wait Class DB/Inst: ABCTEST/abctest Snaps: 2151-2152
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc
    Avg
    %Time Total Wait wait Waits
    Wait Class Waits -outs Time (s) (ms) /txn
    User I/O 292,720 .0 427 1 5.3
    System I/O 37,408 .0 190 5 0.7
    Network 272,062 .0 132 0 4.9
    Commit 8,617 .0 46 5 0.2
    Configuration 4 .0 2 593 0.0
    Application 3,212 .0 0 0 0.1
    Other 280 .4 0 0 0.0
    Concurrency 247 .0 0 0 0.0
    Wait Events DB/Inst: ABCTEST/abctest Snaps: 2151-2152
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    %Time Total Wait wait Waits
    Event Waits -outs Time (s) (ms) /txn
    db file scattered read 284,992 .0 410 1 5.2
    log file parallel write 31,188 .0 145 5 0.6
    TCP Socket (KGAS) 24 .0 131 5459 0.0
    log file sync 8,617 .0 46 5 0.2
    db file parallel write 4,215 .0 29 7 0.1
    db file sequential read 7,634 .0 16 2 0.1
    control file parallel write 1,202 .0 16 13 0.0
    Streams AQ: enqueue blocked 1 .0 2 2055 0.0
    control file sequential read 795 .0 1 1 0.0
    Data file init write 48 .0 0 9 0.0
    SQL*Net message to client 266,802 .0 0 0 4.9
    log file switch completion 3 .0 0 106 0.0
    SQL*Net break/reset to clien 3,212 .0 0 0 0.1
    SQL*Net more data to client 4,789 .0 0 0 0.1
    direct path write 23 .0 0 3 0.0
    rdbms ipc reply 67 .0 0 1 0.0
    kksfbc child completion 1 100.0 0 47 0.0
    latch: shared pool 213 .0 0 0 0.0
    latch: library cache 26 .0 0 1 0.0
    log file single write 4 .0 0 7 0.0
    log file sequential read 4 .0 0 5 0.0
    db file single write 3 .0 0 5 0.0
    os thread startup 3 .0 0 4 0.0
    enq: JS - queue lock 4 .0 0 3 0.0
    LGWR wait for redo copy 207 .0 0 0 0.0
    library cache pin 1 .0 0 6 0.0
    SQL*Net more data from clien 447 .0 0 0 0.0
    library cache load lock 1 .0 0 2 0.0
    latch: cache buffers chains 1 .0 0 0 0.0
    latch: row cache objects 1 .0 0 0 0.0
    direct path read 20 .0 0 0 0.0
    latch free 1 .0 0 0 0.0
    cursor: mutex S 1 .0 0 0 0.0
    SQL*Net message from client 266,789 .0 64,143 240 4.9
    Streams AQ: qmn slave idle w 124 .0 3,488 28127 0.0
    Streams AQ: qmn coordinator 257 51.4 3,488 13571 0.0
    virtual circuit status 116 100.0 3,480 29999 0.0
    Streams AQ: waiting for time 5 60.0 745 148902 0.0
    jobq slave wait 52 96.2 155 2987 0.0
    PL/SQL lock timer 16 100.0 16 995 0.0
    class slave wait 1 100.0 5 4995 0.0
    Background Wait Events DB/Inst: ABCTEST/abctest Snaps: 2151-2152
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    %Time Total Wait wait Waits
    Event Waits -outs Time (s) (ms) /txn
    log file parallel write 31,188 .0 145 5 0.6
    db file parallel write 4,215 .0 29 7 0.1
    control file parallel write 1,193 .0 16 13 0.0
    Streams AQ: enqueue blocked 1 .0 2 2055 0.0
    control file sequential read 691 .0 0 1 0.0
    db file sequential read 66 .0 0 5 0.0
    direct path write 23 .0 0 3 0.0
    log file single write 4 .0 0 7 0.0
    log file sequential read 4 .0 0 5 0.0
    events in waitclass Other 211 .0 0 0 0.0
    os thread startup 3 .0 0 4 0.0
    db file scattered read 1 .0 0 13 0.0
    latch: shared pool 5 .0 0 0 0.0
    direct path read 20 .0 0 0 0.0
    latch: library cache 1 .0 0 0 0.0
    rdbms ipc message 34,411 32.3 30,621 890 0.6
    Streams AQ: qmn slave idle w 124 .0 3,488 28127 0.0
    Streams AQ: qmn coordinator 257 51.4 3,488 13571 0.0
    pmon timer 1,235 100.0 3,486 2822 0.0
    smon timer 19 47.4 3,460 182099 0.0
    Streams AQ: waiting for time 5 60.0 745 148902 0.0
    class slave wait 1 100.0 5 4995 0.0
    Operating System Statistics DB/Inst: ABCTEST/abctest Snaps: 2151-2152
    Statistic Total
    AVG_BUSY_TIME 81,951
    AVG_IDLE_TIME 266,698
    AVG_SYS_TIME 10,482
    AVG_USER_TIME 71,389
    BUSY_TIME 328,163
    IDLE_TIME 1,067,144
    SYS_TIME 42,281
    USER_TIME 285,882
    RSRC_MGR_CPU_WAIT_TIME 0
    VM_IN_BYTES 1,625,600,000
    VM_OUT_BYTES 145,162,240
    PHYSICAL_MEMORY_BYTES 3,755,851,776
    NUM_CPUS 4
    NUM_CPU_CORES 1
    Brief AWR Report When Performance Deteriorated
    Snap Id Snap Time Sessions Curs/Sess
    Begin Snap: 2168 15-Dec-10 08:31:05 32 18.4
    End Snap: 2169 15-Dec-10 09:30:56 32 18.3
    Elapsed: 59.85 (mins)
    DB Time: 17.97 (mins)
    Cache Sizes
    ~~~~~~~~~~~ Begin End
    Buffer Cache: 448M 448M Std Block Size: 8K
    Shared Pool Size: 116M 116M Log Buffer: 6,968K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 10,503.58 1,792.02
    Logical reads: 17,583.21 2,999.87
    Block changes: 68.60 11.70
    Physical reads: 472.37 80.59
    Physical writes: 1.54 0.26
    User calls: 39.12 6.67
    Parses: 53.32 9.10
    Hard parses: 7.99 1.36
    Sorts: 13.84 2.36
    Logons: 0.00 0.00
    Executes: 130.30 22.23
    Transactions: 5.86
    % Blocks changed per Read: 0.39 Recursive Call %: 94.39
    Rollback per transaction %: 0.00 Rows per Sort: 691.64
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 100.00 Redo NoWait %: 100.00
    Buffer Hit %: 97.31 In-memory Sort %: 100.00
    Library Hit %: 92.41 Soft Parse %: 85.02
    Execute to Parse %: 59.08 Latch Hit %: 100.00
    Parse CPU to Parse Elapsd %: 100.28 % Non-Parse CPU: 95.35
    Shared Pool Statistics Begin End
    Memory Usage %: 88.40 88.48
    % SQL with executions>1: 76.15 80.48
    % Memory for SQL w/exec>1: 86.82 88.85
    Top 5 Timed Events Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time Wait Class
    CPU time 918 85.1
    db file scattered read 113,003 127 1 11.7 User I/O
    log file parallel write 11,978 52 4 4.8 System I/O
    db file parallel write 3,089 16 5 1.4 System I/O
    control file parallel write 1,217 15 13 1.4 System I/O
    Time Model Statistics DB/Inst: ABCTEST/abctest Snaps: 2168-2169
    -> Total time in database user-calls (DB Time): 1078.1s
    -> Statistics including the word "background" measure background process
    time, and so do not contribute to the DB time statistic
    -> Ordered by % or DB time desc, Statistic name
    Statistic Name Time (s) % of DB Time
    sql execute elapsed time 1,032.1 95.7
    DB CPU 917.6 85.1
    parse time elapsed 71.8 6.7
    hard parse elapsed time 52.4 4.9
    PL/SQL execution elapsed time 7.2 .7
    PL/SQL compilation elapsed time 6.2 .6
    failed parse elapsed time 1.8 .2
    sequence load elapsed time 0.4 .0
    repeated bind elapsed time 0.3 .0
    connection management call elapsed time 0.1 .0
    hard parse (sharing criteria) elapsed time 0.0 .0
    hard parse (bind mismatch) elapsed time 0.0 .0
    DB time 1,078.1 N/A
    background elapsed time 89.4 N/A
    background cpu time 6.4 N/A
    Wait Class DB/Inst: ABCTEST/abctest Snaps: 2168-2169
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc
    Avg
    %Time Total Wait wait Waits
    Wait Class Waits -outs Time (s) (ms) /txn
    User I/O 122,810 .0 133 1 5.8
    System I/O 17,013 .0 83 5 0.8
    Commit 3,129 .0 14 5 0.1
    Network 90,186 .0 0 0 4.3
    Configuration 2 .0 0 63 0.0
    Application 1,120 .0 0 0 0.1
    Other 112 .0 0 0 0.0
    Concurrency 2 .0 0 6 0.0
    Wait Events DB/Inst: ABCTEST/abctest Snaps: 2168-2169
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    %Time Total Wait wait Waits
    Event Waits -outs Time (s) (ms) /txn
    db file scattered read 113,003 .0 127 1 5.4
    log file parallel write 11,978 .0 52 4 0.6
    db file parallel write 3,089 .0 16 5 0.1
    control file parallel write 1,217 .0 15 13 0.1
    log file sync 3,129 .0 14 5 0.1
    db file sequential read 9,753 .0 6 1 0.5
    control file sequential read 725 .0 0 0 0.0
    Data file init write 32 .0 0 7 0.0
    SQL*Net message to client 88,906 .0 0 0 4.2
    log file switch completion 2 .0 0 63 0.0
    SQL*Net break/reset to clien 1,120 .0 0 0 0.1
    rdbms ipc reply 4 .0 0 8 0.0
    direct path write 10 .0 0 3 0.0
    SQL*Net more data to client 1,120 .0 0 0 0.1
    db file single write 2 .0 0 6 0.0
    os thread startup 2 .0 0 6 0.0
    log file single write 2 .0 0 4 0.0
    log file sequential read 2 .0 0 3 0.0
    SQL*Net more data from clien 160 .0 0 0 0.0
    LGWR wait for redo copy 108 .0 0 0 0.0
    direct path read 10 .0 0 0 0.0
    SQL*Net message from client 88,906 .0 55,500 624 4.2
    virtual circuit status 120 100.0 3,588 29900 0.0
    Streams AQ: qmn slave idle w 127 .0 3,550 27949 0.0
    Streams AQ: qmn coordinator 260 51.2 3,550 13652 0.0
    class slave wait 2 100.0 10 4994 0.0
    SGA: MMAN sleep for componen 9 22.2 0 4 0.0
    Background Wait Events DB/Inst: ABCTEST/abctest Snaps: 2168-2169
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    %Time Total Wait wait Waits
    Event Waits -outs Time (s) (ms) /txn
    log file parallel write 11,978 .0 52 4 0.6
    db file parallel write 3,089 .0 16 5 0.1
    control file parallel write 1,211 .0 15 13 0.1
    db file scattered read 175 .0 0 1 0.0
    control file sequential read 33 .0 0 2 0.0
    db file sequential read 53 .0 0 1 0.0
    direct path write 10 .0 0 3 0.0
    os thread startup 2 .0 0 6 0.0
    log file single write 2 .0 0 4 0.0
    log file sequential read 2 .0 0 3 0.0
    events in waitclass Other 108 .0 0 0 0.0
    direct path read 10 .0 0 0 0.0
    rdbms ipc message 19,991 57.4 31,320 1567 0.9
    pmon timer 1,208 100.0 3,590 2972 0.1
    Streams AQ: qmn slave idle w 127 .0 3,550 27949 0.0
    Streams AQ: qmn coordinator 260 51.2 3,550 13652 0.0
    smon timer 12 100.0 3,302 275149 0.0
    SGA: MMAN sleep for componen 9 22.2 0 4 0.0
    Operating System Statistics DB/Inst: ABCTEST/abctest Snaps: 2168-2169
    Statistic Total
    AVG_BUSY_TIME 30,152
    AVG_IDLE_TIME 328,781
    AVG_SYS_TIME 4,312
    AVG_USER_TIME 25,757
    BUSY_TIME 120,981
    IDLE_TIME 1,315,433
    SYS_TIME 17,612
    USER_TIME 103,369
    RSRC_MGR_CPU_WAIT_TIME 0
    VM_IN_BYTES 353,361,920
    VM_OUT_BYTES 163,041,280
    PHYSICAL_MEMORY_BYTES 3,755,851,776
    NUM_CPUS 4
    NUM_CPU_CORES 1
    Request you to help me.
    Thanks in Advance,
    Rajesh

    Hi CKPT,
    Thanks for your reply.
    The main finding that I got from the ADDM report is the same in both cases (i.e. when performance was good initially vis-a-vis when performance deteriorated):
    FINDING 1: 100% impact (3234 seconds)
    Significant virtual memory paging was detected on the host operating system.
    RECOMMENDATION 1: Host Configuration, 100% benefit (3234 seconds)
    ACTION: Host operating system was experiencing significant paging but no
    particular root cause could be detected. Investigate processes that
    do not belong to this instance running on the host that are consuming
    significant amount of virtual memory. Also consider adding more
    physical memory to the host.
    I am still unable to find out the reason... please help.
    Thanks
    Rajesh

  • Performance slow on DELETE command on global temporary table!

    Hi,
    I have a delete on a global temporary table that is taking a long time!
    Does anyone have a clue about how to improve DELETE commands against global temporary tables?
    Tks,
    Paulo Portugal

    Same problem here!
    <QUOTE>
    SELECT DISTINCT PDT_CHILD.SUP_ID, PDT_CHILD.SUB_ID,
    PDT_CHILD.SUB_LEAF_FLAG_ID
    FROM
    PJI_FP_AGGR_RBS_T PDT_CHILD WHERE 1=1 AND PDT_CHILD.SUP_ID = :B2 AND
    PDT_CHILD.SUP_ID <> PDT_CHILD.SUB_ID AND PDT_CHILD.WORKER_ID = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 88561 20.71 20.23 0 0 0 0
    Fetch 90269 926.19 906.80 45 45164134 0 176545
    total 178831 946.91 927.03 45 45164134 0 176545
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (APPS) (recursive depth: 1)
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 HASH (UNIQUE)
    0 TABLE ACCESS (FULL) OF 'PJI_FP_AGGR_RBS_T' (TABLE (TEMP))
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    latch: row cache objects 1 0.00 0.00
    direct path write temp 3 0.00 0.00
    direct path read temp 3 0.00 0.00
    </QUOTE>
    The fetch count is too high for a TEMP table... Any help would be much appreciated!
    Note: Please teach me how to format the above in my future posts on the OTN forums.
    ===

  • "Message from Webpage (error) There was an error in the browser while setting properties into the page HTML, possibly due to invalid URLs or other values. Please try again or use different property values."

    I created a site column at the root of my site and I have publishing turned on.  I selected the Hyperlink with formatting and constraints for publishing.
    I went to my subsite and added the column.  The request was to have "Open in new tab" for their hyperlinks.  I was able to get the column to be added and yesterday we added items without a problem. 
    The problem arose when, today, a user told me that he could not edit the hyperlink.  He has modify / delete permissions on this list.
    He would edit the item in a custom list and click on the address ("Click to add a new hyperlink"), and then he would get the error below after successfully putting in the Selected URL (http://www.xxxxxx.com), the Open Link in New Window checkbox, the Display Text, and the Tooltip:
    "Message from Webpage  There was an error in the browser while setting properties into the page HTML, possibly due to invalid URLs or other values. Please try again or use different property values."
    We are on IE 9.0.8.1112 x86, Windows 7 SP1 Enterprise Edition x64
    The farm is running SharePoint 2010 SP2 Enterprise Edition August 2013 CU Mark 2, 14.0.7106.5002
    I saw another post, linked below, from someone who had a similar problem where an IISReset fixed it, as it did for this problem. I wonder if this is resolved in the latest CU of SharePoint, the April 2014 CU?
    Summary from this link below: Comment out, below, in AssetPickers.js
    //callbackThis.VerifyAnchorElement(HtmlElement, Config);
    perform IISReset
    This is referenced in the item below:
    http://social.technet.microsoft.com/Forums/en-US/d51a3899-e8ea-475e-89e9-770db550c06e/message-from-webpage-error-there-was-an-error-in-the-browser-while-setting?forum=sharepointgeneralprevious
    This is possibly the same information that I saw, with the above link as reference.
    http://seanshares.com/post/69022029652/having-problems-with-sharepoint-publishing-links-after
    Again, if I update my SharePoint 2010 farm to April 2014 CU is this going to resolve the issue I have?
    I don't mind changing the JS file, however I'd like to know / see if there is anything official regarding this instead of my having to change files.
    Thank you!
    Matt

    We had the same issue after applying SP2 & the August CU. We opened a case with MSFT and got the same resolution as you mentioned.
    I blogged about this issue and included the official reference.
    Later, MSFT released a hotfix for this on December 10, 2013, which I am 100% positive should be part of future CUs.
    So if you apply the April CU then you will be fine.
    Please remember to mark your question as answered & vote helpful if this solves/helps your problem. Thanks - WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Skip the DELETE command on logical standby

    Hi All,
    I want to skip the DELETE command on logical standby.
    DB Version - 10.2
    OS - Linux
    Primary DB and logical standby DB .
    In our DB schema there are some transaction tables. We delete data from those tables with DELETE commands.
    The DELETE command also deletes the data from the logical standby DB, but we want to skip it on the logical standby DB.
    I used the following for that and got an error:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.SKIP (stmt =>'DELETE TABLE', schema_name =>'TEST',object_name =>'TRANS',proc_name => null);
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    But I got error
    ERROR at line 1:
    ORA-06550: line 1, column 7:
    PLS-00306: wrong number or types of arguments in call to 'SKIP'
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    When I change stmt => 'DELETE TABLE' to stmt => 'DML', no error happens.
    Please help me to solve this issue. This is urgent.
    Thanks in advance.
    Regards

    Dear aditi2,
    Actually it is so simple to understand the problem. Please read the following documentation and try to understand the SKIP procedure.
    http://download.oracle.com/docs/cd/B14117_01/appdev.101/b10802/d_lsbydb.htm#997290
    *SKIP Procedure*
    Use the SKIP procedure to define filters that prevent the application of SQL statements on the logical standby database.
    By default, all SQL statements executed on a primary database are applied to a logical standby database.
    If only a subset of activity on a primary database is of interest for application to the standby database,
    you can use the SKIP procedure to define filters that prevent the application of SQL statements on the logical standby database.
    While skipping (ignoring) SQL statements is the primary goal of filters,
    it is also possible to associate a stored procedure with a DDL filter so that runtime determinations can be made whether to skip the statement,
    execute this statement, or execute a replacement statement.
    Syntax
    DBMS_LOGSTDBY.SKIP (
         stmt                      IN VARCHAR2,
         schema_name               IN VARCHAR2,
         object_name               IN VARCHAR2,
         proc_name                 IN VARCHAR2,
         use_like                  IN BOOLEAN,
         esc                       IN CHAR1);
    Hope That Helps.
    Ogan
    Edited by: Ogan Ozdogan on 30.Tem.2010 13:03
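
    For reference, a call that matches the documented signature above, and the poster's own observation that stmt => 'DML' is accepted, would look roughly like this (hedged sketch; note that the 'DML' filter skips all DML on the table, not only DELETEs - schema and table names are taken from the original post):
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    -- 'DML' is the documented statement filter for INSERT/UPDATE/DELETE; proc_name is left NULL here
    EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'TEST', object_name => 'TRANS', proc_name => NULL);
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;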

  • Performance improvement in OBIEE 11.1.1.5

    Hi all,
    In OBIEE 11.1.1.5, reports take a long time to load. Kindly provide me with some performance improvement guides.
    Thanks,
    Haree.

    Hi Haree,
    Steps to improve the performance.
    1. implement caching mechanism
    2. use aggregates
    3. use aggregate navigation
    4. limit the number of initialisation blocks
    5. turn off logging
    6. carry out calculations in database
    7. use materialized views if possible
    8. use database hints
    9. alter the NQSConfig.INI parameters (an illustrative [ CACHE ] snippet is sketched at the end of this reply)
    Note: calculate all the aggregates in the repository itself and create a fast refresh for the MVs (materialized views).
    You can also schedule an iBot to run the report every hour or so, so that the report data is cached and the BI Server serves it from the cache when a user runs the report.
    This is the latest version for OBIEE11g.
    http://blogs.oracle.com/pa/resource/Oracle_OBIEE_Tuning_Guide.pdf
    Report level:
    1. Enable cache -- in NQSConfig.INI, change the cache ENABLE setting from NO to YES.
    2. GO--> Physical layer --> right click table--> properties --> check cacheable.
    3. Try to implement Aggregate mechanism.
    4.Create Index/Partition in Database level.
    There are multiple other ways to fine tune reports from OBIEE side itself:
    1) You can check your measures' granularity in reports and have level-based measures created in the RPD using the OBIEE Aggregate Persistence Wizard:
    http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
    This will pick your aggregate tables instead of the detailed tables.
    2) You can use Caching Seeding options. Using ibot or Using NQCMD command utility
    http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
    http://satyaobieesolutions.blogspot.in/2012/07/different-to-manage-cache-in-obiee-one.html
    OR
    http://hiteshbiblog.blogspot.com/2010/08/obiee-schedule-purge-and-re-build-of.html
    Using one of the above 2 methods, you can fine tune your reports and reduce the query time.
    Also, to be on the safe side, take the physical SQL from the log and run it directly on the DB to see the time taken, and check the explain plan with the help of a DBA.
    Hope this helps.
    Thanks,
    Satya
    Edited by: Satya Ranki Reddy on Aug 12, 2012 7:39 PM
    Edited by: Satya Ranki Reddy on Aug 12, 2012 8:12 PM
    Edited by: Satya Ranki Reddy on Aug 12, 2012 8:20 PM
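
    As an illustration of the NQSConfig.INI cache settings referred to in the steps above, a sketch of the [ CACHE ] section follows; the parameter names come from the standard configuration file, but the path and sizes are made-up examples, and in 11g some of these values are managed centrally through Enterprise Manager rather than edited by hand:
    [ CACHE ]
    ENABLE = YES;
    DATA_STORAGE_PATHS = "/u01/obiee/cache" 500 MB;   # hypothetical path and size
    MAX_ROWS_PER_CACHE_ENTRY = 100000;                # 0 means unlimited rows per entry
    MAX_CACHE_ENTRY_SIZE = 20 MB;
    MAX_CACHE_ENTRIES = 1000;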

  • MV Refresh Performance Improvements in 11g

    Hi there,
    the 11g new features guide, says in section "1.4.1.8 Refresh Performance Improvements":
    "Refresh operations on materialized views are now faster with the following improvements:
    1. Refresh statement combinations (merge and delete)
    2. Removal of unnecessary refresh hint
    3. Index creation for UNION ALL MV
    4. PCT refresh possible for UNION ALL MV
    While I understand (3.) and (4.), I don't quite understand (1.) and (2.). Has there been a change in the internal implementation of the refresh (away from a single MERGE statement)? If yes, which one? Is there a note or something in the knowledge base about these enhancements in 11g? I couldn't find any.
    These considerations are necessary for the decision whether or not to migrate to 11g...
    Thanks in advance.

    I am not quite sure what you mean. Do you mean perhaps that the MV logs work correctly when you perform MERGE statements with DELETE on the detail tables of the MV?
    And where is the performance improvement? What is the refresh hint?
    Although I am using MVs and MV logs at the moment, our app performs deletes and inserts in the background (no merges). The MV-log-based fast refresh scales very badly, which means that performance drops very quickly as the changed data set grows.

  • Why GN_INVOICE_CREATE has no performance improvement even in HANA landscape?

    Hi All,
    We have a pricing update program which is used to update the price for a material-customer combination (CMC). This update is done using the FM 'GN_INVOICE_CREATE'.
    The logic is designed to loop on customers, wherein this FM will be called passing all the materials valid for that customer.
    This process is taking days (approx. 5 days) to execute and update the CMC of 100 million records.
    Hence we are planning to move towards HANA for better improvement in performance.
    We designed the same programs in the HANA landscape and executed it in both systems for 1 customer and 1000 material combination.
    Unfortunately, both systems gave the same runtime of around 27 seconds for the execution.
    This is very disappointing, considering the performance improvement we expected from the HANA landscape.
    Could anyone shed light on any areas where we are missing out and why no performance improvement was obtained?
    Also, are there any configuration-related changes to be made in the HANA landscape for better performance?
    The details regarding both the systems are as below.
    Suite on HANA:
    SAP_BASIS : 740
    SAP_APPL  : 617
    ECC
    SAP_BASIS : 731
    SAP_APPL  : 606
    [Screenshots of the HANA and ECC system details were attached to the original post.]
    Thanks & regards,
    Naseem

    Hi,
    just to fill in on Lars' already exhaustive comments:
    Migrating to HANA gives you lots of options to replace your own functionality (custom ABAP code) with HANA artifacts - views or SQLScript procedures. This is where you can really gain on performance. Expecting ABAP code to automatically run faster on HANA may be unrealistic, since it depends on the functionality of the code and how well it "translates" to a HANA environment. The key to really minimize run time is to replace DB calls with specific HANA views or procedures, then call these from your code.
    I wrote a blog on this; you might find it useful as a general introduction:
    A practical example of ABAP on HANA optimization
    When it comes to SAP standard code, like your mentioned FM, it is true that SAP is migrating some of this functionality to HANA-optimized versions, but this doesn't mean everything will be optimized in one go. This particular FM is probably not among those being initially selected for "HANAification", so you basically have to either create your own functionality (which might not be advisable due to the fact that this might violate data integrity) or just be patient.
    But again, the beauty of HANA lies in the brand new options for developers to utilize the new ways of pushing code down to the DB server. Check out the recommendations from Lars and you'll find yourself embarking on a new and exciting journey!
    Also - as a good starting point - check out the HANA developer course on open.sap.com.
    Regards,
    Trond

  • UCS manager blade memory "Delete" command

    I have a question, and I cannot seem to find an answer in the documentation or other questions on the support community. In UCS Manager, if you click a blade server, click the Inventory tab, then click the Memory subtab, you see a graphical display of the memory modules. If you click on a module to select it and then right-click it, the menu available includes a "Delete" command. Does this shut down that particular module, or does it just keep UCS Manager from monitoring it? Or is it something else more damaging or permanent? Is it possible to "Add" the module back once that command has been issued? If possible, please provide any documentation or links that would answer this definitively. Thanks!
    Gil                

    Thanks for the information! I work with a group that is performing some failure testing, and we were trying to figure out a way to fail a memory module while the blade is running. We were not sure of the purpose of the mentioned command in UCS Manager and did not want to try it without knowing whether it had serious consequences.

  • Delete command is not deleting all rows

    Hi All,
    Database version 10.2.0.2
    The DELETE command is not deleting all rows; it deletes only some subset of the rows it should delete. Every time I delete and roll back, the next attempt deletes a different set of rows - the count is different every time, within a range, but never complete. See the following:
    select count(*) from test where evt_id in (select evt_id from test1);
    COUNT(*)
    27105
    delete from test where evt_id in (select evt_id from test1);
    16045 rows deleted.
    select count(*) from test where evt_id in (select evt_id from test1);
    11060
    rollback;
    Againg the same procedure -
    select count(*) from test where evt_id in (select evt_id from test1);
    COUNT(*)
    27105
    delete from test where evt_id in (select evt_id from test1);
    14320 rows deleted.
    select count(*) from test where evt_id in (select evt_id from test1);
    COUNT(*)
    12785
    Why is it not deleting all the 27k rows in one shot? Is there any bug related to that?
    Thanks
    Abhinav

    Odd that what looked like identical statements produced different results, both for the counts and the deletes. The most likely cause is that your data is changing - as Fahd suggested, perhaps a simultaneous load is taking place.
    The delete issue is probably not due to a bug. Possible but unlikely.
    If any evt_id values are NULL they won't be deleted with the subquery - a NULL in test.evt_id will never match a NULL in test1.evt_id.
    Have you tried alternative subqueries - a correlated EXISTS subquery for instance?

  • RMAN-03009: failure of delete command on ... ORA-19606: Cannot copy or rest

    one server using 11.2.0.1.0 under Suse Linux
    configured catalog db, main db & jobs & ... almost everything with enterprise manager
    keep backups 14 days
    To keep 14 full online backups I had to activate archive log mode, and now I am trying to get rid of those unwanted additional files.
    Additionally, I make a dump every night (little to no DB activity) and compress it myself.
    After some days the backup job complained that it cannot delete old files.
    EM / manage all backups / crosscheck all
    CROSSCHECK BACKUPSET;
    CROSSCHECK COPY;
    successful
    EM / manage all backups / delete old backups
    DELETE NOPROMPT OBSOLETE;
    failed.
    script result
    Recovery Manager: Release 11.2.0.1.0 - Production on Tue Sep 7 17:20:55 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    RMAN>
    connected to target database: <SID> (DBID=773091283)
    RMAN>
    connected to recovery catalog database
    RMAN>
    echo set on
    RMAN> DELETE NOPROMPT OBSOLETE;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to recovery window of 14 days
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=196 device type=DISK
    Deleting the following obsolete backups and copies:
    Type Key Completion Time Filename/Handle
    Control File Copy 3831 03-AUG-10 /opt/oracle/base/product/11gR1/db/dbs/snapcf_<SID>.f
    Backup Set 18750 23-AUG-10
    Backup Piece 18754 23-AUG-10 /srv/ora/data/flash_recovery_area/<SID>/backupset/2010_08_23/o1_mf_nnndf_BACKUP_<SID>CH0_673c0hbp_.bkp
    Backup Set 18751 23-AUG-10
    Backup Piece 18755 23-AUG-10 /srv/ora/data/flash_recovery_area/<SID>/backupset/2010_08_23/o1_mf_nnndf_BACKUP_<SID>CH0_673c0hbo_.bkp
    Backup Set 19479 24-AUG-10
    Backup Piece 19482 24-AUG-10 /tmp/o0lm3qh9_1_1
    Backup Set 19490 24-AUG-10
    Backup Set 20087 24-AUG-10
    Backup Piece 20089 24-AUG-10 /srv/ora/data/flash_recovery_area/<SID>/autobackup/2010_08_24/o1_mf_s_727891232_677n40r3_.bkp
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of delete command on ORA_DISK_1 channel at 09/07/2010 17:20:57
    ORA-19606: Cannot copy or restore to snapshot control file
    exit;
    Recovery Manager complete.
    Google found some questions, but no fitting answers.
    So far:
    - I checked the folder & user rights.
    - I found some /tmp/ files - yes, I "back up" the already backed-up, to-be-deleted archive logs to /tmp - I only activated archive log mode so I can do online backups.
    - I managed to log in via RMAN and execute DELETE OBSOLETE manually - result above.
    - Currently I am deleting the sets one by one (delete backupset 12345) from the list manually, to find the problem set.
    Does someone have a good idea what went wrong?
    Additional: is there a way to let Oracle delete the empty archive log directories after deleting the logs within?
    18:00 - the command "RMAN> BACKUP CURRENT CONTROLFILE" also fails.
    Edited by: 793286 on 07.09.2010 09:00

    Meanwhile I managed to delete all backupsets one by one.
    The problem with $ORACLE_HOME/dbs/snapcf_<SID>.f persists.
    cd $ORACLE_HOME/dbs
    mv snapcf_<SID>.f snapcf_<SID>.f.bak
    # replaced actual dbsidname with <SID>
    RMAN> delete obsolete;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to recovery window of 14 days
    using channel ORA_DISK_1
    Deleting the following obsolete backups and copies:
    Type Key Completion Time Filename/Handle
    Control File Copy 3831 03-AUG-10 /opt/oracle/base/product/11gR1/db/dbs/snapcf_TARMED1P.f
    Do you really want to delete the above objects (enter YES or NO)? y
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of delete command on ORA_DISK_1 channel at 09/08/2010 09:44:07
    ORA-19606: Cannot copy or restore to snapshot control file
    mv snapcf_<SID>.f.bak snapcf_<SID>.f
    RMAN> backup current controlfile;
    Starting backup at 08-SEP-10
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    channel ORA_DISK_1: starting piece 1 at 08-SEP-10
    channel ORA_DISK_1: finished piece 1 at 08-SEP-10
    piece handle=/srv/ora/data/flash_recovery_area/TARMED1P/backupset/2010_09_08/o1_mf_ncnnf_TAG20100908T095000_68gj1b2m_.bkp tag=TAG20100908T095000 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 08-SEP-10
    Starting Control File and SPFILE Autobackup at 08-SEP-10
    piece handle=/srv/ora/data/flash_recovery_area/TARMED1P/autobackup/2010_09_08/o1_mf_s_729165003_68gj1d1b_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 08-SEP-10
    RMAN> delete obsolete;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to recovery window of 14 days
    using channel ORA_DISK_1
    Deleting the following obsolete backups and copies:
    Type Key Completion Time Filename/Handle
    Control File Copy 3831 03-AUG-10 /opt/oracle/base/product/11gR1/db/dbs/snapcf_<SID>.f
    Do you really want to delete the above objects (enter YES or NO)? y
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of delete command on ORA_DISK_1 channel at 09/08/2010 09:52:37
    ORA-19606: Cannot copy or restore to snapshot control file
    No difference in effect whether I changed the filename to .bak via a second terminal or not.
    RMAN> delete controlfilecopy 3831;
    fails whether the file exists or not.
    Any chance to reset/kill that file?
    Is there a need to restart the DBMS after renaming the file?
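
    One approach sometimes suggested for this situation - offered here as a hedged sketch, not verified against this exact system - is to point RMAN's snapshot controlfile at a different name, so that the obsolete controlfile copy sitting at the default snapshot location is no longer the configured snapshot controlfile and can be crosschecked and deleted; the new filename below is made up:
    RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
    RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/base/product/11gR1/db/dbs/snapcf_<SID>_new.f';
    RMAN> CROSSCHECK CONTROLFILECOPY '/opt/oracle/base/product/11gR1/db/dbs/snapcf_<SID>.f';
    RMAN> DELETE EXPIRED CONTROLFILECOPY '/opt/oracle/base/product/11gR1/db/dbs/snapcf_<SID>.f';
    RMAN> DELETE NOPROMPT OBSOLETE;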

  • Photoshop CS6 Extended Load/Replace Texture Command Issues

    In the 3d Materials Panel in Adobe Photoshop Extended CS6, the REPLACE TEXTURE and LOAD TEXTURE commands for the 3d materials seem to create a single  "smart" texture that all other textures created via the "replace/load texture" commands link to.
    For instance, to texture a sphere using a Photoshop file, I first created a file called "horizontal stripes for 3d peel.psd" and saved it on my desktop.
    I went to the Materials Panel>Diffuse>(Clicked icon to right of "Diffuse") and chose "Load Texture>"horizontal stripes for 3d peel.psd," to apply it to my sphere.  I then edited this diffuse file by right-clicking the icon to the right of "Diffuse," choosing "Edit" and adding a color layer at the top of the layer stack, then saving (not saving as) and closing the Diffuse .PSB smart object  file.
    I then loaded a texture to the Opacity material of my sphere using the same Photoshop file on my desktop called "horizontal stripes for 3d peel.psd",
    going to Materials Panel>Opacity>(Clicking the  icon to the right of "Opacity") and choosing "Load Texture>"horizontal stripes for 3d peel.psd,"  When I opened this file to edit it by right-clicking the icon to right of "Opacity" and choosing "Edit," I found I'd loaded not the original file on my desktop that I had chosen, but my edited "Diffuse" file. Furthermore, these Diffuse and Opacity files appear not to be independently editable--they behave like linked Smart Objects, and when one is edited, the other is updated with the changes.
    Is this a bug? I can't find any mention of this behavior in the Photoshop User Manual. I'm using Windows 7, and I get the same behavior on 2 different computers with Photoshop.
    http://helpx.adobe.com/photoshop/using/3d-panel-settings-photoshop-extended.html
    I had hoped to use the Load/Replace texture command as a quick way to load up independently editable versions of the same file to the different 3d material attributes (opacity, shine, etc.), but that doesn't seem possible.   Or is there a better way to do this?
    Thanks!
    Jeff Combs

    Postscript:
    I just started working with extruded 3D shapes and 3D text, and, as it turns out, this is actually a dandy feature, since these 3D objects each contain multiple elements to apply textures to (e.g. Front Inflation Material, Front Bevel Material, Extrusion Material, etc.) that would be a pain to edit individually rather than via a single Smart Layer, especially if you plan to use the same materials for each.
    So, anyway, onwards to more adventures as I climb the Photoshop 3d learning curve. I've been a wee-bit frustrated that this learning curve seems to be more difficult because the Adobe Photoshop_CS6 reference PDF file seems to reference mostly CS5 and, more often than not, references CS5 interfaces. For instance, in the CS6 manual—the sections on 3d concepts and Tools, 3d Panel Settings, 3D rendering and saving—to name a few—all explicitly refer to Photoshop CS5 Extended! So, if anyone knows of a detailed source of CS6 information, I’d be grateful.

  • Mapping for ND_FORM to ND_FORM not possible due to recursion

    Hi Experts,
    I am trying to define an external context mapping with the following steps. Say there are two components, ZCOMP1 (in which I am using the other component) and ZCOMP2 (which is used in ZCOMP1):
    1. In the component ZCOMP2 I define ZCOMP1 under the used components.
    2. In the component controller of ZCOMP2, under the Properties tab, I create a controller usage of ZCOMP1, and two entries are created.
    3. I go to the Context tab of the component controller of ZCOMP2 and drag and drop the ND_FORM node from ZCOMP1 onto the context node of ZCOMP2.
    When I do a check I get the error:
         Mapping for ND_FORM to ND_FORM not possible due to recursion
    I don't understand why I am getting this error even though the node I am trying to map is not recursive.
    This is what I see in long text of error:
    Message no. SWDP_WB_TOOL263
    Diagnosis
    Mapping from ND_FORM to ND_FORM is not permitted, as ND_FORM has its own mapping that refers directly or indirectly to ND_FORM.
    System Response
    The mapping can neither be created nor used.
    Procedure
    If you receive this error message when you check a context or when you update the mapping to context node ND_FORM, delete the mapping to ND_FORM using the context menu function with the same name.
    Please help,
    Anubhav

    Hi Anubhav,
    the following restrictions apply to recursion nodes:
    1. You cannot nominate a recursive node to act as the data source in a context mapping relationship. Recursive node structures are restricted to the scope of a single controller.
    2. The root node of a context cannot be used for a recursion.
    Please check this...
    http://help.sap.com/saphelp_nw04s/helpdata/en/47/45641e80f81962e10000000a114a6b/content.htm
    Also check this:
    Enhancement FPM of Trip application
    Cheers,
    Kris.

  • Delete Command button doesn't take more than one parameter while update command does

    Hi,
    Does anybody have an idea WHY SharePoint does not send the parameter information to the Delete command while the exact same parameter is being sent to the Update command? The data is being pulled from an asp:TextBox bound to the 'comments' field in the data source, which happens to be the field I need to update. The code works for Update commands but not for Delete commands.
    Unfortunately I have to use SharePoint Designer because SP is restricted at work, so I can't write code behind the scenes. I would appreciate any help; here's my code:
    <%@ Page Language="C#" masterpagefile="../_catalogs/masterpage/v4.master" title="Test" inherits="Microsoft.SharePoint.WebPartPages.WebPartPage, Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" meta:progid="SharePoint.WebPartPage.Document" meta:webpartpageexpansion="full" %>
    <%@ Register tagprefix="SPSWC" namespace="Microsoft.SharePoint.Portal.WebControls" assembly="Microsoft.SharePoint.Portal, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    <%@ Register tagprefix="cc2" namespace="Microsoft.SharePoint.WebControls" assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    <%@ Register tagprefix="WebUI" namespace="Microsoft.Office.InfoPath.Server.Controls.WebUI" assembly="Microsoft.Office.InfoPath.Server, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    <%@ Register tagprefix="WebPartPages" namespace="Microsoft.SharePoint.WebPartPages" assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    <asp:Content id="Content1" runat="server" contentplaceholderid="PlaceHolderMain">
    <asp:SqlDataSource runat="server" ProviderName="System.Data.SqlClient" UpdateCommand="sp_updateStartedApprovals" ID="SqlDataSource2" ConnectionString="Data Source=MCARLOSJ2;User ID=sa;Password=****;Initial Catalog=MyDB;" SelectCommand="SELECT * FROM mainView " __designer:customcommand="true" UpdateCommandType="StoredProcedure" DeleteCommand="sp_rejectApprovals" DeleteCommandType="StoredProcedure">
    <UpdateParameters>
    <asp:Parameter Name="comments" Type="String"/>
    <asp:parameter Name="id" Type="Int32" />
    </UpdateParameters>
    <DeleteParameters>
    <asp:Parameter Name="comments" Type="String"/>
    <asp:parameter Name="id" Type="Int32"/>
    </DeleteParameters>
    </asp:SqlDataSource>
    <asp:GridView runat="server" id="GridView1" AutoGenerateColumns="False" DataSourceID="SqlDataSource2" DataKeyNames="id" GridLines="None" ForeColor="#333333" CellPadding="4">
    <RowStyle BackColor="#F7F6F3" ForeColor="#333333" />
    <Columns>
    <asp:boundfield DataField="description" HeaderText="Status" ReadOnly="True" SortExpression="description">
    </asp:boundfield>
    <asp:boundfield DataField="Employee Last Name" HeaderText="Employee Last Name" ReadOnly="True" SortExpression="Employee Last Name">
    </asp:boundfield>
    <asp:boundfield DataField="Employee First Name" HeaderText="Employee First Name" ReadOnly="True" SortExpression="Employee First Name">
    </asp:boundfield>
    <asp:boundfield DataField="Pending approval" HeaderText="Pending approval" ReadOnly="True" SortExpression="Pending approval">
    </asp:boundfield>
    <asp:boundfield DataField="Atnmt %" HeaderText="Atnmt %" ReadOnly="True" SortExpression="Atnmt %">
    </asp:boundfield>
    <asp:boundfield DataField="Country" HeaderText="Country" ReadOnly="True" SortExpression="Country">
    </asp:boundfield>
    <asp:boundfield DataField="comments" HeaderText="comments" ReadOnly="True" SortExpression="Comments">
    </asp:boundfield>
    <asp:boundfield DataField="processStartedDate" DataFormatString="{0:MM/dd/yyyy}" HeaderText="Date Opened" ReadOnly="True" SortExpression="processStartedDate">
    </asp:boundfield>
    <asp:boundfield DataField="Due Date" DataFormatString="{0:MM/dd/yyyy}" HeaderText="Due Date" ReadOnly="True" SortExpression="Due Date">
    </asp:boundfield>
    <asp:templatefield>
    <ItemTemplate>
    <asp:TextBox runat="server" id="comments" Text='<%# Bind("comments") %>'/>
    <asp:LinkButton runat="server" Text="Approve" id="Button1" CommandName="Update" CausesValidation="False" />
    <asp:LinkButton runat="server" Text="Reject" id="Button2" CommandName="Delete" CausesValidation="false"/>
    </ItemTemplate>
    </asp:templatefield>
    </Columns>
    <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
    <PagerStyle HorizontalAlign="Center" BackColor="#284775" ForeColor="White" />
    <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" />
    <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
    <EditRowStyle BackColor="#999999" />
    <AlternatingRowStyle BackColor="White" ForeColor="#284775" />
    </asp:GridView>
    </asp:Content>

    Hi,
    you have multiple options here:
    1) upload as a script:
    a) save the statements in a file
    b) go to sql workshop > sql scripts
    c) upload script and run the script
    2) run the script line by line in the sql commands window directly:
    a) go to sql workshop > sql commands
    b) copy all statements there
    c) highlight the first statement with the mouse
    d) click "run" or press <ctrl>+enter
    3) use sql developer
    a) go to http://www.oracle.com/technology/products/database/sql_developer/index.html
    b) download and install
    c) connect to XE
    d) run the statements there
    Regards,
    ~Dietmar.

Maybe you are looking for

  • Mystery with itunes music store

    has anyone ever seen the following error message: iTunes could not connect to the Music Store. An unknown error occurred (-9807). I can connect to hear the sound bites of songs but get this message when I try to purchase anything. I don't have any prob

  • Impact on Packet delay and Jitter due to IPSec

    We are planning to use IPSec between two 7604 routers. Since IPSec adds more overhead to the packet, there will be an impact on the traffic. We would like to know the impact on packet delay and jitter due to IPSec on 7604 or 7606 routers.

  • SOUND ONLY Black screen on windows movie maker 8.1

    I have sound only and a black screen on my video in Windows Movie Maker 8.1

  • IOS 5 is available!

    iOS 5 is available. I am downloading it at this very moment!

  • 10.5.6 and DiskWarrior

    Hi all, Does anyone know if there is a version of DiskWarrior out that works with OS 10.5.6? I tried someone's version 4.1 and my MacPro won't boot from that disk. I notice on versiontracker that there is a version 4.1.1. Anyone tried it? Thanks. Den