Delete takes a long time

Hi Experts,
I have tried the following delete statement, but it is taking a long time and does not return any result.
DELETE FROM hs_table WHERE sno=1234 and effdt='25-MAY-10';
The table has about 90,000 records, and we are deleting only one record.
Thanks in advance ,
Please help, it's very urgent.

hoek wrote:
Sorry, typo, it's 'dd-mon-yyyy' and not 'dd-mon=yyyy'... I corrected that.
No need to be sorry. :-)
Actually, Oracle is pretty forgiving about delimiters in dates. Essentially, as long as there is a single character that cannot be mistaken for part of a valid date in the string or format mask, it should work.
SQL> select to_date('25!MAY@2010', 'dd%mon=yyyy')
  2  from dual;
TO_DATE('25
-----------
25-May-2010

John
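
Coming back to the original statement: it relies on an implicit conversion of the string '25-MAY-10', so it is safer to spell the format out. A minimal sketch, assuming EFFDT is a DATE column and that the value really means 25 May 2010:

DELETE FROM hs_table
 WHERE sno   = 1234
   AND effdt = TO_DATE('25-MAY-2010', 'DD-MON-YYYY');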

Similar Messages

  • Deleting takes a long time

    Hi,
    I have a table with around 12 million rows. It takes a very long time, more than 200 seconds, to delete even a single row when passing the primary id in the WHERE clause. This is a partitioned table, but the query runs fine on other, similarly partitioned tables with even more rows. Can anyone please help?
    Thanks
    SC

    Hi,
    I will turn on the trace to see the results. Here is the result from the plan table:
    SQL> SELECT * from plan_table;
    STATEMENT_ID 8091, timestamp 2/24/2009, optimizer ALL_ROWS, plan hash value 3964552726
    Id | Operation           | Name         | Rows | Bytes | Cost | Access predicates
     0 | DELETE STATEMENT    |              |    1 |    58 |    4 |
     1 |  DELETE             | PLANT_SAMPLE |      |       |      |
     2 |   INDEX UNIQUE SCAN | PS_PK_IND    |    1 |    58 |    3 | "T"."PA_ID"=1062054265771
    There are no table-level or row-level locks on this table, and the 'on delete cascade' option is off.
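    One more thing I plan to rule out is unindexed foreign keys on child tables referencing this one, since those force a scan of each child table for every parent row deleted. A rough dictionary query for that (just a sketch; the parent owner/table names are taken from the plan above, and you may need ALL_/USER_ views instead of DBA_):
    SELECT c.owner, c.table_name, c.constraint_name
      FROM dba_constraints c
      JOIN dba_constraints p
        ON p.owner = c.r_owner
       AND p.constraint_name = c.r_constraint_name
     WHERE c.constraint_type = 'R'
       AND p.owner = 'PLANT'
       AND p.table_name = 'PLANT_SAMPLE';
    -- For every child constraint returned, check (via DBA_CONS_COLUMNS) that the
    -- foreign key columns are covered by an index on the child table.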
    Thanks
    SC

  • Client Deletion Takes a Long Time

    Hi All,
    Need help. Every month in our quality system we do a client refresh: we delete the existing data from quality, take an export from production and import it into quality. The client import takes 15 hrs, but deleting the client takes almost 36 hrs. The PCL4/2/1 tables take the most time to delete, and we are not even sure they delete all the data. These look like cluster tables, so what can be done to make the client deletion more efficient?
    Is it safe to delete the client at OS level? What other options are there...?

    Hi
    PCL4 is not a cluster; at least on EhP6 it is a transparent table at DB level, which means the table exists at both SAP and DB level.
    There is a way to process deletion faster, at least for Oracle (but it could also work for other DB)
    Create a new temporary table with a CTAS command (create table as select) that only includes the data for the clients you want to keep.
    You can then drop the original table and rename the temporary table to the original name.
    Watch out: CTAS does not copy indexes, constraints or default values.
    If you are using an Oracle DB, the Data Pump query parameter can be the best option: export the data for the clients you want to keep (with a query option such as query="where MANDT in ('200', '300')"), truncate the table and import the data back.
    The point here is that copying / inserting new records is faster than deleting.
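    As a rough sketch of the CTAS route at the Oracle level (table name and client values are examples only; in a real SAP system you would coordinate this with Basis and recreate indexes, constraints and defaults afterwards):
    CREATE TABLE pcl4_keep AS
      SELECT * FROM pcl4 WHERE mandt IN ('200', '300');  -- clients to keep
    DROP TABLE pcl4;
    ALTER TABLE pcl4_keep RENAME TO pcl4;
    -- Remember: CTAS does not copy indexes, constraints or default values,
    -- so they must be recreated before handing the system back.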
    Regards

  • Delete Index in Process Chain Takes a Long Time after SAP BI 7.0 SP 27

    After upgrading to SAP BI 7.0 SP 27, the Delete Index and Create Index processes in the process chain take a long time.
    For example: deleting the index for 0SD_C03 takes around 55 minutes.
    Before the SP upgrade it took around 2 minutes to delete the index for 0SD_C03.
    Regards
    Madhu P Menon

    Hi,
    Normally, index creation or deletion can take a long time when your database statistics are not updated properly. So check the statistics after your data loading is completed and the index generation is done, and rebuild the database statistics.
    Then try to recheck...
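    For an Oracle-based system, a minimal sketch of refreshing the statistics of the cube's fact table (the schema and table names are assumptions for a standard SAP installation; in practice this is usually scheduled via DB13 / BRCONNECT rather than run by hand):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SAPSR3',          -- assumed SAP schema name
        tabname => '/BI0/F0SD_C03',   -- assumed fact table of cube 0SD_C03
        cascade => TRUE);             -- refresh index statistics as well
    END;
    /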
    Regards,
    Satya

  • BPM process chain takes a long time to process

    We have BI7, NetWeaver 2004s on Oracle and SUN Solaris.
    There is a process chain (BPM) which pulls data from the CRM system into BW. The scheduled time to run this chain is 0034 hrs, and it should ideally complete before/around 0830 hrs. Now the problem is that every alternate day this chain behaves normally and completes well before 0830 hrs, but on the other days it fails. There are almost 40 chains running daily; some are event-triggered (dependent on each other) and some run in parallel. In this BPM process chain there are usually 5 requests, with 3 delta and 2 full uploads (master data). The delta uploads finish in 30 minutes without any issues and with very few records transferred. The first full upload runs from 0034 hrs to approximately 0130 hrs and the second from 0130 hrs to 0230 hrs. If the first upload gets delayed, the people who initiate these chains stop the second full upload and continue it after all the other process chains are completed. Sometimes this entire BPM process chain takes 17-18 hrs to complete!
    No other loads run in CRM or BW while these process chains are running.
    CRM has background jobs to push IDocs to BW; they run every 2 minutes and complete successfully.
    Yesterday this chain completed successfully (well within the stipulated time) with over 33,00,000 records transferred, but sometimes it has failed to transfer even 12,00,000 records!
    I am attaching a zip file; please refer to "21 to 26 Analysis screen shot.doc" from the zip file.
    Within the zip file there is also "Normal timings of daily process chains.xls"; the name explains it.
    Also within the zip file, please refer to "BPM Infoprovider and data source screen shot.doc"; the infopackage (page 2) which was used in the process chain is not displayed later on page 6, BUT THE CHAIN COMPLETED SUCCESSFULLY.
    We have analyzed:
    1) The PSA data for the BPM process chain for the past few days
    2) The info providers for the BPM process chain for the past few days
    3) The ODS entries for the BPM process chain for the past few days
    4) The point of failure of the BPM process chain for the past few days
    5) The overall performance of all the process chains for the past few days
    6) The number of requests in BW for this process chain
    7) The load on the CRM system for the past few days when this process chain ran on the BW system
    As per our analysis, there are a couple of things which can be fixed in the BW system:
    1) The partner agreement (transaction WE20) defined for the partner LS/BP3CLNT475 mentions, for both message types RSSEND and RSINFO, collect IDocs and pack size = 1. Since pack size = 1 generates one tRFC call per IDoc, it should be changed to 10 so that fewer tRFCs are generated, which means less overhead for the BW server and better performance.
    2) In the definition of the destination for the concerned RFC in BW (SM59), the "Technical Settings" tab has the "Load balancing" option set to "No". We are planning to set it to "Yes".
    But we believe that, although these changes will bring some improvement in performance, they are not the root cause of the abnormal behaviour of this chain, since the chain runs successfully every alternate day with approximately the same amount of load.
    I was not able to attach the many screenshots or the information I gathered during my analysis. Please advise how I can attach these files.
    Best Regards,


  • SELECT statement takes a long time

    Hi All,
    In the following code, if T_QMIH-EQUNR contains blank or space values, the SELECT statement takes a long time to access the data from the OBJK table. If T_QMIH-EQUNR contains values other than blank, performance is good and it fetches the data very fast.
    We already have an index on EQUNR in the OBJK table.
    Only for blank entries does it take so much time. Can anybody tell me why it behaves like this for blank entries?
    IF NOT T_QMIH[] IS INITIAL.
      SORT T_QMIH BY EQUNR.
      REFRESH T_OBJK.
      SELECT EQUNR OBKNR
             FROM OBJK INTO TABLE T_OBJK
             FOR ALL ENTRIES IN T_QMIH
             WHERE TASER = 'SER01'
               AND EQUNR = T_QMIH-EQUNR.
    ENDIF.
    Thanks
    Ajay

    Hi
    You can use the field QMIH-QMNUM with OBJK-IHNUM.
    In the QMIH table EQUNR is not part of the primary key, so it will have multiple entries.
    So, to improve the performance, use a dummy internal table for QMIH, sort it by the fields used in FOR ALL ENTRIES,
    delete adjacent duplicates from d_qmih, and use that table in the FOR ALL ENTRIES clause, as in the snippet below.
    This will improve the performance.
    Also list the fields in the sequence of the index, and use the primary key fields in the SELECT.
    " D_QMIH is a dummy internal table with the same structure as T_QMIH
    IF NOT T_QMIH[] IS INITIAL.
      D_QMIH[] = T_QMIH[].
      SORT D_QMIH BY QMNUM EQUNR.
      DELETE ADJACENT DUPLICATES FROM D_QMIH COMPARING QMNUM EQUNR.
      REFRESH T_OBJK.
      SELECT EQUNR OBKNR
             FROM OBJK INTO TABLE T_OBJK
             FOR ALL ENTRIES IN D_QMIH
             WHERE IHNUM = D_QMIH-QMNUM
               AND TASER = 'SER01'
               AND EQUNR = D_QMIH-EQUNR.
    ENDIF.
    Try this and let me know.
    regards
    Shiva

  • Payables Account Analysis report takes a long time to produce XML output

    Hi,
    I am trying to get XML data for the Payables Account Analysis report. I have changed the output format of the concurrent program to XML.
    The report takes a long time to produce the XML data irrespective of the number of rows fetched, but the same report with text output runs very fast.
    Any reason why the XML output takes so long?
    thanks in advance
    Malathi.

    Hi,
    Thanks for the reply.
    As mentioned above, I deleted the Q_FLEXDATA group and ran the report; it takes less time.
    But will the report data not be affected when we delete the Q_FLEXDATA group? And why does this flexdata group affect the running time?
    Thanks,
    Malathi.

  • Reactivating aggregates takes a long time

    Hi All,
    Last week we were in the process of removing some data from an InfoCube which lay outside its retention period (we are still attempting to set up an archive)...
    Anyway, we would deactivate the aggregate on the cube and then do a selective delete on several different date ranges. After the deletes were completed, we would rebuild, or fill, the aggregate. The first time it took around 4.5 hours to rebuild, which seems long. Then, after deactivating the aggregate again and doing more deletes on the same cube the day after next, we tried rebuilding the aggregate once more. This time it took around 7 hours to complete, with less data in the InfoCube.
    Any ideas???

    Hi,
    There can be several reasons for it. The ones I know of are the following:
    1. The aggregate which you are deactivating and filling contains the date field along with another date/time field, which takes longer to fill once it has been adjusted after a deletion.
    2. Check the size of the aggregates in terms of records being added each time.
    3. Under Manage Aggregates there is something called the "Aggregate Tree" (Goto -> Aggregate Tree). Check the hierarchy of the aggregate you are filling there and fill in the given sequence; that will always be faster, regardless of how the aggregates are placed on the Manage tab.
    Hope this helps.
    Thanks,
    Pradip.

  • SL takes longer to start up and shut down after "IceClean"

    Greetings,
    After running "IceClean" (both maintenance and cleanup), SL seems (obviously) to take longer to start up and shut down.
    Any idea why and how can I resolve it?
    Thank you so much.
    Cheers.

    Hi,
    I was finally able to solve the problem. It was due to startup items: there were two trial applications (expired long ago) in the startup folder. After removing them, startup and shutdown are as fast as lightning.
    I think I "awakened" the startup applications after "IceClean", as IceClean had deleted their log files or something like that, and therefore the applications had to run again.
    To be fair to IceClean, the fault is mine.
    Thanks again.
    Cheers

  • The Application Takes a Long Time to Start

    Hello All,
    We are on Unix -> 64-bit -> Essbase 11.1.1.3.
    Problem description: the application is taking long to start up, around 5 to 6 minutes. This is the very first time it has happened.
    There were no specific changes made to the application in the recent releases.
    I have tried all options: 1. compacting the outline, 2. purging the application log, etc. All other applications on this host respond well except this one. Usually an application should not take more than 1 to 2 minutes to start up.
    There are no specific errors or XCP files recorded in the logs and folders.
    Appreciate your suggestions
    MS

    Thanks Jitendra and Prabhas,
    I know I posted this thread some time back and later had to jump onto a new release, so I did not get time to check your inputs.
    Well, I am back on this issue again. I have been working on various options to get this issue solved: "start of app takes a long time".
    Here are some details. We are on SunOS 64-bit, 12 CPUs with dual core, with Essbase 11.1.1.3 running on it. This is an ASO application with just 7 dimensions, of which the ORGANIZATION dimension is pretty huge, with multiple hierarchies enabled (both stored and dynamic) and more than 20,00,000 members including the alternate hierarchies (shared members).
    I did a smoke test by building dimension by dimension; the app was starting up in just *40* seconds. When I reached the ORG dimension and added more than 70,000 members, it fell sick again: the app went back to its old issue (takes more than 10 minutes to start).
    CPU Usage ranges between 3.1 % to 4 %
    PID USER NLWP PRI NI VSZ RSS S STIME ELAPSED %CPU COMMAND
    4424 user1 1 59 20 1608 1032 S 18:13:33 00:00 0.0 grep COMMAND
    4428 user1 1 59 20 1608 1032 S 18:13:33 00:00 0.0 grep ESS
    4766 user1 88 55 20 6814168 5684200 O 17:37:48 35:45 3.1 /path/xyz/masked/ASO_APP hgfedc NOCREAT.
    But my question here is: in last month's cube I had a similar number of members and nothing had really changed.
    Essbase gurus, please give me some hint to think outside the box now.
    Thanks
    MS

  • Report takes a long time for a few records

    Hi friends,
    I am facing a problem with my web-based ERP application, which is developed in .NET. When I open a report from my application, a file named "rpt conmgr cache" gets created in my temp folder.
    Because of this file, even for a few records the report takes too much time and opens very slowly. This happens only in some of the reports; the other reports work fine and do not create any file in the temp folder. Can you guide me on what this file is and what the solution could be?
    Thanks
    Mithun

    Hi Sabhajit,
    I have already checked the SQL query; it takes less than a second.
    If there are any other steps you want me to check, please let me know.
    thanks mithun

  • Photo Booth takes a long time to open, what can I do?

    Photo Booth takes a long time to open, what can I do?

    So do you think we can get an answer? I just bought my iMac, you know!!

  • MVIEW refresh takes a long time

    The materialized view takes a long time to refresh, but when I tried a plain select & insert into a table it was very fast.
    I executed the SQL and it takes just 1 min (total rows: 447),
    but when I refresh the MVIEW it takes 1.5 hrs (total rows: 447).
    MVIEW configuration:
    CREATE MATERIALIZED VIEW EVAL.EVALSEARCH_PRV_LWC
    TABLESPACE EVAL_T_S_01
    NOCACHE
    NOLOGGING
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    REFRESH FORCE ON DEMAND
    WITH PRIMARY KEY
    Not sure why there is so much difference.

    infant_raj wrote:
    The materialized view takes a long time to refresh, but when I tried a plain select & insert into a table it was very fast.
    I executed the SQL and it takes just 1 min (total rows: 447),
    but when I refresh the MVIEW it takes 1.5 hrs (total rows: 447).
    A SELECT does a consistent read.
    A MV refresh does that and also writes database data.
    These are not the same thing and cannot be directly compared.
    So instead of pointing at the SELECT execution time and asking why the MV refresh is not as fast, look instead at WHAT the refresh is doing and HOW it is doing it.
    Is the execution plan sane? What events are the top ones for the MV refresh? What are the wait states that contributes most to the processing time of the refresh?
    You cannot use the SELECT statement's execution time as a direct comparison metric. The work done by the refresh is more than the work done by the SELECT. You need to determine exactly what work is done by the refresh and whether that work is done in a reasonable time, and how other sessions are impacting the refresh (it could very well be blocked by another session).
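    If it turns out the time is going into deleting the old rows, one option worth testing is a non-atomic complete refresh, which truncates and reloads instead of deleting (a sketch only, using the MV name from the post; check that a temporarily empty MV is acceptable before using atomic_refresh => FALSE):
    BEGIN
      DBMS_MVIEW.REFRESH(
        list           => 'EVAL.EVALSEARCH_PRV_LWC',
        method         => 'C',      -- complete refresh
        atomic_refresh => FALSE);   -- truncate + direct-path insert instead of delete + insert
    END;
    /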

  • Takes a Long Time for Data Loading

    Hi All,
    Good morning. I am new to SDN.
    Currently I am using the DataSource 0CRM_SRV_PROCESS_H, which contains 225 fields. I am using around 40 of those fields in my report.
    Can I hide the remaining fields at the DataSource level itself (transaction RSA6)?
    Currently the data loading takes a long time to load the data from the PSA to the ODS (ODS 1).
    I am also pulling some data from another ODS (ODS 2) as a lookup, and it takes a long time to update the data in the active data table of the ODS.
    Can you please suggest how to improve the performance of the data loading in this case?
    Thanks & Regards,
    Siva.

    Hi,
    Yes, you can hide them; just check the Hide box for those fields. Are you on BI 7.0 or BW? Whatever the release, is the number of records huge?
    If so, you can split the records and execute; I mean, use the same InfoPackage and just execute it with different selections.
    Check in ST04 whether there are any locks or lock waits. Also go to SM37 and check whether any long-running job is there, and whether that job is progressing: double-click on the job, copy the PID from the job details, go to ST04, expand the node, and check whether you can find that PID there.
    Also check the system log in SM21 and the short dumps in ST22.
    Now, to improve performance, you can try to increase the virtual memory or the number of servers if possible; that will increase the number of work processes, since if many jobs run at the same time there will otherwise be no free work processes to proceed.
    Regards,
    Debjani

  • The 0CO_OM_OPA_6 IP in the process chains takes a long time to run

    Hi experts,
    The 0CO_OM_OPA_6 InfoPackage in the process chains takes a long time to run, around 5 hours in production.
    I have checked note 382329:
    -> indexes 1 and 4 are active
    -> index 4 was set to "Index does not exist in database system ORACLE"; I assigned it to "Indexes on all database systems" and ran the delta load in the development system, but I guess there is not much data in dev: it took 2-1/2 hrs to run, as it did earlier, so I did not find much difference in performance.
    As per note 549552 (CO line item extractors: performance), I have checked the table BWOM_SETTINGS; these are the settings in the ECC system:
    OLTPSOURCE      PARAM_NAME    PARAM_VALUE
    (blank)         OBJSELSIZE    (blank)
    (blank)         NOTSSELECT    (blank)
    0CO_OM_OPA_6    NOBLOCKING    (blank)
    Could you please check whether any other settings need to be made?
    Also, the InfoPackage has a selection criterion on FISCALYEAR/PERIOD from 2004-2099, and the init was done for the same period, so it is becoming difficult for me to load for a single year.
    Please suggest.

    The problem was that index 4 was not active at the database level. The SAP team recommended activating it in SE14; however, while doing so we faced a few issues. SE14 is a very sensitive transaction and should be handled carefully: the index should be activated, not created.
    OBJSELSIZE in the table BWOM_SETTINGS has to be marked 'X' to improve the selection, and index 4 should also be activated at the ABAP level, i.e. in table COEP -> Indexes -> Index 4, select "Index on all database systems" in place of "No database index". Once it is activated at the ABAP level, you can activate the same index at the database level.
    Be very careful when you execute it in SE14; it is best to use DB02 for this, as Basis tends to make fewer mistakes there.
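    For reference, a quick way to double-check the resulting values at the database level (a sketch; SE16 on BWOM_SETTINGS shows the same thing):
    SELECT oltpsource, param_name, param_value
      FROM bwom_settings
     WHERE param_name IN ('OBJSELSIZE', 'NOTSSELECT', 'NOBLOCKING');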
    Thanks. Hope this helps.
