Insert takes a lot of time

Hi, I have an INSERT that takes a lot of time.
It is a simple insert:
INSERT INTO TFE002
(X_NOMCORRECTO,C_TITULAR,X_APPCORRECTO,X_ANOVAL,
C_MODULAR,X_APMCORRECTO,C_USUARIO,C_POBLACION,
C_FORMATO,C_LOTE,C_SECUENCIA,C_PERIODO,
X_T,X_NI,X_GRADO,X_SEXO,X_LOTEJUNTOS,
X_ESTADO,X_CONTROL,F_REGISTRO,X_UBIGEO,
N_ORDEN, X_CONSISTENCIA, C_DEPA, C_PROV, C_DIST
)VALUES
(LS_NOMBRES, LS_TITULAR, LS_APPATERNO, W_ANIO,
LS_CENTROEDUCA, LS_APMATERNO, USER, LS_POBLACION,
LS_FORMATO, LS_LOTE, LS_SECUENCIAL, LS_PERIODO,
NULL, NULL, TO_NUMBER(LS_GP), LS_SEXO, NULL,
'1', NULL, NULL, LS_UBIGEO,
LN_CONT,'0', SUBSTR(LS_UBIGEO,1,2), SUBSTR(LS_UBIGEO,3,2), SUBSTR(LS_UBIGEO,5,2));
I searched for a trigger on the insert, but that wasn't the problem. Tablespaces are not the problem; they have enough space.
What else could it be?

Define "a lot of time". Exactly how long is this running?
Are you running a singleton INSERT, or are you calling this repeatedly in a loop?
Have you traced the session? Where is the time being spent?
"I search for some trigger in insert, but it wasn't the problem" Does that mean there are no triggers on this table?
Justin
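For reference, a minimal sketch of the tracing Justin is asking about, assuming Oracle 10g or later and access to the server's trace directory (the trace file name below is illustrative):

-- In the session that runs the slow INSERT:
EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
-- run the INSERT here, then:
EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;
-- Format the trace file from user_dump_dest on the server, e.g.:
--   tkprof mydb_ora_1234.trc out.txt sys=no sort=exeela
-- The report shows whether the time goes to CPU, I/O waits, index
-- maintenance, or lock waits, and whether the INSERT runs once or in a loop.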

Similar Messages

  • I spend a lot of time arranging my tab groups in the tab group view. But Panorama keeps rearranging them whenever I open up Firefox. How can I get it to keep the last arrangement I left the tab groups in?

    My laptop resolution changes many times a day because I use a docking station. I suspect that this could be causing Panorama to rearrange my tab groups. Nonetheless, I would like it to stop! Does anyone know how I can get it to comply?

    Hello. I have just tried the add-on you suggested, and there is no change.
    And no, there was no other add-on before, not that I know of. Mainly, I need this option while I'm on a forum (IP.Board powered); it is not really something so important, but it's a neat function for me.
    So, to be more detailed: I read a certain topic on this forum, and some of the posts from other members contain YouTube screens (looking the same as on the original YouTube page), showing the frame of a video.
    In earlier versions of Firefox, I used to click on a small YouTube logo button (right side, at the bottom, between the "watch on full screen" and "watch later" buttons). Clicking on this button automatically opened a new tab, leaving the forum tab as it is, with the original YouTube page and that certain video in the new tab. At the same time, on the Google search page, when clicking on a link, all links opened in the same tab.
    Now, after one of the latest Firefox updates, I noticed that some default settings were changed: in Google search, clicking on a link always opens the link in a new tab. I found out here on your forum how to set this back in about:config, and now links on a Google results page open in the same tab again (that's the way I like it).
    But YouTube screens on this forum are now also opened in the same tab, and that is what I don't like and am trying to change back.
    So, what I need: all links to open in the same tab after clicking, as I have it now, and YouTube screens (e.g. on this forum) to open in a new tab (by clicking the small YouTube logo, "watch this video on YouTube").
    I am pedantic, I know that, and I know this may look silly to some people, but if there is a way to do something, please tell me.
    Thanks for answering, and best regards.
    P.S.
    Is it possible that a forum update changed my setting, and not Firefox? Thanks again...

  • Inserting into a base table of a materialized view takes a lot of time

    Dear All,
    I have created a materialized view which refreshes on commit. The materialized view has query rewrite enabled. I have also created a materialized view log on the base table. While inserting into the base table, it takes a lot of time. Can you please tell me why?

    Dear Rahul,
    Here is my materialized view:
    create materialized view mv_test on prebuilt table refresh force on commit
    enable query rewrite as
    SELECT P.PID,
    SUM(HH_REGD) AS HH_REGD,
    SUM(INPRO_WORKS) AS INPRO_WORKS,
    SUM(COMP_WORKS) AS COMP_WORKS,
    SUM(SKILL_WAGE) AS SKILL_WAGE,
    SUM(UN_SKILL_WAGE) AS UN_SKILL_WAGE,
    SUM(WAGE_ADVANCE) AS WAGE_ADVANCE,
    SUM(MAT_AMT) AS MAT_AMT,
    SUM(DAYS) AS DAYS,
    P.INYYYYMM,P.FIN_YEAR
    FROM PROG_MONTHLY P
    WHERE SUBSTR(PID,5,2)<>'PP'
    GROUP BY PID,P.INYYYYMM,P.FIN_YEAR;
    Please also tell me whether ENABLE QUERY REWRITE causes any performance degradation.
    Thanks & Regards
    Kris
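    For context: an ON COMMIT materialized view makes every commit on the base table pay the refresh cost, so slow inserts are expected unless the view can fast-refresh. Below is a sketch of the log that fast refresh of this aggregate view typically needs, with the column list taken from the posted query; verify your own case with DBMS_MVIEW.EXPLAIN_MVIEW:

    CREATE MATERIALIZED VIEW LOG ON prog_monthly
      WITH ROWID, SEQUENCE
      (pid, inyyyymm, fin_year, hh_regd, inpro_works, comp_works,
       skill_wage, un_skill_wage, wage_advance, mat_amt, days)
      INCLUDING NEW VALUES;

    Fast refresh of an aggregate view also generally requires COUNT(*) and a COUNT(column) alongside each SUM(column) in the view's SELECT list.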

  • I spend lots of time staring at LP7...Glossy or Matte Screen??

    Hey everybody. OK, I know this isn't exactly an LP7 forum issue so much as a MacBook forum issue, but I thought it applies to all of us who have invested in the new Intel technology provided by Apple.
    Glossy or matte screen? That's what I'm trying to decide on, and I know there are some opinions floating around this forum that I'd like to hear! I'm spending more and more time in front of my LP7 and, as I am purchasing a new MacBook Pro, I want to know if the glossy screen would help, hinder, or really make no difference to my working environment. It seems enticing and easy on the eyes, and I'm planning on ordering the glossy, but I don't want to regret my decision because I'm already accustomed to the matte screen on my Titanium.
    Any thoughts? Please share! Thanks

    I had the same question but decided to go for the matte screen. The glossy screen is nice for photos, but the reflections get in the way. Whatever gains you get with the glossy screen in terms of color are not very relevant for Logic or audio software in general, are they?
    Hens Zimmerman
    http://soundsgoodpodcast.com

  • Loading an infopackage takes a lot of time

    Hi Friends,
    When I schedule and activate the process chain every day, the infopackages take a lot of time to load, around 5 to 6 hours.
    In ST13, when I click on the log ID name to see the process chain, the LOAD DATA infopackage shows green. After double-clicking on it, in the process monitor I see the request still running, yellow, but with 0 from 0 records, and it stays like that for hours.
    It's very slow.
    I need your guidance on this.
    Thanks for your help.

    Hi,
    I suggest you check a few places where you can see the status:
    1) SM37 job log (in the source system if the load is from R/3, or in BW if it's a datamart load). Give the request name, and it should give you the details about the request. If it's active, make sure that the job log is being updated at frequent intervals.
    Also see if there is any 'sysfail' for any data packet in SM37.
    2) SM66: get the job details (server name, PID etc. from SM37) and see in SM66 if the job is running or not (in the source system if the load is from R/3, or in BW if it's a datamart load). See if it's accessing/updating some tables or not doing anything at all.
    3) RSMO: see what is available in the details tab. It may be in the update rules.
    4) ST22: check if any short dump has occurred (in the source system if the load is from R/3, or in BW if it's a datamart load).
    5) Check in SM58 and BD87 for pending tRFCs and IDOCs.
    Once you identify the problem area, you can rectify the error.
    If all the records are in the PSA, you can pull them from the PSA to the target. Otherwise you may have to pull them again from the source infoprovider.
    If it's running and you can see it active in SM66, you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening at the system level, like reading from/inserting into tables.
    If you feel it's active and running, you can verify by checking whether the number of records has increased in the data tables.
    SM21 - System log can also be helpful.
    Regards,
    Habeeb

  • How to find which column caused an error at insert time

    Hi,
    I'm loading data into a table using a procedure. At the time of inserting the data, I got a precision error or a "value too large" error. Is there any way to find out at which column the error occurred?
    Thanks a lot for your help in advance.
    Thanks & Regards,
    Ramana.

    Hi
    Do you know how the data to be inserted is queried in the procedure? Is there a cursor or an 'insert..select..' statement?
    Ott Karesz
    http://www.trendo-kft.hu
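    One approach worth trying here, sketched below with illustrative table and column names (not from the post): DML error logging, available from Oracle 10gR2. Failed rows land in the error table together with the Oracle error message and the row values, which lets you see which column overflowed:

    -- One-time setup: create an error-logging table for the target
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('MY_TARGET', 'ERR$_MY_TARGET');

    INSERT INTO my_target (col1, col2)
    SELECT s.col1, s.col2
      FROM my_source s
    LOG ERRORS INTO err$_my_target ('load_1') REJECT LIMIT UNLIMITED;

    -- The bad rows and the error text are now queryable:
    SELECT ora_err_number$, ora_err_mesg$, col1, col2
      FROM err$_my_target;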

  • Minus takes a lot of time

    Hi all,
    My problem is as follows.
    I made a mapping that populated one of my tables in less than a minute. Then I changed it to do a MINUS against the target table before inserting. It works fine and takes almost the same time to populate the table, except when the target table is empty; in that case it takes four hours.
    If I take the MINUS out, it runs OK!
    What could be the problem?
    Owb 9.2
    Oracle database 9.2
    Thanks,
    Vitor.

    I have a splitter control that makes some records be inserted into a table and some others updated (a complex condition is needed to decide the update). After the splitter, on the flow that will insert, I'm doing a MINUS between those records and the target table (selection MINUS target). When the target table is empty, it takes a lot of time.
    I edited the mapping PL/SQL and tried to execute the cursor created for this same flow, and it takes the same time. I guess the intent of the original select is to return just the records that don't exist in the target table (that flow is for the insert; the update flow uses all records). When the target table already has most of the records, few rows are selected on the source (to insert) and the MINUS runs much faster.
    What is odd is that the original select alone finishes in a few seconds and the MINUS takes all the time, yet I have just 20,000 records with 4 small columns, two of them numbers. And the target table is empty!!!
    I have run several other MINUS statements on other, much more heavily loaded databases, and they worked really fast.
    Could it be database configuration?
    Thanks,
    Vitor
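    For reference, a sketch of the anti-join rewrite that often side-steps this (table and column names are illustrative, not from the mapping). MINUS sorts and compares both complete row sets even when the target is empty, while NOT EXISTS lets the optimizer probe the target instead:

    SELECT s.id, s.col1, s.col2, s.col3
      FROM src_table s
     WHERE NOT EXISTS (SELECT 1
                         FROM tgt_table t
                        WHERE t.id = s.id);
    -- Note: unlike MINUS, this compares only the key column
    -- and does not de-duplicate the source rows.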

  • Query is taking too much time for inserting into a temp table and for spooling

    Hi,
    I am working on a query optimization project where I have found a query which takes a very long time to execute.
    Temp table is defined as follows:
    DECLARE @CastSummary TABLE (CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
    ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50), Customer NVARCHAR(MAX), Targets FLOAT)
    INSERT INTO @CastSummary (CastID, SalesOrderID, ProductionOrderID, Actual,
                              ProductionOrderNo, SalesOrderNo, Customer, Targets)
    SELECT
    C.CastID,
    SO.SalesOrderID,
    PO.ProductionOrderID,
    F.CalculatedWeight,
    PO.ProductionOrderNo,
    SO.SalesOrderNo,
    SC.Name,
    SO.OrderQty
    FROM
    CastCast C
    JOIN Sales.Production PO ON PO.ProductionOrderID = C.ProductionOrderID
    join Sales.ProductionDetail d on d.ProductionOrderID = PO.ProductionOrderID
    LEFT JOIN Sales.SalesOrder SO ON d.SalesOrderID = SO.SalesOrderID
    LEFT JOIN FinishedGoods.Equipment F ON F.CastID = C.CastID
    JOIN Sales.Customer SC ON SC.CustomerID = SO.CustomerID
    WHERE
    (C.CreatedDate >= @StartDate AND C.CreatedDate < @EndDate)
    The plan shows almost 33% for the Table Insert when I insert the data into the temp table, and then 67% for spooling. I changed 2 LEFT JOINs in the above query to JOINs and tried again. Query execution became a bit faster, but it still needs improvement.
    How can I improve it further? Would it be good enough to create indexes on the temp table columns, or what if I use derived tables? Please suggest.
    -Pep

    How can I improve it further? Would it be good enough to create indexes on the temp table columns, or what if I use derived tables?
    I suggest you start with index tuning. Specifically, make sure the columns specified in the WHERE and JOIN clauses are properly indexed (ideally clustered or covering, and unique when possible). Changing outer joins to inner joins is appropriate if you don't need the outer joins in the first place.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
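    As a concrete illustration of Dan's advice, a sketch only (it assumes CastCast is a permanent table and that CreatedDate is the main filter column):

    -- Covers the WHERE predicate and the join back to the other tables
    CREATE NONCLUSTERED INDEX IX_CastCast_CreatedDate
        ON CastCast (CreatedDate)
        INCLUDE (CastID, ProductionOrderID);

    Indexing the table variable itself is more limited: before SQL Server 2014, @CastSummary can only get indexes via PRIMARY KEY or UNIQUE constraints in the DECLARE.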

  • Startup taking a LOT of Time

    Hi All,
    We have our application on WebLogic Server 8.1 SP5 and it takes a lot of time to start up.
    I found that the following lines are the ones taking up the majority of the time.
    Kindly suggest how we can improve on this.
    ####<Jun 15, 2009 4:14:59 PM GMT+05:30> <Info> <Deployer> <sunbom4> <cgServer> <main> <<WLS Kernel>> <> <BEA-149060> <Module uddi of application uddi successfully transitioned from unprepared to prepared on server cgServer.>
    This line took more than 2 minutes.
    ####<Jun 15, 2009 4:16:50 PM GMT+05:30> <Info> <Deployer> <sunbom4> <cgServer> <main> <<WLS Kernel>> <> <BEA-149060> <Module MLAppCoreWL610.jar of application IntellectMM successfully transitioned from unprepared to prepared on server cgServer.>
    ####<Jun 15, 2009 4:18:23 PM GMT+05:30> <Info> <Deployer> <sunbom4> <cgServer> <main> <<WLS Kernel>> <> <BEA-149060> <Module OpenFXWrapperEJB.jar of application IntellectMM successfully transitioned from unprepared to prepared on server cgServer.>
    The above 2 lines take 1.5 minutes each.

    I suggest that you take some thread dumps 10-15 secs apart during the time it's slow. The log timing alone won't tell you where the threads are spending time.

  • Calculate time for insertion

    Hi,
    Heads up: Oracle 10g.
    I have an insert SQL script for inserting into a table, and it takes a long time.
    I have tried loading with indexes disabled, but I want to discard that approach and instead narrow down how long a batch of 100 rows takes to insert from the script. Unfortunately I can't use the wonderful features of SQL*Loader, as the data comes in Excel format.
    Please help!

    804282 wrote:
    Yes, I make repetitive (manual insert statements) SQL scripts.
    This means that each of these SQL statements needs to be compiled (hard parsed). This is very expensive and consumes a lot of CPU. A better method is soft parsing: create a single SQL statement with bind variables. That compiled SQL can then be re-used again and again for inserting different row values.
    Basically, each of these SQL statements requires a hard parse and makes the SQL engine create a cursor. If 100,000 SQL statements need to be executed, then 100,000 cursors are created:
    insert into foo_tab values( 1 );
    insert into foo_tab values( 2 );
    ...
    insert into foo_tab values( 100000 );
    The correct method is using bind variables. You create a single SQL statement that contains bind variables. The SQL statement looks as follows:
    insert into foo_tab values( :var );
    The :var is the bind variable, and this cursor is executed again and again with different values supplied, thereby inserting new rows. One cursor is used for inserting 100,000 rows.
    Here's a practical example of the performance difference:
    SQL> create table foo_tab( id number );
    Table created.
    SQL> var maxrows number
    SQL> exec :maxrows := 100000;
    PL/SQL procedure successfully completed.
    SQL> -- insert using hard parsing
    SQL> declare
      2          t1      number;
      3  begin
      4          t1 := dbms_utility.get_cpu_time;
      5          for i in 1..:maxrows
      6          loop
      7                  execute immediate 'insert into foo_tab values( '||i||' )';
      8          end loop;
      9          dbms_output.put_line( 'time: '||to_char(dbms_utility.get_cpu_time-t1) );
    10          dbms_output.put_line( 'rows/sec: '||to_char (:maxrows/(dbms_utility.get_cpu_time-t1),9990.0));
    11          rollback;
    12  end;
    13  /
    time: 4144
    rows/sec:    24
    PL/SQL procedure successfully completed.
    SQL> -- inserting using soft parsing
    SQL> declare
      2          t1      number;
      3  begin
      4          t1 := dbms_utility.get_cpu_time;
      5          for i in 1..:maxrows
      6          loop
      7                  execute immediate 'insert into foo_tab values( :0 )' using i;
      8          end loop;
      9          dbms_output.put_line( 'time: '||to_char(dbms_utility.get_cpu_time-t1) );
    10          dbms_output.put_line( 'rows/sec: '||to_char (:maxrows/(dbms_utility.get_cpu_time-t1),9990.0));
    11          rollback;
    12  end;
    13  /
    time: 304
    rows/sec:   329
    PL/SQL procedure successfully completed.
    SQL>
    Using bind variables in the above example enables us to insert 329 rows per second. Not using bind variables and hard parsing instead reduces the insert rate to 24 rows per second. That is more than 13x slower!
    So how do you do it with your data? You cannot with a SQL script - the SQL*Plus client is a bit primitive and does not provide the ability to turn a script containing 100,000 insert statements into a bind-variable statement. You can force cursor sharing for that Oracle session - this work-around will improve performance a little, but it will still be significantly slower than explicitly using bind variables.
    Instead, you should look at using an external table or SQL*Loader and load the data in CSV format. These approaches ensure that proper SQL with bind variables is used. They can also load in parallel using Oracle's Parallel Query feature.
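    A minimal external-table sketch along those lines, assuming the Excel data has been saved as a one-column CSV and that you can create a directory object (the directory path, table names, and file name are illustrative):

    -- Requires the CREATE ANY DIRECTORY privilege (or ask your DBA)
    CREATE OR REPLACE DIRECTORY data_dir AS '/u01/loads';

    CREATE TABLE foo_ext ( id NUMBER )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('foo.csv')
    );

    -- Single set-based insert: one parse, no per-row cursors
    INSERT /*+ APPEND */ INTO foo_tab SELECT id FROM foo_ext;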

  • Taking a lot of time for loading

    Hi,
    We are loading data from HR to BI. The connection between HR and BI was set up recently. When I try to load data from HR to BI it takes a lot of time; for example, loading 5 records takes 8 hours. It's the same for every datasource. Should we change anything in the settings to make the IDOCs work properly? Thanks

    You have to isolate the part that is taking the time.
    - Is R/3 extraction quick? (You can check with RSA3 and see how long it takes.)
    - If R/3 extraction is slow, then is the selection using an index? How many records are there in the table / view?
    - Is there a user exit? Is user exit slow?
    You can find out using monitor screen:
    - After R/3 extraction completed, how long did it take to insert into PSA?
    - How long did it take in the update rules?
    - How long did it take to activate?
    Once you isolate the problem area, post your findings here and someone will help you.

  • Urgent!! Query takes a lot of time to execute and production is affected

    Hi,
    We have a data loading script. This script takes a lot of time to execute. As I am new to tuning, please let me know what is wrong with the query!
    Thanks In advance
    Query:
    =========
    INSERT /*+ PARALLEL */ INTO ris.ris_pi_profile
    (ID,COUNTRY_OF_CITIZENSHIP,IMMIGRATION_STATUS,SSN,DOB,GENDER,
    ETHNICITY,RACE,DEPARTMENT,DIVISION,INSTITUTION_ID,INST_EMAIL,EFFECT_DATE,ACADEMIC_TITLE,ACADEMIC_POSITION,
    OTH_PER_DATA,PCT_RESEARCH,PCT_TEACHING,PCT_CLINICAL,PCT_ADMIN,PCT_OTHER,PCT_TRAINING)
    SELECT
    ap.id,
    p.citizen_cd,
    decode(p.visa_cd,'CV',0,'F1',1,'H1',2,'H1B',3,'H2',4,'J1',5,'J2',6,'O1',7,'PR',8,'PRP',9,'TC',10,'TN',11,'TNN',12),
    (select n.soc_sec_num from sa.name n where n.name_id = p.name_id),
    (select n.birth_date from sa.name n where n.name_id = p.name_id),
    (select decode(n.gender_cd,'F',1,'M',2,0) from sa.name n where n.name_id = p.name_id),
    (select decode(n.ethnic_cd,'H',1) from sa.name n where n.name_id = p.name_id),
    (select decode(n.ethnic_cd,'A',2,'B',3,'I',1,'P',4,'W',5) from sa.name n where n.name_id = p.name_id),
    a.dept_name,
    a.div_name,
    a.inst_id,
    (select n.email from sa.name n where n.name_id = p.name_id),
    a.eff_date,
    ac.acad_pos_desc,
    p.acad_pos_cd,
    0,
    p.research_pct,
    p.teach_pct,
    p.patient_pct,
    p.admin_pct,
    p.other_pct,
    p.course_pct
    FROM
    appl1 ap,
    sa.personal_data p,
    sa.address a,
    sa.academic_pos_cd ac,
    profile_pi f
    WHERE
    p.project_role_cd='PI'
    and ap.appl_id=f.appl_id
    and p.appl_id=f.appl_id
    and p.name_id=f.name_id
    and a.addr_id=f.addr_id
    and p.acad_pos_cd=ac.acad_pos_cd
    AND EXISTS (select 1 from ris.ris_institution i WHERE i.id = a.inst_id)
    AND EXISTS (select 1 from sa.academic_pos_cd acp WHERE acp.acad_pos_cd = p.acad_pos_cd);
    In the execution plan I see lots of Nested Loops, Hash Joins,
    Index (Unique Scan),
    Table Access (By Index Rowid).
    This query is fast in the test DB but very, very slow in the prod DB. Need your help urgently.
    Minaz

    When your query takes too long ...
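    One pattern worth testing here, sketched with the post's own tables but a trimmed column list: each of the six scalar subqueries against sa.name runs once per result row, so folding them into a single join lets Oracle visit sa.name once:

    SELECT ap.id,
           p.citizen_cd,
           n.soc_sec_num,
           n.birth_date,
           DECODE(n.gender_cd, 'F', 1, 'M', 2, 0) AS gender,
           n.email
    FROM   appl1 ap,
           sa.personal_data p,
           sa.name n,
           profile_pi f
    WHERE  p.project_role_cd = 'PI'
    AND    ap.appl_id = f.appl_id
    AND    p.appl_id  = f.appl_id
    AND    p.name_id  = f.name_id
    AND    n.name_id  = p.name_id;

    Since the plans differ between test and prod, also compare optimizer statistics (DBMS_STATS) on both databases before rewriting anything.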

  • Transform activity is taking a lot of time

    Hi All,
    I have a process that gets data from a file and inserts it into a DB.
    I'm using a transformation activity between the file & DB adapters.
    When transforming a few records I have no problem. When transforming 30,000 records it takes about 25 minutes, which is a lot.
    Does someone know what the reason could be?
    The data is very simple (strings/numbers). With another tool it takes a few seconds.
    Thanks
    Riko

    Hi Riko, I don't know the exact solution to your problem, but here are some of my thoughts on the issue.
    I too have a production server, and I must say 25 minutes for 30k records is a reasonable time considering it logs the instance processes/records in the DB in the background.
    The transformation in SOA Suite takes a lot of time because, while doing the transformation, it keeps the whole payload (all 30K records) in memory/RAM. It transforms one record at a time, but keeps all records in memory for processing.
    If your requirement is only routing and transformation, why don't you go for an ESB? I suggest you use Oracle Service Bus for that. It doesn't keep the whole payload in memory, and execution would be much faster. Still, it would not take just a few seconds (I don't know which tool you are testing with).
    If you are going to have many transactions where you need to pass 30-50K records frequently, many times a day, I suggest you go with Oracle Data Integrator (ODI).

  • 0CO_PC_PCP_20 extraction takes a lot of time due to delta incapability

    Hi experts,
    We are extracting data from 0CO_PC_PCP_20, which is product costing analysis, and pulling the data into the 0COPC_C08 InfoCube.
    The data source 0CO_PC_PCP_20 does not support delta, but it has a huge volume of data.
    If I extract every day, it takes a lot of time through the process chain (inserted into the metachain); because of this the other local chains get delayed and the business reports are not available on time.
    How can I make 0CO_PC_PCP_20 delta-capable, or is there any other way to extract the data based on this data source, so that I can minimize the time by extracting deltas?
    Regards
    venuscm

    Hi Venu,
    This data source doesn't support delta, and since it is a standard data source I am not sure you will be able to make it delta-capable. What you can do is try to split your data load by different selections using multiple full-load infopackages, schedule the important infopackages first, and then, once all that loading is completed, schedule the other infopackages.
    If you don't want to do that, execute all these infopackages in parallel in a process chain; it will still cut a lot of the source system extraction time.
    Just make sure that your selections extract all the data.
    Regards,
    Durgesh.

  • My Apple ID works everywhere except the App Store: "Cannot connect to App Store"

    I bought a new MacBook Pro 13" around two months ago. My Apple ID works on all other things except the App Store. It buffers for a long time and finally "Cannot connect to App Store" comes up on screen. Please help me.

    Have you tried repairing disk permissions? See: iTunes download error -45054
