Inserting 320 million rows... the speed changes whilst the query is running

Hi guys, I'm seeing strange behavior in my data warehouse. When I insert data into a table I take note of the speed. At the beginning the speed was 3 million rows per minute (312 million rows expected in total), but now, after five hours, the
speed is 83,000 rows per minute and the table already has 234 million rows. I'm wondering whether this behavior is normal and how I can improve the performance (if I can, whilst the insert is running).
Many Thanks

Change the database recovery model to BULK_LOGGED (preferably) or SIMPLE.
No - that will not solve the problem, because INSERT INTO isn't bulk logged automatically. To force a bulk logged operation, the target table needs to be locked exclusively!
Greg Robidoux has a great matrix for that!
http://www.mssqltips.com/sqlservertip/1185/minimally-logging-bulk-load-inserts-into-sql-server/
@DIEGOCTIN,
I assume nobody can really give you a definitive answer, because there are so many possible reasons for it! As Josh has written, file GROWTH could be a problem if Instant File Initialization isn't set up. But 320 million records in one transaction can cause heavy growth
of the log file, too, and the log file cannot benefit from Instant File Initialization.
Another good tip came from Olaf, too!
I would suggest inserting the data with a TABLOCK hint. In this case the transaction is minimally logged and the operation copies pages rather than rows into the target table; see the sketch below the link. I've written a wiki article about that topic here:
http://social.technet.microsoft.com/wiki/contents/articles/20651.sql-server-delete-a-huge-amount-of-data-from-a-table.aspx
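A minimal sketch of such a minimally logged load, assuming a SIMPLE or BULK_LOGGED recovery model and a heap target without nonclustered indexes; table and column names are placeholders:
-- TABLOCK takes the exclusive table lock that minimal logging requires.
INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT col1, col2
FROM dbo.SourceTable;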
Wish you all a Merry Christmas and a Happy New Year!
MCM - SQL Server 2008
MCSE - SQL Server 2012
db Berater GmbH
SQL Server Blog (german only)

Similar Messages

  • Inserting 10 million rows into a table hangs

    Hi, through Toad I am using a simple FOR loop to insert 10 million rows into a table by saying
    for i in 1 .. 10000000 loop
        insert ...
    It hangs for a long time.
    Is there a better way to insert the rows into the table? I have to test for performance, and I also have to insert 50 million rows into its child table.
    Practically, when the code moves to production it will have this many rows (maybe more); that's why I have to test with this many rows.
    Please suggest a better way to do this.
    Regards
    raj

    Must be a 'hardware thing'.
    My ancient desktop (Pentium IV, 1.8 GHz, 512 MB), running XE, needs:
    MHO%xe> desc t
    Name                                      Null?    Type
    N                                                  NUMBER
    A                                                  VARCHAR2(10)
    B                                                  VARCHAR2(10)
    MHO%xe> insert /*+ APPEND */ into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:04:09.71
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:31.50
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:01.04
    MHO%xe> insert into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:02:44.12
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:09.46
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:00.15
    MHO%xe> insert /*+ APPEND */ into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:01:03.89
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:27.17
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:01.15
    MHO%xe> insert into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:01:56.10
    Yeah, it 'cached' it a bit (of course ;) )
    But the APPEND hint seems to shave about 50 seconds off anyway (using NO indexes at all) on my 'configuration'.
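    If the loop has to stay in PL/SQL (for per-row logic, say), bulk binds cut the SQL-to-PL/SQL context switches; a minimal sketch, where the chunk size and the one-column use of table t are illustrative:
    DECLARE
      TYPE t_num_tab IS TABLE OF NUMBER;
      l_ids t_num_tab;
    BEGIN
      FOR chunk IN 0 .. 99 LOOP
        -- build one batch of 100,000 ids in memory
        SELECT chunk * 100000 + level
          BULK COLLECT INTO l_ids
          FROM dual CONNECT BY level <= 100000;
        -- one bulk-bound insert instead of 100,000 single-row inserts
        FORALL i IN 1 .. l_ids.COUNT
          INSERT INTO t (n) VALUES (l_ids(i));
        COMMIT;
      END LOOP;
    END;
    /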

  • When the page loads, the query must run.

    I put bind variables in a view object. When the page loads, the query must run. I added ExecuteWithParams and an invokeAction in the page definition, and I added the following code in a Java bean. Then I added this in the page definition, like this:
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel"
    version="10.1.3.40.66" id="evrakPageDef"
    Package="view.pageDefs" ControllerClass="#{evrak}">
    evrak java code :
    String evr= "20070002362";
    String krm= "02030000";
    String prt= "B";
    String prs= "60711630";
    AdfFacesContext faces= AdfFacesContext.getCurrentInstance();
    faces.getProcessScope().put("evrak",evr);
    faces.getProcessScope().put("kurumcode",krm);
    faces.getProcessScope().put("pertur",prt);
    faces.getProcessScope().put("sicil",prs);
    When the *.jspx page loads, the query is not running and the items are empty.
    What else can I do?
    Thanks

    Hi,
    - create the binding for the ExecuteWithParams
    - go to the binding and expand it
    - select the argument items and use EL to provide the values (otherwise they are static)
    - create an invokeAction in the executables section of the binding
    - point the invokeAction to the ExecuteWithParams method
    - run the page (see the sketch below)
    Frank
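    A minimal sketch of the page definition entries those steps produce; the id and refresh phase here are illustrative, not from the original post:
    <executables>
      <!-- runs the ExecuteWithParams action binding when the model is prepared -->
      <invokeAction id="runQueryOnLoad" Binds="ExecuteWithParams"
                    Refresh="prepareModel"/>
    </executables>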

  • The query just runs the first time; the second time it doesn't run

    The query just runs the first time; the second time it doesn't run. I checked it in SM50 and this message appears:
    CL_RSR_CACHE_BO_FF============CP
    It's not a problem with the indexes - I repaired them - but it's really weird, because the first time the query runs OK and the second time it doesn't. It seems that the cache is confused, or I don't know what. Help, guys, I'll really appreciate it.

    I was looking at the notes. It's really weird: when I load the cube and execute the query for the first time it's OK, but the second time it stays in the cache. I also noticed that the cube doesn't let me activate the DB statistics; I don't know if that is necessary to improve the performance of the query. What I know is this:
    1 - Before I created a hierarchy for an InfoObject that is in the cube, the query used to run OK. Now, when I load and execute the first time it's OK, but the second time it stays in the cache.
    2 - Now I can't activate the statistics for the cube. The indexes are OK; I checked with RSRV and everything is OK except the statistics.
    What can I do? Help, friends...

  • Tuning an insert sql that inserts a million rows doing a full table scan

    Hi Experts,
    I am on Oracle 11.2.0.3 on Linux. I have a SQL statement that inserts data into a history/archive table from the main application table based on date. The application table has 3 million rows in it, and all rows that are older than 6 months should go into the history/archive table. This was recently decided, and we have 1 million rows that satisfy this criterion. The insert into the archive table is taking about 3 minutes. The explain plan shows a full table scan on the main table - which is the right thing, as we are pulling 1 million rows out of the main table into the history table.
    My question is: is there a way I can make this SQL go faster?
    Here is the query plan (I changed the table names etc.)
       INSERT INTO EMP_ARCH
       SELECT *
       FROM EMP M
       WHERE HIRE_DATE < (SYSDATE - :v_num_days);
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2     96.22     165.59      92266     147180    8529323     1441230
    Fetch        0      0.00       0.00          0          0          0           0
    total        4     96.22     165.59      92266     147180    8529323     1441230
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 166
    Rows     Row Source Operation
    1441401   TABLE ACCESS FULL EMP (cr=52900 pr=52885 pw=0 time=21189581 us)
    I heard that there is a way to use the OPT_PARAM hint to increase the multiblock read count, but it didn't seem to work for me... I will be thankful for suggestions on this. Also, could collections and changing this to PL/SQL make it faster?
    Thanks,
    OrauserN

    Also I wish experts would share their insight on how to make a full table scan go faster (apart from the parallel suggestion, I mean).
    Please make up your mind about what question you actually want help with.
    First you said you want help making the INSERT query go faster, but the rest of your replies, including the above statement, imply that you are obsessed with making full table scans go faster.
    You also said:
    our management would like us to come forth with the best way to do it
    But when people make suggestions you make replies about you and your abilities:
    I do not have the liberty to do the "alter session enable parallel dml". I have to work within these constraints
    Does 'management' want the best way to do whichever question you are asking?
    Or is it just YOU that wants the best way (whatever you mean by best) based on some unknown set of constraints?
    As SB already said, you clearly do NOT have an actual problem, since you have already completed the task of inserting the data, several times in fact. So the time it takes to do it is irrelevant.
    There is no universal agreement on what the word 'best' means for any given use case, and you haven't given us your definition either. So how would we know what might be 'best'?
    So until you provide the COMPLETE list of constraints you are just wasting our time asking for suggestions that you refute with a comment about some 'constraint' you have.
    You also haven't provided ANY information that indicates that the full table scan is the root of the problem. It is far more likely to be the INSERT into the table, and a simple use of NOLOGGING with an APPEND hint might be all that is needed.
    IMHO the 'best' way would be to partition both the source and target tables and just use EXCHANGE PARTITION to move the data. That operation would only take a millisecond or two.
    But, let me guess, that would not conform to one of your 'unknown' constraints, would it?
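    For reference, hedged sketches of the two approaches mentioned above; the table names follow the thread's renamed example, and the partition name and partitioning scheme are assumptions:
    -- Direct-path insert: minimally logged if EMP_ARCH is NOLOGGING and the
    -- database is not in FORCE LOGGING mode; the loaded rows are not readable
    -- in this session until COMMIT.
    ALTER TABLE emp_arch NOLOGGING;
    INSERT /*+ APPEND */ INTO emp_arch
    SELECT * FROM emp WHERE hire_date < (SYSDATE - :v_num_days);
    COMMIT;
    -- Partition exchange: swaps segment metadata instead of copying rows,
    -- assuming EMP is range-partitioned by HIRE_DATE with an old-data
    -- partition p_old and EMP_ARCH matches its structure.
    ALTER TABLE emp EXCHANGE PARTITION p_old WITH TABLE emp_arch;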

  • The query which runs in 1 minute in Oracle 8 takes hours in Oracle 10

    I have a query which takes a longer time in Oracle 10. I have looked at the parameters and they all seem to be right; I don't know what other parameter has to be changed to make it work. The strange thing is that if I add another criterion to the WHERE clause, the SQL runs in less than one minute. I added AND J.DEPTID BETWEEN '00000' AND '99999' to the WHERE clause, which is not really required, since without it the query also returns data for all departments, as every department number falls between 00000 and 99999. The following is the SQL which takes a long time to run.
    SELECT
    L.HRS_ADJUST_UNPROC,L.HRS_ADJUST_YTD,L.HRS_CARRYOVER,J.DEPTID,L.COMPANY, L.EMPLID, J.HOURLY_RT, P.NAME
    FROM
    PS_LEAVE_ACCRUAL L, PS_JOB J, PS_LOCATION_TBL LOC, PS_PERSONAL_DATA P
    WHERE
    (((L.PLAN_TYPE = '50') AND
    (L.ACCRUAL_PROC_DT = (SELECT MAX(L2.ACCRUAL_PROC_DT)
    FROM PS_LEAVE_ACCRUAL L2
    WHERE L2.EMPLID=L.EMPLID
    AND L2.EMPL_RCD_NBR=L.EMPL_RCD_NBR
    AND L2.PLAN_TYPE=L.PLAN_TYPE)) AND
    (J.EMPL_STATUS IN( 'A', 'L', 'P')) AND
    ((L.HRS_ADJUST_UNPROC+L.HRS_ADJUST_YTD+L.HRS_CARRYOVER+L.HRS_EARNED_YTD-L.HRS_TAKEN_UNPROC-L.HRS_TAKEN_YTD) <> 0) AND
    (L.COMPANY = 'LHS') AND
    ((J.EFFSEQ= (SELECT MAX(INNERALIAS.EFFSEQ)
    FROM PS_JOB INNERALIAS
    WHERE INNERALIAS.EMPL_RCD_NBR = J.EMPL_RCD_NBR
    AND INNERALIAS.EMPLID = J.EMPLID
    AND INNERALIAS.EFFDT = J.EFFDT)
    AND
    J.EFFDT = (SELECT MAX(INNERALIAS.EFFDT)
    FROM PS_JOB INNERALIAS
    WHERE INNERALIAS.EMPL_RCD_NBR = J.EMPL_RCD_NBR
    AND INNERALIAS.EMPLID = J.EMPLID
    AND INNERALIAS.EFFDT <= SYSDATE))) AND
    ((LOC.EFFDT = (
    SELECT MAX(INNERALIAS.EFFDT)
    FROM PS_LOCATION_TBL INNERALIAS
    WHERE INNERALIAS.LOCATION = LOC.LOCATION
    AND INNERALIAS.EFFDT <= SYSDATE)))))
    AND
    (L.EMPL_RCD_NBR = J.EMPL_RCD_NBR ) AND (L.EMPLID = J.EMPLID ) AND (J.REPORTING_LOCATION = LOC.LOCATION ) AND (L.EMPLID = P.EMPLID )
    ORDER BY
    J.DEPTID, P.NAME

    In fact I had tried the /*+ RULE */ hint and the query ran in 1 minute, just like in Oracle 8, though the explain plan was almost the same as without the hint.
    Sounds like you are close to finding out what is happening differently and why it makes such a big difference. I think that was the point of Pete's suggestion - not to hope that the RULE hint would be a solution in itself. I sometimes try increasing the dynamic sampling level for the same reason.
    btw please post code and execution plans in tags (see FAQ for details).
    Also you can post the <> operator by placing {noformat} tags around the opening "<" symbol.
    Edited by: William Robertson on Sep 9, 2009 10:41 PM
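    A minimal sketch of the dynamic sampling suggestion; the level shown is illustrative, and the same thing can be done for a single query with a /*+ DYNAMIC_SAMPLING(n) */ hint:
    -- Raise optimizer dynamic sampling for the session, then re-run the query.
    ALTER SESSION SET optimizer_dynamic_sampling = 4;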

  • Inserting 2 million rows (MySQL)

    See original post:
    http://forum.java.sun.com/thread.jsp?forum=48&thread=555586

    That's not only a double post, it's even a crosspost.
    BTW, I'd hardly call inserting that many rows something for someone new to Java.
    So it's not only a double post and a crosspost, it's even an off-topic post.
    Keep up the good work!

  • Insert/select one million rows at a time from source to target table

    Hi,
    Oracle 10.2.0.4.0
    I am trying to insert around 10 million rows into the target table from the source table as follows:
    INSERT /*+ APPEND NOLOGGING */ INTO target
    SELECT *
    FROM source f
    WHERE
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    There is a unique index on the target table on (col1, col2).
    I was having issues with UNDO, and now I am getting the following error with temp space:
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    I believe it would be easier if I did a bulk insert of one million rows at a time and committed in between.
    I'd appreciate any advice on this, please.
    Thanks,
    Ashok

    902986 wrote:
    NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    I don't know if it has any bearing on the case, but is that WHERE clause on purpose or a typo? Should it be:
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.COL1 and f.col2 = m.col2);
    Anyway - how much of your data already exists in target compared to source?
    Do you have 10 million in source and very few in target, so most of source will be inserted into target?
    Or do you have 9 million already in target, so most of source will be filtered away and only few records inserted?
    And what is the explain plan for your statement?
    INSERT /*+ APPEND NOLOGGING */ INTO target
    SELECT *
    FROM source f
    WHERE
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    As your error has to do with TEMP, your statement might possibly try to do a lot of work in temp to materialize the result set, or parts of it, maybe to use in a hash join before inserting.
    So perhaps you can work towards an explain plan that allows the database to do the inserts "along the way" rather than calculate the whole thing in temp first.
    That probably will go much slower (for example using nested loops for each row to check the exists), but that's a tradeoff - if you can't have sufficient TEMP then you may have to optimize for less usage of that resource at the expense of another resource ;-)
    Alternatively ask your DBA to allocate more room in TEMP tablespace. Or have the DBA check if there are other sessions using a lot of TEMP in which case maybe you just have to make sure your session is the only one using lots of TEMP at the time you execute.
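    A hedged sketch of the chunked alternative Ashok mentioned - commit after each million-row pass to cap UNDO/TEMP usage; it assumes the corrected col1 = col1 comparison, and each pass relies on the NOT EXISTS to skip rows already copied:
    DECLARE
      l_rows PLS_INTEGER;
    BEGIN
      LOOP
        INSERT INTO target
        SELECT *
        FROM source f
        WHERE NOT EXISTS (SELECT 1 FROM target m
                          WHERE f.col1 = m.col1 AND f.col2 = m.col2)
          AND ROWNUM <= 1000000;   -- one chunk per pass
        l_rows := SQL%ROWCOUNT;
        COMMIT;
        EXIT WHEN l_rows = 0;      -- nothing left to copy
      END LOOP;
    END;
    /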

  • Million rows insert

    Hi
    I want to insert 10 million rows from Oracle 8.1 into 10g.
    Which way should I follow (bulk insert or SQL*Loader)?
    Thanks
    SS

    Johan Stf wrote:
    Ok, slap me if this is wrong.
    create sequence CustID
      increment by 1
      start with 1;
    insert into customers (id)
    select CustID.nextval
    from dual
    connect by level <= 1000000;
    The question is why do you need to have the sequence then?
    You could simply write the following using the LEVEL pseudo column:
    insert /*+ append */ into customers (id)
    select level as id
    from dual
    connect by level <= 1000000;
    I've added the APPEND hint to use a direct-path insert if possible. Note that this locks the object for any other DML activity, and you can't read from the object afterwards until you commit, so use it with care.
    Note that using the CONNECT BY LEVEL <= n uses a significant amount of memory if n is large.
    For more information, see here:
    http://blog.tanelpoder.com/2008/06/08/generating-lots-of-rows-using-connect-by-safely/
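    Roughly, the pattern described there: cross-join two small row generators so the CONNECT BY depth - and the memory it needs - stays small; a sketch for the same million ids:
    insert /*+ append */ into customers (id)
    select rownum
    from (select null from dual connect by level <= 1000),
         (select null from dual connect by level <= 1000);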
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
    Edited by: Randolf Geist on Oct 28, 2008 2:17 PM
    Added the memory consumption caveat

  • Insert data using row selector

    Hi:
    In my application I need to insert only the selected data, i.e. the selected rows.
    The query I am using is:
    select
    htmldb_item.text(1,pobj.name,20,25) "Object Name",
    htmldb_item.text(2,null,20,25) "Incoming Hours",
    htmldb_item.text(3,null,20,25) "Design Hours"
    from projects pro,
    project_objects pobj
    where pro.id = pobj.pro_id and
    pro.id = :p10_projects
    Here :p10_projects is the project, and each project has 10 to 100 objects (pobj.name).
    I need to insert these object names and hours into another table, and I need a row selector so that only the selected objects are inserted.
    Please suggest how I can select only the rows that need to be inserted.
    Thanks
    Sudhir
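    A hedged sketch of such a row selector: HTMLDB_ITEM.CHECKBOX adds a checkbox column, and only the checked rows arrive in the HTMLDB_APPLICATION.G_F10 array in an after-submit process, where the insert can loop over them. The index 10 and the pobj.id column are illustrative assumptions:
    select
    htmldb_item.checkbox(10, pobj.id) "Select",
    htmldb_item.text(1, pobj.name, 20, 25) "Object Name",
    htmldb_item.text(2, null, 20, 25) "Incoming Hours",
    htmldb_item.text(3, null, 20, 25) "Design Hours"
    from projects pro,
    project_objects pobj
    where pro.id = pobj.pro_id and
    pro.id = :p10_projects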

    Randy:
    Please try this query:
    SELECT
    HTMLDB_ITEM.DISPLAY_AND_SAVE(2,NULL) "Project Objects",
    HTMLDB_ITEM.DATE_POPUP(3,rownum,null,'dd-mon-yyyy',12,15) "date"
    FROM
    DUAL
    UNION
    SELECT
    HTMLDB_ITEM.DISPLAY_AND_SAVE(2,SYSDATE) "Project Objects",
    HTMLDB_ITEM.DATE_POPUP(3,rownum,null,'dd-mon-yyyy',12,15) "date"
    FROM
    DUAL
    First change the query to SQL updatable form and then add a row selector to this report; then try to select the 2nd row's date picker, and you'll find the problem that I am facing.
    I need the row selector in order to select each date individually and insert them.
    Please give me some solution.
    thanks
    sudhir

  • How to speed up SQL spatial query (spatial index error)

    Hi,
    I'm trying to split polylines by points which have a small buffer around them. Currently I have over 370,000 lines and 320,000 nodes, and the query is running really slowly (I've left it for 3 days and it still hasn't completed). I have tried forcing a spatial
    index using WITH (INDEX(SI_tempPD)), but I get the following error:
    "The query processor could not produce a query plan for a query with a spatial index hint. Reason: Could not find required binary spatial method in a condition. Try removing the index hints or removing SET FORCEPLAN."
    Below is the snippet of code that I'm trying to run when I get the error:
    BEGIN
        INSERT INTO TempLines ([linenum], [ogr_geometry])
        SELECT lines.[linenum],
               lines.[ogr_geometry].STDifference(points.[ogr_geometry].STBuffer(0.005)) AS ogr_geometry
        FROM dbo.TemplineData AS lines WITH (INDEX(SI_tempPD))
        INNER JOIN dbo.[TemplineNodes] AS points
            ON lines.[ogr_geometry].STIntersection(points.[ogr_geometry]).STDistance(points.[ogr_geometry]) < 1
        WHERE (lines.[linenum] <> points.[linenum])
    END
    Is there any way I can speed up the query? (I also have a clustered primary key.) The execution plan shows that a filter takes up 36% of the cost and the insert takes up 64%.
    Any help would be greatly appreciated! (I'm using SQL Server 2008, SQL Server Management Studio 10.50.1600.1.)

    SQL Server spatial indexes don't support STDifference or STIntersection, see
    https://technet.microsoft.com/en-us/library/bb895373(v=SQL.105).aspx, which is why you get the error.
    Your query is doing multiple expensive spatial operations against some fairly good-sized tables in a join. You might get better results by breaking the query into multiple simpler queries (SELECT into a temporary table, and move the STDifference,
    STBuffer and INSERT off to a separate statement), and (if possible) rearchitecting the query to use STIntersects (which does support a spatial index) rather than STIntersection.
    Hope this helps, Bob
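    A hedged sketch of that two-step shape; note the STIntersects join predicate is a simplification of the original distance test, so the candidate set may differ slightly:
    -- Step 1: index-friendly filter into a temp table.
    SELECT lines.linenum,
           lines.ogr_geometry AS line_geom,
           points.ogr_geometry AS point_geom
    INTO #candidates
    FROM dbo.TemplineData AS lines
    INNER JOIN dbo.TemplineNodes AS points
        ON lines.ogr_geometry.STIntersects(points.ogr_geometry) = 1
    WHERE lines.linenum <> points.linenum;
    -- Step 2: run the expensive geometry math on the reduced set only.
    INSERT INTO TempLines (linenum, ogr_geometry)
    SELECT linenum, line_geom.STDifference(point_geom.STBuffer(0.005))
    FROM #candidates;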

  • Using variables in the Query Designer

    Hi,
    I have 2 variables, 'Z_COMP_CODE' and 'Z_BUS_AREA'. 'Z_COMP_CODE' is single value, customer exit, and ready for input; 'Z_BUS_AREA' is selection option, customer exit, and not ready for input. I want the 2 variables to work like this: when 'Z_COMP_CODE' has the value '3000', 'Z_BUS_AREA' should have all values excluding '2106'. In CMOD I have written the following code for 'Z_BUS_AREA', but it does not work: 'Z_BUS_AREA' gets no value while the query is running. Can somebody tell me how to correct it? Thanks.
    WHEN 'Z_BUS_AREA'.
      IF i_step = 2.
        READ TABLE i_t_var_range INTO l_s_range_v
          WITH KEY vnam = 'Z_COMP_CODE'.
        IF sy-subrc EQ 0.
          IF l_s_range_v-low = '3000'.
            l_s_range-low  = '2106'.
            l_s_range-sign = 'E'.
            l_s_range-opt  = 'EQ'.
            INSERT l_s_range INTO TABLE e_t_range.
          ENDIF.
        ENDIF.
      ENDIF.

    It should be customer exit only.
    Your code looks OK to me. Have you tried to debug it? Is l_s_range getting values?
    Also, I don't know much about the INSERT statement - it may be the same as APPEND - but try it with an APPEND statement as well.
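    In ABAP that suggestion would be roughly this one-line change (assuming e_t_range is a standard table):
    " hypothetical alternative to: INSERT l_s_range INTO TABLE e_t_range.
    APPEND l_s_range TO e_t_range.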
    Message was edited by:
            KJ (Kamaljeet)

  • Can I modify the query in Discoverer

    Hi
    I am working in Discoverer Viewer. Is it possible to change the query and run it myself? I can access the query but I can't make any changes.
    The problem I am having is that I want to show the sum of a variable but not show the count. Is this possible?
    any help would be appreciated
    thanks
    nimesh

    Hi Nimesh
    Presumably the workbook you are referring to is not owned by you and has only been shared with you? In this case, while you can make changes to the query while it is in memory - providing you have the necessary privileges, that is - you will not be able to save over the original. This is because only the owner can do this, and you are not the owner.
    What you can do, though, is use Save As and save yourself a variant of the report. Of course, this opens up another spoke in the wheel of reinvented queries, something that most organizations would like to avoid.
    Therefore, perhaps the best option would be to contact the owner of the workbook and request that it be modified for your needs.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • Monitor progress of the query

    I ran an explain on a SQL statement and it shows the cost of the query to be 20K, but when I run the query it goes on running forever (I killed it after 30 minutes). I have another SQL statement whose cost shows up as the same 20K, and it completes in 10 seconds.
    I ran a trace on it, and it shows it is doing tons of I/O and there are many db file sequential read waits.
    1. What could be the reason for query 1 to run for so long when the cost is not high?
    2. Is there any way to identify which step a query is currently running? I mean, how do I identify which step of the explain plan the query is running now? I want to know what part of the query the database is running currently.
    3. I don't see anything in v$session_longops for this SQL; where do I monitor the progress of this run?
    4. Is there a way or tool which will give me the list of all the tables a particular query is using? There are views used in the queries, which in turn are built on top of other views. I want to find the list of all the tables for a given query, so that I can check whether the stats are outdated for any one of them, which might be causing the query to run forever.
    Please help...
    Thanks

    907426 wrote:
    1. What could be the reason for query 1 to run for so long when the cost is not high?
    That's why it's generally advisable to ignore the cost that shows up in a query plan. It is much more important to focus on the cardinality, because that allows you to compare whether the optimizer's estimate of the number of rows returned at each step is accurate. If the optimizer estimates the cardinality incorrectly, whether its estimates are high or low, it is likely to pick a plan that will perform poorly.
    2. Is there any way to identify which step a query is currently running?
    What version of Oracle are you using? What edition?
    3. I don't see anything in v$session_longops for this SQL; where do I monitor the progress of this run?
    Are there long-running operations in the query plan (i.e. table scans)? Or is it doing a large number of short operations (i.e. nested loop joins that use indexes)?
    4. Is there a way or tool which will give me the list of all the tables a particular query is using?
    The query plan should tell you every object (table or index) that will be accessed.
    Justin
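    For point 3, the classic progress check; only operations the kernel classifies as long-running (full scans, hash joins, sorts and the like) show up here - a sketch:
    -- Progress of long-running operations, all sessions.
    SELECT sid, opname, target, sofar, totalwork,
           ROUND(sofar / totalwork * 100, 1) AS pct_done
    FROM   v$session_longops
    WHERE  totalwork > 0
    AND    sofar < totalwork;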

  • BW Analysis Process - Extended Settings in the Query

    Hello,
    I am trying to find out more about the effect of the different parameter settings in an APD. Against a query, under properties, there are extended settings with a number of parameters: Partitioning Characteristic, Package Size and Server Group.
    When running the query in parallel as part of a process chain, I find that the process starts lots of work processes, and as a result a lot of memory problems occur. I think what is needed is for the query to run fewer work processes at any given time, and to keep the size of those work processes down as well.
    Is there a clear write-up anywhere on how to achieve this?
    Thanks.

    Cote,
    It is a while ago now, and I don't have access to a BW 7 system anymore, but I did figure out what needed to be done in the end.
    There is a transaction called RZ12. Here you can define server groups and also restrict how much resource they take.
    In the APD you define which server group you want to use when running the process in parallel.
    I would advise experimenting with this in a test environment to find out which settings restrict the number of processes that the APD starts up to the optimum level.
