ORDER BY taking too much time. Tried the ALL_ROWS hint with no improvement. Please advise.

If I run the query without the ORDER BY, it finishes in less than 30 seconds. Once the ORDER BY is added, the query just hangs. Please advise. Thanks.
select col4, col6
from table a, table b, table c
where a.col1 = b.col2
and a.col1 = c.col3
order by col1, col2 desc

If you put the {code} tag (6 characters, all lower case) around the text where you want to preserve the formatting, the query plans you post will be much more readable.
The optimizer's estimates appear to be correct and the plan is the same other than the ORDER BY.  I would, therefore, be highly inclined to believe, as I said earlier, that the "30 seconds" figure you're quoting is the time to fetch the first few rows, not the time to fetch the last row.
Can you define "hang"?  How long do you let the query run before killing it?  Minutes?  Hours?  Days?
What are the wait events associated with the query when it is running?  If the optimizer is correct that you're generating 5.4 million rows and a little over a GB of data that needs to be sorted, that should be expensive but that shouldn't take hours.
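If it helps, a minimal sketch for checking those waits while the query runs (the :sid bind is a placeholder for the session's SID from v$session):

SELECT sid, event, state, seconds_in_wait
FROM v$session_wait
WHERE sid = :sid;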
What is the business problem you're trying to solve?  Sorting implies that you are trying to return results from a report to a user.  But, obviously, no user is ever going to page through 5.4 million rows of results.
Justin

Similar Messages

  • Matview refresh taking too much time

    Hello All,
    I am trying to create a materialized view over a DB link; the source table contains 30 crores (300 million) rows.
    I am picking only 2.5 crores (25 million) rows from the source, using the condition "WHERE ALLOCATION_DATE BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MM'),-1) AND ADD_MONTHS(LAST_DAY(TRUNC(SYSDATE)),6)".
    But the refresh is taking too much time, even with atomic_refresh => false.
    The source table contains the following columns:
    ASSIGNMENT#
    PROJECT#
    ALLOCATION_DATE
    EFFORTS
    WEEKEND_LEAVE_FLAG
    ANU_YEAR
    LAST_UPDATE
    ALLOCATION_EFFORTS
    and the source table is partitioned on ANU_YEAR.
    Can anyone please tell me how to create a fast-refresh materialized view?

    953975 wrote: (the message quoted above, in full)
    Please read {message:id=9360003} and follow the advice there.
    Also, please use international English. Crore is not part of International English - use thousands, millions etc.
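    For what it's worth, a minimal sketch of a fast-refresh setup (table, column, and DB-link names below are placeholders). Note that a defining query containing SYSDATE, like the WHERE clause above, is not fast-refreshable, so that filter would need rethinking:
    -- On the source database: FAST refresh requires a materialized view log
    CREATE MATERIALIZED VIEW LOG ON allocation_src WITH PRIMARY KEY;
    -- On the target database, over the DB link
    CREATE MATERIALIZED VIEW mv_allocation
      REFRESH FAST ON DEMAND
      AS SELECT assignment#, project#, allocation_date, efforts,
                weekend_leave_flag, anu_year, last_update, allocation_efforts
         FROM allocation_src@src_link;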

  • Taking too much time using BufferedWriter to write to a file

    Hi,
    I'm using the method extractItems(), given below, to write data to a file. This method takes too much time to execute when the number of records in the enumeration is 10,000 or more; to be precise, it takes around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much longer. Has anybody faced this problem before, and if so, what could be the cause? This is very high priority work, and it would be really helpful if someone could give me some info on this.
    Thanks in advance.
    public String extractItems() throws InternalServerException {
        try {
            String extractFileName = getExtractFileName();
            FileWriter fileWriter = new FileWriter(extractFileName);
            BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
            CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
            System.out.println("Before -1");
            CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
            System.out.println("After -1");
            PrintWriter out = new PrintWriter(bufferWrt);
            System.out.println("Before -2");
            TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
            System.out.println("After -2");
            XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
            Enumeration allitems = itemSet.allItems();
            System.out.println("the batch size : " + itemSet.getBatchSize());
            XDForm frm = itemSet.getXDForm();
            XDFormProperty[] props = frm.getXDFormProperties();
            System.out.println("Before -3");
            bufferWrt.newLine();
            long startTime, startTime1, startTime2, startTime3;
            startTime = System.currentTimeMillis();
            System.out.println("time here is--before-while : " + startTime);
            while (allitems.hasMoreElements()) {
                String aRow = "";
                XDItem item = (XDItem) allitems.nextElement();
                for (int i = 0; i < props.length; i++) {
                    // props[i]: the index was presumably eaten by the forum's [i] italics markup
                    String value = item.getStringValue(props[i]);
                    if (value == null || value.equalsIgnoreCase("null"))
                        value = "";
                    if (i == 0)
                        aRow = value;
                    else
                        aRow += ("\t" + value);
                }
                startTime1 = System.currentTimeMillis();
                System.out.println("time here is--before-writing to buffer --new: " + startTime1);
                bufferWrt.write(aRow.toCharArray());
                bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
                bufferWrt.newLine();
                startTime2 = System.currentTimeMillis();
                System.out.println("time here is--after-writing to buffer : " + startTime2);
            }
            startTime3 = System.currentTimeMillis();
            System.out.println("time here is--after-while : " + startTime3);
            out.close(); // added by rosmon to check extra time taken for extraction
            bufferWrt.close();
            fileWriter.close();
            System.out.println("After -3");
            return extractFileName;
        } catch (Exception e) {
            e.printStackTrace();
            throw new InternalServerException(e.getMessage());
        }
    }

    Hi fiontan,
    Thanks a lot for the response!
    Yeah, I know it's a lot of code, but I thought it would be more informative if the whole function was quoted.
    I am in fact using a PrintWriter to wrap the BufferedWriter, but I am not using the print() method.
    Would it save any time to use the print() method?
    The place where the delay occurs is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        // note: flushing on every row defeats the purpose of the BufferedWriter
        out.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time here is--after-writing to buffer : " + startTime2);
    }
    What exactly happens is that after a few iterations it just seems to sleep for around 20 seconds, then starts off again, and this goes on until the records are done.
    Please let me know if you have any idea why this is happening! This bug is giving me a scare.
    Thanks in advance

  • Sites taking too much time to open and showing an error

    hi,
    I've set up a SharePoint 2013 environment and created a site collection. Everything was working fine, but suddenly, when I try to open that site collection or the Central Administration site, it takes too much time to open a page; most of the time it does not open any page at all and shows the following error.
    Even when I go to the logs folder under the 15 hive, nothing useful is found. Please tell me why it takes about 10-12 minutes to open a site or any page and then shows the error above.

    This usually happens if you are low on hardware. Check whether your machine conforms to the required software and hardware requirements.
    https://technet.microsoft.com/en-us/library/cc262485.aspx
    http://sharepoint.stackexchange.com/questions/58370/minimum-real-world-system-requirements-for-sharepoint-2013
    Please remember to up-vote or mark the reply as answer if you find it helpful.

  • Taking too much time to load application

    Hi,
    I have deployed a J2EE application on Oracle Application Server 10g version 10.1.2.0.2, but the application is taking too much time to load. After loading, everything works fast.
    I have another 10g server (same version) on which the same application loads very fast.
    When I checked the Apache error logs I found this:
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
    [Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
    [Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    Please HELP ME...

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option:
    java -Dajp.keepalive=true -jar oc4j.jar
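    For a managed Oracle Application Server 10g install, JVM flags like this normally go into the java-options of the OC4J instance in opmn.xml, roughly along these lines (a sketch; "home" is the instance name from the logs above):
    <process-type id="home" module-id="OC4J">
      <module-data>
        <category id="start-parameters">
          <data id="java-options" value="-Dajp.keepalive=true"/>
        </category>
      </module-data>
    </process-type>
    then restart the instance with opmnctl. The "java -Dajp.keepalive=true -jar oc4j.jar" form only applies when you start a standalone OC4J from the command line.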

  • BPC application is taking too much time to load

    Hi experts!
    I'm facing a very weird problem...
    We've developed a BPC application (app name: USM).
    This application is taking too much time to load on some computers (around 8 minutes). Yes, on SOME computers.
    There are around 100,000 records in the database, most coming from material master data.
    If I try to load this USM application on another computer, it loads smoothly. The computers' hardware is all the same, the server is more than adequately sized, and everyone is on the same network.
    I talked to the infrastructure department and we ran several tests. We ran BPC on the server (it loaded quickly) and on several computers (some load quickly, others don't), used both wireless and cable connections (same result either way), and checked the communication between BW and BPC, which is OK.
    After all that, I tried to load the APSHEL application in the same environment and it loaded instantly. So I guess something is wrong with my application. But if that were the case, I suppose it would happen on all computers, not only some of them.
    Has anybody ever seen something like this?
    Thank you in advance.
    Rubens
    Edited by: Rubens Massayuki Kumori on May 12, 2011 8:43 PM
    Edited by: Rubens Massayuki Kumori on May 12, 2011 8:46 PM

    Hi Rubens,
    I would try a couple of tests:
    1. Install the client on a machine located in the same network segment, or try using a VPN that communicates with the server bypassing all security devices, just to see whether the network is the problem.
    2. Run a full optimize of one application to see whether the problem is related to the segmentation of the cubes (I don't think this is the problem, but give it a try).
    It is very weird that it happens on some computers and not on others... also try cleaning up the local application cache on the computers that are giving you bad performance, and retry.
    hope it helps,

  • Taking too much time to save a PO with more than 600 line items

    HI
    We are trying to save a PO with more than 600 line items, but it is taking too much time: more than 1 hour to save the PO.
    Kindly let me know whether there is any restriction on the number of line items in a PO. Please guide.
    regards
    Sanjay

    Hi,
    I suggest you run a trace (transaction ST05) to identify the bottleneck.
    You can find some possible reasons in Note 205005 - Performance composite note: Purchase order.
    I hope this helps you
    Regards
    Eduardo

  • Spatial query with sdo_aggregate_union taking too much time

    Hello friends,
    the following query is taking too much time to execute.
    table1 contains around 2000 records.
    table2 contains 124 rows.
    SELECT
      table1.id,
      table1.txt,
      table1.id2,
      table1.acti,
      table1.acti,
      table1.geom AS geom
    FROM table1
    WHERE sdo_relate(
            table1.geom,
            (SELECT sdo_aggr_union(sdoaggrtype(geom, 0.0005)) FROM table2),
            'mask=(ANYINTERACT) querytype=window'
          ) = 'TRUE'
    I am new to Spatial. I am trying to find the list of geometries that fall within the geometry stored in table2.
    Thanks

    Hi, thanks a lot for your reply.
    But it should not be necessary to use the sdo_aggr_union function to find out whether a geometry in one table falls inside another geometry.
    Let me give you a clearer picture...
    What I am trying to do is: table1 contains the list of all stations (station information) of a state, and table2 contains the list of city areas. So I want to find the stations that belong to a city.
    For this I thought to take the aggregated union of the city areas and then check for any interaction of that final aggregation result with the station geometry, to determine whether it is in the city or not.
    I hope this helps you understand my query.
    Thanks
    I appreciate your efforts.
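    For what it's worth, if the goal is just "which stations interact with any city area", a join without the aggregate union may be faster. A sketch, assuming spatial indexes exist on both geometry columns:
    SELECT DISTINCT t1.id
    FROM table1 t1, table2 t2
    WHERE sdo_anyinteract(t1.geom, t2.geom) = 'TRUE';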

  • Delete query taking too much time

    Hi All,
    My delete query is taking too much time: around 1 hour 30 minutes for 1.5 lac (150,000) records.
    I have already dropped the MV log on the table and disabled all the triggers on it.
    Moreover, the deletion is based on the primary key:
    delete from table_name where primary_key in (values)
    The above is a dummy format of my query.
    Can anyone please tell me what other reason there could be for the query performing that slowly?
    Is there anything to check in the DB other than triggers, MV logs, and constraints in order to improve the performance?
    Please reply ASAP.

    Delete is the most time-consuming operation, as the whole record has to be stored in the undo segments. On the other hand, the part of the query used to select the records to delete, the IN (values) clause, is probably adding extra overhead to the process. It would be nice if you could post another dummy of this (values) clause. I would guess it is a subquery, and that to obtain this list you are running an inefficient query.
    You can gather the execution plan to see where the heaviest part of the query is. That way a better tuning approach and a more accurate diagnosis can be made.
    ~ Madrid.
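    A minimal sketch for capturing that plan (the subquery is a placeholder for whatever actually produces the values):
    EXPLAIN PLAN FOR
      DELETE FROM table_name
      WHERE primary_key IN (SELECT key_col FROM staging_table);
    SELECT * FROM TABLE(dbms_xplan.display);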

  • Import taking too much time

    Hi all
    I'm quite new to database administration. My problem is that I'm trying to import a dump file, but one of the tables is taking too much time to import.
    Description:
    1. The export was taken from a source database on Oracle 8i with character set WE8ISO8859P1.
    2. I am importing into 10g with character set UTF8 (the national character set is also the same).
    3. The dump file is about 1.5 GB.
    4. I got errors like "value too large for column", so in the target DB (UTF8) I converted all columns from VARCHAR2 to CHAR.
    5. While importing, some tables import very fast, but one particular table gets very slow.
    Please help me. Thanks in advance.

    Hello,
    4. I got errors like "value too large for column", so in the target DB (UTF8) I converted all columns from VARCHAR2 to CHAR.
    5. While importing, some tables import very fast, but one particular table gets very slow.
    For point *4*, this is typically due to the CHARACTER SET conversion.
    You export data in WE8ISO8859P1 and import in UTF8. In WE8ISO8859P1 characters are encoded in *1 Byte*, so *1 CHAR = 1 BYTE*. In UTF8 (Unicode) characters are encoded in up to *4 Bytes*, so *1 CHAR > 1 BYTE*.
    For this reason you'll have to modify the length of your CHAR or VARCHAR2 columns, or add the CHAR option (by default it's BYTE) in the column datatype definition of the tables. For instance:
    VARCHAR2(100 CHAR)
    The NLS_LENGTH_SEMANTICS parameter may also be used, but it is not very well handled by export/import.
    So, I suggest this:
    1. Set NLS_LENGTH_SEMANTICS=CHAR on your target database and restart the database.
    2. Create all your tables (empty) from a script on the target database (without the indexes and constraints).
    3. Import the data into the tables.
    4. Import the indexes and constraints.
    You'll find more information in the following MOS note:
    Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
    For point *5*, it may be due to the conversion problem you are experiencing; it may also be due to some special datatype like LONG.
    Also, a question: why did you choose UTF8 on your target database and not AL32UTF8? AL32UTF8 is recommended for Unicode use.
    Hope this helps.
    Best regards,
    Jean-Valentin
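    A sketch of the two approaches (table and column names are placeholders):
    -- per column: declare the length in characters rather than bytes
    ALTER TABLE my_table MODIFY (my_col VARCHAR2(100 CHAR));
    -- or instance-wide, before running the table-creation script
    ALTER SYSTEM SET nls_length_semantics = CHAR SCOPE = SPFILE;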

  • Query taking too much time with dates??

    hello folks,
    I am trying to pull some data using a date condition, and for some reason it's taking too much time to return the data.
       and trunc(al.activity_date) = TRUNC (SYSDATE, 'DD') - 1     -- if I use this, it takes too much time
       and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
       and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- if I use this, it returns the data in a second. Why is that?
    How do I get the previous day without the hardcoded to_date('20101123 000000', 'YYYYMMDD HH24MISS'), while still retrieving it fast?

    Presumably you've got an index on activity_date.
    If you apply a function like TRUNC to activity_date, you can no longer use the index.
    Post execution plans to verify.
    and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
    and al.activity_date < TRUNC (SYSDATE, 'DD')
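    Put into a complete statement, the index-friendly version of "yesterday" looks like this (activity_log is a placeholder name for the table behind the al alias):
    SELECT *
    FROM activity_log al
    WHERE al.activity_date >= TRUNC(SYSDATE, 'DD') - 1
      AND al.activity_date <  TRUNC(SYSDATE, 'DD');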

  • Import taking too much time in Oracle 7

    Hi Guys,
    I am trying to import a table of around 9,600,000 rows on Oracle 7, and it is taking too much time. Any suggestions on how I can speed up the process?
    Thanks in advance
    Khurana

    Ok.
    Note that it is "_disable_logging", not "disable_logging", but I don't have an Oracle 7 database to confirm that it works.
    It's been a long time since I used Oracle 7. Any reason why you have not upgraded? Import should be much faster with 11g...
    For further tuning you would need to look at OS and DB performance to find bottlenecks, e.g. run bstat/estat.
    Other things to look at are disk performance, increasing db_block_buffers, increasing db_writers, etc.
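    As a sketch, those settings would go into the init.ora before restarting the instance; the hidden _disable_logging parameter is unsupported, so only use it on a database you can afford to lose (values below are illustrative):
    # init.ora
    _disable_logging = true
    db_block_buffers = 20000
    db_writers = 4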

  • Performance: pulling billing # with sales doc # taking too much time

    Hi Experts,
    I am looking to pull VBRP-VBELN (the billing doc #) based on VGBEL (the sales doc #), i.e.:
    select single * from vbrp into wa_vbrp
      where vgbel = wa_vbap-vbeln
      and posnr = wa_vbap-posnr.
    But as there is no secondary index on VGBEL in VBRP, and there are tons of records in VBRP, it is taking too much time.
    So, what is the alternative way to find the billing doc # from my sales doc #?
    Thanks

    Mr. Srinivas,
    Just a suggestion: if you need only the header details, why not extract the data from VBRK (header for billing doc) and VBAK (header for sales doc)? These two tables contain only a single line per billing or sales doc, so the performance should be better.
    If my suggestion is not what you are looking for, then apologies for the same.
    Regards,
    Vivek
    Alternatively, as Mr. Eric suggests, you can use VBFA:
    VBFA-VBELN = VBRK-VBELN
    VBFA-VBELV = VBAK-VBELN
    The logic is that VBFA-VBELN is the subsequent document and VBFA-VBELV is the preceding document.
    Hope it helps. (But be sure the document created after the sales order is a billing document; there might be cases where delivery documents come after the sales order and before the billing document, so be careful.)
    Edited by: Vivek on Jan 29, 2008 11:11 PM

  • Taking too much time in Rules (DTP schedule run)

    Hi,
    I am scheduling a DTP which has filters to minimize the loaded data.
    When I run the DTP it takes too much time in the "rules" step (in the DTP monitor I can see the status package by package and step by step: "Start Routine", "Rules", and "End Routine").
    It is consuming too much time in the rules mapping.
    What is the problem, and are there any solutions?
    regards,
    sree

    Hi,
    The time taken at "rules" depends on the complexity of the routine you have there. If it is a complex calculation, it will take time.
    Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP, and the job class.
    You can find these as follows:
    Go to the DTP, open the Goto menu, and select "Settings for Batch Manager".
    On that screen, increase the number of processes from 3 to a higher number (max 9), and change the job class to 'A'.
    If your DTP is still running, cancel it (i.e. kill the DTP), delete the request from the cube, change these settings, and run your DTP one more time.
    You should observe the difference.
    Reddy

  • Taking too much time collecting in business content activation

    Hi all,
    I am collecting business content objects for activation. I selected the 0FIAA_CHA object, but collecting for activation takes too much time; it then asks for source system authorization and throws the error "maximum run time exceeded". I selected the "data flow before" grouping there.
    What can be the reason for this?
    Please help.

    Hi,
    You should also always try to have the latest BI Content patch installed, but I don't think this is the problem. It seems that there are a lot of objects to collect. Under 'Grouping' you can select the option 'Only necessary objects'; please check whether you can use this option to install just the objects that you need from content.
    Best Regards,
    Des.
