Middleware - Taking long time for generation of runtime objects - SMOGTOTAL

Hi Experts,
I am doing the middleware settings for connecting CRM 2007 with R/3 4.7.
When I generate all the required objects (replication objects, publications, ...) using transaction code SMOGTOTAL, the system takes a very long time to generate them. Generally it takes 4 to 6 hours, but in our case it has already taken more than 36 hours and is still running.
Can anybody tell me what I need to do to make the generation process faster?
Regards
Nadh

What I read in the best practice:
It is not required for a new installation.
Typically this activity has already been executed during the system installation or upgrade.
Use transaction SMOGLASTLOG to check whether an initial generation has already been executed; in that case you can skip this activity.
I checked transaction SMOGLASTLOG, and in our case the initial generation had not yet been executed. I also couldn't continue with the next steps.
That's why I started the job; it finally finished after 104 hours.
Thanks for your fast reply.
Jasper.

Similar Messages

  • Query taking long time for EXTRACTING the data more than 24 hours

    Hi,
    The query is taking a long time (more than 24 hours) to extract the data. Please find the query and the explain plan details below. Even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please suggest.
    SQL> explain plan for
      select a.account_id, round(a.account_balance,2) account_balance,
             nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
             to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
             to_char(nvl(i.payment_due_date,
                         to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
             ah.current_balance - ah.previous_balance amount,
             decode(ah.invoice_id, null, 'A', 'I') transaction_type
        from account a, account_history ah, invoice i
       where a.account_id = ah.account_id
         and a.account_type_id = 1000002
         and round(a.account_balance,2) > 0
         and (ah.invoice_id is not null or ah.adjustment_id is not null)
         and ah.current_balance > ah.previous_balance
         and ah.invoice_id = i.invoice_id(+)
         and a.account_balance > 0
       order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index Details:
    SQL> select INDEX_OWNER, INDEX_NAME, COLUMN_NAME, TABLE_NAME from dba_ind_columns
         where table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
    order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;

  • Impdp taking long time for only few MBs data...

    Hi All,
    I have a question about impdp. I have an expdp dump file of 47 MB. When I restore this dump using impdp it takes a long time. The table data initially loads very fast, but the later steps that create the functions/procedures/views take a lot of time, almost 4 to 5 hours.
    I have no idea why it is taking so long. Earlier I saw that one DB link had failed with the error "TNS name could not be resolved", so I created the DB link before running impdp again, but the result was the same. Can anyone suggest what could cause it to take so long for only 47 MB of data?
    Note - Both the expdp and impdp database versions are 11.2.0.3.0. If I import the same dump file into 11.2.0.1.0, it finishes in a few minutes.
    Thanks...

    Also Read
    Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [ID 453895.1]
    DataPump Import (IMPDP) is Very Slow at Object/System/Role Grants, Default Roles [ID 1267951.1]

  • Program SAPLSBAL_DB taking long time for BALHDR table entries

    Hi Guys,
    I am running a Z program in both the Quality and Production systems which uploads data from the desktop.
    In the Quality system the Z program uploads the data successfully, but in the Production system it takes a very long time and sometimes even times out.
    As per the trace analysis, program SAPLSBAL_DB is taking a long time reading the BALHDR table entries.
    Can anybody provide me any suggestions?
    Regards,
    Shyamal.

    These are QA screenshots where there is no issue, but we are seeing very long runtimes in CRP.
    Regards,
    Shyamal

  • We are running a report ? it is  taking long time for  execution. what step

    We are running a report and it is taking a long time to execute. What steps can we take to reduce the execution time?

    Hi ,
    Performance can be improved in many ways.
    First, try to select based on the key fields if it is a very large table.
    If that is not possible, create a secondary index for the selection.
    Don't perform SELECTs inside a loop; instead use FOR ALL ENTRIES IN, as in the sketch below.
    Try to perform many operations in one loop rather than running several loops over the same internal table.
    All of the above and many more steps can be implemented to improve performance.
    We would need to look at your code to see how it can be improved in your specific case.
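    A minimal ABAP sketch of the FOR ALL ENTRIES pattern mentioned above; the tables VBAK/VBAP and the variable names are only illustrative assumptions, not taken from the original post:
    DATA: lt_vbak TYPE STANDARD TABLE OF vbak,
          lt_vbap TYPE STANDARD TABLE OF vbap.
    " Select the header data once, on a key/indexed field.
    SELECT * FROM vbak INTO TABLE lt_vbak
      WHERE erdat = sy-datum.
    " FOR ALL ENTRIES must never run against an empty driver table,
    " otherwise the WHERE clause is dropped and the whole table is read.
    IF lt_vbak IS NOT INITIAL.
      SELECT * FROM vbap INTO TABLE lt_vbap
        FOR ALL ENTRIES IN lt_vbak
        WHERE vbeln = lt_vbak-vbeln.
    ENDIF.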
    Regards,
    Vivek Shah

  • You are running a report. It is taking long time for

    You are running a report and it is taking a long time to execute. What steps will you take to reduce the execution time? Please explain clearly.

    Avoid loops inside loops.
    Avoid SELECTs inside loops.
    Select only the data that is required instead of using SELECT *.
    Select the fields in the sequence in which they are present in the database, and also specify the fields in the WHERE clause in the same sequence.
    When you use FOR ALL ENTRIES in a SELECT statement, check that the internal table you are referring to is not initial.
    Replace SELECT ... ENDSELECT with SELECT ... INTO TABLE.
    Avoid SELECT SINGLE inside a loop; instead select all the data before the loop and read the internal table inside the loop using a binary search, as sketched below.
    Sort the internal tables wherever necessary.
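    A minimal ABAP sketch of the last two points (fetch before the loop, then READ TABLE ... BINARY SEARCH inside it); the names lt_orders, KNA1 and KUNNR are illustrative assumptions, not taken from the original post:
    TYPES: BEGIN OF ty_order,
             kunnr TYPE kna1-kunnr,
           END OF ty_order.
    DATA: lt_orders TYPE STANDARD TABLE OF ty_order,
          ls_order  TYPE ty_order,
          lt_kna1   TYPE STANDARD TABLE OF kna1,
          ls_kna1   TYPE kna1.
    " Assume lt_orders was filled earlier; fetch all needed customers once.
    IF lt_orders IS NOT INITIAL.
      SELECT * FROM kna1 INTO TABLE lt_kna1
        FOR ALL ENTRIES IN lt_orders
        WHERE kunnr = lt_orders-kunnr.
    ENDIF.
    " Sort once so that READ TABLE can use a binary search.
    SORT lt_kna1 BY kunnr.
    LOOP AT lt_orders INTO ls_order.
      READ TABLE lt_kna1 INTO ls_kna1
        WITH KEY kunnr = ls_order-kunnr
        BINARY SEARCH.
      IF sy-subrc = 0.
        " Use ls_kna1 here instead of a SELECT SINGLE.
      ENDIF.
    ENDLOOP.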

  • Hyperion System 9.3.1 reports taking longer time for the very first time

    We are on Hyperion System 9.3.1. The Financial Reporting reports take a long time (2 to 3 minutes) the very first time for each login. Subsequent reports run faster.
    The behaviour is the same in the Production and Development environments.
    All the reporting services have been given enough JVM heap size.
    FYI, Reporting and Workspace run on the same server, and Workspace/Reporting are clustered across two servers. The HFM application runs on a different server, HFM web on another, and Shared Services also runs on a separate server.
    Any help would be greatly appreciated.
    Thanks.

    The reason they run quicker the subsequent times is that the data has already been cached in the system.
    You could try the usual tricks to speed the report up:
    - move items into POV
    - have children and parent in the same row
    - arrange dimensions in inverse outline order
    - remove excessive formatting
    - push report calculations back to the data source
    We have found that using lots of dynamically calculated members also slows down reports, so try and limit the number of these.
    Hope this helps. If not maybe give us an idea of how the report is created to see if other changes could be made.

  • Crystal report from MSC taking long time for the first time

    Hello Friends,
    I have a report designed in Crystal Reports to display the BP details, and it is giving some performance issues. In the analysis it was found that it takes a long time immediately after I restart the machine; from the next time onwards it is not as bad.
    If anybody is facing the same performance issue while using Crystal Reports, kindly let me know the reasons for this. I would like to know ways to increase the performance.
    Thanks for all your support.
    Best regards,
    Swarna Seeta

    Hello Swarna
    Reporting is quite a heavy component which interacts with many other DLLs.
    When you generate a report from the mobile application for the first time, the UI framework recognizes that it is a reporting call and instantiates the Reporting Manager, which is the single point of contact for the entire reporting functionality. During the initialization of the Reporting Manager,
    it needs to instantiate all the other working components such as the error handler, data handler, resource component, Crystal Reports, etc. The resource component is a COM component, so there is a lot of marshalling/unmarshalling for the .NET calls. Since all these sub-components are singleton classes, they are initialized only once, the first time, and the same instances are reused in subsequent attempts.
    Hope I have made it a bit clearer.
    Best Regards
    Shankar

  • OBIEE 10g taking long time for login

    Hello Experts,
    I am using Oracle BI Applications 7.9.6.2 with OBIEE 10.1.3.4.2.
    When any user tries to log in, the process takes a long time (50-60 seconds), but after login the query response is good.
    Can you suggest how I can reduce this login time?
    Thank You

    Please disable unwanted variables in the repository and check whether any init block initialization is taking too long.

  • Every 3rd data package taking long time for execution

    Hi Everyone
    We are facing a strange situation. Our scenario involves doing a full load from a DSO to a cube.
    The start routines are not very database intensive and care has been taken to write them in an optimized way.
    But strangely, every third data package is taking exceptionally longer than the other data packages.
    a) The DTP has 3 parallel processes.
    b) The time spent in extraction, rules, and update is constant for every data package.
    c) The start routine time is larger for every third data package and keeps increasing, e.g. 5 mins, 10 mins, 24 mins, 33 mins; it increases with every third package.
    I tried to analyze the data that was taking so much time but found no difference between the data in the normal and the long-running packages (i.e. there was no logical difference in the data that would make the start routine behave like this).
    I was wondering what the possible reasons for this could be; maybe some external system factors are responsible. If someone can help in this regard it will be highly appreciated.

    Hi Hemanth,
    In your start routine, are you by any chance adding or multiplying the number of records in the source_package? For example, copying the source package into an internal table, adding records to the internal table, and then copying it back into the source package? If logic of this sort is in your start routine, you need to refresh that internal table; otherwise its records keep accumulating with every data package, so the processing time increases as the load progresses (a short sketch follows below). This is one common mistake I have seen. Please check whether your code does something like that and refresh the internal tables. See if this makes any difference.
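    A minimal sketch of the pattern Subray describes, assuming a BW transformation start routine and a hypothetical internal table lt_buffer declared in the routine's global declaration part with the same line type as SOURCE_PACKAGE (so it keeps its contents across data packages unless explicitly cleared):
    " At the very top of the start routine:
    REFRESH lt_buffer.                    " clear leftovers from the previous data package
    lt_buffer[] = SOURCE_PACKAGE[].
    " ... derive or add records in lt_buffer ...
    SOURCE_PACKAGE[] = lt_buffer[].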
    Thanks and Regards
    Subray Hegde

  • FileUpload using JAX-WS webservice on weblogic 10.3.5 is taking long time for files greater than 10MB

    I am trying to upload a file using a JAX-WS web service deployed on WebLogic 10.3.5. Even before the code reaches the service endpoint, a lot of time is spent in the WebLogic layer. For files smaller than 10 MB the performance is good, but for files greater than 10 MB it takes around 3 minutes to complete the request. I took thread dumps and I can see that the thread servicing the request spends a lot of time executing SAX2DOMEx.characters; around 80-85% of the time is consumed there. Is there anything I can do to improve the performance?
    "[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'" id=37 idx=0x90 tid=16848 prio=5 alive, suspended, native_blocked, daemon
                    at jrockit/vm/Allocator.allocLargeObjectOrArray(JIZ)Ljava/lang/Object;(Native Method)
                    at jrockit/vm/Allocator.allocObjectOrArray(Allocator.java:349)[optimized]
                    at jrockit/vm/StringMaker.toString(StringMaker.java:188)[inlined]
                    at com/sun/org/apache/xerces/internal/dom/CharacterDataImpl.appendData(CharacterDataImpl.java:191)[optimized]
                    at com/sun/xml/bind/marshaller/SAX2DOMEx.characters(SAX2DOMEx.java:218)[inlined]
                    at com/sun/xml/bind/marshaller/SAX2DOMEx.characters(SAX2DOMEx.java:209)[optimized]
                    at com/sun/xml/ws/message/SAX2DOMWriterEx.writeCharacters( .java:108)
                    at com/sun/xml/ws/util/xml/XMLStreamReaderToXMLStreamWriter.handleCharacters(XMLStreamReaderToXMLStreamWriter.java:153)
                    at com/sun/xml/ws/util/xml/XMLStreamReaderToXMLStreamWriter.bridge(XMLStreamReaderToXMLStreamWriter.java:114)
                    at com/sun/xml/ws/message/stream/StreamMessage.writePayloadTo(StreamMessage.java:313)
                    at com/sun/xml/ws/message/stream/StreamMessage.writeEnvelope(StreamMessage.java:343)
                    at com/sun/xml/ws/message/stream/StreamMessage.writeTo(StreamMessage.java:321)
                    at com/sun/xml/ws/message/AbstractMessageImpl.readAsSOAPMessage(AbstractMessageImpl.java:226)
                    at com/sun/xml/ws/handler/SOAPMessageContextImpl.getMessage(SOAPMessageContextImpl.java:87)
                    at weblogic/wsee/jaxws/framework/jaxrpc/SOAPMessageContext.getMessage(SOAPMessageContext.java:252)
                    at weblogic/wsee/security/wssp/handlers/WssHandler.getSecurityContext(WssHandler.java:318)
                    at weblogic/wsee/security/wssp/handlers/WssHandler.preValidate(WssHandler.java:420)
                    at weblogic/wsee/security/wssp/handlers/PreWssServerPolicyHandler.processRequest(PreWssServerPolicyHandler.java:25)
                    at weblogic/wsee/security/wssp/handlers/WssHandler.handleRequest(WssHandler.java:112)
                    at weblogic/wsee/jaxws/framework/jaxrpc/TubeFactory$JAXRPCTube.processRequest(TubeFactory.java:222)
                    at com/sun/xml/ws/api/pipe/Fiber.__doRun(Fiber.java:866)
                    at com/sun/xml/ws/api/pipe/Fiber._doRun(Fiber.java:815)
                    at com/sun/xml/ws/api/pipe/Fiber.doRun(Fiber.java:778)
                    at com/sun/xml/ws/api/pipe/Fiber.runSync(Fiber.java:680)
                    ^-- Holding lock: com/sun/xml/ws/api/pipe/Fiber@0x83736a70[biased lock]
                    at com/sun/xml/ws/server/WSEndpointImpl$2.process(WSEndpointImpl.java:403)
                    at com/sun/xml/ws/transport/http/HttpAdapter$HttpToolkit.handle(HttpAdapter.java:532)
                    at com/sun/xml/ws/transport/http/HttpAdapter.handle(HttpAdapter.java:253)
                    at com/sun/xml/ws/transport/http/servlet/ServletAdapter.handle(ServletAdapter.java:140)
                    at weblogic/wsee/jaxws/WLSServletAdapter.handle(WLSServletAdapter.java:171)
                    at weblogic/wsee/jaxws/HttpServletAdapter$AuthorizedInvoke.run(HttpServletAdapter.java:708)
                    at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
                    at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:146)
                    at weblogic/wsee/util/ServerSecurityHelper.authenticatedInvoke(ServerSecurityHelper.java:103)
                    at weblogic/wsee/jaxws/HttpServletAdapter$3.run(HttpServletAdapter.java:311)
                    at weblogic/wsee/jaxws/HttpServletAdapter.post(HttpServletAdapter.java:336)
                    at weblogic/wsee/jaxws/VerboseHttpProcessor.post(VerboseHttpProcessor.java:39)
                    at weblogic/wsee/jaxws/JAXWSServlet.doRequest(JAXWSServlet.java:98)
                    at weblogic/servlet/http/AbstractAsyncServlet.service(AbstractAsyncServlet.java:99)
                    at javax/servlet/http/HttpServlet.service(HttpServlet.java:820)
                    at weblogic/servlet/internal/StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
                    at weblogic/servlet/internal/StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
                    at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:300)
                    at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:183)
                    at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.wrapRun(Lweblogic/servlet/internal/ServletStubImpl;Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)Ljava/lang/Object;(Unknown Source)
                    at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.run()Ljava/lang/Object;(Unknown Source)
                    at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
                    at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:120)
                    at weblogic/servlet/internal/WebAppServletContext.securedExecute(Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;Z)V(Unknown Source)
                    at weblogic/servlet/internal/WebAppServletContext.execute(Lweblogic/servlet/internal/ServletRequestImpl;Lweblogic/servlet/internal/ServletResponseImpl;)V(Unknown Source)
                    at weblogic/servlet/internal/ServletRequestImpl.run()V(Unknown Source)
                    at weblogic/work/ExecuteThread.execute(ExecuteThread.java:209)
                    at weblogic/work/ExecuteThread.run(ExecuteThread.java:178)
                    at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)

  • Z30 New Handset is taking long time for first boot

    Hold the power button down for 30 seconds to reboot it.

    Hi, I purchased a Z30 handset yesterday and tried switching it on. It has taken 15 hours to boot and is still showing 99%. Is this issue specific to my handset or is it BB10 related?

  • Count taking long time for few rows

    Hello,
    We have a table that takes 3 minutes to count 116K rows. Is there any way to tune this other than dropping and recreating the table?
    This table is used daily for inserts, updates, and deletes, but on average we keep only around 75K rows.
    Thanks
    Manohar

    Be careful about making conclusions using observation only. Simple example. Observe how the WARP_SPEED hint makes the SQL go faster:
    SQL> set timing on
    SQL> select count(*) from all_objects;
    COUNT(*)
    10460
    Elapsed: 00:00:09.60
    SQL> select /*+ WARP_SPEED */ count(*) from all_objects;
    COUNT(*)
    10460
    Elapsed: 00:00:00.50
    SQL>
    The empirical conclusion is that the WARP_SPEED hint made the query faster by 90%.
    This conclusion (based on observation) is incorrect. The real reason the 2nd query is faster is that it performed substantially less physical I/O than the 1st query. The 1st query loaded a lot of the data it needed into the buffer cache; the 2nd query found that data there and had no need to perform the same expensive and slow physical I/Os that the 1st query did. Nor is there any such thing as a WARP_SPEED hint.
    So be very careful on making assumptions and basing conclusions solely on observation.

  • VB Macro in Bex Analyser is taking long time to complete execution

    Hi Experts,
    In an FI query, we have a VB macro which updates the Excel sheets by taking values from the previous Excel sheets.
    The issue is that query execution takes a long time; if we keep pressing the ENTER key, the query runs very fast and returns the results, but that is not the correct way to work.
    It is a critical issue in Production. Can anyone help me resolve it?
    Regards,
    Anju

    Hi Anju,
    I need to create a VB macro in one of my queries, but I am not familiar with how to plug the VB code into the query.
    Could you please give me a basic procedure for how to do that?
    What I need to do exactly is to bring a key figure from one query into another query. I am not sure if this is possible using VB.
    Let me know.
    Thanks.
    Cesar

  • Bex Reports takes long time for filtering

    Hi,
    We went live last December, and our inventory cube already contains some 15 million records and the sales cube 12 million records.
    Is there a specific limit to the number of records? Filtering in the inventory or sales reports takes a very long time.
    Is there an alternative, or should we delete some of the data from the cube?
    Filtering on any value takes longer than running the query itself.
    Please help...
    Regards,
    viren.

    Hi Viren,
    A cube can perform well even at 100 million records with some performance tuning, so I really wonder why it is taking so long for your cube with just 10-15 million records.
    Do a performance analysis and check whether aggregates would be helpful.
    Check the link below for how to do a performance analysis.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/d9fd84ad-0701-0010-d9a5-ba726caa585d
    Hope it helps.
    Thx,
    Soumya
