Is pipeline transactional?

Hi all!
I am trying to understand the transactional aspects of ALSB.
Simple question: is it possible to create a proxy service whose transport is WebService/SOAP and whose pipeline consists of several Service Callout nodes, with all of these calls running in one transaction?

The response pipeline is in the same transaction as the request pipeline only if the inbound transport is synchronous and transactional. Today this is only the case for the Tuxedo transport.
In general I don't recommend using the response pipeline for transactional calls.
The route node is in the same transaction as the request pipeline (if the QoS is exactly once).
In your specific case the EJB should be in the same transaction as the request pipeline. We have test cases covering this, so I doubt there is a bug. Maybe your EJB starts a new transaction when it is invoked (see the sketch below)? That would be very unusual.
Gregory Haardt
ALSB Prg. Manager
[email protected]
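
To illustrate the last point: whether the EJB joins the pipeline's transaction is governed by its transaction attribute. A minimal sketch, assuming an EJB 3 style bean (with EJB 2.x the same choice is made via trans-attribute in ejb-jar.xml); the bean and method names are made up for illustration:

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// Illustrative bean; not from the original thread.
@Stateless
public class OrderServiceBean {

    // REQUIRED (the default): the method joins the transaction propagated by
    // the ALSB request pipeline, so its work commits or rolls back with it.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void updateOrder(String orderId) {
        // ... resource work enlisted in the caller's transaction
    }

    // REQUIRES_NEW: the container suspends the pipeline's transaction and
    // starts a separate one, which is the situation described above where
    // the EJB no longer shares the proxy's transaction.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void auditOrder(String orderId) {
        // ... work committed independently of the pipeline
    }
}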

Similar Messages

  • JMS proxy and XA connection factory

    Hi all,
    I would like to ask what the best practice is for the scenario described below.
    A JMS proxy retrieves a message and processes it.
    Errors can occur during processing, and in case of errors the JMS proxy error handler publishes the message to an ad hoc recovery destination D
    (later another app will check the failed messages and fix & republish them into ALSB).
    In this scenario we have the JMS proxy and the JMS business service that, in case of errors, publishes to destination D.
    Should the JMS proxy and the business service both use an XA connection factory in order to perform all the above steps in one transaction?
    Otherwise, if the business service itself fails to publish to destination D, the message retrieved by the JMS proxy is lost and not redelivered to the JMS proxy.
    Or is an XA connection factory not needed, and could I just use Routing Options with 'exactly once' QoS, or are both needed?
    Thanks
    ferp

    Hi all,
    I did some tests; here is what I found.
    Scenario 1.
    - Precondition
    -- A JMS proxy with XA factory + Error Destination (MyRecoveryQueue)
    -- A business service BS with XA factory that publishes into MyOutboundQueue
    -- No "exactly once" routing option used calling BS
    -- an error is forced in the pipeline
    - Flow
    -- the proxy retrieves a message and tries to publish it using BS
    -- an error is forced in the pipeline:
    --- transaction is rolled back, message redelivered to proxy
    --- the message is posted to the error destination after all the retries failed
    Scenario 2.
    - Precondition
    -- As 1. but no error is forced in the pipeline
    -- the MyOutboundQueue destination queue is paused
    - Flow
    -- the proxy retrieves a message and tries to publish it using BS
    -- BS fails to publish it on MyOutboundQueue (because it is paused)
    --- transaction is rolled back, message redelivered to proxy
    --- the message is posted to the error destination after all the retries failed
    Scenario 2A.
    - Precondition
    -- As 2. with MyOutboundQueue paused and resumed
    - Flow:
    -- the proxy retrieves a message and tries to publish it using BS
    -- BS fails to publish it on MyOutboundQueue (because it is paused)
    --- transaction is rolled back, message redelivered to proxy
    --- the queue is resumed before all retries have failed
    --- BS then succeeds in publishing the message
    Scenario 3.
    - Precondition
    -- A JMS proxy with NO XA factory + Error Destination
    -- A business service BS with NO XA factory
    -- No "exactly once" routing option used calling BS
    -- an error is forced in the pipeline
    - Flow
    -- the proxy retrieves a message and tries to publish it using BS
    -- an error is forced in the pipeline:
    --- transaction is NOT rolled back, message NOT redelivered to proxy
    -- so
    --- no message delivered to MyOutboundQueue2 destination
    --- no message delivered to MyRecoveryQueue2 destination
    --- the message is consumed from MyQueue2 and is now lost!
    To publish the message to MyRecoveryQueue2, a proxy error handler has to be added with an explicit Publish to MyRecoveryQueue2.
    But naturally, in that case, if the explicit publish in the proxy error handler fails or any other error occurs, no message is delivered to the recovery queue.
    So if I'm not using an XA factory, the message is auto-acknowledged as soon as it is read; I have to use an XA factory if I want the message to be put back on the queue in case of errors and the retries to happen (see the JMS sketch after this reply).
    So both my proxy service and business service use an XA factory.
    Regards
    ferp
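    A minimal plain-JMS sketch of the same behaviour outside OSB, assuming a WebLogic XA connection factory bound at weblogic.jms.XAConnectionFactory and a queue at jms/MyQueue (both placeholders): with a transacted session the receive is only acknowledged on commit, so a rollback puts the message back on the queue for redelivery, which is what the XA factory gives the proxy above. With a non-transacted AUTO_ACKNOWLEDGE session the message is acknowledged on receive and is lost if later processing fails.
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class TransactedConsumerSketch {

        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext(); // assumes a jndi.properties pointing at the server
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("weblogic.jms.XAConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyQueue");

            Connection con = cf.createConnection();
            // transacted = true: the receive and any sends on this session form one unit of work
            Session session = con.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(queue);
            con.start();

            Message msg = consumer.receive(5000);
            try {
                process(msg);       // e.g. publish to the outbound queue with a producer on the same session
                session.commit();   // only now is the message acknowledged
            } catch (Exception e) {
                session.rollback(); // the message goes back on the queue and is redelivered;
                                    // after the retry limit it lands on the error destination
            }
            con.close();
        }

        private static void process(Message msg) throws JMSException {
            // placeholder for the pipeline-equivalent work
        }
    }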

  • Pipeline function raised ORA-06519: active autonomous transaction detected

    Hi All,
    My name is John and I've got a problem which I need to share with all of you gurus and experts. I've created the following pipelined function under the Oracle user ABC:
    CREATE OR REPLACE FUNCTION SomeFunction(p_from_date DATE, p_to_date DATE)
      RETURN T_TAB_A PIPELINED
    IS
      PRAGMA autonomous_transaction;
    BEGIN
      DELETE FROM temp_rcm;
      INSERT INTO temp_rcm
        SELECT *
        FROM   int.facility fd,
               int.capacity co
        WHERE  co.resource_name = fd.resource_name
        AND    co.trade_date    = fd.trade_date
        AND    co.trade_date BETWEEN p_from_date AND p_to_date;
      COMMIT;
      FOR rec IN (SELECT    co.*
                  FROM      temp_rcm co
                  LEFT JOIN int.outage o
                         ON ( o.flag = 'Y'
                          AND o.reason_flag = 'F'
                          AND o.INTERVAL = co.INTERVAL
                          AND co.resource_name = o.resource_name )
                  ORDER BY  co.INTERVAL,
                            co.name) LOOP
        PIPE ROW (T_A( rec.INTERVAL, rec.trade_date,
                       rec.resource_name, rec.day_of_week_long, rec.working_day, rec.peak));
      END LOOP;
      RETURN;
    END SomeFunction;
    I was able to compile and create the SomeFunction function successfully, but when I executed it using the following command:
    select * from table(SomeFunction(to_date('01/01/2010'), to_date('01/01/2010')));
    I was returned the Oracle error: ORA-06519: active autonomous transaction detected and rolled back.
    From what I have found on the web, this Oracle error occurs whenever the function is missing a COMMIT or ROLLBACK inside the autonomous transaction. But I have already included the COMMIT in the function. I suspected the error was caused by the objects I query against (such as int.facility and int.capacity) all being views that belong to another schema called int. Or is there something I am missing in the function? Thank you for your time and assistance.
    Regards,
    John

    johnwanng wrote:
    Hi Guys,
    Thank you for all your feedback. In addition to your reply, Bill, can you spare some time and provide a simple example of the steps involved in the 'correct' implementation based on the queries I've used, as I do not understand your vanilla approach? Much appreciated, and thank you for your time again.
    Regards,
    John
    If I had to guess, Billy may have meant something like this (untested):
    CREATE OR REPLACE FUNCTION SomeFunction
    ( p_from_date IN int.facility.trade_date%TYPE
    , p_to_date   IN int.facility.trade_date%TYPE
    )
    RETURN SYS_REFCURSOR
    AS
         rcur     SYS_REFCURSOR;
    BEGIN
         OPEN rcur FOR
              SELECT co.interval
                   , co.trade_date
                   , co.resource_name
                   , co.day_of_week_long
                   , co.working_day
                   , co.peak
              FROM   int.capacity co
              JOIN   int.facility fd        ON fd.resource_name = co.resource_name
                                           AND fd.trade_date    = co.trade_date
              LEFT OUTER JOIN int.outage o  ON o.interval       = co.interval
                                           AND o.resource_name  = co.resource_name
              WHERE  co.trade_date BETWEEN p_from_date AND p_to_date
              AND    o.reason_flag = 'F'
              AND    o.flag        = 'Y'
              ORDER BY co.interval
                     , co.name;
         RETURN rcur;
    END;
    /
    I made the following modifications:
    1. I set the input parameter data types to match that of the table column you are checking against. A good practice to get into.
    2. Removed the autonomous transaction and the insert into a temporary table. In Oracle it is good practice to do everything in a single SQL statement if possible.
    3. Changed the return data type to a SYS_REFCURSOR
    Hope this helps and provides a good example.
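    For completeness, a hedged sketch of how a Java client could consume the SYS_REFCURSOR returned by this version of SomeFunction through the Oracle JDBC driver; the connection URL, credentials, and printed columns are placeholders:
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.Date;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import oracle.jdbc.OracleTypes;

    public class RefCursorClient {

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "abc", "secret");
                 CallableStatement cs = con.prepareCall("{ ? = call SomeFunction(?, ?) }")) {

                cs.registerOutParameter(1, OracleTypes.CURSOR); // the SYS_REFCURSOR result
                cs.setDate(2, Date.valueOf("2010-01-01"));      // p_from_date
                cs.setDate(3, Date.valueOf("2010-01-01"));      // p_to_date
                cs.execute();

                // The ref cursor is fetched like any other result set.
                try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                    while (rs.next()) {
                        System.out.println(rs.getString("RESOURCE_NAME") + " " + rs.getDate("TRADE_DATE"));
                    }
                }
            }
        }
    }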

  • Pipeline Performance filter on transaction type?

    Dear all,
    We are currently working at one of our customers on CRM 2007 (6.0), and I have two questions regarding pipeline performance.
    1) If you define 2 different opportunity types, is there a way to filter the pipeline performance so that only one of the two opportunity types is taken into account when calculating the sales pipeline?
    2) Is it possible to add extra filter criteria to the filter right above the graph (e.g. in the target-to-date graph you can only filter on 'first quarter shown', 'last quarter shown', 'owner', 'relevant for forecast', 'sales team'; I would like to add 'opportunity group' to that list).
    Thanks in advance!
    Yasmine

    Have you tried using solution keys?
    For example, you have defined two solutions, 1 America and 2 India; now the India users can't see the messages for America.
    This can be achieved by using solution keys.
    Go to transaction DSWP, where you get the list of all solutions (if a solution opens by default, click the back button and you will be navigated to the main screen with all solutions).
    Now from the SAP Menu go to -> Technical Information.
    This will display the solution and ID, which you can assign to the users.
    Hope it solves your problem.
    Regards
    Prakhar

  • How can I configure a party such that this party can submit both 4010 and 5010 transaction?

    I am trying to configure a party such that it can submit both 4010 and 5010 transactions.
    I encounter an error related to the ISA11 field.
    When I submit a 4010 transaction, I need to uncheck the 'Use ISA11 as repetition separator' box on the X12 Interchange Processing Properties page of the party. Otherwise I encounter an error if the character 'U' in ISA11 appears as content inside the transaction.
    On the other hand, if I submit a 5010 transaction and leave the 'Use ISA11 as repetition separator' box unchecked, I encounter the following error:
    Error: 2 (Field level error)
         SegmentID: ISA
         Position in TS: 1
         Data Element ID: ISA11
         Position in Segment: 11
         Data Value: ^
        7: Invalid code value.
    The character in ISA11 in 5010 transaction is '^'.
    Is there any setting or trick I can use so that I don't have to adjust the 'Use ISA11 as repetition separator' box every time a different version of EDI is submitted by the party?

    Hi,
    The problem you are describing is not because of the version of the document; it is actually because of the different kinds of ISA11 values you are using for one party. It can happen with the same version of the document as well.
    To solve it, you can write a custom pipeline component and put it before the EDI receive pipeline component. In this pipeline component you can look up ISA11 and replace it with 'U' if you do not want to use it.
    But I think the problem may come up again, because if you have '^' in ISA11 then most probably you are using it in the document as the repetition separator. So please check that you are not using '^' in the document before you apply this pipeline component.
    Thanks
    Gyan
    If this answers your question, please mark it as "Answered".

  • ** Create New Transaction Check Box in BPM

    Hi Friends,
    I have ticked the 'Create New Transaction' check box for every step in the BPM. In the Block I have set the 'Block Start' & 'Block End' properties to 'New Transaction'.
    I have gone through the SAP Help on the transactional behaviour of BPM. In some places it is mentioned to check this to increase system performance, since database hits will be reduced; in other places 'No New Transaction' is recommended.
    I am confused about this.
    Could you kindly help me to understand it clearly? (For example, what is the advantage of 'Create New Transaction'? What happens if it is unticked? ...)
    Kind regards,
    Jegathees P.

    It depends on how you decide to use the property.
    The help mentions:
    Transactional Behavior for Specific Step Types
    At runtime, the system normally creates a separate transaction for each step. The transaction then covers this step only. However, you can influence the transactional behavior of particular step types. In the step properties, you can define that the system is not to start a new transaction when the step is executed. The system then executes the step in the transaction that was started at the time of execution. Consequently, no background work item is created for the step and the database does not need to be accessed. In this way you can improve system performance.
    By choosing not to use a new transaction in the right places, you can improve performance.
    It also depends on the step type. For example, in the case of a synchronous send, this is what the help says:
    No New Transaction: You can expect better system performance if the system does not create a new transaction for the send step. However, only select this setting if by repeating the send step the result is not changed (idempotency). This is the case, for example, with lookup operations.
    Note
    If you have selected this setting and an error occurs, synchronous sending can also cause problems in the pipeline and the receiver system following the rollback. Messages with an error status can also remain in the pipeline.
    New Transaction: If the result is changed when the step is repeated, choose Create New Transaction. Otherwise the following error situation can occur: The system successfully executes a synchronous send step but an error occurs in the subsequent step. The system rolls back processing and executes all steps in the transaction - including the send step - again. If the send step results in a write-to operation in the receiver system, for example, creating a purchase order, this is also repeated. This can result in semantic errors. 
    Mainly, transaction processing helps you to have rollback implemented.

  • BPM Sync/Async Scenario:  error: "Timeout condition of pipeline reached"..!

    Hi,
    I am doing a Sync/Async BPM scenario.
    -> Receive a message, process it, and respond back to the sender.
    Sometimes it works correctly. Sometimes the message comes in, stays in XI, and fails with the error
    <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="1">
      <SAP:Category>XIServer</SAP:Category>
      <SAP:Code area="INTERNAL">PL_TIMEOUT</SAP:Code>
      <SAP:P1 />
      <SAP:P2 />
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:ApplicationFaultMessage namespace="" />
      <SAP:Stack>Timeout condition of pipeline reached</SAP:Stack>
      <SAP:Retry>N</SAP:Retry>
      </SAP:Error>
    I observed that the messages stay in transaction SXMS_SAMON and fail after some time.
    What might be the reason? What needs to be done to stop these kinds of errors?
    Thanks
    Deepthi.

    Hi Praveen,
    Webservice <--> XI -->BW .
    BPM :
    start -> Receive (Request) -> Transformation (ResponseMap) -> Send (Send to BW) -> Send (Send Response) -> stop.
    Messages are getting stuck in SMQ1 and SM58 at these three points.
    1. Message comes and stays in SXMB_MONI in status "Log Version"
    The messages are stuck in SMQ1 in READY status without doing any processing.
    XBQO$PE_WS90100002    WORKFLOW_LOCAL_100 1 READY 26.02.2009
    Once I push the queue by activating/unlocking it, it processes.
    2. When it tries to send the message to R/3 (the backend system), it waits in SM58 with the entry below.
    WF-BATCH SWW_WI_EXECUTE_INTERNAL_RFC WORKFLOW_LOCAL_100 Transaction recorded
    I manually execute the LUW to push it. Once I have done that, the message goes to the R/3 system and the response mapping also completes.
    3. Again it waits in SM58 with the details below while sending the response to the sender.
    PIAFUSER  SWW_WI_COMP_EVENT_RECEIVE_IBF  WORKFLOW_LOCAL_100  Transaction recorded
    We again manually execute the LUW. Once we have done that, the response message goes back to the sender.
    Any idea how to solve this?
    Thanks
    Deepthi

  • BPM step "received" in status waiting in transact. SWI1

    Hello,
    Possibly related to my other topic from today, I have a funny situation in a BPM.
    The first step is a receive step for an IDoc coming into my BPM. The corresponding mapping is IDoc -> IDoc.
    The interface mapping is async outbound ORDERS05 to async abstract ORDERS05.
    So this process looks as it should.
    The process is started by an incoming IDoc, and the BPM continues to the end, except for the first step (receive), which can be found in SXMB_MONI with "queue stopped" / "recorded for outbound processing" in pipeline step "receiver grouping".
    The message for this step in transaction SWF_XI_SWI1 is "waiting for event RECEIVED of object type ZXI_PROXY_RFC_STATUS_ABST_0001".
    The stopped queue is XBTO (found in transaction SMQ2).
    Does anybody know what is going on here?
    Where does this object type come from? (The step belongs to receiving a standard ORDERS05 IDoc.)
    And why is this process step stopped, but not the whole BPM?
    Our idea: we had activated ALEAUD for IDoc acknowledgement. Does this step try to return an acknowledgement but cannot?
    Thank you for your help!
    Regards
    Dirk

    Hi Moorthy,
    thank you.
    Unfortunately none of this helps.
    There are no entries in ST22, and the entries in SWWL do not correspond to our problem.
    I deleted the problematic queue entries and we tried to restart the process.
    Possibly we have found the problem.
    In one scenario we have no BPM. There, an HTTP receiver is obviously implemented incorrectly. It is the one that receives the message which starts the BPM.
    I am not sure why this results in the blocked queue. Usually connection problems show up with an error entry in SXMB_MONI.
    No idea about the BPM that is stuck in the queue.
    Possibly the same problem, because in MONI we can see that the system stops at receiver grouping.
    When we have fixed the HTTP receiver issue we will test the BPM again.
    Regards
    Dirk

  • Transaction roll back exception

    I am getting a transaction rollback exception during a long-running query. I am running
    WL 5.1 and don't think I can use
    trans-timeout-seconds in the transaction-descriptor section of the weblogic-ejb-jar
    descriptor file. Can anyone advise whether there is another way to do this, or does
    5.1 support it?
    thanks
    Gary
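    One alternative worth trying, assuming the work can be run under a bean-managed (client-demarcated) transaction rather than container-managed: the standard JTA UserTransaction lets the caller set the timeout per transaction before begin(), instead of relying on trans-timeout-seconds in the deployment descriptor. A minimal sketch; the JNDI name is the usual WebLogic binding, and the class/method names are illustrative:
    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class LongQueryRunner {

        public void runLongQuery() throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction ut = (UserTransaction) ctx.lookup("javax.transaction.UserTransaction");

            ut.setTransactionTimeout(600); // seconds; must be set before begin()
            ut.begin();
            try {
                // ... run the long query / EJB calls here
                ut.commit();
            } catch (Exception e) {
                ut.rollback();
                throw e;
            }
        }
    }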


  • Getting error 'root transaction wanted to commit, but transaction aborted'

    We have a module in our project which reads data from an XML file and merges the data into the database. At one end the merge goes to a SQL Server 2005/SQL Server 2000 database; at the other end it goes to an Oracle database. We have a portal application developed in ASP.NET from which we merge the data.
    When the amount of data to be merged is very large, we get the message 'The root transaction wanted to commit, but transaction aborted'. Right now we are getting this message when we try to merge data into an Oracle database.
    But this problem is very intermittent. It happens only when there is a huge amount of data to be inserted into one table.
    As I have mentioned in my post, we use the Windows Server 2003 operating system with Service Pack 2. This error does not occur when we do the same operation with Service Pack 1.
    So is it OS dependent?

    Please find the details of the log file.
    DMS_CORE_DAL_DBERROR
    at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure)
    at Oracle.DataAccess.Client.OracleCommand.ExecuteNonQuery()
    at Oracle.DataAccess.Client.OracleCommand.ExecuteNonQuery()
    at CoreServices.DAL.DataManager.ExecuteNonQueryProc(DBConnection foConn, String fProcName, DOList foParamDOList)
    *** ORA-02291: integrity constraint (ADVTVS.FK_JCARD_JCARD_LAB) violated - parent key not found
    ORA-06512: at "ADVTVS.PKG_SYNC_MERGE_TRNS_SERVICE", line 318
    ORA-06512: at line 1 ---
    Server stack trace:
    at CoreServices.Pipeline.TransactionPipeline.Process(IPipelineable& foPipeLineDataObject, PipelineOperation fiPipelineOprn)
    at System.Runtime.Remoting.Messaging.Message.Dispatch(Object target, Boolean fExecuteInContext)
    at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg, Int32 methodPtr, Boolean fExecuteInContext)
    Exception rethrown at [0]:
    at DataSync.MergeData.MergeDataManager.Merge(Int32 fiDealerId)
    at Client.DataSync.cmdMerge_Click(Object sender, EventArgs e)
    DATASYNC_MERGE
    at System.Runtime.InteropServices.Marshal.ThrowExceptionForHR(Int32 errorCode, IntPtr errorInfo)
    at System.EnterpriseServices.Thunk.Callback.DoCallback(Object otp, IMessage msg, IntPtr ctx, Boolean fIsAutoDone, MemberInfo mb, Boolean bHasGit)
    at System.EnterpriseServices.ServicedComponentProxy.CrossCtxInvoke(IMessage reqMsg)
    at System.EnterpriseServices.ServicedComponentProxy.Invoke(IMessage request)
    at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
    at CoreServices.Pipeline.TransactionPipeline.Process(IPipelineable& foPipeLineDataObject, PipelineOperation fiPipelineOprn)
    at DataSync.MergeData.MergeDataManager.Merge(Int32 fiDealerId) at System.Runtime.InteropServices.Marshal.ThrowExceptionForHR(Int32 errorCode, IntPtr errorInfo)
    at System.EnterpriseServices.Thunk.Callback.DoCallback(Object otp, IMessage msg, IntPtr ctx, Boolean fIsAutoDone, MemberInfo mb, Boolean bHasGit)
    at System.EnterpriseServices.ServicedComponentProxy.CrossCtxInvoke(IMessage reqMsg)
    at System.EnterpriseServices.ServicedComponentProxy.Invoke(IMessage request)
    at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
    at CoreServices.Pipeline.TransactionPipeline.Process(IPipelineable& foPipeLineDataObject, PipelineOperation fiPipelineOprn)
    at DataSync.MergeData.MergeDataManager.Merge(Int32 fiDealerId)
    *** The root transaction wanted to commit, but transaction aborted ---
    The ORA codes are ORA-02291 and ORA-06512.
    We do not get any of these errors when we merge the same data from Windows Server 2003 SP1, but if we execute it from SP2 we get this error.
    Are there any hotfixes provided by MS to fix this problem?

  • Pipelined function ignores DML changes on subqueries

    Hello all,
    I have a very specific issue when using a pipelined function in a complex subquery: the function ignores the changes made in the current transaction. The problem is the hidden materialize hint sometimes used by the Oracle optimizer. I say sometimes because it depends mostly on the execution plan and the complexity of the query.
    I can repeat the problem with a dummy scenario.
    Let's say we have a dummy table with a simple record :
    CREATE TABLE DUMMY ("NAME" VARCHAR2(50 BYTE));
    INSERT INTO DUMMY VALUES('Original name');
    We then create a package which will contain our pipelined function and its record object and collection:
    CREATE OR REPLACE PACKAGE PKG_DUMMY AS
      TYPE DUMMY_RECORD  IS RECORD (NAME VARCHAR2(50 BYTE));
      TYPE DUMMY_RECORDS IS TABLE OF DUMMY_RECORD;
      FUNCTION FUNC_GET_DUMMY_NAME RETURN DUMMY_RECORDS PIPELINED;
    END PKG_DUMMY;
    CREATE OR REPLACE PACKAGE BODY PKG_DUMMY AS
      FUNCTION FUNC_GET_DUMMY_NAME RETURN DUMMY_RECORDS PIPELINED AS
      BEGIN
        FOR CUR IN ( SELECT * FROM DUMMY )
        LOOP
          PIPE ROW (CUR);
        END LOOP;
        RETURN;
      END FUNC_GET_DUMMY_NAME;
    END PKG_DUMMY;
    With this SQL query, we can return the contents of the table through the pipelined function:
    WITH DUMMY_NAME AS
    (
      SELECT "NAME"
      FROM   TABLE(PKG_DUMMY.FUNC_GET_DUMMY_NAME())
    )
    SELECT "NAME"
    FROM   DUMMY_NAME;
    Result
    Original name
    If we modify the DUMMY table with a new name, without committing, and re-execute the query, we now see the uncommitted change:
    UPDATE DUMMY SET "NAME" = 'New name';
    Result
    New name
    But if we add the materialize hint to the subquery (still without doing a commit or rollback), we get back the original value, hence my issue:
    WITH DUMMY_NAME AS
    (
      SELECT /*+ materialize */ "NAME"
      FROM   TABLE(PKG_DUMMY.FUNC_GET_DUMMY_NAME())
    )
    SELECT "NAME"
    FROM   DUMMY_NAME;
    Result
    Original name
    I know I can force my subquery to use the inline hint instead of the materialize hint chosen by the optimizer, but then the query loses a lot of performance. Is there a way to force Oracle to see the current DML changes with the materialize hint on a pipelined function in a subquery?
    This thread is also for this issue : http://stackoverflow.com/questions/1597467/is-using-a-select-inside-a-pipelined-pl-sql-table-function-allowed

    Hi Eliante, Hi Dominic,
    Very interesting. Here is what I can reproduce in Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit:
    sql > truncate table dummy;
    Table truncated.
    sql >INSERT INTO DUMMY VALUES('Original name');
    1 row created.
    Please note that I did not commit.
    sql > with dummy_name as
      2  (
      3  select  "NAME"
      4  from table(pkg_dummy.func_get_dummy_name())
      5  )
      6  select "NAME"
      7  from dummy_name;
    NAME
    Original name
    sql> start c:\dispcursor
    PLAN_TABLE_OUTPUT
    SQL_ID  838mtur4m74j2, child number 0
    with dummy_name as ( select  "NAME" from table(pkg_dummy.func_get_dummy_name()) ) select "NAME"
    from dummy_name
    Plan hash value: 117055
    | Id  | Operation                         | Name                | Starts | A-Rows |   A-Time   | Buffers |
    |   1 |  COLLECTION ITERATOR PICKLER FETCH| FUNC_GET_DUMMY_NAME |      1 |      1 |00:00:00.01 |      15 |
    Note
       - rule based optimizer used (consider using cbo)
    17 rows selected.
    sql > with dummy_name as
      2  (
      3  select /*+ materialize */ "NAME"
      4  from table(pkg_dummy.func_get_dummy_name())
      5  )
      6  select "NAME"
      7  from dummy_name;
    no rows selected
    sql >start c:\dispcursor
    PLAN_TABLE_OUTPUT
    SQL_ID  9frx3wjk992rd, child number 0
    with dummy_name as ( select /*+ materialize */ "NAME" from table(pkg_dummy.func_get_dummy_name()) ) select "NAME" from dummy_name
    Plan hash value: 1359790764
    | Id  | Operation                           | Name                        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    |   1 |  TEMP TABLE TRANSFORMATION          |                             |      1 |        |      0 |00:00:00.01 |      20 |       |       |          |
    |   2 |   LOAD AS SELECT                    |                             |      1 |        |      0 |00:00:00.01 |      19 |  1024 |  1024 |          |
    |   3 |    COLLECTION ITERATOR PICKLER FETCH| FUNC_GET_DUMMY_NAME         |      1 |        |      0 |00:00:00.01 |      17 |       |       |          |
    |   4 |   VIEW                              |                             |      1 |   8168 |      0 |00:00:00.01 |       0 |       |       |          |
    |   5 |    TABLE ACCESS FULL                | SYS_TEMP_0FD9D780C_BD7649E3 |      1 |   8168 |      0 |00:00:00.01 |       0 |       |       |          |
    16 rows selected.
    I can point out that the TABLE ACCESS FULL of the global temporary table SYS_TEMP_0FD9D780C_BD7649E3, created by Oracle in response to the materialize hint, returns *0 rows* in operation 5.
    Why?
    It seems to me that the reason is that this SYS_TEMP_0FD9D780C_BD7649E3 table is populated via direct path read/direct path write, and since the insert of *'Original name'* has not yet been written to disk, materializing the query produces an empty temporary table (empty in this case).
    This is why, if I had committed, I would not have seen such a discrepancy between those two queries.
    What do you think?
    Mohamed Houri
    www.hourim.wordpress.com

  • Consignment and pipeline settlement error

    Hi Expert!
    I get an error when processing consignment and pipeline settlement for company code 4424. The error is "No message was found for partner 146073/company code 4424". Can anyone tell me what has gone wrong and how to solve it?
    Thanks.

    Hi,
    The MRKO transaction is only for settling the pipeline and consignment scenarios.
    It has no effect if you maintain it like that.
    If you have any doubt, try it once in Development.
    Regards,
    Andra

  • Re: Creation of Pipeline Purchase Order or Consignment

    Hello all,
    Can you all explain to me how to create a pipeline purchase order or contract, so that a proof document is maintained?
    How is it done in SAP?
    Regards,
    Smitha

    Hi,
    For the pipeline process:
    1. Create the pipeline material under material type PIPE.
    2. Maintain the pipeline info record in ME11; here the prices and tax code are mandatory.
    Also, in SPRO > Enterprise Structure > Assignment > Materials Management > Assign standard purchasing organization to plant.
    There will not be any PO and GR process for pipeline materials. We consider that the materials are available via pipes in our plant, and we directly book consumption and then settle it.
    If stock is directly consumed from pipeline stock via movement type 201 P or 261 P in transaction MB1A or MIGO, then the following FI entry will appear:
    (GBB-VBR) Consumption Account - Dr
    (KON) Pipeline Liabilities - Cr
    Now do the pipeline settlement in MRKO. Here you cannot change the invoice value. The following FI entry will appear:
    Vendor Account - Cr
    Pipeline Liabilities - Dr
    Prerequisite for MRKO:
    - Maintain a condition record for output type KONS in MRM1.
    Then take a printout of the settlement document in MR91.
    Also refer following link;
    [Pipeline Handling 1|http://help.sap.com/saphelp_46c/helpdata/en/fd/45c3fe9d6411d189b60000e829fbbd/content.htm]
    [Pipeline Handling 2|http://help.sap.com/saphelp_erp2005/helpdata/en/4d/2b926443ad11d189410000e829fbbd/frameset.htm]

  • CONTAINER:atg.service.pipeline.RunProcessException: An exception was thrown from the context of the link named [loadCommerceItemObjects].; SOURCE:java.lang.RuntimeException:

    Can anyone help me find the cause of the following error?
    16:18:27,565 INFO  [PipelineManager] DEBUG Cancel Link Transaction
    16:18:27,565 INFO  [PipelineManager] DEBUG Transaction is TX_MANDATORY
    16:18:27,565 INFO  [PipelineManager] DEBUG Setting transaction to rollback
    16:18:27,565 INFO  [PipelineManager] DEBUG Cancel Chain Transaction
    16:18:27,565 INFO  [PipelineManager] DEBUG Transaction is TX_REQUIRED
    16:18:27,565 INFO  [PipelineManager] DEBUG Setting transaction to rollback
    16:18:27,565 ERROR [OrderManager]
    CAUGHT AT:
    CONTAINER:atg.service.pipeline.RunProcessException: An exception was thrown from the context of the link named [loadCommerceItemObjects].; SOURCE:java.lang.RuntimeException: CONTAINER:atg.repository.RepositoryException; SOURCE:org.jboss.util.NestedSQLException: Transaction is not active: tx=TransactionImple < ac, BasicAction: -53eaeff2:f142:52c29876:2a4f status: ActionStatus.ABORT_ONLY >; - nested throwable: (ja
            at atg.service.pipeline.PipelineChain.runProcess(PipelineChain.java:393)
            at atg.service.pipeline.PipelineChainContext.runProcess(PipelineChainContext.java:207)
            at atg.service.pipeline.PipelineManager.runProcess(PipelineManager.java:475)
            at atg.commerce.pipeline.CommercePipelineManager.runProcess(CommercePipelineManager.java:123)
            at atg.commerce.order.OrderImpl.ensureContainers(OrderImpl.java:1745)
            at atg.commerce.order.OrderImpl.getShippingGroups(OrderImpl.java:1084)
            at com.mk.integration.epicor.salesAudit.datamanager.EpicorSalesAuditDataManager.processShippingGroups(EpicorSalesAuditDataManager.java:477)
            at com.mk.integration.epicor.salesAudit.datamanager.EpicorSalesAuditDataManager.constructSalesAuditFeed(EpicorSalesAuditDataManager.java:431)
            at com.mk.integration.epicor.salesAudit.datamanager.EpicorSalesAuditDataManager.exportFullfilledOrder(EpicorSalesAuditDataManager.java:213)
            at com.mk.integration.epicor.salesAudit.processor.EpicorSalesAuditProcessor.exportFullfilledOrder(EpicorSalesAuditProcessor.java:42)
            at com.mk.integration.epicor.salesAudit.scheduler.EpicorSalesAuditScheduler.startSalesAuditExport(EpicorSalesAuditScheduler.java:65)
            at com.mk.integration.epicor.salesAudit.scheduler.EpicorSalesAuditScheduler.doScheduledTask(EpicorSalesAuditScheduler.java:49)
            at atg.service.scheduler.SingletonSchedulableService.performScheduledTask(SingletonSchedulableService.java:253)
            at atg.service.scheduler.ScheduledJob.runJobs(ScheduledJob.java:466)
            at atg.service.scheduler.Scheduler$2handler.run(Scheduler.java:782)
    Caused by: java.lang.RuntimeException: CONTAINER:atg.repository.RepositoryException; SOURCE:org.jboss.util.NestedSQLException: Transaction is not active: tx=TransactionImple < ac, BasicAction: -53eaeff2:f142:52c29876:2a4f status: ActionStatus.ABORT_ONLY >; - nested throwable: (javax.resource.ResourceException: Transaction is not active: tx=TransactionImple < ac, BasicAction: -53eaeff2:f142:52c29876:2a4f status:
            at atg.adapter.gsa.GSAItemDescriptor.loadProperty(GSAItemDescriptor.java:5994)
            at atg.adapter.gsa.GSAItem.getPersistentPropertyValue(GSAItem.java:1315)
            at atg.adapter.gsa.GSAItem.getPropertyValue(GSAItem.java:1208)
            at atg.adapter.gsa.GSAItem.getPropertyValue(GSAItem.java:1405)
            at atg.repository.RepositoryItemImpl.getPropertyValue(RepositoryItemImpl.java:151)

    Hi,
    It seems like you don't have an active transaction. Try to start or obtain a transaction before executing your required operations (see the sketch below).
    After that, commit the transaction (or roll it back if something goes wrong).
    Hope it helps.
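    A minimal sketch of that suggestion using plain JTA, assuming the scheduled job runs inside the application server so java:comp/UserTransaction is available (ATG also ships atg.dtm.TransactionDemarcation as a convenience wrapper around the same begin/try/finally pattern); class and method names are illustrative:
    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class SalesAuditExportTask {

        public void exportFulfilledOrders() throws Exception {
            UserTransaction ut = (UserTransaction)
                    new InitialContext().lookup("java:comp/UserTransaction");

            ut.begin();
            boolean ok = false;
            try {
                // ... load the order and walk its shipping groups / commerce items here,
                // so the repository calls run inside an active transaction
                ok = true;
            } finally {
                if (ok) {
                    ut.commit();
                } else {
                    ut.rollback(); // leave nothing half-written if the export fails
                }
            }
        }
    }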

  • Avoid transaction roll back using exception handler

    Hi everybody!
    I've created a proxy service in OSB using a DB adapter for polling a database table. I've configured the service to be transactional, with the same transaction for the response. The proxy has a route node that routes to a business service created from a DB adapter for storing the polled data. The insert table has an associated DB trigger which raises a custom exception in certain cases. I need to catch that exception in the route error handler to avoid the whole transaction being marked for rollback. For this, I've set up a Reply with Failure action in the handler.
    After testing the service I realized that it's not working as expected: the transaction is rolled back when the trigger raises the exception (so the record is not marked as polled in the source table).
    Could anyone explain to me which action should be taken to avoid a rollback when handling the exception?
    Best regards,
    Daniel.

    Hi Athhek,
    If I set Reply with Success, then both the route node and system error handlers are executed. I get the following response metadata:
    <con:metadata      xmlns:con="http://www.bea.com/wli/sb/test/config">
    <tran:response-code      xmlns:tran="http://www.bea.com/wli/sb/transports">1</tran:response-code>
    </con:metadata>
    and also the following error reaches the system error handler:
    <con:fault      xmlns:con="http://www.bea.com/wli/sb/context">
    <con:errorCode>BEA-382050</con:errorCode>
    <con:reason>
    Expected active transaction, actual transaction status: Marked Rollback
    </con:reason>
    <con:location>
    <con:path>response-pipeline</con:path>
    </con:location>
    </con:fault>
    When I set Reply with Error, only the route node handler is executed, but I still get:
    <con:metadata      xmlns:con="http://www.bea.com/wli/sb/test/config">
    <tran:response-code      xmlns:tran="http://www.bea.com/wli/sb/transports">1</tran:response-code>
    </con:metadata>
    Thank you.
