Transaction propagation and ESB

I am currently performing a product assessment for integration platforms (ESB).
The environment is roughly this: there will be a J2EE architecture that involves many types of components. In addition, an integration product will be installed.
A UserTransaction is started from a session bean, and this session bean performs two types of functions: functionality that resides entirely inside the J2EE container, and functionality that resides within the selected ESB product.
Is the transaction context propagated to the ESB product through some interface, in a way that would make the ESB functionality a solid part of the transaction?
Thank you for all your help!

From the advanced architecture document I found this "slide":
Transactions
• Global End-to-End JTA/XA Transactions
  • BPEL <-> ESB <-> BPEL
  • JCA <-> ESB <-> WSIF
• ESB Inherits Inbound Global Transactions
  • “Async” Routing Rules end the scope of the current transaction
  • New ESB-initiated transactions are grouped by ESB System
• Transaction Exception Handling and Rollback
  • Errors on existing inbound transactions are rolled back to the initiator
  • Errors on ESB-initiated transactions can be resubmitted
  • End-to-end message flow terminates on the first failed service,
    regardless of transaction state or owner
I guess "inherits inbound global transactions" means that ESB processes/functions can be made part of an existing transaction. If this is true, then this solves my problem :)
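The behavior hoped for in the question can be sketched in plain Java. This is a hedged toy illustration only, not any real ESB interface or the javax.transaction API: a bean-managed transaction spans a container-local resource and an ESB-side resource, and both commit or roll back together. Whether this actually holds for a given product depends on it exposing an XA-capable binding (e.g. JCA or XA JMS); all names below (`Resource`, `sessionBeanMethod`) are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class GlobalTxSketch {

    // A transactional resource buffers writes until commit.
    static class Resource {
        final List<String> committed = new ArrayList<>();
        final List<String> pending = new ArrayList<>();
        void write(String v) { pending.add(v); }
        void commit() { committed.addAll(pending); pending.clear(); }
        void rollback() { pending.clear(); }
    }

    static final Resource containerDb = new Resource(); // inside the J2EE container
    static final Resource esbEndpoint = new Resource(); // enlisted via an XA binding

    // Bean-managed transaction pattern: do the work, commit both resources,
    // or roll back both if the ESB step fails.
    static void sessionBeanMethod(boolean esbFails) {
        containerDb.write("order-row");
        esbEndpoint.write("esb-message");
        if (esbFails) {          // ESB step failed: undo both resources
            containerDb.rollback();
            esbEndpoint.rollback();
        } else {                 // both steps succeeded: commit both
            containerDb.commit();
            esbEndpoint.commit();
        }
    }

    public static void main(String[] args) {
        sessionBeanMethod(false);
        System.out.println(containerDb.committed.size()); // 1
        System.out.println(esbEndpoint.committed.size()); // 1
        sessionBeanMethod(true);
        System.out.println(containerDb.committed.size()); // still 1: rolled back
    }
}
```

Without the XA enlistment, the ESB side would commit independently of the container side, which is exactly the situation the question is trying to rule out.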

Similar Messages

  • Transaction propagation and integration

    I am currently performing a product assessment for integration platforms (EAI/ESB).
    The environment is roughly this: there will be a J2EE architecture that involves many types of components. In addition, an integration product will be installed.
    A UserTransaction is started from a Session Bean, and this session bean performs two types of functions: functionality that resides entirely inside the J2EE container, and functionality that resides within the selected EAI/ESB product.
    Is the transaction context propagated to the EAI/ESB product through some interface, in a way that would make the EAI/ESB functionality a solid part of the transaction?
    Thank you for all your help!

    You will need to pick a product that supports distributed transactions. This works, for example, with most JMS implementations and datasources.
    (see http://www.onjava.com/pub/a/onjava/2001/05/23/j2ee.html?page=3 for an explanation on the subject)
    Regards,
    Lonneke

  • How to remove error from propagation and verify replication is ok?

    We have a one-way schema-level Streams setup, and the target DB is a 3-node RAC (named PDAMLPR1). I ran a large insert at the source DB (35 million rows). After committing on the source, I ran a failure test on the target DB by shutting down the entire database. Streams seems to have stopped, as the heartbeat table sequence (it inserts a row each minute) on the target still reflects last night. We get this error in dba_propagation:
    ORA-02068: following severe error from PDAMLPR1
    ORA-01033: ORACLE initialization or shutdown in progress
    ORA-06512: at "SYS.DBMS_AQADM_SYS", line 1087
    ORA-06512: at "SYS.DBMS_AQADM_SYS", line 7639
    ORA-06512: at "SYS.DBMS_AQADM", line 631
    ORA-06512: at line 1
    08-FEB-10
    while capture, propagation, and apply are all in enabled status. I restarted capture and propagation at the source DB, but still see the error message above. My questions are:
    1. How do I delete the error from dba_propagation?
    2. How do I verify that Streams is still running fine?
    In a normal test, during such a large insert, the heartbeat table added a row in an hour. Very slow.
    Thanks for advice.

    Well, if I can give you my point of view: I think that 35 million LCRs is totally unreasonable. Did you really post a huge insert of 35 million rows and then commit that single, utterly huge transaction? Don't be surprised it's going to work very hard for a while!
    With a default setup, Oracle recommends committing every 1000 LCRs (row changes).
    There are ways to tune Streams for large transactions, but I have not done so personally. Look on Metalink; you will find information about that (mostly document IDs 335516.1, 365648.1 and 730036.1).
    One more thing: you mentioned a failure test. Your target database is RAC. Did you read about queue ownership and queue_to_queue propagation? You might have an issue related to that.
    How did you set up your environment? Did you give enough streams_pool_size? You can watch V$STREAMS_POOL_ADVICE to check what Oracle thinks is good for your workload.
    If you want to skip the transaction, you can remove the table rule or use the IGNORETRANSACTION apply parameter.
    Hope it helps
    Regards,

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write through, all within a transaction.
         The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
         If the second transactionMap throws exceptions, the caches do not appear to rollback correctly.
         I can wrap the whole operation in a JDBC transaction that rolls back the database correctly but the caches are not all rolled back because they are committed one by one?
         For example, I write to two transaction maps, each one created from separate caches. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
         Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
         I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
         (SMARTPC being my pc name)
         Has anybody else had this problem? Bonus points for describing how to fix it!
         Mike

    > The transaction support in Coherence is for Local
         > Transactions. Basically, what this means is that the
         > first phase of the commit ("prepare") acquires locks
         > and ensures that there are no conflicts. The second
         > phase ("commit") does nothing but push data out to
         > the caches.
         This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
         > The problem is that when you are using a
         > CacheStore module, the exception is occurring during
         > the second phase.
         If you start using a CacheStore module, then the database update has to be part of the atomic procedure.
         >
         > For this reason, write-through and cache transactions
         > are not a supported combination.
         This is not true for a cache transaction that updates a single cache entry, right?
         >
         > For single-cache-entry updates, CacheStore operations
         > are fully fault-tolerant in that the cache and
         > database are guaranteed to be consistent during any
         > server failure (including failures during partial
         > updates). While the mechanisms for fault-tolerance
         > vary, this is true for both write-through and
         > write-behind caches.
         For the write-through case, I believe the database and cache are atomically updated.
         > Coherence does not support two-phase CacheStore
         > operations across multiple CacheStore instances. In
         > other words, if two cache entries are updated,
         > triggering calls to CacheStore modules sitting on
         > separate servers, it is possible for one database
         > update to succeed and for the other to fail.
         But once we have multiple CacheStore modules, then once one atomic write-thru put succeeds that means database is already updated for that specific put. There is no way to roll back the database update (although we can roll back the cache update). Therefore, you may end up in partial commits in such situations where multiple cache entries are updated across different CacheStore modules.
         If I use write-behind CacheStore modules, I can roll back entirely and avoid partial commits? Since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different than local transactions... Is my understanding correct?
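The partial-commit hazard discussed above can be shown with a toy sketch. This is illustrative only; the `CacheStore` class below is a stand-in for the idea, not the real Coherence API: with write-through, each store persists immediately, so a failure in the second store leaves the first database already updated, and the cache layer has no way to undo that durable write.

```java
import java.util.HashMap;
import java.util.Map;

public class WriteThroughDemo {

    // Each "CacheStore" persists straight to its own database (a map here).
    static class CacheStore {
        final Map<String, String> database = new HashMap<>();
        final boolean failOnStore;
        CacheStore(boolean failOnStore) { this.failOnStore = failOnStore; }
        void store(String key, String value) {
            if (failOnStore) throw new RuntimeException("database error");
            database.put(key, value);
        }
    }

    static final CacheStore storeA = new CacheStore(false);
    static final CacheStore storeB = new CacheStore(true);

    public static void main(String[] args) {
        // "Commit" phase: push each entry to its store in turn.
        try {
            storeA.store("k1", "v1");   // succeeds: database A is now updated
            storeB.store("k2", "v2");   // fails: database B rejects the write
        } catch (RuntimeException e) {
            // The cache layer can roll back its own in-memory changes,
            // but database A's update is already durable and cannot be
            // undone from here: a partial commit.
        }
        System.out.println(storeA.database.containsKey("k1")); // true
        System.out.println(storeB.database.containsKey("k2")); // false
    }
}
```

This is why the thread concludes that write-behind looks closer to a local transaction: the database writes are deferred, so nothing durable has happened yet at rollback time.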

  • Have a transaction propagated to two remote machines!!!(URGENT!!!)

              Can we have a transaction propagated to two EJBs on different machines if we have database interaction in both?
              I tested it out with Account beans (examples) deployed on two different (remote) servers, both servers having the same connection pool name and the mapping to the
              same Oracle database (using the Oracle thin driver as well as the WebLogic driver). One of the beans is on a local server and one on a remote server, and both are accessed in the
              same transaction context. What happens is that the 2nd bean accessed throws a NullPointerException
              when it tries to getConnection().
              This is the server side stack trace -----
              SQLException: java.sql.SQLException: java.lang.NullPointerException:
              Start server side stack trace:
              java.lang.NullPointerException
              at weblogic.jdbc.common.internal.ConnectionMOWrapper.<init>(ConnectionMO
              Wrapper.java:42)
              at weblogic.jdbc.common.internal.ConnectionEnv.setConnection(ConnectionE
              nv.java:142)
              at weblogic.jdbc.common.internal.DriverProxy.execute(DriverProxy.java:17
              3)
              at weblogic.t3.srvr.ClientRequest.execute(ClientContext.java:1030)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java, Compiled Code)
              End server side stack trace
              It appears that when the database call on the 2nd WL server is routed to the first WL server (the server that established the first connection for the transaction) for the database connection, it is not able to find the connection (and hence the bombing). I've been going nuts over this for two days. Please help. We need to use WebLogic for our project and I need to confirm that this functionality works!
              I'm attaching the stateless bean code which accesses both these beans.
              [TraderBean.java]
              

              Hi,
              Are you using a cluster?
              You can definitely be in one transaction if you just access one data source; that's a two-phase transaction.
              "kartik" <[email protected]> wrote:
              >
              >
              >
              >Can we have a transaction propagated to two ejb's in different machines if we have database interaction in both?
              >
              >I tested it out with Account beans (examples)
              > deployed on two different(remote) servers both servers having the same connection pool name and the mapping to the
              > same oracle database (Using the oracle thin driver as well as the Weblogic Driver). One of the beans is in a local server and one in a remote server and both are accessed in the
              > same transaction context. What happens is that the 2nd bean accessed throws a Null pointer Exception
              > when it tries to getConnection().
              >
              >This is the server side stack trace -----
              >SQLException: java.sql.SQLException: java.lang.NullPointerException:
              >Start server side stack trace:
              >java.lang.NullPointerException
              > at weblogic.jdbc.common.internal.ConnectionMOWrapper.<init>(ConnectionMO
              >Wrapper.java:42)
              > at weblogic.jdbc.common.internal.ConnectionEnv.setConnection(ConnectionE
              >nv.java:142)
              > at weblogic.jdbc.common.internal.DriverProxy.execute(DriverProxy.java:17
              >3)
              > at weblogic.t3.srvr.ClientRequest.execute(ClientContext.java:1030)
              > at weblogic.kernel.ExecuteThread.run(ExecuteThread.java, Compiled Code)
              >End server side stack trace
              >-----------------
              >
              >It appears that when the database call on the 2nd WL server is routed to the first WL server(the server that established the first connection for the transaction) for the database connection it is not able to find the connection( and hence the bombing). I'm going nuts over this for two days. Please help. We need to use Weblogic for our project and i need to confirm that this functionality works!!!!
              >
              >I'm attaching the stateless bean code which accesses both these beans.
              >
              

  • Transaction propagation via plain Java classes?

              Hello,
              I have a question on transaction propagation in the following scenario:
              1. a method of EJB1 with setting "Required" is invoked.
              2. the method creates a plain Java class and invokes a method of the class
              3. the class's method invokes a method of EJB2 with setting "Required".
              Is my understanding of the EJB spec correct in assuming that the transaction created
              when the first EJB method was called will be propagated through the plain Java
              class (supposedly via association with the current thread), so the second EJB will
              participate in the same transaction?
              Thank you in advance,
              Sergey
              

    Yup, the current transaction is associated with the current thread.
              Dimitri
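A minimal sketch of the thread-association mechanism described in this exchange, assuming a hypothetical container that binds the current transaction to a ThreadLocal (class and method names are illustrative, not a real EJB API):

```java
public class TxPropagationDemo {

    // The "container" keeps the current transaction bound to the thread.
    static final ThreadLocal<String> CURRENT_TX = new ThreadLocal<>();

    static String ejb1SeenTx;
    static String ejb2SeenTx;

    // "EJB1" method with Required: starts a tx if none is active.
    static void ejb1Method() {
        if (CURRENT_TX.get() == null) {
            CURRENT_TX.set("tx-1");      // container begins a transaction
        }
        ejb1SeenTx = CURRENT_TX.get();
        new PlainHelper().doWork();      // plain Java class in the middle
        CURRENT_TX.remove();             // container commits at method exit
    }

    // Plain class: no EJB machinery, just a method call on the same thread.
    static class PlainHelper {
        void doWork() {
            ejb2Method();                // invokes "EJB2"
        }
    }

    // "EJB2" method with Required: joins the tx already bound to the thread.
    static void ejb2Method() {
        ejb2SeenTx = CURRENT_TX.get();
    }

    public static void main(String[] args) {
        ejb1Method();
        System.out.println(ejb1SeenTx.equals(ejb2SeenTx)); // prints true
    }
}
```

The plain class never touches the transaction; because the call chain stays on one thread, EJB2's Required attribute finds and joins the transaction EJB1 started. (This thread-bound model is also why hand-spawned threads inside an EJB do not inherit the transaction.)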
              

  • There are two transactions ZJPVCS303 and ZJPVCS303_US for one single Report

    When run as a batch program (currently this is the case), or with T-Code ZJPVCS303, the selection screen is unchanged (except for the additional sales area above).
    - When run as T-Code ZJPVCS303_UL (UL stands for Upload), the selection screen is changed.  The Unix file option is no longer available, and the user is able to upload a local file (in the same format as the current Unix file, but tab delimited) to the program for processing.
    Requirements:
    There are two transactions ZJPVCS303 and ZJPVCS303_US for one single Report.
    ->When ZJPVCS303 Transaction is executed, the file is uploaded from the Application
      server to SAP R/3. The selection screen parameters would be:
      Logical Filename:
      Sales Organization:
      Distribution Channel:
      Division:
    ->When ZJPVCS303_US Transaction is executed, the file is uploaded from the Presentation Server
      to SAP R/3. When this transaction is executed, it should not have the 'Logical
      Filename' parameter anymore on the selection-screen. Instead it should only have
      Local File name on the presentation server:
      Sales Organization:
      Distribution Channel:
      Division:
        The same thing is applicable for the other transaction ZJPVCS303. When transaction ZJPVCS303
    is executed, it should not have the 'Local Filename' parameter anymore on the selection-screen. Instead it should only have
    Logical Filename:
    Sales Organization:
    Distribution Channel:
    Division:
    So how should I make these parameters invisible depending on which transaction code is executed?
    I have an idea of using MODIF ID and LOOP AT SCREEN ... MODIFY SCREEN.
    I also have an idea of using SY-TCODE.
    EX:
    AT SELECTION-SCREEN OUTPUT.
    IF SY-TCODE = 'ZJPVCS303'.
    LOOP AT SCREEN.
    IF SCREEN-GROUPID = 'GRP'.
       SCREEN-INPUT   = 0.
       SCREEN-INVISIBLE = 1.
       MODIFY SCREEN.
    ENDIF.
    ENDLOOP.
    ELSEIF SY-TCODE = 'ZJPVCS303_US'.
    LOOP AT SCREEN.
    IF .....
    ENDLOOP.
    ENDIF.
    ENDIF.
    But I am not able to get the output which I require. Please help me out.

    Hello Rani
    Basically the transaction determines whether upload starts from application server (AS) or presentation server (PC). Thus, you will have the following parameter:
    PARAMETERS:
      p_as_fil          TYPE filename   MODIF ID unx,  " e.g. Unix server
      p_pc_fil          TYPE filename   MODIF ID wnd.  " e.g. Windows PC
    AT SELECTION-SCREEN OUTPUT.
      CASE syst-tcode.
    *   transaction(s) for upload from server (AS)
        WHEN 'ZJPVCS303'.
          LOOP AT screen.
            IF ( screen-group1 = 'UNX' ).
              screen-input = 0.
              screen-invisible = 1.
              MODIFY screen.
            ENDIF.
          ENDLOOP.
    *   transaction(s) for upload from local PC (PC)
        WHEN 'ZJPVCS303_US'.
          LOOP AT screen.
            IF ( screen-group1 = 'WND' ).
              screen-input = 0.
              screen-invisible = 1.
              MODIFY screen.
            ENDIF.
          ENDLOOP.
       WHEN others.
       ENDCASE.
    Regards
      Uwe

  • What is difference between ESB Service and ESB Service Group

    Guys, please tell me: what is the difference between an ESB System and an ESB Service Group, and when do we use them, i.e. under what conditions?
    Many Thanks
    Deepak

    The use of these is explained in the ESB developers guide along with some examples in what case to use them. See: http://download-west.oracle.com/docs/cd/B31017_01/integrate.1013/b28211/esb_jdev.htm#sthref167

  • What is the diff b/w transactional cube and std cube

    What is the diff b/w transactional cube and std cube

    Hi. Main differences:
    1) Transactional InfoCubes are optimized for writing data, i.e. multiple users can simultaneously write data into them without much effect on performance, whereas standard InfoCubes are optimized for reading data, i.e. through queries.
    2) Transactional InfoCubes can be loaded through the SEM process as well as the normal loading process. Standard InfoCubes can be loaded only through the normal loading process.
    3) The way data is stored is the same, but the indexing and partitioning aspects are different, since one is optimized for writing and the other for reading.
    Thanks
    Message was edited by:
            Ajeet Singh

  • Monitoring of remote system's Transactional RFC and Queued RFC

    Hello,
    In our production system, in RZ20 - CCMS Monitor Templates - Communication - Transactional RFC and Queued RFC - Outbound Queues - "Queues otherwise not monitored", we can see blocked queues for each client.
    The system is connected to Solution Manager, and we wish to implement the central auto-reaction in Solution Manager.
    However, I am unable to find Transactional RFC and Queued RFC for the remote system; they exist only for Solution Manager itself.
    Tell me, how can I do the central monitoring?

    Hello,
    First you need to check your landscape in Solman. In order to monitor any kind of activity, please follow these steps.
    Go to SMSY in Solman; under Landscape Components > Product Systems, select your satellite system, for example SAP ECC.
    On the main screen you will find the client for which you have generated the RFC connection. Check that the connections are working fine: go to edit mode and try to click on the Generate button. There will be a pop-up which gives a clear picture of the RFC connections that already exist, and you can also re-generate an RFC connection by cleaning it up. When you re-generate, please select, under "Actions after generation", the option to assign an RFC destination for system monitoring.
    But make sure there is no project impact on this RFC, i.e. that no logical components are using it and no projects are already running on this RFC connection.
    I would advise you to first use the "Assign and Check RFC" button, which is next to the Generate icon.
    Regards
    JUDE

  • CALL TRANSACTION 'MIR6' AND SKIP FIRST SCREEN .

    Hi all,
    i hope there is someone that can help me.
    My problem is: in an ABAP custom report, in a FORM user-command, my code is:
      CASE ls_selfield-sel_tab_field.
        WHEN 'ITAB-BELNR'.
          IF NOT ls_selfield-value IS INITIAL.
            SET PARAMETER ID 'RBN' FIELD ls_selfield-value.
            CALL TRANSACTION 'MIR6' AND SKIP FIRST SCREEN.
          ENDIF.
      ENDCASE.
    When I click on the output field of my report, it does not skip the first screen but only calls MIR6!
    How can I solve this problem?
    Regards.

    Hi,
    First of all, please don't use all caps for the subject line.
    Test the following sample code; it should solve your problem:
    DATA: it_bdcdata TYPE TABLE OF bdcdata,
          wa_it_bdcdata LIKE LINE OF it_bdcdata,
          belnr(10).
    belnr = '100'. " Give Document Number here
    DATA opt TYPE ctu_params.
    CLEAR wa_it_bdcdata.
    wa_it_bdcdata-program  = 'SAPMM08N'.
    wa_it_bdcdata-dynpro   = '100'.
    wa_it_bdcdata-dynbegin = 'X'.
    APPEND wa_it_bdcdata TO it_bdcdata.
    CLEAR wa_it_bdcdata.
    wa_it_bdcdata-fnam = 'BDC_CURSOR'.
    wa_it_bdcdata-fval = 'SO_BELNR-LOW'.
    APPEND wa_it_bdcdata TO it_bdcdata.
    CLEAR wa_it_bdcdata.
    wa_it_bdcdata-fnam = 'SO_BELNR-LOW'.
    wa_it_bdcdata-fval = BELNR.
    APPEND wa_it_bdcdata TO it_bdcdata.
    CLEAR wa_it_bdcdata.
    wa_it_bdcdata-fnam = 'BDC_OKCODE'.
    wa_it_bdcdata-fval = '=CRET'.
    APPEND wa_it_bdcdata TO it_bdcdata.
    opt-dismode = 'E'.
    CALL TRANSACTION 'MIR6' USING it_bdcdata OPTIONS FROM opt.
    Please Reply if any Issue,
    Best Regards,
    Faisal

  • Call transaction 'LS33' and skip first screen.

    This doesn't work:
    set parameter id 'LEN' field '04018091'.
    call transaction 'LS33' and skip first screen.
    I've found this:
    <a href="https://www.sdn.sap.com/irj/sdn/message?messageID=3715690">https://www.sdn.sap.com/irj/sdn/message?messageID=3715690</a>
    but I'm not sure about the exact meaning (I wouldn't know where to put the *),
    nor even whether it applies.
    Thanks in advance.

    Sorry, I mistook this forum for that of ABAP General.
    Anyway, the answer is in
    Call transaction 'LS33' and skip first screen.

  • CALL TRANSACTION 'ME23N' AND SKIP FIRST SCREEN.

    Hi,
    I'm using this in a report:
    SET PARAMETER ID 'BES' FIELD WA_ITAB-EBELN.
    SET PARAMETER ID 'BSP' FIELD WA_ITAB-EBELP.
    CALL TRANSACTION 'ME23N' AND SKIP FIRST SCREEN.
    It works OK.
    Is there any parameter ID to go to a specific part of a PO, for example
    directly to material data or conditions?
    thanks.
    Regards, Dieter

    Hi,
    I think you can do that by using BDC, as in this code. Please refer to this code:
    FORM get_user_command USING cr_ucomm     LIKE sy-ucomm      "#EC *
                                cr_selfield TYPE slis_selfield.
      CLEAR wa_final.
      DATA BEGIN OF bdcdata OCCURS 10.
              INCLUDE STRUCTURE bdcdata.
      DATA END OF bdcdata.
      CASE cr_ucomm.
        WHEN '&IC1'.
          READ TABLE it_final INTO wa_final
                   INDEX cr_selfield-tabindex.
          IF cr_selfield-fieldname = 'MATNR'.
            IF sy-subrc IS INITIAL.
              SET PARAMETER ID 'MXX' FIELD 'E'.
              SET PARAMETER ID 'MAT' FIELD wa_final-matnr.
              SET PARAMETER ID 'WRK' FIELD wa_final-werks.
              CALL TRANSACTION 'MM03' AND SKIP FIRST SCREEN.
            ENDIF.
          ELSEIF cr_selfield-fieldname = 'LIFNR'.
            IF sy-subrc IS INITIAL.
              CLEAR bdcdata.
              bdcdata-program  = 'SAPMF02K'.
              bdcdata-dynpro   = '0101'.
              bdcdata-dynbegin = 'X'.
              APPEND bdcdata.
              CLEAR bdcdata.
              bdcdata-fnam     = 'RF02K-LIFNR'.
              bdcdata-fval     = wa_final-lifnr.
              APPEND bdcdata.
              CLEAR bdcdata.
              bdcdata-fnam     = 'RF02K-EKORG'.
              bdcdata-fval     = wa_final-ekorg.
              APPEND bdcdata.
              CLEAR bdcdata.
              bdcdata-fnam     = 'RF02K-D0310'.
              bdcdata-fval     = 'X'.
              APPEND bdcdata.
              CLEAR bdcdata.
              bdcdata-fnam     = 'RF02K-D0320'.
              bdcdata-fval     = 'X'.
              APPEND bdcdata.
              CLEAR bdcdata.
              bdcdata-fnam     = 'BDC_OKCODE'.
              bdcdata-fval     = '/00'.
              APPEND bdcdata.
              CLEAR bdcdata.
              bdcdata-program  = 'SAPMF02K'.
              bdcdata-dynpro   = '0110'.
              bdcdata-dynbegin = 'X'.
              APPEND bdcdata.
              CALL TRANSACTION 'XK03' USING bdcdata MODE 'E'.
            ENDIF.
          ENDIF.
      ENDCASE.
    ENDFORM.  " get_user_command
    regards,
    sudha

  • CALL TRANSACTION 'VA02' AND SKIP FIRST SCREEN?

    Hi, Gurus:
    In my program, after I call CALL TRANSACTION 'VA02' AND SKIP FIRST SCREEN, the user may make some changes in VA02 and save.
    Then I want to read those changes as follows:
        SELECT SINGLE uvall INTO c_complete_flag FROM vbuk
         WHERE vbeln = lv_vbeln.
    Why can't I get the latest change?
    Thanks,

    VA02 uses the asynchronous update method, so it gives you a message like "Order 1111 was saved" while the database update is still going on.
    So you need to wait for some time before fetching the data from the database.
    You can use something like:
    WAIT UP TO 10 SECONDS.
    But this statement will force you to wait 10 seconds before doing anything.
    Regards,
    Naimesh Patel

  • Is the concept of Transactional DSO and Cube in BI 7?

    Hi,
    Is there a concept of a transactional DSO and Cube in BI 7?
    I see 3 types of Cubes [Standard or VirtualProvider (based on DTP, BAPI or Function Module)]
    but can't see a transactional Cube.
    I also see 3 types of DSO (Standard, Write-optimized, Direct update)
    but can't see a transactional DSO.
    See this link on DSO, with examples etc.:
    http://help.sap.com/saphelp_nw04s/helpdata/en/F9/45503C242B4A67E10000000A114084/content.htm
    I am looking for a similar summary for Cubes; have you seen one?
    Thanks

    New terminology in BI 7.x
    Transactional ODS =  DSO for direct update
    Transactional Cube = Realtime Cube
    Jen
