Database Transaction in xMII?

Is it possible to ensure a database transaction in xMII?
To clarify further, let me take an example:
Suppose I have 2 tables, Tab1 and Tab2, and I want to insert data into both tables simultaneously. The PK of Tab1 is a FK in Tab2. Is it possible to ensure that an insert into Tab1 will fail/roll back if there is an error inserting the dependent data into Tab2, without my doing an explicit rollback?

Very few applications are really database independent. In fact, the ActivePortal and QualityPortal infrastructure in xMII is not; it uses database-specific templates to achieve certain specific functionality.
Quite often, database "independence" means a compromise in functionality or performance, or unnecessary coding to work around the incompatibilities.
Instead, what you might consider is providing "template sets" for each of the databases you plan to support. You can simply use an xMII BLS global variable to define the current database type, and use incoming links to the QueryTemplate property to dynamically choose the correct template(s).
Also, if you wanted to use the database-agnostic code generated when you create a class in the NetWeaver Java IDE, you could create custom action(s) to insulate the user from the database specifics. In version 12.0, you could even share JDBC connection pools between your custom action(s) and xMII (and other WebAS applications).
- Rick
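
On the rollback question itself: if both inserts go through the same JDBC connection (for example, inside such a custom action), the usual pattern is to disable autocommit and commit only after both inserts succeed. A minimal sketch, with hypothetical column names for Tab1 and Tab2:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class TwoTableInsertAction {

    /** Inserts a parent row and its dependent row atomically (column names hypothetical). */
    public void insertBoth(DataSource ds, int id, String payload) throws Exception {
        try (Connection con = ds.getConnection()) {
            con.setAutoCommit(false);                 // one transaction for both inserts
            try {
                try (PreparedStatement ps =
                         con.prepareStatement("INSERT INTO Tab1 (id) VALUES (?)")) {
                    ps.setInt(1, id);
                    ps.executeUpdate();
                }
                try (PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO Tab2 (tab1_id, payload) VALUES (?, ?)")) {
                    ps.setInt(1, id);
                    ps.setString(2, payload);
                    ps.executeUpdate();
                }
                con.commit();                         // both rows become visible together
            } catch (Exception ex) {
                con.rollback();                       // an error on Tab2 undoes the Tab1 insert too
                throw ex;
            }
        }
    }
}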

Similar Messages

  • Can I use Java only for database transactions with VB as a front end

    Hello All.
    I am sorry, I don't know whether this is the correct place to post my question or not.
    I have developed one application using VB only. It is completely a desktop application and not suitable for a centralized database. Is there any facility to keep my front end the same (a VB application) and use JDBC for the database transactions? Is it possible for the view to be a VB application while the database connectivity is JDBC only?
    If my thought is wrong, please excuse me.
    Thanks in advance,
    sowjanya

    Sounds like a really bad choice, even if it were possible.

  • Database transaction management in Web services

    Hi,
    I am using Oracle8i and firing some database queries from my web services. I want to do transaction management for them, i.e., when one of the queries fails, I want to roll back. But when I write my own transaction management, it gives me an error:
    java.sql.SQLException: Cannot call Connection.commit in distributed transaction. Transaction Manager will commit the resource manager when the distributed transaction is committed.
    Can anyone please help me out on how to perform database transaction management in web services?
    Thanking you in advance,
    Prashant

    Unfortunately, there is no viable solution on the market today for managing transactions over web services. All implementations come with restrictions, e.g., Metro works only with EJBs on GlassFish, JBossTS works on JBoss but not with JAX-WS, and Atomikos supports only Axis as of now.
    1. See the explanation above.
    2. Yes, it can be, but the conditions mentioned above apply :-)
    3. www.oasis-open.org/committees/ws-tx/
    4. Unfortunately, as of now I do not see an easy solution to this problem.
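
    As for the error itself: inside a container-managed (distributed) transaction you must not call Connection.commit() yourself; either let the container commit, or demarcate the transaction with JTA. A minimal sketch of the bean-managed variant (java:comp/UserTransaction is the standard JNDI name; the datasource name and SQL are hypothetical):

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class OrderUpdater {

        public void update() throws Exception {
            InitialContext ctx = new InitialContext();
            // Standard JNDI location of the JTA transaction in most containers.
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource ds = (DataSource) ctx.lookup("jdbc/OrdersDS"); // hypothetical name

            utx.begin();
            try (Connection con = ds.getConnection();
                 Statement st = con.createStatement()) {
                st.executeUpdate("UPDATE orders SET status = 'SHIPPED' WHERE id = 1");
                utx.commit();   // the transaction manager commits; never con.commit()
            } catch (Exception ex) {
                utx.rollback(); // rolls back every resource enlisted in the transaction
                throw ex;
            }
        }
    }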

  • Execute Business Transaction from xMII via XI to SAP R/3

    Hello everyone,
    I have a requirement to post an SAP R/3 transaction from xMII via XI. Could anyone give me the steps I need to follow to accomplish this?
    Thanks,
    Mahesh

    Mahesh,
    As I understand it, your flow is xMII -> XI -> SAP ECC. In this case you can set up XI to host a web service, and xMII can call that web service to post the data to XI. You will have to get the WSDL for the XI web service and then use it in the Web Service action block in xMII to consume it. You will also need to set up the SOAP adapter in XI to receive the web service call from xMII. Once xMII calls the web service, XI can then update the data in SAP ECC through either the IDoc or the RFC adapter.
    So once your trigger point in xMII is activated, it should call the BLS, which in turn will call the XI web service and post the data to XI.
    In case the scenario is the other way around, you can also host your BLS as a web service in xMII; XI can then call that web service and send the data to xMII. Alternatively, you can use the HTTP Post action block in xMII to do the same.
    Let me know if you need any more info.
    Thanks,
    Kapil.

  • Database Transaction log suspected pages

    We migrated our production databases to a new SQL cluster, and when I ran a query to find suspected-page entries in the msdb database, I found 5 entries in the msdb.dbo.suspected_pages table. These entries are for the production database's transaction log file (file_id = 2), page_ids 1, 2, 3, 6, and 7; the event_type was updated to 4 for all pages after I did a DB restore, and the error_count is 1 for each page_id.
    As I understand it, before I did the DB restore there were corrupted transaction log pages, but the restore repaired those corrupted pages. Since the pages are repaired, there is no cause for concern for now. I now have a database consistency check job scheduled to check for database corruption on the report server each night, and I restore the database on the report server using a copy of the production database backup. Can someone please help me understand what caused the log file pages to get corrupted? Are page_ids 1, 2, 3, 6, and 7 called boot pages for the log file? What should I do if I find suspected pages for the log file?
    Thanks for your help in advance,
    Daizy
    Daizy

    Hi Andreas, thanks for your reply.
    FYI, you have event_types 1 and 3 for your database, but the event_type was updated to 4 on my system after I did the restore, and the date/time shows exactly when the event_type was updated.
    Please help me understand: isn't it the database data file that is organized into pages, not the log file?
    Thanks
    Daizy
    Hello Daizy
    Yes, the event types 1-3 were the error states before the "repair".
    After I did a full backup + restore, I now have type 4 just as you do.
    Yes, the log file is organized into so-called Virtual Log Files (VLFs), which have nothing in common with the 8-KB data pages of the data files. Therefore a page_id does not make sense there.
    You can read more on the architecture of the Transaction Log here:
    SQL Server Transaction Log Architecture and Management
    This article by Paul Randal might also be of interest to you:
    Transaction log corruption and backups
    Hope that helps.
    Andreas Wolter (Blog | Twitter)
    MCSM: Microsoft Certified Solutions Master Data Platform, MCM, MVP
    www.SarpedonQualityLab.com | www.SQL-Server-Master-Class.com
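
    If you want to pull those entries programmatically rather than from SSMS, a minimal JDBC sketch follows (the connection string is a placeholder; msdb.dbo.suspected_pages and its columns are the documented catalog table):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SuspectPages {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string; point it at your instance.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=msdb;integratedSecurity=true");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT database_id, file_id, page_id, event_type, error_count "
                   + "FROM msdb.dbo.suspected_pages")) {
                while (rs.next()) {
                    // event_type 1-3 = error states; 4 = restored; 5 = repaired; 7 = deallocated
                    System.out.printf("db=%d file=%d page=%d type=%d errors=%d%n",
                            rs.getInt(1), rs.getInt(2), rs.getLong(3),
                            rs.getInt(4), rs.getInt(5));
                }
            }
        }
    }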

  • Once again: Does MySQL allow multiple queries in one database transaction???

    Hello to ALL!!!
    The problem is:
    I'm trying to make a simple query (database: "myDB", engine: MyISAM/InnoDB - doesn't matter) with one table, "Info", which has two columns: "Id" (autoinc) and "info_c" (varchar):
    <cfif structkeyexists (form, "name")>
    <cfquery datasource="myDB" name="qDB" >
    INSERT into Info (info_c)
    VALUES ('#form.name#');
    SELECT @@identity AS Id
    </cfquery>
    </cfif>
    <cfform>
    <cfinput type="text" name="name">
    <cfinput type="submit" name="submit">
    </cfform>
    BUT after "Submit" I get:
    Error Executing Database Query.
    You have an error in your SQL syntax; check the manual that
    corresponds to your MySQL server version for the right syntax to
    use near '; select @@identity as Id' at line 2
    Please, tell me WHY???
    I have:
    MySQL
    Server information:
    MySQL version: MySQL 5.1.25-rc-community via TCP/IP
    Client Information:
    Version: MySQL client version 5.1.11
    Coldfusion Version Information:
    Version 8,0,0,176276
    Great THANKS for your answers!!!

    > SELECT @@identity AS Id
    AFAIK, MySQL uses LAST_INSERT_ID(); @@IDENTITY is MS SQL specific (though SCOPE_IDENTITY() is recommended over @@IDENTITY there).
    What version of CF are you using? ColdFusion supports the "result" attribute, which will return the ID value for simple inserts. See the documentation for details:
    http://livedocs.adobe.com/coldfusion/8/Tags_p-q_17.html
    > Do JDBC drivers for MySQL prohibit the use of multiple queries in a single database transaction???
    For security purposes this is disabled by default. To enable it you must modify your datasource URL:
    http://www.petefreitag.com/item/357.cfm
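
    For comparison, here is the same insert-then-fetch-the-id pattern in plain JDBC against MySQL (a sketch; the host, credentials, and inserted value are assumptions, the myDB/Info names come from the thread). The generated-keys API avoids a second statement entirely; appending allowMultiQueries=true to the Connector/J URL is what re-enables multi-statement strings if you truly need them:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InsertAndGetId {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; add "?allowMultiQueries=true" only if you really
            // need several statements in one string.
            String url = "jdbc:mysql://localhost:3306/myDB";
            try (Connection con = DriverManager.getConnection(url, "user", "pass");
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO Info (info_c) VALUES (?)",
                         Statement.RETURN_GENERATED_KEYS)) {
                ps.setString(1, "some value");
                ps.executeUpdate();
                try (ResultSet keys = ps.getGeneratedKeys()) {
                    if (keys.next()) {
                        System.out.println("New Id = " + keys.getLong(1)); // the auto-inc Id
                    }
                }
            }
        }
    }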

  • Database Transaction

    Hi,
    In our project we have EJBs (CAS) consumed in CAS. The EJBs communicate with SQL/Oracle DBs through standard queries (insert, update, etc.). Now we want to implement a "database transaction", i.e., if any one user is accessing a particular table of the DB from the front end, no other user should be able to modify the same table simultaneously. How is this feasible?
    Regards,
    Jagannathan R

    > It is the business/functionality requirement which is prompting us to think that way.
    I'm sure there are other technical ways of accomplishing what's needed, such as 'dirty reads'.
    > Can you please elaborate on the method that you are talking about, even though it might take a huge toll on performance?
    The "method" is the configuration of the database engine's isolation level:
    http://msdn.microsoft.com/en-en/library/ms173763.aspx
    Markus
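
    A minimal JDBC sketch of that configuration (the connection details and table are hypothetical): raising the isolation level to SERIALIZABLE makes the engine block conflicting writers for the duration of the transaction, at a real concurrency cost:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class SerializableRead {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; the point is the isolation level.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass")) {
                con.setAutoCommit(false);
                // SERIALIZABLE prevents other transactions from changing data
                // this transaction has read, until commit or rollback.
                con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                try (PreparedStatement ps = con.prepareStatement(
                             "SELECT qty FROM stock WHERE item_id = ?")) {
                    ps.setInt(1, 42);
                    try (ResultSet rs = ps.executeQuery()) {
                        // ... read and update under the serializable guarantee ...
                    }
                }
                con.commit();
            }
        }
    }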

  • Urgent: EJB Transaction mechanism and Database Transaction mechanism

    Could anybody please clarify how the EJB transaction mechanism uses the underlying database transaction mechanism? My concern is: in the context of an EJB transaction, how many of the responsibilities are performed by the EJB container and how many by the underlying database server? I will deem it a great favor if you kindly explain the whole story with example(s).

    Actually, the EJB container manages the persistence.
    It works like this: if you are using entity beans (or stateful beans), then while creating the entity bean class you specify in the deployment descriptor which database table the bean represents.
    At runtime, when you create an instance of an entity bean, that instance corresponds to a row in the mapped table. All the changes you make to that instance's attributes (i.e., the columns in that row) are held in the session, and when you commit that session the changes are written to disk. That's how a change is managed.
    Now assume one user is modifying a particular row while another user is deleting it: whichever transaction commits first takes effect. If the modification commits first and then the delete, the row is simply deleted afterwards; but if the delete commits first and you then try to commit the modification, you get an error saying that the particular row is missing from storage.
    That is how the EJB container manages persistence in all cases, even with synchronous access.
    I think this clears it up.
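
    To make the division of labor concrete, here is a minimal sketch in EJB 3 style (the bean and the Account entity are hypothetical): the container demarcates a JTA transaction around the method and translates its outcome into the database's own commit or rollback; the database server only ever executes ordinary SQL and transaction commands.

    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class AccountService {

        @PersistenceContext
        private EntityManager em; // container-managed persistence context

        // The container begins a JTA transaction before this method runs and
        // commits it afterwards; a RuntimeException makes it roll back instead.
        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void rename(long accountId, String newName) {
            // "Account" is a hypothetical mapped entity (one instance per row).
            Account a = em.find(Account.class, accountId);
            a.setName(newName); // flushed to the DB when the container commits
        }
    }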

  • Database transaction through pages

    Hello !
    I need to write a web application that will use a database transaction across different pages, i.e., the user changes data on several pages and then clicks a commit or rollback button.
    Is there any standard solution to my problem?
    Is it a good idea to store the database connection as a session attribute? Can I get the connection from a pool, or should I rather create my own connection?
    Regards,
    Kamil

    Perhaps you can keep track of all the database edits/changes using objects stored in your session and then perform the update after the last page.
    Or maybe you can use a temporary session table in your database and update that table as the user navigates between your JSP pages. When they get to your last page with the submit and rollback buttons, if they choose submit you update your main table with the data from the session table; if they choose rollback you just clear or drop the temp table.
    In any event, transactions are handled in Java by turning off autocommit and then calling commit on your Connection object when you want to save the changes. Also, it's a good idea to use connection pooling if you are going to be opening and closing lots of connections. You can store your connection pool in the application scope or use JNDI; that will allow you to easily retrieve a connection from all your servlets and JSPs.
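
    A minimal sketch of the first suggestion (the edits list and its {id, newValue} shape are hypothetical): the connection is borrowed from the pool only at the end, so every page's change commits or rolls back together:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.List;
    import javax.sql.DataSource;

    public class CheckoutAction {

        /** Applies all edits gathered across the wizard pages in one transaction. */
        public void commitEdits(DataSource pool, List<String[]> edits) throws Exception {
            try (Connection con = pool.getConnection()) {
                con.setAutoCommit(false);          // one transaction for everything
                try {
                    for (String[] e : edits) {     // e = {id, newValue}, hypothetical shape
                        try (PreparedStatement ps = con.prepareStatement(
                                "UPDATE item SET value = ? WHERE id = ?")) {
                            ps.setString(1, e[1]);
                            ps.setString(2, e[0]);
                            ps.executeUpdate();
                        }
                    }
                    con.commit();                  // user clicked "commit"
                } catch (Exception ex) {
                    con.rollback();                // any failure undoes every page's edit
                    throw ex;
                }
            }
        }
    }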

  • Progress Bar of a Database Transaction

    Hi,
    Is it possible to have a progress bar showing the progress of a database transaction?
    I use MS Access and a PreparedStatement which I fill up with addBatch() and then execute with executeBatch().
    Inserting 10,000 new entries into an Access table takes about a minute...
    Instead of making the user wait with a "Please wait..." message, can I track its progress somehow?
    Thanks!!!

    Unfortunately, I don't think there's a standard way to get asynchronous responses back from the database on the status of your batches. You could fake it by figuring out the "standard" time it takes, but otherwise you're in the dark about your statement's status.
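
    One workaround (a sketch, not a standard JDBC facility): split the batch into fixed-size chunks and report progress after each executeBatch() call. The percentage is exact in rows, even if the time per chunk varies:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.List;

    public class ChunkedBatchInsert {

        /** Inserts rows in chunks so the UI can be updated between chunks. */
        public static void insertWithProgress(Connection con, List<String> rows)
                throws Exception {
            final int chunk = 500;
            con.setAutoCommit(false);
            try (PreparedStatement ps =
                     con.prepareStatement("INSERT INTO Info (info_c) VALUES (?)")) {
                for (int i = 0; i < rows.size(); i++) {
                    ps.setString(1, rows.get(i));
                    ps.addBatch();
                    if ((i + 1) % chunk == 0 || i == rows.size() - 1) {
                        ps.executeBatch();
                        int pct = (int) ((i + 1) * 100L / rows.size());
                        System.out.println(pct + "%"); // feed this to a JProgressBar instead
                    }
                }
            }
            con.commit(); // everything still commits as one transaction
        }
    }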

  • Database transaction management with MDB

    Hi all,
    I'm working on an Online Order Processing system. One of the functionalities that we've developed is mass order upload, that consists of one CSV file with a lot of orders inside.
    To get concurrent processing of an order upload file (as the orders present in the file are independent of each other), we've implemented a Message Driven Bean that receives an order to process from a request queue, processes the order, and answers with the processed order on a response queue. The flow is:
    1- Struts action is invoked to handle order upload.
    2- A method on a session bean is called to process the file.
    3- The session bean parses the file, separates all the order lines by origin customer (one order for each customer in the file) and sends messages to the request queue.
    4- For each order, an instance of the MDB will receive the message, process the order and send the processed order via response queue.
    5- After the session bean sends all the requests with the orders to process, it waits for all the responses on the response queue until all the orders are processed, and then gives the online user a valid answer according to the result of the whole process.
    My question is how I can efficiently manage transactions (mainly database transactions) so that, if one order out of a bunch of 200 processed concurrently fails, I am able to roll back all the database transactions used by the MDBs to process all the orders.
    As the failure of an order can be related to some error in the file, I want to roll back everything, give the user a message to fix the file, and allow him to upload the whole (fixed) file again.
    Any help would be appreciated.
    Regards,
    Campos

    Hi,
    1) As of version 4.6C, BAPIs have autocommit, i.e., once you call a BAPI, it commits before it returns, so we need not worry about that. But some BAPIs don't; only in that case do we need to make a COMMIT call.
    2) The transaction doesn't depend on the client object.
    3) No.
    Regards,
    Abdul Raheem .S
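
    For the original MDB question, one common pattern (a sketch; the order_staging/orders tables and upload_id column are hypothetical) is to have each MDB write its result into a staging table in its own transaction, and let the session bean promote or discard the whole batch in one final transaction:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.sql.DataSource;

    public class UploadFinalizer {

        /** Promotes all staged orders of one upload, or discards them, atomically.
            Table/column names (order_staging, orders, upload_id) are hypothetical. */
        public void finish(DataSource ds, long uploadId, boolean allSucceeded)
                throws Exception {
            try (Connection con = ds.getConnection()) {
                con.setAutoCommit(false);
                try {
                    if (allSucceeded) {
                        try (PreparedStatement ps = con.prepareStatement(
                                "INSERT INTO orders SELECT * FROM order_staging "
                              + "WHERE upload_id = ?")) {
                            ps.setLong(1, uploadId);
                            ps.executeUpdate();
                        }
                    }
                    try (PreparedStatement ps = con.prepareStatement(
                            "DELETE FROM order_staging WHERE upload_id = ?")) {
                        ps.setLong(1, uploadId);
                        ps.executeUpdate();
                    }
                    con.commit();   // promote-or-purge happens as one unit
                } catch (Exception ex) {
                    con.rollback();
                    throw ex;
                }
            }
        }
    }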

  • Transaction debuggers/tracers for database transactions?

    Does WebLogic 6.1 have debugger/tracer tools for monitoring, logging and debugging database transactions?
    If so, where and how do I access and use these tools?
    Urgent response please! Thanks.

    Please do not cross-post.
    Gabriel wrote:
    Does WebLogic 6.1 have debugger/tracer tools for monitoring, logging and debugging database transactions?
    If so, where and how do I access and use these tools?
    Urgent response please! Thanks.
    --
    Rajesh Mirchandani
    Developer Relations Engineer
    BEA Support

  • Monitor Database Transactions on the web in real time using Java

    Dear All,
    I am new to Java.
    I have an Informix database running on an IBM AIX server.
    There are some tables receiving alarms from network equipment.
    I need to show all transactions inserted into the Informix database on the web, in real time, using Java.
    Could you please guide me on how to do that?
    A million thanks,
    Anthony.

    Use database logging... see the Informix documentation for this.
    (I assume you are using Informix Dynamic Server.)
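
    If database logging is not an option, a simple fallback (a sketch, assuming the alarm table has a monotonically increasing id column; all names are hypothetical) is to poll for rows newer than the last one seen and push them to the web tier:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    public class AlarmPoller {

        private long lastSeenId = 0; // highest alarm id already displayed

        /** Returns new alarm texts since the previous call; schema is hypothetical. */
        public List<String> poll(Connection con) throws Exception {
            List<String> fresh = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT id, message FROM alarm WHERE id > ? ORDER BY id")) {
                ps.setLong(1, lastSeenId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastSeenId = rs.getLong(1);
                        fresh.add(rs.getString(2));
                    }
                }
            }
            return fresh; // a servlet can stream these to the browser, e.g. via periodic AJAX
        }
    }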

  • Need a Walkthrough on How to Create Database & Transaction Log Backups

    Is this the proper forum to ask for this type of guidance? There has been bad blood between my department (Research) and the MIS department for 30 years; long story short, I have been "given" a virtual server and cut loose by my MIS department -- installs, updates, backups, etc. are now my responsibility. I have everything running really well, I believe, with the exception of my transaction log backups: my storage unit is running out of space on a daily basis, so I feel like I have to be doing something wrong.
    If this is the proper forum, I'll supply the details of how I currently have things set up, and I'm hoping with some loving guidance I can work the kinks out of my backup plan.  High level -- this is for a SQL Server 2012 instance running on a Windows
    2012 Server...

    Thanks all, after posting this I'm going to read the materials provided above.  As for the details:
    I'm running on a virtual Windows Server 2012 Standard, Intel Xeon CPU 2.6 GHz with 16 GB of RAM; 64 bit OS.  The computer name is e275rd8
    Drives (NTFS, Compression off, Indexing on):
    DB_HVSQL_SQL-DAT_RD8-2(E:) 199 GB (47.2 used; 152 free)
    DB_HVSQL_SQL-Dat_RD8(F:) 199 GB (10.1 used; 189 free)
    DB_HVSQL_SQL-LOG_RD8-2(L:) 199 GB (137 used; 62 free) **
    DB_HVSQL_SQL-BAK_RDu-2(S:) 99.8 GB (64.7 used; 35 free)
    DB_HVSQL_SQL-TMP_RD8-2(T:) 99.8 GB (10.6 used; 89.1 free)
    SQL Server:
    Product: SQL Server Enterprise (64-bit)
    OS: Windows NT 6.2 (9200)
    Platform: NT x64
    Version: 11.0.5058.0
    Memory: 16384 (MB)
    Processors: 4
    Root Directory: f:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL
    Is Clustered: False
    Is HADR Enabled: False
    Database Settings:
    Default index fill factor: 0
    Default backup media retention (in days): 0
    Compress backup is checkmarked/on
    Database default locations:
      Data: E:\SQL\Data
      Log: L:\SQL\LOGs
      Backup: S:\SQLBackups
    There is currently only one database: DistrictAssessmentDW
    To create my backups, I'm using two maintenance plans, and this is where I'm pretty sure I'm not doing something correctly.  My entire setup is me just guessing what to do, so feel free to offer suggestions...
    Maintenance Plan #1: Backup DistrictAssessmentDW
      Scheduled to run daily Monday Through Friday at 3:33 AM
      Step 1: Backup Database (Full) 
        Backup set expires after 8 days 
        Back up to Disk (S:\SQLBackups)
        Set backup compression: using the default server setting
      Step 2: Maintenance Cleanup Task
        Delete files of the following type: Backup files
        Search folder and delete files based on an extension:
          Folder: L:\SQL\Logs
          File extension: trn
          Include first-level subfolders: checkmarked/on
        File age: Delete files based on the age of the file at task run time older than 1 Day
      Step 3: Maintenance Cleanup Task
        Delete files of the following type: Backup files
        Search folder and delete files based on an extension:
          Folder: S:\SQLBackups
          File extension: bak
          Include first-level subfolders: checkmarked/on
        File age: Delete files based on the age of the file at task run time older than 8 Days
    Maintenance Plan #2: Backup DistrictAssessmentDW TRANS LOG ONLY
      Scheduled to run daily Monday through Friday; every 20 minutes starting at 6:30 AM & ending at 7:00 PM
      Step 1: Backup Database Task
        Backup Type: Transaction Log
        Database(s): Specific databases (DistrictAssessmentDW)
        Backup Set will expire after 1 day
        Backup to Disk (L:\SQL\Logs\)
        Set backup compression: Use the default server setting
    Around 2:30 each day my transaction log backup drive (L:) runs out of space.  As you can see, transactions are getting backed up every 20 minutes, and the average size of the backup files is about 5,700,000 KB.
    I hope this covers everything, if not please let me know what other information I need to provide...

  • SQL Server Database - Transaction log growing large with Simple Recovery model

    Hello,
    There is a SQL Server database on the client side, in a production environment, with huge transaction logs.
    Requirement:
    1. Take database backups.
    2. Transaction log backups are not required - so the database is set to the Simple recovery model.
    I am aware that the transaction log grows under the Simple recovery model just as it does under the Full recovery model, as described at the link below.
    http://realsqlguy.com/origins-no-simple-mode-doesnt-disable-the-transaction-log/
    Last week, this transaction log grew to 1 TB in size and blocked everything on the database server.
    How can we overcome this situation?
    PS: There are huge bulk uploads to the database tables.
    Current Configuration :
    1. Simple Recovery model
    2. Target recovery time: 3 sec
    3. Recovery interval: 0
    4. No SQL Agent job is scheduled to shrink the database.
    5. No checkpoints created other than the automatic ones.
    Can anyone please guide me to have correct configuration on SQL server for client's production environment?
    Please let me know if any other details required from server.
    Thank you,
    Mittal.

    @dave_gona,
    Thank you for your response.
    Can you please explain this to me in more detail:
    What do you mean by one batch?
    1. The number of rows to be inserted at a time?
    2. Or does the size of the data in one cell matter here?
    In my case, I am clubbing together all the data in one XML (on the C# side) and inserting it as one record. The data is large in size, but only 1 record is inserted.
    Is it a good idea to shrink the transaction log periodically, as it does not happen by itself in the simple recovery model?
    Hi Mittal,
    Shrinking is a bad practice; you should not shrink log files regularly. In rare cases, if you need to recover space, you may do it.
    Use manual checkpoints in the bulk insert operation.
    I cannot tell upfront what the batch size should be, but you can start with a quarter of what you are currently inserting.
    Most important: what does the query below return for the database?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    The value it returns is what is stopping the log from getting cleared and reused.
    What version and edition of SQL Server are we talking about? What is the output of:
    select @@version
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP
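
    A quick way to run that check from code (a sketch; the connection string is a placeholder):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class LogReuseCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string; any SQL Server JDBC driver works here.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=master;integratedSecurity=true");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT log_reuse_wait_desc FROM sys.databases WHERE name = ?")) {
                ps.setString(1, "db_name"); // substitute your database name
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        // e.g. NOTHING, CHECKPOINT, ACTIVE_TRANSACTION, REPLICATION ...
                        System.out.println("log_reuse_wait_desc = " + rs.getString(1));
                    }
                }
            }
        }
    }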
