JDO transaction, caching and stored procedures

I currently retrieve an existing object, update the object, and then call
a stored procedure - all in one transaction. The issue I am running into
is that the stored proc must have the modified data available when it
selects the row (the modified object) from the table. This is currently not
happening. For example, if the initial field was null and is then updated
before the proc is called, the proc still only sees null. How do I specify that the
changes should be processed within the transaction (not committed) and
available for other processes reading the data in the same transaction?
Note: everything works perfectly if I manually execute the update statement
through a Statement on the connection used for the stored proc. I have tried
flush().
Any ideas would be greatly appreciated!

Here is a dumbed-down version... I also tried doing a Query with JDO, and
that returns the modified value. Basically, the code below finds the
existing object, modifies a field, and then calls a stored proc to do some
processing. Unfortunately, the stored proc needs to be able to "see" the
modified data in the transaction.
public void modifyFoo( UserToken userToken, Foo newFoo, Integer applicationId )
    throws PulsarSystemException, ValidationException {
    PersistenceManager pm = null;
    Transaction tx = null;
    try {
        pm = JDOFactory.getPersistenceManager( userToken );
        tx = pm.currentTransaction();
        tx.begin();
        FooBoId appId = new FooBoId();
        appId.fooBolId = applicationId;
        FooBo appDbVo = (FooBo) pm.getObjectById( appId, true );
        // Modify the persistent object inside the transaction
        appDbVo.setEffectiveDate( newFoo.getEffectiveDate() );
        // The stored procedure needs to see the change made above
        String fooNumber = executeFooBar( userToken, applicationId );
        appDbVo.setFooNumber( fooNumber );
        tx.commit();
    } finally {
        if ( tx != null && tx.isActive() ) {
            tx.rollback();
        }
        if ( pm != null ) {
            pm.close();
        }
    }
}

private String executeFooBar( UserToken userToken, Integer applicationId )
    throws PulsarSystemException, ValidationException {
    String licenseNumber = null;
    try {
        // Get a connection from the persistence manager factory -- note that this
        // is not necessarily the connection the PersistenceManager is enlisted on
        Connection connection = JDOFactory.getConnection( userToken );
        connection.setAutoCommit( false );
        CallableStatement cs = connection.prepareCall(
            "{call sys_appl_pkg.fooBar(?,?,?,?,?,?)}" );
        // Register the types of the return values
        cs.registerOutParameter( 4, Types.NUMERIC );
        cs.registerOutParameter( 5, Types.NUMERIC );
        cs.registerOutParameter( 6, Types.VARCHAR );
        // Set parameters
        cs.setString( 1, "Test" );
        cs.setInt( 2, applicationId.intValue() );
        cs.setInt( 3, 0 );
        // Execute the call and read the VARCHAR out parameter
        // (assumed here to hold the license number)
        cs.execute();
        licenseNumber = cs.getString( 6 );
        cs.close();
    } catch( SQLException ex ) {
        ex.printStackTrace();
    }
    return licenseNumber;
}
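For context, here is a minimal sketch of the kind of change that would let the stored procedure see the in-transaction update: flush the pending changes and run the CallableStatement on the same JDBC connection the PersistenceManager is enlisted in. This assumes the JDO 2.0 pm.flush() and pm.getDataStoreConnection() API, which was not yet standard at the time of this thread, so treat it as a sketch rather than a drop-in fix; JDOFactory remains the poster's own helper.
// Hedged sketch, assuming a JDO 2.0 PersistenceManager
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;
import javax.jdo.PersistenceManager;
import javax.jdo.datastore.JDOConnection;

public class SameTransactionProcCall {
    public static String callFooBar( PersistenceManager pm, Integer applicationId )
        throws Exception {
        // Push pending in-memory changes to the database without committing,
        // so statements issued in the same transaction can see them.
        pm.flush();
        // Borrow the connection the PersistenceManager itself is using; a connection
        // obtained elsewhere runs in a different transaction and sees only committed data.
        JDOConnection jdoConn = pm.getDataStoreConnection();
        try {
            Connection connection = (Connection) jdoConn.getNativeConnection();
            CallableStatement cs = connection.prepareCall(
                "{call sys_appl_pkg.fooBar(?,?,?,?,?,?)}" );
            cs.registerOutParameter( 4, Types.NUMERIC );
            cs.registerOutParameter( 5, Types.NUMERIC );
            cs.registerOutParameter( 6, Types.VARCHAR );
            cs.setString( 1, "Test" );
            cs.setInt( 2, applicationId.intValue() );
            cs.setInt( 3, 0 );
            cs.execute();
            String licenseNumber = cs.getString( 6 );
            cs.close();
            return licenseNumber;
        } finally {
            jdoConn.close();  // hand the connection back to the PersistenceManager
        }
    }
}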
Patrick Linskey wrote:
Can you post the code that you're executing?
-Patrick
On Mon, 14 Jul 2003 20:38:09 +0000, Josh Zook wrote:
I currently retrieve an existing object, update the object, and then call
a stored procedure - all in one transaction. The issue I am running into
is that the stored proc must have the modified data available when it
selects the row (the modified object) from the table. This is currently not
happening. For example, if the initial field was null and is then updated
before the proc is called, the proc still only sees null. How do I specify that the
changes should be processed within the transaction (not committed) and
available for other processes reading the data in the same transaction?
Note: everything works perfectly if I manually execute the update statement
through a Statement on the connection used for the stored proc. I have tried
flush().
Any ideas would be greatly appreciated!
Patrick Linskey
SolarMetric Inc.

Similar Messages

  • DAC task with Informatica mapping and stored procedure (very slow)

    Hello,
    We have a DAC task that launches an Informatica workflow with a simple query and stored procedure, like this:
    SQL QUERY
    ==========================
    SELECT
    W_ACTIVITY_F.ROW_WID,
    W_AGREE_D.AGREE_NUM,
    W_PRODUCT_D.ATTRIB_51,
    W_SRVREQ_D.ATTRIB_05,
    W_ORG_DH.TOP_LVL_NAME,
    W_ORG_D.ATTRIB_06,
    W_PRODUCT_GROUPS_D.PRODUCT_LINE,
    W_PRODUCT_D.PROD_NAME
    FROM
    W_AGREE_D,
    W_SRVREQ_F,
    W_ACTIVITY_F,
    W_PRODUCT_D LEFT OUTER JOIN W_PRODUCT_GROUPS_D ON W_PRODUCT_D.PR_PROD_LN = W_PRODUCT_GROUPS_D.PRODUCT_LINE,
    W_ORG_D,
    W_SRVREQ_D,
    W_ORG_DH
    WHERE
    W_SRVREQ_F.AGREEMENT_WID = W_AGREE_D.ROW_WID AND
    W_SRVREQ_F.SR_WID = W_ACTIVITY_F.SR_WID AND
    W_SRVREQ_F.PROD_WID = W_PRODUCT_D.ROW_WID AND
    W_SRVREQ_F.ACCNT_WID = W_ORG_D.ROW_WID AND
    W_SRVREQ_F.SR_WID = W_SRVREQ_D.ROW_WID AND
    W_ORG_D.ROW_WID = W_ORG_DH.ROW_WID
    STORED PROCEDURE
    ===========================
    ConvSubProy(W_AGREE_D.AGREE_NUM,
    W_PRODUCT_D.ATTRIB_51,
    W_SRVREQ_D.ATTRIB_05,
    W_ORG_DH.TOP_LVL_NAME,
    W_ORG_D.ATTRIB_06,
    W_PRODUCT_GROUPS_D.PRODUCT_LINE,
    W_PRODUCT_D.PROD_NAME)
    The mapping is very simple:
    Source Qualifier -> Stored procedure -> Update strategy (only two ports: ROW_WID and custom column) -> Target Table
    When I launch the DAC execution plan, the corresponding task takes a long time (40 minutes). But when I launch the corresponding workflow from Informatica PowerCenter Workflow Manager it takes only 50 seconds. Looking at the session log for the task, most of the time is spent on the updates: when DAC runs it, the writer updates 10,000 records every 6-7 minutes, but when Workflow Manager runs it, the writer updates 10,000 records every 8-9 seconds.
    So what happens in the DAC to cause such a large time difference? Is there a way to reduce the execution time when the task is launched from DAC?
    Thanks
    Best Regards
    Benjamin Tey

    Have you tried using the bulk load type?
    In Workflow Manager, can you open the associated task, navigate to the mapping tab and select the target table?
    What is the value of "Target load type", and which of the following boxes are checked: Insert, Update as Update, Update as Insert, Update else Insert, Delete?

  • Database update using XML and stored procedure?

    Hello,
    I want to update multiple tables using XML files. Please suggest whether I can do the updates using XML and a stored procedure.
    If yes, which is more efficient and takes less time:
    1. Updates using XML files only
    2. Updates using XML files with a stored procedure
    3. A stored procedure alone
    If direct XML and stored procedure communication is possible, please explain how.
    Thanks in advance for any help.

    Here's a sample. The next code drop of the XSQL Servlet will make this easy to do from within XSQL Pages:
    package package1;
    import org.w3c.dom.*;
    import java.sql.*;
    import oracle.jdbc.driver.*;
    import oracle.xml.sql.query.OracleXMLQuery;
    public class Class1 extends Object {
      public static void main( String[] arg ) throws Exception {
        Connection conn = getConnection();
        // Call a stored function that returns a REF CURSOR
        CallableStatement ocs = conn.prepareCall("begin ? := App.HotItems('PAUL'); end;");
        ocs.registerOutParameter(1, OracleTypes.CURSOR);
        ocs.execute();
        ResultSet rs = ((OracleCallableStatement) ocs).getCursor(1);
        // Render the result set as an XML document
        OracleXMLQuery oxq = new OracleXMLQuery(conn, rs);
        System.out.println(oxq.getXMLString());
        oxq.close();
        rs.close();
        ocs.close();
        conn.close();
      }
      public static Connection getConnection() throws Exception {
        String username = "scott";
        String password = "tiger";
        String dburl = "jdbc:oracle:thin:@localhost:1521:xml";
        String driverClass = "oracle.jdbc.driver.OracleDriver";
        Driver d = (Driver) Class.forName(driverClass).newInstance();
        DriverManager.registerDriver(d);
        return DriverManager.getConnection(dburl, username, password);
      }
    }

  • Approval Procedures and Stored Procedures

    Hi All
    I have a problem with approval procedures and stored procedures.
    I created an approval procedure to check whether a value in the UDF is selected or not. If not, a message is produced asking the user to select the person to approve the PO.
    But now the problem is that the POs are linked to approval templates, so the error message doesn't show before the approval template screen.
    Please assist.
    Thanks
    Bongani Dlamini

    Hi Gordon
    The stored procedure is just for validating whether the user has selected the UDF value.
    It is not for approval.
    I created an approval template for this purpose. So my problem is that when I add the PO to the system, the SP doesn't kick in and alert the user to select the person to approve the PO.
    Process flow:
    The user captures the PO, then selects the value in the UDF field (Approver). The value selected is linked to a query which is used by the approval template.
    If this method is still not supported then please let me know .
    Thanks
    Bongani Dlamini

  • Cache synchorinzation, events, and stored procedure

    Hello,
    I am having problems with cache synchronization after doing some updates through a stored procedure.
    1. On insert, one of the columns is encrypted by a trigger. I tried to update the object later, but once the refreshObject is issued the entire transaction is ignored. I then tried to use a postCommit event on the UOW. That never got fired. Am I using it incorrectly?
     UnitOfWork uow = session.getActiveUnitOfWork();
     uow.getEventManager().addListener(new SessionEventAdapter() {
          public void postCommitUnitOfWork(SessionEvent pEvent) {
               System.out.println("UnitOfWork postCommit event triggered");
               session.refreshObject(o);
          }
     });
     uow.executeQuery(call);
    2. After the above stored procedure has executed, a parent column is also updated (say, a status changed from 1 to 2). Some time later, another call to retrieve all rows using readAllObjects shows the SELECT statement being issued, but the values do not reflect the change in the DB (the status is still 1 when it should be 2). Bouncing the server works. I got around this problem by selecting 'Always refresh' for several descriptors, which takes away the value of TopLink caching. Sometimes I still have to call refresh on these objects.
    I am assuming these problems would be solved if I used the events correctly to refresh these objects after the stored proc completes execution.
    Thanks,
    Vamshi

    I don't fully understand your issue, but as a suggestion you may try to run refreshObject on the UOW rather than on the session object.
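    As a minimal sketch of that suggestion (assuming the same session, uow, o and call variables as the snippet above), the refresh would go through the UnitOfWork so it participates in the active transaction:
    // Hedged sketch: refresh via the UnitOfWork instead of the Session
    UnitOfWork uow = session.getActiveUnitOfWork();
    uow.executeQuery(call);                   // run the stored procedure call
    Object refreshed = uow.refreshObject(o);  // refresh the working copy through the UOW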

  • MS SQL Server 7 - Performance of Prepared Statements and Stored Procedures

    Hello All,
    Our team is currently tuning an application running on WL 5.1 SP 10 with a MS
    SQL Server 7 DB that it accesses via the WebLogic jConnect drivers. The application
    uses Prepared Statements for all types of database operations (selects, updates,
    inserts, etc.) and we have noticed that a great deal of the DB host's resources
    are consumed by the parsing of these statements. Our thought was to convert many
    of these Prepared Statements to Stored Procedures with the idea that the parsing
    overhead would be eliminated. In spite of all this, I have read that because
    of the way that the jConnect drivers are implemented for MS SQL Server, Prepared
    Statements are actually SLOWER than straight SQL because of the way that parameter
    values are converted. Does this also apply to Stored Procedures??? If anyone
    can give me an answer, it would be greatly appreciated.
    Thanks in advance!

    Joseph Weinstein <[email protected]> wrote:
    Matt wrote:
    Hello All,
    Our team is currently tuning an application running on WL 5.1 SP 10 with a MS
    SQL Server 7 DB that it accesses via the WebLogic jConnect drivers. The application
    uses Prepared Statements for all types of database operations (selects, updates,
    inserts, etc.) and we have noticed that a great deal of the DB host's resources
    are consumed by the parsing of these statements. Our thought was to convert many
    of these Prepared Statements to Stored Procedures with the idea that the parsing
    overhead would be eliminated. In spite of all this, I have read that because
    of the way that the jConnect drivers are implemented for MS SQL Server, Prepared
    Statements are actually SLOWER than straight SQL because of the way that parameter
    values are converted. Does this also apply to Stored Procedures??? If anyone
    can give me an answer, it would be greatly appreciated.
    Thanks in advance!
    Hi. Stored procedures may help, but you can also try MS's new free type-4
    driver, which does use DBMS optimizations to make PreparedStatements run faster.
    Joe
    Thanks Joe! I also wanted to know if setting the statement cache (assuming that
    this feature is available in WL 5.1 SP 10) will give a boost for both Prepared Statements
    and stored procs called via Callable Statements. Pretty much all of the Prepared
    Statements that we are replacing are executed from entity bean transactions.
    Thanks again
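    For illustration, a generic JDBC sketch of the kind of CallableStatement call being discussed (the procedure name and parameters are hypothetical; the statement cache itself is configured in the connection pool, not in this code):
    // Hedged sketch: calling a stored procedure through a CallableStatement
    import java.math.BigDecimal;
    import java.sql.*;

    public class OrderTotals {
        public static BigDecimal fetch(Connection con, int orderId) throws SQLException {
            // The {call ...} escape syntax is handled by the JDBC driver (jConnect or otherwise)
            CallableStatement cs = con.prepareCall("{call get_order_total(?, ?)}");
            cs.setInt(1, orderId);
            cs.registerOutParameter(2, Types.DECIMAL);
            cs.execute();
            BigDecimal total = cs.getBigDecimal(2);
            cs.close();
            return total;
        }
    }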

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write through, all within a transaction.
         The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
         If the second transactionMap throws exceptions, the caches do not appear to rollback correctly.
         I can wrap the whole operation in a JDBC transaction that rolls back the database correctly but the caches are not all rolled back because they are committed one by one?
     For example, I write to two transaction maps, each one created from separate caches. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
         Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
         I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
         (SMARTPC being my pc name)
         Has anybody else had this problem? Bonus points for describing how to fix it!
         Mike
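     As a rough sketch of the pattern being described (cache names, keys and the retry count are hypothetical, com.tangosol.* imports are omitted, and the exact commitTransactionCollection/rollbackTransactionCollection signatures should be checked against the Coherence 3.x javadoc):
     // Hedged sketch: grouping two TransactionMaps into one commit
     NamedCache cacheA = CacheFactory.getCache("cacheA");
     NamedCache cacheB = CacheFactory.getCache("cacheB");
     TransactionMap txA = CacheFactory.getLocalTransaction(cacheA);
     TransactionMap txB = CacheFactory.getLocalTransaction(cacheB);
     txA.begin();
     txB.begin();
     txA.put("key", "valueA");
     txB.put("key", "valueB");
     java.util.Collection txns = java.util.Arrays.asList(txA, txB);
     // prepare/commit both maps as a group; roll the group back if the commit fails
     if (!CacheFactory.commitTransactionCollection(txns, 1)) {
          CacheFactory.rollbackTransactionCollection(txns);
     }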

     > The transaction support in Coherence is for Local
     > Transactions. Basically, what this means is that the
     > first phase of the commit ("prepare") acquires locks
     > and ensures that there are no conflicts. The second
     > phase ("commit") does nothing but push data out to
     > the caches.
         This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
         > The problem is that when you are using a
         > CacheStore module, the exception is occurring during
         > the second phase.
         If you start using a CacheStore module, then database update has to be part of the atomic procedure.
         >
         > For this reason, write-through and cache transactions
         > are not a supported combination.
     This is not true for a cache transaction that updates a single cache entry, right?
         >
         > For single-cache-entry updates, CacheStore operations
         > are fully fault-tolerant in that the cache and
         > database are guaranteed to be consistent during any
         > server failure (including failures during partial
         > updates). While the mechanisms for fault-tolerance
         > vary, this is true for both write-through and
         > write-behind caches.
         For the write-thru case, I believe Database and cache are atomically updated.
         > Coherence does not support two-phase CacheStore
         > operations across multiple CacheStore instances. In
         > other words, if two cache entries are updated,
         > triggering calls to CacheStore modules sitting on
         > separate servers, it is possible for one database
         > update to succeed and for the other to fail.
         But once we have multiple CacheStore modules, then once one atomic write-thru put succeeds that means database is already updated for that specific put. There is no way to roll back the database update (although we can roll back the cache update). Therefore, you may end up in partial commits in such situations where multiple cache entries are updated across different CacheStore modules.
         If I use write-behind CacheStore modules, I can roll back entirely and avoid partial commits? Since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different than local transactions... Is my understanding correct?

  • Sp_executesql vs transaction statement within Stored Procedure

    Experts,
    Is there any difference between using sp_executesql and using transaction/commit statements or a try/catch block within a stored procedure?
    What is the best practice to use and why?
    Thank You
    Regards,
    Kumar
    Please do let us know your feedback. Thank You - KG, MCTS

    Your question is a bit strange. sp_executesql is used for dynamic SQL. Unless the problem demands dynamic SQL and, therefore, sp_executesql, there is no need to use it.
    For a single statement I would not use transaction and try/catch. For multiple statements you do need to use transaction. It's up to you if you want to use try/catch in the SP or not and trap the error in the client application.
    For every expert, there is an equal and opposite expert. - Becker's Law
    My blog
    My TechNet articles
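    As a sketch of the "trap the error in the client application" option, assuming a JDBC client and a hypothetical procedure name (connection details and parameter values are placeholders):
    // Hedged sketch: client-side transaction around a stored procedure call
    Connection con = DriverManager.getConnection(url, user, password);
    try {
        con.setAutoCommit(false);               // start an explicit transaction
        CallableStatement cs = con.prepareCall("{call dbo.TransferFunds(?, ?, ?)}");
        cs.setInt(1, fromAccount);
        cs.setInt(2, toAccount);
        cs.setBigDecimal(3, amount);
        cs.execute();
        cs.close();
        con.commit();                           // everything succeeds together
    } catch (SQLException ex) {
        con.rollback();                         // undo all work on any error
        throw ex;
    } finally {
        con.close();
    }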

  • Baffled by Toplink and stored procedures

    Here's what I'm after...
    I have a nice package built to act as a bridge to a MySQL database. Everything works fine, but I need to write classes to "wrap" two stored procedures (procedures, not functions or triggers, defined in the database itself). I want to leverage the TopLink connection handling so that I don't have to open/close connections for these two classes. Surely there is some way to do this.
    Working with one example...
    Stored Procedure definition:
    CREATE PROCEDURE openPorts (ipAddr INT UNSIGNED)
    BEGIN
      DECLARE lastScanID INT UNSIGNED;
      SELECT results.scanrequestid INTO lastScanID
        FROM (results,network,scanner)
        WHERE
          results.ip=ipAddr and
          results.ip between network.ip and network.ip+pow(2,(32-network.cidr))-1 and
          results.scannerip=scanner.ip and
          scanner.regionid=network.regionid
        ORDER BY results.endtime DESC LIMIT 1;
      SELECT distinct protocol,port
        FROM results
        where
          ip=ipAddr and
          scanrequestid=lastScanID and
          port>0 and protocol>0
        ORDER BY PROTOCOL,PORT;
    END
    I want to create a class file OpenPortsDAO such that calling
    List<OpenPort> ports = OpenPortsDAO.find(ipInteger);
    yields a list (or vector, whatever) of the rows returned as the result.
    Ignoring for the moment the issues with mapping returned row fields to object fields in my OpenPort class, how the heck do I...
    1) Define the stored procedure
    2) Set the input variable value
    3) Get the connection (if necessary)
    4) Execute the query
    I imagine it's going to be something like...
    StoredProcedureCall call = new StoredProcedureCall();
    call.setProcedureName("hostSummary");
    call.addNamedArgumentValue("ipAddr", value);
    call.getResult();
    Should this wrapping class be defined in the persistence unit just like any other? If so, my assumption is then that I would not have to manually handle connections, etc.
    Any help would be very much appreciated. I think if I can just get pointed in the right direction, it should "click" for me.
    -David
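    As a rough sketch of one way this is often wired up against a raw TopLink session (hedged: the session variable, the argument wiring and the row-to-OpenPort mapping are assumptions, not a tested recipe):
    // Hedged sketch: executing the openPorts procedure through a TopLink session
    StoredProcedureCall call = new StoredProcedureCall();
    call.setProcedureName("openPorts");
    call.addNamedArgument("ipAddr");              // bind the IN parameter to a query argument
    DataReadQuery query = new DataReadQuery();    // returns raw rows rather than mapped objects
    query.setCall(call);
    query.addArgument("ipAddr");
    Vector args = new Vector();
    args.add(new Integer(ipInteger));
    List rows = (List) session.executeQuery(query, args);  // each element is a row of column values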

    Doug,
    Thanks for the info. I was able to handle the stored procedure, but I did so via POJOs.
    I'm interested in this part of your reply...
    >
    This will give you an entity class with a full lifecycle to it assuming that TopLink may write it back to the database if modified in a transaction.
    Given the following pseudo-<code|SQL>...
    drop table starship
    go
    drop table contractor
    go
    create table contractor (
    id serial primary key,
    name text unique)
    go
    create table starship (
    id serial primary key,
    name text unique,
    contractor_id int references contractor)
    go
    create function get_by_id(shipid int, OUT contractor, OUT shipname)
    BEGIN
      blah
    END;
    GO
    Are you saying that I can create a full CRUD object for the function/procedure call? I.e., assuming I get back a result and I see that the ship name is incorrect, I could make a change to the returned data and have that persisted? Without having to get a starship object on my own?
    If that's the case, I'd like to know more.
    -David

  • SQL Server 2008 R2 - Report Builder 3.0 - timeout using shared data source and stored procedure

    I select the shared datasource from the data source propeties dialog, test the connection and everything is good.
    I add a dataset by selecting "use a dataset embedded in my report" option within the Dataset properties dialog.
    I select the newly added data source, click the "Stored procedure" query type and drop down the list box and select my intended stored procedure.
    the timeout for the dataset is "0" seconds.
    I click the "OK" button and I'm presented with the parameters to the stored procedure.
    I enter valid data for the parameters and click the "OK" button.
    I then get the following error message after 30 seconds:
    The problem is, all of the timeouts, that I'm aware of, have values of zero (no timeout) or high enough values that 30 seconds isn't even close to the timeout.
    I think the smallest timeout we have is 120 seconds.
    I have searched this site and many others and the solutions all involve altering the stored procedure to get the fields into report builder and then revert the stored procedure back to its original form.
    To me, this is NOT a solution.  
    I have too many stored procedures that need to be brought into Report Builder.
    I need a real solution.
    Thank you for you time, Tim Caldwell.
    Timothy E Caldwell

    I don't mean to be rude, but really, check to see if the stored procedure can return data rows???
    Maybe I'm not being clear enough.
    The stored procedure runs perfectly fine.
    it runs perfectly fine in the production environment and the test environment.
    I can access the stored procedure in several ways and have it return correct data.
    I can even trick report builder into creating a dataset with parameters and run the stored procedure that way.
    What I cannot do, is to get report builder to not timeout after 30 seconds on the initial creation of a dataset with a Query type of stored procedure.
    I have seen this issue posted again and again on many different sites, and the "solution" is to simplify the stored procedure by creating a stored procedure that has a create table and a select in it and that's it. After
    report builder creates the dataset, the developer then has to replace the simplified stored procedure with the actual stored procedure, and everything works fine after that.
    HOWEVER, having to go through this process for 70 or more stored procedures is ridiculous.
    It would appear that there is something within report builder itself that is causing this issue.
    The SQL script included is an example of a stored procedure that will not create a dataset with fields and parameters in Report Builder 3.0:
    USE [CRUM_IT]
    GO
    /****** Object: StoredProcedure [dbo].[COGNOS_Level5ScriptSP] Script Date: 11/17/2014 08:02:26 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER procedure [dbo].[COGNOS_Level5ScriptSP]
    @CompanyCode varchar(8) = null,
    @GetSiblings varchar(1) = 'N'
    as
    Begin
    -- get emergency contact info
    select *
    into #tmp_Contacts
    from
    (select
    ConEEID,
    con.connamelast as [Emer Contact Last Name],
    con.connamefirst as [Emer Contact First Name],
    con.connamemiddle as [Emer Contact Middle Initial/Name]--,
    ,ROW_NUMBER() over (Partition by ConEEID order by ConNameLast)as rn
    ,ISNULL(
    case when con.conphonepreferred = 'H'
    then '(' + substring(con.conphonehomenumber, 1, 3) + ')' + substring(con.conphonehomenumber, 4, 3) + '-' + substring(con.conphonehomenumber, 7, 4)
    else '(' + substring(con.conphoneothernumber , 1, 3) + ')' + substring(con.conphoneothernumber , 4, 3) + '-' + substring(con.conphoneothernumber , 7, 4)
    end,
    ' ') as [Emergency Phone] -- ISNULL fallback value was missing in the original post; a blank is assumed here
    from [ultiprosqlprod1].[ultipro_crum].dbo.Contacts con
    where con.ConIsEmergencyContact='y'
    and con.ConIsActive='y'
    ) A
    where A.rn = 1
    CREATE TABLE #tmp_CompanyCodes (CompanyCode varchar(8))
    If @GetSiblings = 'Y'
    Begin
    INSERT INTO #tmp_CompanyCodes (CompanyCode)
    EXEC [z_GetClientNumbers_For_ParentOrg_By_ClientNumber] @CompanyCode
    End
    INSERT INTO #tmp_CompanyCodes
    values (@CompanyCode)
    select *
    into #tmp_Company
    from [ultiprosqlprod1].[ultipro_crum].dbo.Company
    where cmpcompanycode in (select CompanyCode from #tmp_CompanyCodes)
    select distinct
    cmpcompanycode as [Client ID],
    CmpCompanyDBAName as [Client Name],
    eec.eecEmplStatus AS [Employment Status],
    eec.eecEmpNo AS [Employee Num],
    rtrim(eep.eepNameLast) AS [Last Name],
    rtrim(eep.eepNameFirst) AS [First Name],
    isnull(rtrim(ltrim(eep.eepNameMiddle)), '') AS [Middle Initial/Name],
    rtrim(eep.eepAddressLine1) AS [Address Line 1],
    isnull(rtrim(eep.eepAddressLine2), '') AS [Address Line 2],
    eep.eepAddressCity AS [City],
    eep.eepAddressState AS [State],
    CASE
    WHEN len(eep.eepAddressZipCode) > 5 and charindex(eep.eepAddressZipCode, '-', 1) = 0
    THEN substring(eep.eepAddressZipCode, 1, 5)
    ELSE rtrim(eep.eepAddressZipCode)
    END AS [Zip code],
    CASE
    WHEN len(eep.eepAddressZipCode) > 5 and charindex(eep.eepAddressZipCode, '-', 1) = 0
    THEN substring(eep.eepAddressZipCode, 6, 4)
    WHEN len(eep.eepAddressZipCode) > 5 and charindex(eep.eepAddressZipCode, '-', 1) > 0
    THEN substring(eep.eepAddressZipCode, charindex(eep.eepAddressZipCode, '-', 1) + 1, 4)
    WHEN len(eep.eepAddressZipCode) <= 5
    THEN ''
    END AS [ZIP + 4],
    substring(eep.eepSSN, 1, 3) + '-' + substring(eep.eepSSN, 4, 2) + '-' + substring(eep.eepSSN, 6, 4) AS [SSN],
    isnull(convert(VARCHAR(10), eep.eepDateOfBirth, 101), '') AS [Date Of Birth],
    eetFED.TAXCODE AS [FED Tax Code],
    eetFED.FILINGSTATUS AS [Fed Filing Status],
    eetFED.EXEMPTIONS AS [Fed Exemption Allowance],
    eetFED.ADDITIONAL AS [Additional Fed Withholding],
    eetSIT.TAXCODE AS [SIT Tax Code],
    eetSIT.FILINGSTATUS AS [State Filing Status],
    eetSIT.EXEMPTIONS AS [State Exemption Allowance],
    eetSIT.ADDITIONAL AS [Additional State Withholding],
    isnull('(' + substring(eep.eepPhoneHomeNumber, 1, 3) + ')' + substring(eep.eepPhoneHomeNumber, 4, 3) + '-' + substring(eep.eepPhoneHomeNumber, 7, 4), '') AS [Home Phone],
    isnull((SELECT cod.codDesc
    FROM [ultiprosqlprod1].[ultipro_crum].dbo.Codes cod WITH (NOLOCK)
    WHERE cod.codCode = eep.eepEthnicID
    AND cod.codDosTable = 'ETHNICCODE'), '') AS [Race-Origin], --eep.eepEthnicID AS [Race-Origin],
    eep.eepGender AS [Gender],
    isnull(convert(VARCHAR(10), eec.eecDateOfOriginalHire, 101), '') AS [Original Hire Date],
    isnull(convert(VARCHAR(10), eec.eecDateOfSeniority, 101), '') AS [Seniority Date],
    isnull(convert(VARCHAR(10), eec.eecDateOfTermination, 101), '') AS [Termination Date],
    isnull(eecTermType,'') as [Termination Type],
    isnull(TchDesc, '') as [Termination Reason],
    rtrim(eec.eecJobCode) AS [WC Code],
    isnull(eec.eecJobTitle, '') AS [Job Title],
    pgr.pgrPayFrequency AS [Pay Frequency],
    eec.eecFullTimeOrPartTime AS [Full/Part Time],
    eec.eecSalaryOrHourly AS [Pay Type],
    isnull(convert(MONEY, eec.eecHourlyPayRate), 0.00) AS [Hourly Rate],
    isnull(eec.eecAnnSalary, 0.00) AS [Annual Salary],
    [YTD Hours],
    isnull(eep.eepNameFormer, '') AS [Maiden Name],
    eec.eecLocation AS [Location ID],
    rtrim(eec.eecOrgLvl1) AS [Department ID],
    eec.eecorglvl2 AS [Cost Item],
    eec.eecorglvl3 as [Client Project],
    eec.eecPayGroup as [Pay Group],
    isnull(eepAddressEMail,' ') as [Email Address],
    isNull(BankName1,' ') as PrimaryBank,
    isNull(BankRoute1,' ') as PrimaryRouteNum,
    isNull(Account1,' ') as PrimaryAccount,
    isNull(AcctType1,' ') as PrimaryAcctType,
    isNull(DepositRule1,' ') as PrimaryDepositRule,
    isNull(BankName2,' ') as SecondaryBank,
    isNull(BankRoute2,' ') as SecondaryRouteNum,
    isNull(Account2,' ') as SecondaryAccount,
    isNull(AcctType2,' ') as SecondaryAcctType,
    isNull(DepositRule2,' ') as SecondaryDepositRule,
    isNull(
    CASE
    WHEN DepositRule2 = 'D'
    THEN '$' + convert(varchar, cast(EddAmtOrPct2 AS decimal(10,2)))
    WHEN DepositRule2 = 'P'
    THEN convert(varchar, cast((EddAmtOrPct2*100) AS decimal(10,0))) + '%'
    ELSE null
    END,' ') as SecondaryDepositAmount,
    isNull(BankName3,' ') as ThirdBank,
    isNull(BankRoute3,' ') as ThirdRouteNum,
    isNull(Account3,' ') as ThirdAccount,
    isNull(AcctType3,' ') as ThirdAcctType,
    isNull(DepositRule3,' ') as ThirdDepositRule,
    isNull(
    CASE
    WHEN DepositRule3 = 'D'
    THEN '$' + convert(varchar, cast(EddAmtOrPct3 AS decimal(10,2)))
    WHEN DepositRule3 = 'P'
    THEN convert(varchar, cast((EddAmtOrPct3*100) AS decimal(10,0))) + '%'
    ELSE null
    END,' ') as ThirdDepositAmount,
    Supervisor,
    eec.eecEEID AS [Employee EEID],
    eec.EecJobCode As [Job Code],
    isnull(eec.EecTimeclockID,' ') As [Time Clock ID],
    con.[Emer Contact Last Name],
    con.[Emer Contact First Name],
    con.[Emer Contact Middle Initial/Name],
    con.[Emergency Phone]
    from [ultiprosqlprod1].[ultipro_crum].dbo.empPers eep WITH (NOLOCK)
    inner join [ultiprosqlprod1].[ultipro_crum].dbo.empComp eec WITH (NOLOCK)
    ON eep.eepEEID = eec.eecEEID
    inner join #tmp_Company cmp WITH (NOLOCK)
    ON eec.eecCOID = cmp.cmpCOID
    inner join [ultiprosqlprod1].[ultipro_crum].dbo.PayGroup pgr WITH (NOLOCK)
    ON eec.eecPayGroup = pgr.pgrPayGroup
    left outer join [ultiprosqlprod1].[ultipro_crum].dbo.TrmReasn
    on tchCode = eecTermReason
    left join (select CAST(sum(isnull(eee.eeeYTDHrs,0.00))AS DECIMAL(18,2)) as [YTD Hours],
    eeeEEID,
    eeeCOID
    from [ultiprosqlprod1].[ultipro_crum].dbo.EmpEarn eee with (NOLOCK)
    group by eeeCOID,eeeEEID)eee
    on eec.eecEEID = eee.eeeEEID
    and eec.eecCOID = eee.eeeCOID
    left join (SELECT eetCOID AS COID,
    eetEEID AS EEID,
    eetTaxCode AS TAXCODE,
    eetFilingStatus AS FILINGSTATUS,
    eetExemptions AS EXEMPTIONS,
    eetExtraTaxDollars AS ADDITIONAL
    FROM [ultiprosqlprod1].[ultipro_crum].dbo.empTax WITH (NOLOCK)
    WHERE eetTaxCode = 'USFIT'
    )eetFED
    ON eec.eecCOID = eetFED.COID
    and eec.eecEEID = eetFED.EEID
    left join (SELECT eetCOID AS COID,
    eetEEID AS EEID,
    eetTaxCode AS TAXCODE,
    eetFilingStatus AS FILINGSTATUS,
    eetExemptions AS EXEMPTIONS,
    eetExtraTaxDollars AS ADDITIONAL
    FROM [ultiprosqlprod1].[ultipro_crum].dbo.empTax WITH (NOLOCK)
    WHERE eetTaxCode like '%SIT'
    AND eetIsWorkInTaxCode = 'Y'
    )eetSIT
    ON eec.eecCOID = eetSIT.COID
    and eec.eecEEID = eetSIT.EEID
    left outer join (SELECT eddCOID,
    eddEEID,
    eddEEBankName BankName1,
    eddEEBankRoute BankRoute1,
    eddAcct Account1,
    EddAcctType AcctType1,
    EddDepositRule DepositRule1,
    EddAmtOrPct EddAmtOrPct1
    FROM [ultiprosqlprod1].[ultipro_crum].dbo.EmpDirDp WITH (NOLOCK)
    WHERE eddSequence = '99')edd
    ON eec.eecCOID = edd.eddCOID
    and eec.eecEEID = edd.eddEEID
    left outer join (SELECT eddCOID,
    eddEEID,
    eddEEBankName BankName2,
    eddEEBankRoute BankRoute2,
    eddAcct Account2,
    EddAcctType AcctType2,
    EddDepositRule DepositRule2,
    EddAmtOrPct EddAmtOrPct2
    FROM [ultiprosqlprod1].[ultipro_crum].dbo.EmpDirDp WITH (NOLOCK)
    WHERE eddSequence = '01')edd2
    ON eec.eecCOID = edd2.eddCOID
    and eec.eecEEID = edd2.eddEEID
    left outer join (SELECT eddCOID,
    eddEEID,
    eddEEBankName BankName3,
    eddEEBankRoute BankRoute3,
    eddAcct Account3,
    EddAcctType AcctType3,
    EddDepositRule DepositRule3,
    EddAmtOrPct EddAmtOrPct3
    FROM [ultiprosqlprod1].[ultipro_crum].dbo.EmpDirDp WITH (NOLOCK)
    WHERE eddSequence = '02')edd3
    ON eec.eecCOID = edd3.eddCOID
    and eec.eecEEID = edd3.eddEEID
    left outer join (SELECT eecCOID,
    eecEEID,
    rtrim(eepNameLast) + ', ' +
    rtrim(eepNameFirst) + ' ' +
    isnull(rtrim(ltrim(eepNameMiddle)), '') AS [Supervisor]
    FROM [ultiprosqlprod1].[ultipro_crum].dbo.EmpComp WITH (NOLOCK)
    join [ultiprosqlprod1].[ultipro_crum].dbo.EmpPers with (NoLock)
    on eeceeid = eepeeid)eec2
    ON eec.eecSupervisorID = eec2.eecEEID
    left outer join #tmp_Contacts con
    on eep.eepEEID = con.ConEEID
    order by [Client ID],
    [Last Name],
    [First Name]
    drop table #tmp_Contacts
    END
    Timothy E Caldwell

  • Crystal Report and stored procedure calls

    Hi,
    1. I am trying to access a stored procedure through Crystal Reports 2008. I have 3 parameters in it, and I need 2 of the parameters to be given by the user and the 3rd parameter to be passed from the backend code. Please help me with the steps that would let me do this.
    2. I also have another stored procedure where I have 3 parameters, and in addition to that I should provide two other parameters for the user to enter, so the total number of parameters will be 5. Can anyone let me know how to implement all these parameters?
    3. In another stored procedure I have two parameters which are discrete and multi-select, and both of them are cascaded. How do I implement those?
    Thanks
    Pradeep

    Please re-post if this is still an issue or purchase a case and have a dedicated support engineer work with you directly:
    http://store.businessobjects.com/store/bobjamer/DisplayProductByTypePage&parentCategoryID=&categoryID=11522300?resid=-Z5tUwoHAiwAAA8@NLgAAAAS&rests=1254701640551

  • Crystal Reports 9 and Stored Procedures using supplied parameters from user

    Hi,
    I have a Crystal Reports 9 report that prints and works fine. It requests 2 pieces of data from the user, both of which are String data and not null. I want to use the same report in a Visual Basic .NET 2003 program. I can pull the report onto the form and it runs wonderfully. I want to make it run programmatically without prompting the user to key in the parameters; I can supply them from an existing data file. I have tried many things but have not found the solution.
    How can I pass the parameters from VB.Net 2003 to the stored procedure which runs the Crystal Report with in the VB.Net 2003 program.
    I appreciate all help and comments.
    Can this be done or do I need to just re-write the report in VB.Net 2003?
    Thanks,
    Norman

    Hi, Norman;
    It sure is possible to pass parameters to a Stored Procedure via our .NET SDK.
    Have a look at these samples:
    https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/9043bbbc-ae66-2b10-ce96-b48f9e25a450
    There are samples showing passing parameters.
    Regards,
    Jonathan

  • Coherence 3.6.0 transactional cache and POF - NULL values

    Hi,
    We are trying to use the new transactional scheme defined in 3.6.0 and we encounter an abnormal behaviour. The code executes without any exception or warnings but in the cache we find the key associated with a NULL value.
    To try to identify the problem, we defined two services (see cache-config below):
    - one transactional cache
    - one distributed cache
    If we try to insert into transactional cache primitives or strings everything is normal (both key and value are visible using coherence console). But if we try to insert custom classes using POF, the key is inserted with a NULL value.
    In same cluster we defined a distributed cache that uses the same POF classes/configuration. A call to put will succeed in any scenario (both key and value are visible using coherence console).
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>cnt.*</cache-name>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>stt.*</cache-name>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <transactional-scheme>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
                   <service-name>storage.transactionalcache.cnt</service-name>
                   <thread-count>10</thread-count>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </transactional-scheme>
              <distributed-scheme>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
                   <service-name>storage.distributedcache.stt</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    Failing code (uses transaction APIs 3.6.0):
     public static void main(String[] args)
     {
          Connection con = new DefaultConnectionFactory().createConnection("storage.transactionalcache.cnt");
          con.setAutoCommit(false);
          try
          {
               OptimisticNamedCache cache = con.getNamedCache("cnt.t1");
               CId tID = new CId();
               tID.setId(11111L);
               C tC = new C();
               tC.setVal(new BigDecimal("100.1"));
               cache.insert(tID, tC);
               con.commit();
          }
          catch (Exception e)
          {
               e.printStackTrace();
               con.rollback();
          }
          finally
          {
               con.close();
          }
     }
    Code that succeeds (but without transaction APIs):
     public static void main(String[] args)
     {
          try
          {
               NamedCache cache = CacheFactory.getCache("stt.t1");
               CId tID = new CId();
               tID.setId(11111L);
               C tC = new C();
               tC.setVal(new BigDecimal("100.1"));
               cache.put(tID, tC);
          }
          catch (Exception e)
          {
               e.printStackTrace();
          }
          finally
          {
               // (finally body omitted in the original post)
          }
     }
    And here is what we list using coherence console if we use transactional APIs:
    Map (cnt.t1): list
    CId {
    id = 11111
    } = null
    Any suggestion, please?

    Cristian,
    After looking at your configuration I noticed that your configuration is incorrect. For a transactional scheme you cannot specify a backing-map-scheme.
    Your config contained:
    <backing-map-scheme>
    <local-scheme>
    <high-units>250M</high-units>
    <unit-calculator>binary</unit-calculator>
    </local-scheme>
    </backing-map-scheme>
    To specify high-units for a transactional scheme, simply provide a high-units element directly under the transactional-scheme element.
    <transactional-scheme>
        <scheme-name>small-high-units</scheme-name>
        <service-name>TestTxnService</service-name>
        <autostart>true</autostart>
        <high-units>1M</high-units>
    </transactional-scheme>
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/api_transactionslocks.htm#BEIBACHA
    The reason that it is not allowable to specify a backing-map-scheme for a transactional scheme is that transactional caches use their own storage.
    I am not sure why this would work with primitives and only fail with POF. We will look into this further here and try to reproduce.
    Can you please change your configuration with the above changes and let us know your results.
    Thanks,
    John
    Edited by: jspeidel on Sep 16, 2010 10:44 AM

  • EJB 3.0 Persistence API and Stored Procedures

    Does the EJB 3.0 Persistence API in any way prescribe or facilitate the use of Stored Procedure calls? Given the recent popularity of using (database) server APIs consisting of Stored Procedures that are addressed by ORM frameworks for performing the DML operations and of course the general usefulness of stored procedures for specific data related operations, there definitely is a need for clarity on using the straightforward Persistence operations in combination with calls to Stored Procedures. Can we use the Native Query for executing CallableStatements? Can we get to a Connection instance from the EntityManager? Can we easily use the results from a Stored Procedure invocation to refresh our Domain Objects - or is that all application logic in which the EntityManager cannot help us?
    If the spec does not help us at this point, do you know how TopLink will deal with this topic?
    Thanks for any help you can give me!
    Best regards,
    Lucas Jellema

    I am also interested in a response to this post. I've just upgraded to JDeveloper 10.1.3.1 and am interested in making calls to stored procedures.
    I've found examples of just creating my own session beans, but it seems I would have to hard-code the datasource name. Does anyone know if I can get around this using the entity manager supplied by the tool?
    thanks,
    Dan
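    One common workaround, sketched here as an assumption rather than anything the JPA 1.0 spec provides: inject a DataSource into the session bean and use a plain CallableStatement alongside the EntityManager, so the datasource name lives in an annotation or deployment descriptor rather than in code (bean, procedure and entity names below are hypothetical; javax.ejb, javax.persistence, javax.annotation, javax.sql and java.sql imports omitted):
    // Hedged sketch: stored procedure call next to JPA, datasource injected rather than hard-coded
    @Stateless
    public class OrderBean {
        @PersistenceContext
        private EntityManager em;

        @Resource(name = "jdbc/AppDS")   // hypothetical JNDI name, configurable per deployment
        private DataSource dataSource;

        public void approveOrder(long orderId) throws SQLException {
            Connection con = dataSource.getConnection();
            try {
                CallableStatement cs = con.prepareCall("{call approve_order(?)}");
                cs.setLong(1, orderId);
                cs.execute();
                cs.close();
            } finally {
                con.close();
            }
            // refresh the entity so in-memory state reflects the procedure's changes
            em.refresh(em.find(Order.class, orderId));
        }
    }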

  • Multithreading and stored procedures

    Hi,
    Does anyone know of any problems with calling a stored procedure that returns a reference cursor in a multithreaded application? In other words, if I have multiple threads calling the same stored procedure, is the reference cursor guaranteed to be unique to each thread / thread-safe?
    Thanks in advance,
    Glenn

    That should work fine, and each call will get a separate ref cursor as long as the app is written properly and is managing its threads and handles properly. Are you seeing some behaviour that suggests this is not the case?
    Chris
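    As an illustration of "each call gets a separate ref cursor", a hedged sketch in which every thread opens its own statement (the dataSource variable, procedure name and imports, including Oracle's OracleTypes, are placeholders):
    // Hedged sketch: one CallableStatement per thread, so each thread reads its own ref cursor
    Runnable worker = new Runnable() {
        public void run() {
            try {
                Connection con = dataSource.getConnection();                    // per-thread connection
                CallableStatement cs = con.prepareCall("{call get_items(?)}");  // hypothetical procedure
                cs.registerOutParameter(1, OracleTypes.CURSOR);
                cs.execute();
                ResultSet rs = (ResultSet) cs.getObject(1);                     // this thread's private cursor
                while (rs.next()) {
                    // process rows...
                }
                rs.close();
                cs.close();
                con.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    };
    new Thread(worker).start();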
