Transaction propagation and integration

I am currently performing a product assessment for integration platforms (EAI/ESB).
The environment is roughly this: there will be a J2EE architecture that involves many types of components. In addition, an integration product will be installed.
A UserTransaction is started from a session bean, and this session bean performs two types of functions: functionality that resides entirely inside the J2EE container, and functionality that resides within the selected EAI/ESB product.
Is the transaction context propagated to the EAI/ESB product through some interface, in a way that would make the EAI/ESB functionality a solid part of the transaction?
Thank you for all your help!

You will need to pick a product that supports distributed transactions. This works, for example, with most JMS implementations and datasources.
(see http://www.onjava.com/pub/a/onjava/2001/05/23/j2ee.html?page=3 for an explanation on the subject)
Regards,
Lonneke

Similar Messages

  • Transaction propagation and ESB

    I am currently performing a product assessment for integration platforms (ESB).
    The environment is roughly this: there will be a J2EE architecture that involves many types of components. In addition, an integration product will be installed.
    A UserTransaction is started from a session bean, and this session bean performs two types of functions: functionality that resides entirely inside the J2EE container, and functionality that resides within the selected ESB product.
    Is the transaction context propagated to the ESB product through some interface, in a way that would make the ESB functionality a solid part of the transaction?
    Thank you for all your help!

    From the advanced architecture document, I found this "slide":
    Transactions
    • Global End-to-End JTA/XA Transactions
    • BPEL <-> ESB <-> BPEL
    • JCA <-> ESB <-> WSIF
    • ESB Inherits Inbound Global Transactions
    • “Async” Routing Rules ends scope of current transaction
    • New ESB initiated transactions grouped by ESB System
    • Transaction Exception Handling and Rollback
    • Errors on existing inbound transactions rolled back to initiator
    • Errors on ESB initiated transactions can be resubmitted
    • End-to-end message flow terminates on first failed service regardless of transaction state or owner
    I guess "inherits inbound global transactions" means that ESB processes/functions can be made part of an existing transaction. If this is true, then this solves my problem :)

  • Unit Testing and Integration Testing in HR

    Dear Sap Gurus,
    Would you be kind enough to give me an example of unit testing and of integration testing? What do you test, e.g. transaction codes, and what else, and what happens? I know what unit and integration testing are, but with a good example I will have a much better idea about them. Thanks a lot.

    Hi Pooja
    Unit Testing:
    A process for verifying that software, a system, or a system component performs its intended functions.
    Unit transactions are tested against their own specifications and design documents.
    Integration Testing
    An orderly progression of testing in which software elements, hardware elements or both are combined and tested until the entire system has been integrated.
    Integration tests deal mainly with the testing of cross-application process chains in addition to transactions and business processes. The process models and the test cases derived from these form the basis of these tests.
    Regards
    Vijay
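    To make Vijay's definition concrete, here is a minimal sketch of a unit test in plain Java: the unit is verified against its own specification, as described above. The `applyDiscount` function and its values are hypothetical, and no test framework is assumed.

```java
// Hypothetical unit under test plus a minimal, framework-free unit test.
public class DiscountTest {

    // Unit under test: a small, self-contained pricing function.
    static double applyDiscount(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return price * (100 - percent) / 100;
    }

    static void assertEquals(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("expected " + expected + " got " + actual);
        }
    }

    public static void main(String[] args) {
        // Each check verifies the unit against its own specification.
        assertEquals(90.0, applyDiscount(100.0, 10.0));
        assertEquals(100.0, applyDiscount(100.0, 0.0));
        boolean threw = false;
        try {
            applyDiscount(100.0, 150.0);   // out-of-range input must fail
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("expected exception");
        System.out.println("all unit tests passed");
    }
}
```

    An integration test, by contrast, would exercise this function only as part of a larger process chain, together with the other components it interacts with.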

  • How to remove error from propagation and verify replication is ok?

    We have a one-way schema-level Streams setup, and the target DB is a 3-node RAC (named PDAMLPR1). I ran a large insert at the source DB (35 million rows). After committing on the source, I made a failure test on the target DB by shutting down the entire database. Streams seems to have stopped, as the heartbeat table sequence (it inserts a row each minute) on the target still reflects last night. We get this error in dba_propagation:
    ORA-02068: following severe error from PDAMLPR1
    ORA-01033: ORACLE initialization or shutdown in progress
    ORA-06512: at "SYS.DBMS_AQADM_SYS", line 1087
    ORA-06512: at "SYS.DBMS_AQADM_SYS", line 7639
    ORA-06512: at "SYS.DBMS_AQADM", line 631
    ORA-06512: at line 1
    08-FEB-10
    while capture, propagation, and apply are all in enabled status. I restarted capture and propagation at the source DB, but I still see the error message above. My questions are:
    1. How do I delete the error from dba_propagation?
    2. How do I verify that Streams is still running fine?
    In a normal test, during such a large insert, the heartbeat table added a row in an hour. Very slow.
    thanks for advice.

    Well, if I can give you my point of view: I think that 35 million LCRs is totally unreasonable. Did you really post a huge insert of 35 million rows and then commit that single, utterly huge transaction? Don't be surprised it's going to work very, very hard for a while!
    With a default setup, Oracle recommends committing every 1000 LCRs (row changes).
    There are ways to tune Streams for large transactions, but I have not done so personally. Look on Metalink; you will find information about that (mostly document IDs 335516.1, 365648.1 and 730036.1).
    One more thing: you mentioned a failure test. Your target database is RAC. Did you read about queue ownership? Queue-to-queue propagation? You might have an issue related to that.
    How did you set up your environment? Did you give enough streams_pool_size? You can watch V$STREAMS_POOL_ADVICE to check what Oracle thinks is good for your workload.
    If you want to skip the transaction, you can remove the table rule or use the IGNORETRANSACTION apply parameter.
    Hope it helps
    Regards,

  • Timeout in ESB and Integration Builder

    Hi,
    The Enterprise Service Builder, Runtime Workbench and Integration Builder browser will time out after a certain time limit.  Can someone point me to where these default times can be changed in the NWA for PI 7.1?
    I searched and found one thread on this, but it was not answered.
    Thanks in advance for your input.
    Regards,
    Rick

    Hi Rick,
    Actually, this is a BASIS issue.
    Log on to the PI system and execute transaction RZ10. Choose the profile instance, select the 'Extended maintenance' radio button and click 'Display'. You will find the ...j2ee_timeout parameter there, and you can increase its value.
    Regards
    Ramesh

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write-through, all within a transaction.
    The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
    If the second transactionMap throws exceptions, the caches do not appear to roll back correctly.
    I can wrap the whole operation in a JDBC transaction that rolls back the database correctly, but the caches are not all rolled back because they are committed one by one.
    For example, I write to two transaction maps, each one created from separate caches. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
    Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
    I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic, but I'm getting: Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
    (SMARTPC being my PC name)
    Has anybody else had this problem? Bonus points for describing how to fix it!
    Mike

    > The transaction support in Coherence is for Local
    > Transactions. Basically, what this means is that the
    > first phase of the commit ("prepare") acquires locks
    > and ensures that there are no conflicts. The second
    > phase ("commit") does nothing but push data out to
    > the caches.
    This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
    > The problem is that when you are using a
    > CacheStore module, the exception is occurring during
    > the second phase.
    If you start using a CacheStore module, then the database update has to be part of the atomic procedure.
    > For this reason, write-through and cache transactions
    > are not a supported combination.
    This is not true for a cache transaction that updates a single cache entry, right?
    > For single-cache-entry updates, CacheStore operations
    > are fully fault-tolerant in that the cache and
    > database are guaranteed to be consistent during any
    > server failure (including failures during partial
    > updates). While the mechanisms for fault-tolerance
    > vary, this is true for both write-through and
    > write-behind caches.
    For the write-through case, I believe the database and cache are atomically updated.
    > Coherence does not support two-phase CacheStore
    > operations across multiple CacheStore instances. In
    > other words, if two cache entries are updated,
    > triggering calls to CacheStore modules sitting on
    > separate servers, it is possible for one database
    > update to succeed and for the other to fail.
    But once we have multiple CacheStore modules, once one atomic write-through put succeeds, the database is already updated for that specific put. There is no way to roll back the database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
    If I use write-behind CacheStore modules, can I roll back entirely and avoid partial commits, since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different from local transactions... Is my understanding correct?
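    The prepare/commit behaviour discussed above can be illustrated with a toy model. This is deliberately not the Coherence API; all names here are invented for illustration. Prepare would acquire locks, commit pushes buffered writes into the base map, and a failure mid-commit undoes every change already applied:

```java
// Toy sketch of a local cache transaction: buffer writes, then commit
// them into the base map with rollback on failure. Not the Coherence API.
import java.util.HashMap;
import java.util.Map;

public class TwoPhaseSketch {

    final Map<String, String> baseMap = new HashMap<>(); // the shared cache
    final Map<String, String> buffer  = new HashMap<>(); // local, uncommitted writes

    void put(String key, String value) {
        buffer.put(key, value);                          // buffer locally
    }

    // Phase 1 ("prepare"): in the real scheme this acquires locks and
    // checks for conflicts; here it trivially succeeds.
    boolean prepare() {
        return true;
    }

    // Phase 2 ("commit"): push buffered data into the base map. If a put
    // fails (simulated by failOnKey), undo every change already applied.
    boolean commit(String failOnKey) {
        Map<String, String> previous = new HashMap<>();  // undo log
        for (Map.Entry<String, String> e : buffer.entrySet()) {
            if (e.getKey().equals(failOnKey)) {
                // rollback: restore the old value of every key we touched
                for (Map.Entry<String, String> p : previous.entrySet()) {
                    if (p.getValue() == null) baseMap.remove(p.getKey());
                    else baseMap.put(p.getKey(), p.getValue());
                }
                return false;
            }
            previous.put(e.getKey(), baseMap.get(e.getKey()));
            baseMap.put(e.getKey(), e.getValue());
        }
        buffer.clear();
        return true;
    }

    public static void main(String[] args) {
        TwoPhaseSketch tx = new TwoPhaseSketch();
        tx.baseMap.put("a", "old");
        tx.put("a", "new");
        tx.put("b", "extra");
        boolean ok = tx.prepare() && tx.commit("b");     // fail while committing "b"
        System.out.println(ok + " " + tx.baseMap);       // prints false {a=old}
    }
}
```

    The point of the thread is precisely that once a write-through CacheStore updates the database inside phase 2, this in-memory undo is no longer enough: the database side of an already-succeeded put cannot be rolled back.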

  • Have a transaction propagated to two remote machines!!!(URGENT!!!)

    Can we have a transaction propagated to two EJBs on different machines if we have database interaction in both?
    I tested it out with Account beans (examples) deployed on two different (remote) servers, both servers having the same connection pool name and the mapping to the same Oracle database (using the Oracle thin driver as well as the WebLogic driver). One of the beans is on a local server and one on a remote server, and both are accessed in the same transaction context. What happens is that the second bean accessed throws a NullPointerException when it tries to getConnection().
    This is the server-side stack trace:
    SQLException: java.sql.SQLException: java.lang.NullPointerException:
    Start server side stack trace:
    java.lang.NullPointerException
    at weblogic.jdbc.common.internal.ConnectionMOWrapper.<init>(ConnectionMOWrapper.java:42)
    at weblogic.jdbc.common.internal.ConnectionEnv.setConnection(ConnectionEnv.java:142)
    at weblogic.jdbc.common.internal.DriverProxy.execute(DriverProxy.java:173)
    at weblogic.t3.srvr.ClientRequest.execute(ClientContext.java:1030)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java, Compiled Code)
    End server side stack trace
    It appears that when the database call on the second WL server is routed to the first WL server (the server that established the first connection for the transaction) for the database connection, it is not able to find the connection (and hence the bombing). I've been going nuts over this for two days. Please help. We need to use WebLogic for our project and I need to confirm that this functionality works!
    I'm attaching the stateless bean code which accesses both these beans.
    [TraderBean.java]
              

    Hi,
    Are you using a cluster?
    You can definitely be in one transaction if you just access one data source; that's a two-phase transaction.
              "kartik" <[email protected]> wrote:
              >
              >
              >
              >Can we have a transaction propagated to two ejb's in different machines if we have database interaction in both?
              >
              >I tested it out with Account beans (examples)
              > deployed on two different(remote) servers both servers having the same connection pool name and the mapping to the
              > same oracle database (Using the oracle thin driver as well as the Weblogic Driver). One of the beans is in a local server and one in a remote server and both are accessed in the
              > same transaction context. What happens is that the 2nd bean accessed throws a Null pointer Exception
              > when it tries to getConnection().
              >
              >This is the server side stack trace -----
              >SQLException: java.sql.SQLException: java.lang.NullPointerException:
              >Start server side stack trace:
              >java.lang.NullPointerException
              > at weblogic.jdbc.common.internal.ConnectionMOWrapper.<init>(ConnectionMO
              >Wrapper.java:42)
              > at weblogic.jdbc.common.internal.ConnectionEnv.setConnection(ConnectionE
              >nv.java:142)
              > at weblogic.jdbc.common.internal.DriverProxy.execute(DriverProxy.java:17
              >3)
              > at weblogic.t3.srvr.ClientRequest.execute(ClientContext.java:1030)
              > at weblogic.kernel.ExecuteThread.run(ExecuteThread.java, Compiled Code)
              >End server side stack trace
              >-----------------
              >
              >It appears that when the database call on the 2nd WL server is routed to the first WL server(the server that established the first connection for the transaction) for the database connection it is not able to find the connection( and hence the bombing). I'm going nuts over this for two days. Please help. We need to use Weblogic for our project and i need to confirm that this functionality works!!!!
              >
              >I'm attaching the stateless bean code which accesses both these beans.
              >
              

  • American Express Transaction Loader and Validate program error

    I am working on AMEX integration with iExpenses. When I run the American Express Transaction Loader and Validate program, the concurrent program fails with an error:
    header and trailer do not have same report create date. submit this request again with the correct data file from American Express
    Any suggestions?
    Thanks,
    Pradeep

    Thank you. I tried to implement this functionality (American Express Transaction Loader and Validation concurrent program) by separately running the loader and validator,
    i.e. 1. American Express Transaction Loader, 2. Credit Card Transaction Loader Program. I was able to run them successfully. I could see some new credit cards
    (which are in the data file) and assign them to employees through the Internet Expenses Setup and Administration responsibility -> Internet Expenses Administration -> New Accounts tab.
    But when I log in as that employee, I cannot see the transactions in the iExpenses responsibility. Where can one see his transactions? Is there an iExpenses user guide available for R12?
    Thanks.
    Pradeep

  • Metadata, Transaction Data and Master Data

    Hi all,
    Could you please clarify exactly what metadata, transaction data and master data mean, and the differences between them?

    Hi Ganesh,
    <b>MASTER Data</b> is the data that exists in the organization, like employee details, material master, customer master, vendor master, etc. These are generally created once.
    Master data are distributed throughout the company, they are often not standardised and often redundant. As a result it is very costly to offer efficient customer service, keep track of supply chains and make strategic decisions. With SAP Master Data Management (SAP MDM) these important business data from across the company can be brought together, harmonised and made accessible to all staff and business partners. As a key component of SAP NetWeaver, SAP MDM ensures data integrity via all IT systems.
    Regardless of the industry, companies often work with different ERP and Legacy systems. The result: the business processes are based on information about customers, partners and products which is displayed in different ways in the systems. If the data are recorded manually, there are more inconsistencies: some data sets are entered several times, others cannot be retrieved by all divisions of the company.
    As corporate applications are becoming increasingly complex and produce ever greater amounts of data, the problem is intensified further. Nevertheless, your employees must work with the inconsistent data and make decisions on this basis. The lack of standardised master data easily leads to wrong decisions, which restrict efficiency and threaten customer satisfaction and profitability.
    In a word: in order to save costs and ensure your company’s success it is necessary to consolidate master data about customers, partners and products, make them available to all employees beyond system boundaries and use attributes valid company-wide for the purpose of description.
    <b>TRANSACTION Data</b> - these are the business documents that you create using the master data: purchase orders, sales orders, etc.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/9d/193e4045796913e10000000a1550b0/content.htm
    Regards,
    Santosh

  • Difference between Transaction database and relational database

    What's the difference between a transaction database and a relational database?

    'Transaction' refers to the usage of a database.  'Relational' refers to the way in which a given database stores data.
    A 'transaction database' (or operational database) could be relational, hierarchical, et al.  A transaction database supports business process flows and is typically an online, real-time system.  The way in which that data is stored is typically
    based on the application(s).  Companies often have multiple 'transaction databases'.
    An 'operational data store' (ODS) is an integrated view or compilation of transaction data.
    Then you get into data warehouse databases, where the transaction data is optimized for querying, reporting, and analysis activities.

  • Transaction propagation via plain Java classes?

    Hello,
    I have a question on transaction propagation in the following scenario:
    1. a method of EJB1 with setting "Required" is invoked.
    2. the method creates a plain Java class and invokes a method of the class.
    3. the class's method invokes a method of EJB2 with setting "Required".
    Is my understanding of the EJB spec correct in assuming that the transaction created when the first EJB method was called will be propagated through the plain Java class (supposedly via association with the current thread), so the second EJB will participate in the same transaction?
    Thank you in advance,
    Sergey

    Yup, the current transaction is associated with the current thread.
              Sergey <[email protected]> wrote:
              > Hello,
              > I have a question on transaction propagation in the following scenario:
              > 1. a method of EJB1 with setting "Required" is invoked.
              > 2. the method creates a plain Java class and invokes a method of the class
              > 3. the class's method invokes a method of EJB2 with setting "Required".
              > Is my understanding of EJB spec correct, when I assume that the transaction created
              > when the first EJB method was called will be propagated through the plain Java
              > class (supposedly via association with current thread), so the second EJB will
              > participate in the same transaction?
              > Thank you in advance,
              > Sergey
              Dimitri
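    That thread-association mechanism can be sketched with a plain ThreadLocal. This is a simplified illustration of the principle, not a real container's implementation; all names are invented:

```java
// Simplified illustration of thread-associated transaction context:
// the "container" keeps the active transaction in thread-local storage,
// so a plain Java class in between needs no transaction awareness at all.
public class TxPropagationDemo {

    // Stand-in for the container's per-thread transaction registry.
    static final ThreadLocal<String> CURRENT_TX = new ThreadLocal<>();

    // "EJB1" with Required: starts a transaction if none is active.
    static String ejb1Method() {
        if (CURRENT_TX.get() == null) {
            CURRENT_TX.set("tx-1");           // begin a new transaction
        }
        return new PlainHelper().doWork();     // call through a POJO
    }

    // Plain Java class: knows nothing about transactions.
    static class PlainHelper {
        String doWork() {
            return ejb2Method();               // same thread, same context
        }
    }

    // "EJB2" with Required: joins the transaction found on the thread.
    static String ejb2Method() {
        return CURRENT_TX.get();
    }

    public static void main(String[] args) {
        System.out.println("EJB2 ran in transaction: " + ejb1Method());
        // prints: EJB2 ran in transaction: tx-1
    }
}
```

    In a real server, the JTA TransactionManager plays the role of CURRENT_TX, which is why the intermediate POJO is transparent to propagation.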
              

  • There are two transactions ZJPVCS303 and ZJPVCS303_US for one single Report

    - When run as a batch program (currently this is the case), or with T-Code ZJPVCS303, the selection screen is unchanged (except for the additional sales area above).
    - When run as T-Code ZJPVCS303_UL (UL stands for Upload), the selection screen is changed. The Unix file option is no longer available, and the user is able to upload a local file (in the same format as the current Unix file, but tab-delimited) to the program for processing.
    Requirements:
    There are two transactions, ZJPVCS303 and ZJPVCS303_US, for one single report.
    ->When ZJPVCS303 Transaction is executed, the file is uploaded from the Application
      server to SAP R/3. The selection screen parameters would be:
      Logical Filename:
      Sales Organization:
      Distribution Channel:
      Division:
    ->When ZJPVCS303_US Transaction is executed, the file is uploaded from the Presentation Server
      to SAP R/3. When this transaction is executed, it should not have the 'Logical
      Filename' parameter anymore on the selection-screen. Instead it should only have
      Local File name on the presentation server:
      Sales Organization:
      Distribution Channel:
      Division:
    The same thing is applicable for the other transaction, ZJPVCS303. When transaction ZJPVCS303 is executed, it should not have the 'Local Filename' parameter on the selection screen anymore. Instead it should only have:
    Logical Filename:
    Sales Organization:
    Distribution Channel:
    Division:
    So how should I make these parameters invisible depending on which transaction code is executed?
    I have an idea of using MODIF ID with LOOP AT SCREEN ... MODIFY SCREEN, and of checking SY-TCODE.
    EX:
    AT SELECTION-SCREEN OUTPUT.
    IF SY-TCODE = 'ZJPVCS303'.
    LOOP AT SCREEN.
    IF SCREEN-GROUPID = 'GRP'.
       SCREEN-INPUT   = 0.
       SCREEN-INVISIBLE = 1.
       MODIFY SCREEN.
    ENDIF.
    ENDLOOP.
    ELSEIF SY-TCODE = 'ZJPVCS303_US'.
    LOOP AT SCREEN.
    IF .....
    ENDLOOP.
    ENDIF.
    ENDIF.
    But I am not able to get the output which I require. Please help me out.

    Hello Rani
    Basically, the transaction determines whether the upload starts from the application server (AS) or the presentation server (PC). Thus, you will have the following parameters:
    PARAMETERS:
      p_as_fil          TYPE filename   MODIF ID unx,  " e.g. Unix server
      p_pc_fil          TYPE filename   MODIF ID wnd.  " e.g. Windows PC
    AT SELECTION-SCREEN OUTPUT.
      CASE syst-tcode.
    *   transaction(s) for upload from server (AS): hide the PC file parameter
        WHEN 'ZJPVCS303'.
          LOOP AT screen.
            IF ( screen-group1 = 'WND' ).
              screen-input = 0.
              screen-invisible = 1.
              MODIFY screen.
            ENDIF.
          ENDLOOP.
    *   transaction(s) for upload from local PC: hide the server file parameter
        WHEN 'ZJPVCS303_US'.
          LOOP AT screen.
            IF ( screen-group1 = 'UNX' ).
              screen-input = 0.
              screen-invisible = 1.
              MODIFY screen.
            ENDIF.
          ENDLOOP.
        WHEN OTHERS.
      ENDCASE.
    Regards
      Uwe

  • What is the difference between a transactional cube and a standard cube

    What is the difference between a transactional cube and a standard cube?

    Hi, the main differences:
    1) Transactional InfoCubes are optimized for writing data, i.e. multiple users can simultaneously write data into them without much effect on performance, whereas standard InfoCubes are optimized for reading data, i.e. through queries.
    2) Transactional InfoCubes can be loaded through an SEM process as well as the normal loading process. Standard InfoCubes can be loaded only through the normal loading process.
    3) The way data is stored is the same, but the indexing and partitioning aspects are different, since one is optimized for writing and the other for reading.
    Thanks
    Message was edited by:
            Ajeet Singh

  • Unit testing and integration testing

    hello to all,
    What is the difference between unit and integration testing? In SAP, what does unit testing consist of and what does integration testing consist of?
    Is this the work of test engineers, or whose work is it?
    take care
    love ur parents

    Hi Sameer,
    Unit Testing
    A unit test is a procedure used to validate that a particular module of source code is working properly from each modification to the next. The procedure is to write test cases for all functions and methods so that whenever a change causes a regression, it can be quickly identified and fixed. Ideally, each test case is
    separate from the others; constructs such as mock objects can assist in separating unit tests. This type of testing is mostly done by the developers and not by end-users.
    Integration testing
    Integration testing can proceed in a number of different ways, which can be broadly characterized as top-down or bottom-up. In top-down integration testing the high-level control routines are tested first, possibly with the middle-level control structures present only as stubs. Subprogram stubs were presented in Section 2 as incomplete subprograms which are only present to allow the higher-level control routines to be tested. Thus a menu-driven program may have the major menu options initially present only as stubs, which merely announce that they have been successfully called, in order to allow the high-level menu driver to be tested.
    Top-down testing can proceed in a depth-first or a breadth-first manner. For depth-first integration each module is tested in increasing detail, replacing more and more levels of detail with actual code rather than stubs. Alternatively, breadth-first would proceed by refining all the modules at the same level of control throughout the application. In practice a combination of the two techniques would be used. At the initial stages all the modules might be only partly functional, possibly being implemented only to deal with non-erroneous data. These would be tested in a breadth-first manner, but over a period of time each would be replaced with successive refinements closer to the full functionality. This allows depth-first testing of a module to be performed simultaneously with breadth-first testing of all the modules.
    The other major category of integration testing is bottom-up integration testing, where an individual module is tested from a test harness. Once a set of individual modules have been tested, they are combined into a collection of modules, known as builds, which are then tested by a second test harness. This process can continue until the build consists of the entire application.
    In practice a combination of top-down and bottom-up testing would be used. In a large software project being developed by a number of sub-teams, or a smaller project where different modules were being built by individuals, the sub-teams or individuals would conduct bottom-up testing of the modules which they were constructing before releasing them to an integration team which would assemble them together for top-down testing.
    I think this will help.
    Thanks ,
    Saptarshi
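    As a minimal illustration of the top-down approach described above, here is a hypothetical Java sketch in which the high-level menu driver is tested first, with the lower-level modules present only as stubs that merely announce they were called (all class and method names are invented):

```java
// Top-down integration testing sketch: test the high-level control
// routine while lower-level modules are still stubs.
import java.util.ArrayList;
import java.util.List;

public class TopDownDemo {

    interface Module {
        String run();
    }

    // Stubs standing in for modules that are not yet integrated.
    static class ReportStub implements Module {
        public String run() { return "report stub called"; }
    }
    static class ExportStub implements Module {
        public String run() { return "export stub called"; }
    }

    // High-level control routine under test: the "menu driver".
    static List<String> menuDriver(List<Module> options) {
        List<String> log = new ArrayList<>();
        for (Module m : options) {
            log.add(m.run());   // later refinements replace each stub
        }
        return log;
    }

    public static void main(String[] args) {
        List<Module> options = List.of(new ReportStub(), new ExportStub());
        System.out.println(menuDriver(options));
        // prints: [report stub called, export stub called]
    }
}
```

    As integration proceeds, each stub is replaced by the real module (depth-first) or all modules at one level are refined together (breadth-first), while the same driver-level test keeps running.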

  • V I Engineering, Inc has immediate needs for Systems Engineers (Contract) and Senior Systems Engineers (Contract) (reporting to the Test Software and Integration Group Manager)

    Company: V I Engineering, Inc.
    Locations: Various - USA
    Salary/Wage: $negotiable
    Status: Hourly Contractor
    Relevant Work Experience: 5+ years system integration (LabVIEW/TestStand experience required)
    Career Level: Intermediate/Experienced
    Education Level: Bachelor's Degree
    Residency/Citizenship: USA Citizenship or Greencard required
    Driving Business Results through Test Engineering
    V I Engineering, Inc. has a vision for every client we engage. That vision is to achieve on-time and on-budget program launch more efficiently than the competition. To realize this vision, customers need to achieve predictable test systems development, eliminate waste in test information management, and drive increased leverage of test assets. An underlying requirement for all of these areas is metrics tracking and measurement-based decision making.
    Job Description
    Ready to make a difference? Bring your experiences and skills to the industry leading test organization. Help us to continue to shape the way the world views test. We are seeking a talented Systems Engineer Contractor to be responsible for technical execution of successful projects in the Medical, Military, Transportation, Consumer Electronics and Aerospace Industries. The position will have very high visibility to customers and vendors. This is a very fast paced team with close customer contact and strong career development opportunities. A large part of the position is to identify, own and drive technical design, development and installation of test systems. You will work alongside other like-minded and equally talented engineers, and be creative in a fast-paced and flexible environment that encourages you to think outside the box. You will be available to spend extended periods at our customer sites to complete system installations.
    Required
    5+ years of Systems Integration experience
    3+ years LabVIEW experience
    1+ years TestStand experience
    Experience in Implementation and Delivery of Test Systems, including integration
    Experience in ATE usage and development
    Experience in building and Integrating Mechanical Fixtures
    Experience in Understanding the design of Circuit Boards as they relate to a total system, and their fault-finding
    Experience in Taking Part in Technical Teams throughout All Phases of Project Lifecycle
    Experience in Interfacing with Sub-vendors and Customers
    Ability to Multitask
    Comfortable Working on Various Team Sizes
    Excellent Communication Skills
    Desired
    Requirements generation and review experience
    National Instruments Hardware knowledge
    Experience with Source Code Control (SCC)
    Experience executing verification and validation for projects
    Experience generating and/or reviewing cost proposals
    RF Technology (DAQ, General RF Theory)
    FPGA (with LabVIEW)
    Professional software engineering processes and metrics experience
    TortoiseSVN
    V I Package Manager (VIPM)
    Experience with Projects for Regulated Industries
    MS Project
    Formal Education
    Technical degree (BS Engineering, Computer Science, Physics, Math)
    National Instruments Courses a plus
    National Instruments certification a plus
    Notes:
    Expected Travel Time is up to 50%.
    V I Engineering, Inc. offers a dynamic work environment and the flexibility of a small company.
    The Test Software and Integration Group values innovation, out-of-the-box thinking, high-tech toys and a fun / amazingly collaborative working environment. We're a National Instruments Select Integrator, and we're the closest you can get to playing with all the pre-released and new NI toys without joining the NI R&D team - and we get to play with them in the real world.
    To apply for this position, email a cover letter and resume to [email protected] with the subject "TSIG Systems Engineer (Contract) employment application".
    Copyright © 2004-2015 Christopher G. Relf. Some Rights Reserved. This posting is licensed under a Creative Commons Attribution 2.5 License.

