Transaction Management with Tuxedo ATMI or SQL

Hi,
Ours is a leading bank in Saudi Arabia with a network of 70 branches across the country.
We have a number of heterogeneous systems for various banking operations.
Specifically, this question is about choosing between Tuxedo ATMI and SQL for
implementing the various business rules in retail banking, from opening the
customer relationship and accounts through to closing the customer file.
Given the very hectic schedule, and also the concern for long-term maintenance,
please advise which is the right decision:
- Building the business logic in PL/SQL procedures on
the central Oracle server (branches / departments /
ATMs / ... will access this, and the interoperation of all
these services has to be managed; some of these are
XA compliant and some are not)
or
- Building the business rules in Tuxedo ATMI services?
Please note that I have already read another email inquiry (and reply) from Mervin
with the subject 'Transaction management with ATMI or SQL'.
Thanks in advance.
N Dhandapani
Saudi French Bank
Riyadh

Hi,
Implementing the business checks/repository in BEA Tuxedo servers is the cleaner
approach from an architecture and maintenance point of view.
It keeps the application n-tier: the client takes care of only the interface, the
database of only the data storage, and all checks and business rules live in the
middle layer.
Moreover, using Tuxedo brings in some advantages like:
*Load Balancing
*Scalability
*Transaction Monitoring
*Fail-over
*Connection Pooling
*Easier Database Migrations (very easy if only ANSI features are used; otherwise
code changes may be required)
NOTE: Tuxedo services can be developed using ESQL/C (Pro*C for Oracle), which is
database specific. A database-independent approach (a study/analysis may be
required) would be to use a database library that encapsulates the calls, such as
Rogue Wave.
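To make the middle-layer idea concrete, here is a minimal sketch of the kind of business check that would live in a Tuxedo ATMI service rather than in the database. The rule itself (a minimum-balance check on withdrawals) is purely hypothetical:

```c
#include <assert.h>

/* Hypothetical retail-banking rule kept in the middle layer: a
   withdrawal is allowed only if it leaves the account at or above a
   minimum balance. In a Tuxedo server this check would run in the
   service routine before any SQL is issued. */
#define MIN_BALANCE 100.00

int withdrawal_allowed(double balance, double amount) {
    return amount > 0 && balance - amount >= MIN_BALANCE;
}
```

Keeping rules in functions like this, rather than scattered across PL/SQL and clients, is what makes the middle tier the single place to change when a rule changes.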
HTH
Best Regards
MS

Similar Messages

  • Ricart-Agrawala algorithm with Tuxedo ATMI/Queue components and C++ servers.

    Hi. I work on a large ERP system that uses Tuxedo ATMI services and queues, and I am researching how to add to our framework the ability to have servers with RPC-enabled threads, which are needed to implement the Ricart-Agrawala algorithm for distributed mutual exclusion. In this case there is a specific requirement to be able to bind the addresses for such RPC calls, as with DCE RPC, but from looking at the Tux DCE bridge examples and documentation I don't believe that is the right choice here. I think I either want to use sockets in those server threads for the communications, so I can give each server instance taking part in an election the addresses of the other nodes for sending request/reply messages, or possibly SALT/SCA. Can you please let me know whether, with SALT/SCA services, I would be able to bind a call to a particular endpoint? If not, then based on the information I have provided, do you believe it is possible to implement a solution to this algorithm using a combination of Tux ATMI/Q and DCE RPC with the TuxRPC bridge? Thanks for any insights you can give.
    Jeremy

    Hi Todd,
    Thanks, the event broker subscriptions sound like they could well be what I need. It will be a good exercise to determine how to use the publications and subscriptions to refer to particular nodes, but with a regular expression to match the individual events I do not foresee too much trouble.
    You're right to be suspicious, but I think I can do a better job of explaining the use cases, and then you may find it more reasonable than it originally appears. To address your metaphor: I agree it may be overkill in some ways, but I want to give the sledgehammer a test drive, and I really want that bug to die. There is also the foresight that the bugs may grow in size later, making this a useful thing to have. I want to have:
    1) A solution that is extensible for solving other problems.
    2) The related functionality introduced into our codebase so it can be used for 1).
    3) No new proprietary license.
    4) In the history of the development of enterprise patterns, we have devised what I would call Fowler's cornerstones of object-relational design. It involves taking power out of the database, such as establishing and maintaining object relationships and persisting object hierarchies, all the things relational databases do poorly, and developing abstractions of these concepts in our code. It follows, for a few reasons, that I would want to do the same for the next layer between me and the database. If I can take power out of there it suits me, because:
    4a) Writing up domain objects and their GUI code is incredibly boring.
    4b) Although my opinion is tempered by others here many times over, I believe the fact that Tux is proprietary is a hindrance to me. If I can free myself of the shackles of ever more intrusion into the overhead costs of the software, that seems like a viable enough business case to appease the bean counters (not to say they are funding my foray).
    4c) What if we wanted to get rid of Tuxedo? Then what would I do? I would be stuck re-implementing tpsubscribe, etc., instead of having just done it with only POSIX calls from day one.
    Current use case:
    Two invoices reference one order. Two separate servers pick up the separate invoices and attempt to turn them into a payment, but one fails because the order was modified in the other server's unit of work. The transaction fails and the server will retry, first based on the number of server message retries; this basically just puts the message back onto the queue to get picked up again with the next service invocation. The other case is that the server retries are set to 1 and then the queue retries come into play. That all works fine. What I was trying to convey is that when I debug it, it always works fine and the message goes to the error queue. When the client account team debugs it, though, they get confused and can't figure out where the message went. They also don't like the error queue behavior, because it is currently a manual effort. That's a separate problem I'd like to solve too, but that's for another day. You may see that this is a mess and I don't have control over how all this goes. People aren't too interested in really understanding what's going on enough to say whether it is behaving properly or not. So I still have this defect assigned to me from about a year ago, but it is in a release.x status, meaning it will appear "in some release x" :)
    Okay.
    For the use cases I envision with this algorithm:
    A message is queued up that uses order 1, invoice 1. In what I will call RA1 (Ricart-Agrawala implementation 1), I expect it to send out the requests, since this is a "marked" service that must perform an election due to the potential for bad behavior. The events will be fired and the other nodes will respond "Go ahead", except for this one other node, who gets the event and says "okay, well, I just sent out a request with logical clock value 1 too", etc.; the semantics of the algorithm take over. Eventually the first node gets its responses and goes through. Then node 2 gets the response it needed from node 1 and proceeds. No retries, no error queue.
    In RA2 I would use some extra information. First, can't I query and determine whether I need to do an election at all, by determining whether this order is referenced by more than one invoice and there is a possibility for contention? Secondly, I know that in one case the server picks up a request that has orders, invoices, and some other objects, say vendors and some other cruft. But this is some other service call, not our original service of just orders and invoices. So there are two separate calls, but both can potentially have this oplock collision. In this case the servers have to be smart enough to have some sort of heap structure of service calls to traverse, to know that service call key shape 1304 and service call key shape 1305 can collide, so an election must be performed. To me this is a more advanced implementation, but fairly strictly required in a performance sense, since I don't want to be doing too many elections or too few. Since the heap traversal should be O(lg n) I am not too worried about that, but maybe later I will realize it is a problem.
    We are using queues for the incoming messages for most servers. Some servers use what is called a "polling server" setup, where they call their own single advertised service themselves, so multiple servers can poll one queue, repeatedly polling and waking up, as opposed to the other servers, which have the option of calling tpdequeue or being on the receiving end of a tpcall. Retrying the request is a problem due to the potential for error queue messages. In the case of this RA algorithm I am amortizing the cost of potential retries/failures into the election process.
    I hope I have explained what I think about Tuxedo as a middleware solution, in terms of the ethics of its proprietary nature (yes, our software is proprietary too, so it's a bit silly, but I think you get me), and of having the power in my own hands of knowing and understanding all of this. To me it's like any other exercise, like lifting weights: the more familiarity you have with the thought process, the easier it becomes to recognize when something is or is not a viable solution, or what could be done to turn it into one. I'm not interested in the easy way out here. Maybe it is a sign that I have to figure out other new ways of writing the software that I want, but for now this is my compromise.
    Thanks again Todd for your great insights.
    Jeremy
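    Independent of the transport chosen (sockets, SALT/SCA, or the event broker), the decision rule at the core of Ricart-Agrawala is small enough to sketch on its own. This is only the request-comparison logic, with message delivery left out entirely:

```c
#include <assert.h>

/* The tie-breaking rule at the heart of Ricart-Agrawala: request
   (ts_a, id_a) has priority over (ts_b, id_b) iff its Lamport
   timestamp is smaller, with the node id breaking ties. */
int has_priority(int ts_a, int id_a, int ts_b, int id_b) {
    return ts_a < ts_b || (ts_a == ts_b && id_a < id_b);
}

/* On receiving a request, a node defers its reply only when it is
   itself requesting the resource and its own request has priority;
   otherwise it replies "Go ahead" immediately. */
int should_defer(int requesting, int own_ts, int own_id,
                 int in_ts, int in_id) {
    return requesting && has_priority(own_ts, own_id, in_ts, in_id);
}
```

    Everything else in the algorithm (queuing deferred replies and releasing them on exit from the critical section) sits on top of this comparison.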

  • Transaction Management with ATMI or SQL

    If a system has only a single database server, that is, no distribution or heterogeneity,
    application servers can use either ATMI or SQL to manage transactions. My question is: which
    method is better? In that case, is it unnecessary to use the TM service
    of Tuxedo? Any comments are welcome.

    Brian:
    If you use the database's begin, commit, and rollback you must have all the database
    work related to that transaction performed in one Tuxedo service. If you use
    XA connections to the database and use the tp/tx family of function calls you
    can split a transaction across multiple servers/services and it will be tied together
    by Tuxedo.
    Sometimes this is nice. Let's say your business transaction comprises three distinct
    database operations which must be included in the same transaction. If you put
    these in different services you can get individual timings for each operation
    simply by turning on txrpt.
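    As a sketch of the second style: a client (or another service) brackets several service calls in one global transaction with the tp/tx verbs, and the Tuxedo TMS ties them together with a two-phase commit. The service names here are hypothetical, and this only builds in a Tuxedo environment against atmi.h:

```c
#include <atmi.h>

/* Call three services inside one Tuxedo-managed transaction; each
   service performs one of the three database operations, and the TMS
   coordinates the XA commit across them. */
int do_business_txn(char *buf, long len) {
    long olen = len;
    if (tpbegin(30, 0) == -1)                 /* 30-second timeout */
        return -1;
    if (tpcall("DEBIT",  buf, len, &buf, &olen, 0) == -1 ||
        tpcall("CREDIT", buf, len, &buf, &olen, 0) == -1 ||
        tpcall("AUDIT",  buf, len, &buf, &olen, 0) == -1) {
        tpabort(0);                           /* roll back all three */
        return -1;
    }
    return tpcommit(0);                       /* two-phase commit */
}
```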
    hope this helps.
    mervin

  • A Tuxedo server hangs at tmboot with Tuxedo 12.1.3, but works fine with 10.0

    We have been running a Tuxedo server with pretty much the same logic as in the sample code below in our systems for years, on AIX (OS level 6100) with Tuxedo 10 32-bit. We are now upgrading to Tuxedo 12.1.3 (12cR2). The code compiles fine with Tuxedo 12; however, it just hangs at tmboot. We have tried multiple servers (all AIX), but it hung at tmboot on all of them. Has anyone in the forum run into the same problem?
    The source code ForkSrv.c:
    #include <unistd.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <atmi.h>
    void    doChildProcess();
    void    launchChildProcess();
    static pid_t m_iChildPid = -1;
    /** FUNCTION: tpsvrinit */
    int tpsvrinit( int argc, char **argv ) {
      launchChildProcess();
      userlog("Service initialized: (pid=%d)\n", getpid());
      return 0;
    }
    /** FUNCTION: killChild
    *  DESCRIPTION: send SIGTERM to the child process and wait for it to terminate. */
    void killChild() {
      int iChildStatus;
      if (m_iChildPid > 0) {
        kill(m_iChildPid, SIGTERM);
        userlog("Service (pid=%d) kill child process %d\n",
              getpid(), m_iChildPid);
        wait(&iChildStatus);
        userlog("Service (pid=%d) killed child process %d\n",
              getpid(), m_iChildPid);
      }
    }
    /** FUNCTION: tpsvrdone
    *  DESCRIPTION: terminate the child process and do other clean-ups */
    void tpsvrdone(void) {
      killChild();
      userlog("Service done: (pid=%d)\n", getpid());
    }
    /** FUNCTION: ForkSvc
    *  DESCRIPTION: service function */
    void ForkSvc(TPSVCINFO *tpinfo) {
      userlog("Service call: (pid=%d)\n", getpid());
      tpreturn(TPSUCCESS, 0, tpinfo->data, 0, 0);
    }
    /** FUNCTION: launchChildProcess
    *  DESCRIPTION: launch the child process. If the child process exists, terminate it first. */
    void launchChildProcess() {
      m_iChildPid = fork();
      switch (m_iChildPid) {
      case -1: /* error */
        userlog("launchChildProcess: Service failed to fork: (pid=%d)\n", getpid());
        break;
      case 0: /* child */
        doChildProcess();
        exit(0);
        break;
      default: /* parent */
        userlog("launchChildProcess: Child created with pid=%d\n", m_iChildPid);
        break;
      }
    }
    /** FUNCTION: doChildProcess
    *  DESCRIPTION: child process routine */
    void doChildProcess() {
      sleep(100000);
      userlog("doChildProcess: Service child exited: (pid=%d)\n", getpid());
      exit(0);
    }
    The Makefile fork.mak:
    CC=cc
    SERVERS=ForkSrv
    server: $(SERVERS)
    all:ForkSrv
    ForkSrv: ForkSrv.o
            buildserver -t -o ForkSrv -s ForkSvc -f ForkSrv.o
    ForkSrv.o:ForkSrv.c
            $(CC) -c -I${TUXDIR}/include -o ForkSrv.o ForkSrv.c

    We have a Tuxedo service which needs to communicate with a POS device over a socket. The parent process provides the Tuxedo service; the child process provides the connection management for the device. An unnamed pipe is used for communication between the parent and the child. In the child process there is no code related to Tuxedo. The benefit of this design is that the Tuxedo server does not need to wait for a connection from the device when it boots up, and the Tuxedo service does not need to wait for a connection from the device when the service is called.
    The Tuxedo server was developed 10 years ago and worked fine until we upgraded Tuxedo from 10 to 12 recently. That means it worked for 10 years, in Tuxedo 6.5 and Tuxedo 10. But in Tuxedo 12, tmboot does not return for this Tuxedo server; we have to press CTRL-C and answer yes to cancel. After cancelling, the Tuxedo service seems to work fine.
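    The parent/child-over-a-pipe design described above can be reduced to a few lines of plain POSIX C, with no Tuxedo involved; run_demo and the "device-ready" message are illustrative names only:

```c
#include <assert.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal shape of the design described above: the parent (the
   would-be Tuxedo server) talks to a connection-managing child
   through an unnamed pipe, so the service itself never blocks
   waiting on the device. */
int run_demo(char *buf, size_t len) {
    int fd[2];
    pid_t pid;
    if (pipe(fd) != 0) return -1;
    pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                   /* child: owns the device link */
        const char *msg = "device-ready";
        close(fd[0]);
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                     /* parent: read child's status */
    ssize_t n = read(fd[0], buf, len);
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return n > 0 ? 0 : -1;
}
```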

  • Tuxedo buildserver: using SQL in COBOL applications

    Hi,
    I'm trying to use SQL through COBOL applications in a Tuxedo (8.0) environment.
    I've added the ODBC libpath to the buildserver command. The execution/compilation of buildserver seems to be OK, but upon execution of the COBOL program the SQL CONNECT fails with SQL error 10000.
    I have no idea how to make SQL work. Can anyone help?
    I read about the RM (resource manager); is this required to make it work?
    Thanks!
    Hugo
    Our platform is HP-UX and we use SQL/COBOL outside Tuxedo without problems.
    This is the make file :
    # Fc 970618: incremental compilation for Cobol added
    TUXINC=$(TUXDIR)/include
    BTNINC=$(BTNDIR)/incl
    INCLUDES=-I $(TUXINC) -I $(BTNINC) -I /jates/progs/srcs
    ODBCLIBS="-L /usr/local/unixODBC/lib -lodbc"
    COBOPT="-t"
    COBCPY="/jates/tuxedo80/cobinclude:/jates/btn/btndevl/incl:/jates/cobol/cobol4000sp2/cpylib"
    #avoid unwanted C-compiler warnings
    NLSPATH=$NLSPATH:/opt/ansic/lib/nls/msg/C/%N.cat
    # btnrouter
    btnrouter: /jates/progs/tps/btnrouter.cbl /jates/progs/tps/btn400.cbl /jates/progs/tps/mod461.cbl /jates/progs/tps/mod470.cbl
         buildserver -C -v -o $@ \
    -f /jates/progs/tps/btnrouter.cbl \
    -f /jates/progs/tps/btn400.cbl \
    -f /jates/progs/tps/mod461.cbl \
    -f /jates/progs/tps/mod470.cbl \
    -f /jates/progs/tps/res400.cbl \
    -f ${ODBCLIBS} \
    -s SJETAIR
         -tmshutdown -s $@
         cp -p $@ ..
         -tmboot -s $@
    # general instructions
    .SUFFIXES: .cbl .c .o
    .c.o: $(BTNINC)/fml_flds.h
         cc -c $(INCLUDES) $<
    .cbl.o: $(BTNINC)/fml_flds.h
         cob -xc $<
    #******************************************************

    The normal way to use Tuxedo with an XA-compliant resource manager is to
    1. Have the Tuxedo administrator add a line for the resource manager to the
    $TUXDIR/udataobj/RM file including the resource manager name, XA switch
    name, and libraries required for linking.
    2. Build servers using the resource manager with the "-r rmname" option.
    This will include the resource manager lines specified in
    $TUXDIR/udataobj/RM in the buildserver line, and the application will not
    need to manually provide the libraries to buildserver.
    3. If using transactions, have the Tuxedo administrator build a TMS process
    for the RM using the buildtms command, or do this yourself. If not using
    transactions, this step can be omitted.
    When a server is built with the "-r rmname" option, Tuxedo will
    automatically call TPOPEN to connect to the resource manager within
    TPSVRINIT.
    (The only exception to this is if the application programmer replaces the
    default version of TPSVRINIT with their own version and does not include a
    call to TPOPEN, so it is good to verify that this is not the case.) Since
    Tuxedo opens the resource manager when the server is started, there is no
    need to include SQL CONNECT statements within the application logic in such
    a server.
    If your resource manager is not XA compliant then you will need to manage
    connection to the database yourself, but most databases are XA compliant
    nowadays.
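    For reference, the three steps above look roughly like this for Oracle. The entry name Oracle_XA, the switch name xaosw, and the library list are typical examples only and vary by database release, so treat this as a template rather than a recipe:

```shell
# 1. Line added by the administrator to $TUXDIR/udataobj/RM
#    (format: rm_name:xa_switch_name:link_libraries)
#    Oracle_XA:xaosw:-L${ORACLE_HOME}/lib -lclntsh

# 2. Build the server against that resource manager entry
buildserver -r Oracle_XA -o acctsrv -s ACCT_OPEN -f acctsrv.c

# 3. Build a transaction manager server (TMS) for the same RM
buildtms -r Oracle_XA -o TMS_ORA
```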

  • If OCCI can work with Tuxedo, are there any restrictions?

    We want to use OCCI to access Oracle in a Tuxedo service.
    If OCCI can work with Tuxedo, are there any restrictions?
    Can someone describe how to use it, step by step?
    Thanks a lot!
    mail: [email protected]

    Hi,
    Yes OCCI can be used to access Oracle database from Tuxedo. If you want Tuxedo to manage transactions for you then you need to follow essentially the same approach as in using OCI or Embedded SQL. Since you are accessing a resource manager, you will need to define an OPENINFO string in the Tuxedo RM file located in the udataobj directory of the Tuxedo installation. Likewise you will need to build an RM specific TMS for the Oracle Database and specify the name of the RM entry in the buildtms and buildserver commands with the -r switch.
    Before going into a step by step set of instructions, can you describe what you know about Tuxedo and OCCI? Have you built Tuxedo servers before and have they accessed a resource manager?
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect

  • Managing literal values in PL/SQL

    Running on Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production With the Partitioning and OLAP options JServer Release 9.2.0.8.0 - Production
    We need to manipulate some specific values at runtime, and decided on storing these in a table. Our PL/SQL code retrieves these values to use in various queries etc... such as limit a cursor to a certain number of rows, or change some constant in a package etc.
    I've looked into Steven Feuerstein's best practices site with regards to managing "magic values" in PL/SQL. We've implemented a solution as per his column: http://apex.oracle.com/pls/otn/f?p=2853:4:271093498565934::NO::P4_QA_ID:16382
    Basically a table holds our "literal" value data, with a packaged function to retrieve a given value by its name from another PL/SQL package.
    Now, to avoid multiple calls to the literal-value get functions, I create a private global variable in the package body and retrieve the given literal value into it. I then re-use this global variable throughout the package body.
    For instance:
    g_bulk_mailer_email VARCHAR2(255) := pkg_application_constant.varchar2_value('BULK_MAILER_EMAIL_ADDRESS');
    The problem I'm facing is: when I change the literal value BULK_MAILER_EMAIL_ADDRESS in the database table, the change doesn't apply to my global variable g_bulk_mailer_email in my package body. I assume this is because the package is already initialized and pinned in the UGA?
    Is this a case for the SERIALLY_REUSABLE pragma, or bad implementation/design on my part?
    The one alternative I thought of is to simply not use the global variable, and simply make multiple local calls where needed in my package subprograms, so it always gets the latest values from the literal value table.
    p.s. Once we migrate to 11g, I plan on using the function result cache...
    Thanks for any tips/advice
    Stephane

    > The problem I'm facing is, when I change the literal value BULK_MAILER_EMAIL_ADDRESS, in the database table, it doesn't apply to my global variable g_bulk_mailer_email in my package body. I assume because the package is already initialized and pinned in the UGA?
    Highly likely so. One way can be to reinitialize the package variable, but this works only if you change the database value inside the same session; other sessions would not be affected.
    There should be other ways too.
    Flushing the shared pool would probably work, but should not be done, because it harms the overall performance of your system more than it helps. There could be a way to invalidate the package, but then the next access from each session would get an error (invalid package state) and the second access would reload it all.
    > The one alternative I thought of is to simply not use the global variable, and simply make multiple local calls where needed in my package subprograms, so it always gets the latest values from the literal value table.
    It is a tradeoff between performance and accuracy. But if the literal value table is pinned in the SGA, then Oracle will take care of optimizing access to it. This doesn't mean that your approach is badly designed, just that there are different things to consider.

  • Integrating Java caps with tuxedo

    Hi,
    Could anyone let me know the best way of integrating Java CAPS 6 with Tuxedo?
    Regards,
    Abdul

    But the C code that gets called from Java doesn't seem to have access to the data known by the C code that called the Java.
    I don't know exactly what you mean here, unless you're simply saying that the scope of your C data makes it inaccessible between functions.
    Is there a way to pass pointers into Java?
    Yep - just treat the pointers as opaque types and wrap them in something large enough for the platform - for example, a jlong. You need to be careful in managing the lifetime of the pointers with respect to the Java objects that hold onto them - for example, with function pointers that come from a dynamically loaded shared library, or any data pointers that you might be passing around. You probably want to read the section on native peers in the JNI programmer's guide.
    God bless,
    -Toby Reyelts
    Check out the free, open-source, JNI toolkit, Jace - http://jace.reyelts.com/jace
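    The "wrap the pointer in something large enough" advice can be shown without any JNI headers: packing a native pointer into a 64-bit integer and recovering it is exactly the round trip JNI code would do with the jlong stored on the Java peer. A minimal sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Pack a native pointer into a 64-bit handle (what JNI code would
   store in a jlong field of the Java peer object) and recover it. */
int64_t to_handle(void *p) {
    return (int64_t)(intptr_t)p;
}

void *from_handle(int64_t h) {
    return (void *)(intptr_t)h;
}
```

    As the reply notes, the hard part is not the cast but the lifetime: the native object must stay alive as long as some Java object holds the handle.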

  • My drive recently had to be replaced with a new one installed by Apple; they reinstalled all my stuff from my Time Machine. Most of my programs had to be updated, which I managed with a little help from my friends, but the last one that I can't solve is

    My iMac (OS X 10.6.8, 2.8 GHz Intel Core 2 Duo, 4 GB 800 MHz DDR2 SDRAM) recently had to have its hard drive replaced by Apple; they actually had to give me a larger one because mine was no longer available. They also reinstalled all my programs and data from my 2 TB Time Machine backup. I got my system home and found that I had to upgrade most of my programs, which I managed with a little help from my friends, but I still have one problem that I can't solve. I have a Nikon Coolpix S6 that I have been syncing with iPhoto since I got it 3 years ago, and now when I place it in the cradle the program recognizes it and says that it is going to start to import the new photos; the little white wheel in the center of the screen starts spinning, but nothing else happens. I checked all my connections and they are good. I even downloaded a Nikon program just to double-check the camera and cradle, and it works there, but it won't pair with iPhoto.

    First go to iPhoto Preferences, look in both General and Advanced tabs to make sure that things are set to import from camera into iPhoto.
    Then if that doesn't help, connect the camera and open Image Capture in your Applications > Utilities folder and see if you can use Image Capture to reset the import path from the camera to iPhoto.
    Image Capture: Free import tool on Mac OS X - Macgasm
    Message was edited by: den.thed
    Sorry John, I didn't see your post when I clicked the reply button. Dennis

  • Using REF with object table in SQL Developer

    When I create object tables and fill them with data, the REF value isn't displayed in SQL Developer.
    I did the following:
    CREATE TYPE adres_type AS OBJECT
    (straat VARCHAR2(20)
    ,nummer VARCHAR2(10)
    ,postcode VARCHAR2(6)
    ,plaats VARCHAR2(50));
    CREATE TABLE adressen of adres_type;
    CREATE TYPE locatie_type AS OBJECT
    (nr NUMBER
    ,naam VARCHAR2(20)
    ,adres REF adres_type);
    CREATE TABLE locaties OF locatie_type
    (SCOPE FOR (adres) IS adressen);
    insert into adressen values (adres_type('Arnhemsestraatweg', '33','6881ND','Velp'));
    insert into locaties values (1,'Directie', (select ref(a) from adressen a where a.plaats = 'Velp'));
    Then in SQL Developer the REF(A) column is empty, while in SQL*Plus it displays the REF value:
    In SQL Developer: SELECT a.*, REF(a) FROM adressen a;
    STRAAT NUMMER POSTCODE PLAATS REF(A)
    Arnhemsestraatweg 33 6881ND Velp
    In SQLPLUS: SELECT a.*, REF(a) FROM adressen a;
    STRAAT NUMMER POSTCODE PLAATS REF(A)
    Arnhemsestraatweg 33 6881ND Velp 0000280209C70341FBB96B4F77813B27B50E53BB4332382E22ADD64AD9B755F651D416B6DA010134
    Is this a bug, or is there another reason why the ID doesn't display in SQL Developer?
    (This didn't work in any of the previous SQL Developer releases, and still doesn't in the 2.1 EA version.)

    Hi <not sure of your first name>,
    I have replicated the issue and logged a bug against it:
    Bug 9102579 - FORUM: REF FUNCTION NOT RETURNING CORRECT RESULT
    Regards,
    Dermot O'Neill
    SQL Developer Team

  • Execute procedure with out parameter in sql*plus

    Hi All,
    I am executing a stored procedure with an OUT parameter from SQL*Plus.
    I get this error message:
    SQL> execute sp1_cr_ln_num('01',0,3);
    BEGIN sp1_cr_ln_num('01',0,3); END;
    ERROR at line 1:
    ORA-06550: line 1, column 7:
    PLS-00306: wrong number or types of arguments in call to
    'sp1_cr_ln_num'
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    Whereas it works fine using Toad. The 4th parameter is for output.
    Thanks.

    Then you can see the value using either PRINT :var or EXECUTE dbms_output.put_line(:var).

  • How to export data with column headers in SQL Server 2008 with the bcp command?

    Hi all,
    I want to know how to export data with column headers in SQL Server 2008 with the bcp command. I know how to do this with the Import and Export Wizard; when I try to export data with the bcp command, the data is copied but the column names do not come with it.
    I am using the below query:-
    EXEC master..xp_cmdshell
    'BCP "SELECT  * FROM   [tempdb].[dbo].[VBAS_ErrorLog] " QUERYOUT "D:\Temp\SQLServer.log" -c -t , -T -S SERVER-A'
    Thanks,
    SAAD.

    Hi All,
    I have done as per your suggestion, but I am facing the problem below: the PRINT statement gives the correct query, but EXEC master..xp_cmdshell @BCPCMD displays the error message shown below.
    DECLARE @BCPCMD nvarchar(4000)
    DECLARE @BCPCMD1 nvarchar(4000)
    DECLARE @BCPCMD2 nvarchar(4000)
    DECLARE @SQLEXPRESS varchar(50)
    DECLARE @filepath nvarchar(150), @SQLServer varchar(50)
    SET @filepath = N'"D:\Temp\LDH_SQLErrorlog_' + CAST(YEAR(GETDATE()) as varchar(4))
        + RIGHT('00' + CAST(MONTH(GETDATE()) as varchar(2)), 2)
        + RIGHT('00' + CAST(DAY(GETDATE()) as varchar(2)), 2) + '.log" '
    SET @SQLServer = (SELECT @@SERVERNAME)
    SELECT @BCPCMD1 = '''BCP "SELECT * FROM [tempdb].[dbo].[wErrorLog] " QUERYOUT '
    SELECT @BCPCMD2 = '-c -t , -T -S ' + @SQLServer + ''''
    SET @BCPCMD = @BCPCMD1 + @filepath + @BCPCMD2
    Print @BCPCMD
    -- Print output below:
    'BCP "SELECT * FROM [tempdb].[dbo].[wErrorLog] " QUERYOUT "D:\Temp\LDH_SQLErrorlog_20130313.log" -c -t , -T -S servername'
    EXEC master..xp_cmdshell @BCPCMD
    ''BCP' is not recognized as an internal or external command,
    operable program or batch file.
    NULL
    If I copy the printed output as below and execute it in the same way, it works fine. Could you please suggest what the problem is in the above query?
    EXEC master..xp_cmdshell 'BCP "SELECT  * FROM [tempdb].[dbo].[wErrorLog] " QUERYOUT "D:\Temp\LDH_SQLErrorlog_20130313.log" -c -t , -T -S servername '
    Thanks, SAAD.

  • How many domains can Prime Collaboration Advanced manage with the BE6000?

    The BE6000 Administration guide states that "Most BE6K deployments have a single domain as part of a Standard Prime installation. Multiple domains are available with Prime Collaboration Advanced (available for purchase) that can be used for complex Business Edition 6000 deployments."
    How many domains can Prime Collaboration Advanced manage with the BE6000 solution? How do we order and deploy Prime Collaboration Advanced with the BE6000 solution?

    http://docwiki.cisco.com/wiki/System_Capacity_for_Cisco_Prime_Collaboration_10.0

  • How do I disable Firefox's download functionality? I would like to use another download manager with Firefox... many thanks, Bruce

    Hi, I am using version 3.6.15 with Windows 7. I would like to use another download program in place of Firefox's default option; how can I disable this in Firefox?
    Many thanks
    Bruce Baxter

    Use this extension to integrate an external download manager with Firefox:
    https://addons.mozilla.org/firefox/220/
    http://www.flashgot.net/whats

  • My iPod 4th generation won't show up in iTunes or on my computer, but there is an entry for a USB mass storage device in Device Manager with a yellow triangle next to it. How do I get it to sync? HELP!!!

    My iPod 4th generation won't show up in iTunes or on my computer, but there is an entry for a USB mass storage device in Device Manager with a yellow triangle next to it. How do I get it to sync? HELP!!!

    Here:
    iOS: Device not recognized in iTunes for Windows
    I would start with
    Removing and reinstalling iTunes, QuickTime, and other software components for Windows Vista or Windows 7
    or
    Removing and Reinstalling iTunes, QuickTime, and other software components for Windows XP

    Hi Experts,                   My scenerio is that when a record is inserted from a program to a table i need to added the same values to another table which is on different server(ex: BW). Is it necessary to create events to do that because i don't n