Best Practice: continuously running db procedure

I've written a database procedure that pulls messages off an AQ, does some processing on them, and stores the results in a table. I'd like the procedure to run continuously. I also call the same procedure with different parameters, which determine which messages get pulled off. My questions are these:
1. What is the best practice for keeping this procedure running continuously? If the client-side connection is eventually terminated, will the process keep running? Should I set a timeout somewhere to mean "no timeout"?
2. How do I determine which procedure instances are running? I'm thinking I may need to create different schemas that have execute privilege for the different instances, so that I can at least tell which process is which. Is there a better way to tell which is which if I need to kill one?
thanks,
dan

> 1. What is the best practice for keeping this procedure running continuously? If the client-side
> connection is eventually terminated, will the process keep running? Should I set a timeout
> somewhere for no timeout?
DBMS_JOB or DBMS_SCHEDULER processes are ideal as these have no client part.
As for a client: when it dies, it usually takes its server process with it. As soon as Oracle notices that the client is gone (usually when it attempts to send data to it), it will terminate the dedicated server process that serviced the client, or clean up the virtual circuit of the shared server session that serviced that client.
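For example, a minimal DBMS_SCHEDULER sketch of one repeating job per parameter set (the schema, procedure and argument names here are assumed, not from the original post):

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name            => 'PROCESS_AQ_TYPE_A',
        job_type            => 'STORED_PROCEDURE',
        job_action          => 'MY_SCHEMA.PROCESS_AQ_MESSAGES',  -- dequeues and processes one batch per run
        number_of_arguments => 1,
        repeat_interval     => 'FREQ=MINUTELY;INTERVAL=1',       -- re-run every minute
        enabled             => FALSE);

      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(
        job_name          => 'PROCESS_AQ_TYPE_A',
        argument_position => 1,
        argument_value    => 'TYPE_A');                          -- which messages this instance pulls

      DBMS_SCHEDULER.ENABLE('PROCESS_AQ_TYPE_A');
    END;
    /

Giving each parameter set its own job name ('PROCESS_AQ_TYPE_B', and so on) also makes the instances easy to tell apart later.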
> 2. How to determine which procedure instances are running. I'm thinking I may need to
> create different schemas that have execute privilege for the different instances so that
> I can at least tell which process is which. Is there a better way to tell which is which if I
> need to kill one?
With DBMS_JOB/DBMS_SCHEDULER it is easy: you check the running-jobs views (DBA_JOBS_RUNNING for DBMS_JOB, DBA_SCHEDULER_RUNNING_JOBS for DBMS_SCHEDULER). Details on these are in the Oracle® Database Reference guide: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/toc.htm
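For example, a quick look at what is currently running, using the standard data dictionary views:

    -- Scheduler jobs running right now, and the sessions executing them
    SELECT job_name, session_id, elapsed_time
    FROM   dba_scheduler_running_jobs;

    -- Legacy DBMS_JOB jobs
    SELECT job, sid, this_date
    FROM   dba_jobs_running;

Because each parameter set can be given its own job name, you can tell the instances apart (and stop a specific one) without creating separate schemas.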

Similar Messages

  • Best practice to run Microsoft Endpoint Protection client in VDI environment

    We are using a Citrix XenDesktop VDI environment. The Symantec Endpoint Protection client (VDI performance optimised) has been installed on the virtual machine image that is streamed to the clients. Basically, all the files in the golden image have been "tattooed" with a
    Symantec signature. Now, when a new VM starts, the Symantec scan engine simply ignores the "tattooed" files and also randomises scan times. This is a rough explanation, but I hope you've got the idea.
    We are switching from Symantec to Microsoft Endpoint Protection, and I'm looking for any information and documentation regarding best practices for running Microsoft Endpoint Protection clients in a VDI environment.
    Thanks in advance.

    I see this post is a bit old, but the organization I'm with has a very large VDI deployment using VMware. We are also using SCEP 2012 for the AV.
    Did you find out what you were looking for, or did you elect to take a different direction?
    We install SCEP 2012 into the base image and manage the settings using GPO; definition updates come through the normal route.
    Our biggest challenge is getting alert messages from the clients.
    Thanks

  • Best practices to use stored procedure

    Just wondering about best practices for using stored procedures in TopLink with respect to objects. Any thoughts on this?
    I find the approach suggested in "Re: Coding for Stored Procedures is a lot of work!" to be fine.
    Are there any thoughts on converting results directly into Java objects?
    Murali

    I encountered the same problems.
    See the topic I posted: Re: Mapping a Java attribute to the result of a function call
    The solution I used is good, but has its restrictions.
    I created a database view on the query that uses stored functions. Then, I mapped my object to the database view. Problem solved.
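    A minimal sketch of that idea, with hypothetical names (the function, table and columns below are not from the original thread):

        -- Wrap the stored-function call in a view...
        CREATE OR REPLACE VIEW customer_balance_v AS
        SELECT c.customer_id,
               c.name,
               get_outstanding_balance(c.customer_id) AS outstanding_balance  -- stored function
        FROM   customers c;

        -- ...then map the Java class to CUSTOMER_BALANCE_V instead of CUSTOMERS.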
    However, like I said, this solution has its restrictions:
    1) Your database must support views
    2) This only works for read-only queries
    I hope this helps you further.
    Kind regards,
    Erwin

  • Best practice for calling stored procedures as target

    The scenario is this:
    1) The source is a file or an Oracle table.
    2) The target will always be Oracle PL/SQL stored procedures which do the insert or update (APIs).
    3) Each failure from the stored procedure must log an error so the user can re-submit a corrected file for those error records.
    There is no option to create an E$ table, since there is no control option for the flow around procedures.
    Is there a best practice around moving data into Oracle via procedures? In Oracle EBS, many of the interfaces are pure stored procedures rather than batch interface tables. I am concerned that I must build dozens of custom error tables around these APIs. Then it feels like it would be easier to just write PL/SQL batch jobs and schedule them with the concurrent manager in EBS (skipping ODI completely). In that case, one could write to the concurrent manager log, and the user could view the errors and correct them.
    I can get a simple procedure to work in ODI where the source is the SQL and the target is the PL/SQL call to the stored procedure in the database. It loops through every row in the SQL source and calls the PL/SQL code.
    But I cannot see how to flag which rows have failed, or which table would log the errors to begin with.
    Thank you,
    Erik

    Hi Erik,
    Please, take a look in these posts:
    http://odiexperts.com/?p=666
    http://odiexperts.com/?p=742
    They may help you solve your problem.
    I have already used this approach to call Oracle EBS APIs, and it worked pretty well.
    I believe an IKM could be built to automate all the work, but I never stopped to try...
    Does this help you?
    Cezar Santos
    http://odiexperts.com
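    As a rough illustration of the per-row pattern Erik describes, the call to the target API could be wrapped so that failures land in a custom error table (every name and parameter below is hypothetical, not from ODI or EBS):

        -- Call the target API for one source row; record any failure so the
        -- user can correct the source file and re-submit the failed records.
        BEGIN
          xx_custom_api.process_record(p_id => :source_id, p_amount => :source_amount);
        EXCEPTION
          WHEN OTHERS THEN
            INSERT INTO xx_api_errors (source_id, error_message, error_time)
            VALUES (:source_id, SQLERRM, SYSDATE);
        END;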

  • Best practice for running pcastconfig --sync_library

    Every so often pcastconfig --sync_library fails with a Ruby "method not found" error (uid); if I run it again, it might fail the same way but in a different place, or it might run to completion with no errors. I've taken to turning off Time Machine before running pcastconfig --sync_library, and sometimes I turn off Podcast Producer and Xgrid as well, just to 'feel' safer.
    Does anyone know what the best practice is? --sync_library isn't in the man page for pcastconfig, and the docs don't mention turning anything off before running it.
    Another error I see sometimes is "database locked".
    Any ideas or tips are appreciated.


  • Best practice for running multiple sites on 1 CF install?

    Hi-
    I'm setting up a new hosting environment (Windows Server 2008 Standard 64-bit VPS configuration, MySQL, IIS 7, CF 9).
    Has anyone seen any docs, or can anyone suggest best practices for configuring multiple sites in this environment? At this point I'm thinking simple is best: one new site in IIS for each client (domain), pointed at CF.
    Given this environment, is anyone aware of any gotchas within the setup of CF 9 on IIS 7?
    Thank you in advance,
    Rich

    There's nothing wrong with that approach. You can run as many IIS sites as you like against a single CF install.
    As for installing CF on IIS 7, I recommend that you do the following: install CF 9 without connecting it to IIS, then install the 9.0.1 upgrade and any hotfixes, then connect CF to IIS using the web server configuration utility. This will keep you from having to install the IIS 6 compatibility layer that's needed with CF 9 but not with CF 9.0.1.
    Dave Watts, CTO, Fig Leaf Software
    http://www.figleaf.com/
    http://training.figleaf.com/

  • After vCenter best practice documentation - running as a VM

    Hi there,
    We are currently running our vCenter as a physical machine. A few weeks back I saw something on Twitter saying VMware had changed their best practice to recommend running vCenter as a virtual machine. We are looking into this as a way of running one less physical server.
    Can anyone point me in the direction of any (revised) good practice documentation from vmware?

    Here are a couple of things as well:
    http://www.vmware.com/pdf/vsphere4/r40_u1/vsp_40_u1_esx_vc_installation_guide.pdf
    Although a little dated, it still applies
    http://www.vmware.com/pdf/vi3_vc_in_vm.pdf
    I would also consider setting the restart priority to HIGH for your vCenter VM. If you run your vCenter DB instance on a VM, I would also consider setting up a DRS rule to keep them together for better performance, as well as setting a HIGH restart priority for your vCenter DB VM.

  • Best practice to run BOBJ server

    Is it a best practice to install the BOBJ (BOE) server in the NetWeaver stack?
    It may or may not use BI.
    It may or may not use NetWeaver Portal.
    Having said the above, I would appreciate the best solution for running the BOBJ server.
    Thanks-gopal


  • LCM best practice to run on its own BOE installation?

    Is it a best practice to install/run the Life Cycle Manager on its own BOE installation? The LCM installation documents seem to suggest that LCM can run on an existing BOE installation. I assume that means it can run on a BOE installation that also provides the Webi/Deski/PM/etc. services that users access.
    I am just curious whether it is better to have a separate host that only runs an instance of BOE and LCM, but does not host any other BOE reporting/dashboards.
    Also, is there an "LCM Best Practices" document floating around anywhere?
    Thanks,
    George

    Thanks for the reply!
    Do you know of any specifics as to why it should run on its own machine? I heard this was suggested by someone at a BO user conference, but had not heard any details as to why. Does the LCM use a lot of resources when promoting, or something like that?
    Thanks again.

  • What is the best practice for running a long report/query against an active database?

    We are using SQL Server 2012 EE but currently do not have the option to run queries on a read-only mirror, though that is my long-term goal. I am concerned I may still run into the issue below in that scenario as well, since the mirror would also be updating the data I am querying.
    I have a view that joins across several tables from two databases and is used by an invoicing program on existing data. Three of these tables are also actively updated by ongoing transactions. Running a report that used this view did not use to be a problem, but now our database is getting larger and we have run into some timeout problems for the live transactions coming in.
    First the report query was timing out, so I set the command timeout to 0 and reran the query, which pegged all 4 CPUs at 100% for 90 minutes before I finally killed it. Strangely, there were no problems with active transactions during that time, so I'm wondering whether the query was really doing anything useful or was somehow spinning and waiting. I reviewed the view and found a field I was joining on that was not indexed, created an index on that field, and reran the report, which then finished in three minutes with all the CPUs busy but not pegged. The same data was queried both times. I figured the problem was solved. Of course, later my boss ran a similar invoice report, with the same amount of data, and our live transactions started timing out 100% of the time while his query was running. I did not get a chance to see the CPU usage during that time.
    I looked at the execution plan of the underlying view and added the suggested index, but that did not help. When I run just the view in SQL Server it does not seem to cause any problems and finishes in a couple of seconds. Perhaps something else is going on in the reporting tool that uses the view.
    My main question is: given that I have to use the live and active database, what is the proper way to run a long read-only query/report so that active transactions can still continue to update the tables I am querying? sp_who2 did show transactions being blocked, so I guess a long query accessing the tables blocks live transactions accessing those same tables, but certainly I'm not the only one doing this. I am considering adding "with (nolock)", but I am hoping there is a better standard practice, as that hint can return dirty data and I understand why.
    Thanks, Dave

    Hello
    You can change the DB isolation level to Read uncommitted
    http://technet.microsoft.com/en-us/library/ms378149(v=sql.110).aspx
    or use WITH (NOLOCK)
    I use the NOLOCK option for dirty reads to avoid locks on the tables.
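    For example (the view name is hypothetical; this just illustrates the two options mentioned above, both of which allow dirty reads):

        -- Option 1: change the isolation level for the reporting session
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT * FROM dbo.InvoiceReportView;

        -- Option 2: hint the individual table/view reference instead
        SELECT * FROM dbo.InvoiceReportView WITH (NOLOCK);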
    Javier Villegas |
    @javier_vill | http://sql-javier-villegas.blogspot.com/
    Please click "Propose As Answer" if a post solves your problem or "Vote As Helpful" if a post has been useful to you

  • "Best Practice" for a stored procedure that needs to access two schemas?

    Greetings all,
    When my company's application is deployed, two schema owners are typically created and all database objects divided between the two. I'll call them FIRST and SECOND.
    In a standard, vanilla implementation there is never any reason for the two to "talk to each other". No rights to objects in one schema are ever granted to the other.
    I am currently charged, however, with writing custom code to roll up data from one of the schemas and update tables in the other with the rollups. I have created a user whose job it is to run this process, and this user has the proper permissions to all necessary objects in both schemas. I'll call this user MRBATCH.
    Typically, any custom objects, whether they are additional staging tables, temp tables or stored procedures, are saved in the FIRST schema. I tried to save this new stored procedure in the FIRST schema and compile it, but got "insufficient privileges" errors whenever the code in the stored procedure tried to access any tables in the SECOND schema. This surprised me a little, because I had no plans to actually EXECUTE the stored procedure as FIRST, but I guess I can understand it from the point of view that you ought to be able to execute something you own.
    So which would be "better" (assuming there's any difference): grant FIRST all of the rights it needs in SECOND and save the stored procedure in FIRST, or could I just save the stored procedure in the MRBATCH schema? I'm not sure which would be better practice.
    Is there a third option I'm overlooking perhaps?
    Thanks
    Joe

    In this case I would again put it into a schema THIRD. This is a kind of API schema: there are procedures in it that provide some customized functionality, and since you grant only the right to execute those procedures (they should be packages, of course), you won't get into any conflicts about allowing somebody too much.
    Note that this suggestion is very similar to putting the procedure directly into the executing user MRBATCH. It depends on how this schema user is used; I always prefer separating users from schemas.
    By definition, the Oracle object that represents a schema is identical to the Oracle object that represents a user (exception: externally defined users).
    My definitions are:
    Schema => has objects (tables, packages) and uses tablespace.
    User => has privileges (including create session and connect) and uses temp tablespace only. Might have synonyms and views.
    You can mix both, but sometimes it makes much sense to separate one from the other.
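    A minimal sketch of the API-schema idea (the schema names are from the thread, the object names are hypothetical):

        -- THIRD owns the rollup code and holds only the grants it needs.
        GRANT SELECT ON first.sales_detail TO third;
        GRANT SELECT, UPDATE ON second.sales_rollup TO third;

        CREATE OR REPLACE PACKAGE third.rollup_api AS
          PROCEDURE roll_up_sales;
        END rollup_api;
        /

        -- MRBATCH only gets the right to execute the package, nothing else.
        GRANT EXECUTE ON third.rollup_api TO mrbatch;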

  • Best Practice: Application runs on Extend Node or Cluster Node

    Hello,
    I am working within an organization wherein the standard way of using Coherence is for all applications to run on extend nodes which connect to the cluster via a proxy service. This practice is followed even if the application is a single, dedicated JVM process (perhaps a server, perhaps a data aggregator) which could easily be co-located with the cluster (i.e. on a machine on the same network segment as the cluster). The primary motivation behind this practice is to protect the cluster from a poorly designed or implemented application.
    I want to challenge this standard procedure. If performance is a critical characteristic then the "proxy hop" can be eliminated by having the application code execute on a cluster node.
    Question: Is running an application on a cluster node a bad idea or a good idea?

    Hello,
    It is common to have application servers join as cluster members as well as Coherence*Extend clients. It is true that there is a bit of extra overhead when using Coherence*Extend because of the proxy server. I don't think there's a hard and fast rule that determines which is a better option. Has the performance of said application been measured using Coherence*Extend, and has it been determined that the performance (throughput, latency) is unacceptable?
    Thanks,
    Patrick

  • Best Practice for Running Number Table

    Dear All
    Thank you for your attention.
    I would like to generate a number for each order, for example:
    AAAA150001
    where AAAA is the prefix, 15 is the year and 0001 is the sequence number.
    I proposed the table as below
    Prefix    | Year     | Number
    AAAA    | 15        | 1
    I use the SQL query below to get the latest number:
    SELECT CurrentNumber = Prefix + Year + RIGHT('0000' + CAST(Number + 1 AS VARCHAR(4)), 4)
    FROM RunningNumber WHERE Prefix = 'AAAA'
    and after the whole save process, I update the running number table:
    UPDATE RunningNumber SET Number = Number + 1 WHERE Prefix = 'AAAA' AND Year = '15'
    Is that a normal approach and good to handle concurrent saving?
    Thanks.
    Best Regards
    mintssoul

    Dear Visakh16
    Each year the number will reset; the table will be as below:
    Prefix    | Year     | Number
    AAAA    | 15        | 8749
    AAAA    | 16        | 1
    I can only use option 1 from your reference.
    To use this approach, I must make sure that
    a) the number will not be duplicated or skipped, as there are multiple users using the system concurrently, and
    b) the number will not increment when there is an error after getting the new number.
    Could the following methods achieve a) and b)?
    1) .NET SqlTransaction.Rollback
    2) SQL ROLLBACK TRANSACTION
    To avoid repeating information, the details of 1) and 2) are not listed here; please refer to my previous reply to Uri.
    Thanks.
    Best Regards,
    mintssoul
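    Regarding concern a), one common T-SQL pattern (sketched here with the thread's RunningNumber table; everything else is assumed) is to increment and read the number in a single atomic UPDATE inside the saving transaction, so two concurrent saves cannot pick up the same value and a ROLLBACK releases the number on failure:

        BEGIN TRANSACTION;

        DECLARE @NewNumber INT;

        -- Increment and capture the new value in one statement (atomic per row).
        UPDATE RunningNumber
        SET    @NewNumber = Number = Number + 1
        WHERE  Prefix = 'AAAA' AND [Year] = '15';

        -- ... perform the rest of the save using @NewNumber ...
        -- COMMIT on success; ROLLBACK on any error so the number is not consumed.
        COMMIT TRANSACTION;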

  • Best practice for running multiple instances?

    Hi!
    I have an MMORPG interface that currently uses shared objects to store several pages of information per account. A lot of users have several accounts (some as many as 50 or more) and may access one or several different game servers. I am building a manager application in AIR to manage them, and plan on putting all the information in several SQL databases.
    The original authors obviously had no idea what the future held. Currently players have a separate folder for each account, with a copy of the same SWF application in EACH folder. So if a player has 20 accounts, he manually opens 20 instances of the same SWF (or projector EXE, based on personal preference). I have spent the last year or so tinkering with the interface, adding functionality, streamlining, etc., and have gathered a large following of supporters.
    Each account is currently a complete, isolated copy of a given interface (there are several different ones out there; it could shape up to be quite a battle). In order to remedy this undesirable situation, I have replaced the login screen with a controller. The question now is how to handle instantiating each account. The original application simply replaced the login screen with the main application screen in the top application container at login.
    My main (first) question is: if I replace the login screen with a controller, is it more economical to have the controller open a window for each account and load an instance of the required classes, or to compile the main application and load instances of the SWF?
    Each account can have up to 10 instances of about 30 different ActionScript classes that get switched in and out of the main display. I need to be able to both send and receive events between each instance and the main controller.
    I tentatively plan on using AIR to open windows, and to simply change the storage system from shared objects to storing the same objects in an SQL table.
    Or should that be one row per account? I am not all that worried about the player DB, since it is basically file storage, but the shared DB will be in constant use, possibly from several accounts (map and player data is updated constantly). I am not sure yet how I plan to handle updating that one.
    I am at the point now where all the basic groundwork is laid, and the controller (though still rough around the edges) stands ready to open some accounts. I had the first account up and running a couple of days ago, but ran into trouble when the next one tried to access what used to be static information. The next step is to build some databases, and I need to get it right the first time. Once I release the app and it writes a DB to the user's machine, I do not want to have to change it.
    I am an avid listener and an eager student. (I have posted here before under the name eboda_kcuf but was notified a few weeks ago that it was not acceptable in the forums...) I got some great help from Alex and a few others, so I am sure you can help me out here.
    After all, you guys are the pros! Just point me in the right direction...
    Oh, almost forgot: I use Flash Builder 4.5, SDK 4.5.1.


  • OSB best practices to run business service on two different environments

    Hi.
    I am using Service Bus 11gR1
    Oracle Service Bus Version: [Oracle Service Bus L10N Dependencies 11.1 Fri Dec 4 17:43:22 EST 2009 ]
    Oracle Weblogic Server Version: [WebLogic Server 10.3.5.0 Fri Apr 1 20:20:06 PDT 2011 1398638 ]
    I deploy my OSB services to two different environments (development and production).
    How do I set up a business service to run in two different environments without changing the source (the business service transport Endpoint URI)?
    Thanks in advance.

    I am not sure of any tutorial.
    For your case, if you just have one URI and you want to change the URI for the business service, you can simply use the OSB customization file. This is straightforward.
    If you have complex routing logic based on input fields, you can follow the steps below (a sketch of the lookup table follows the list):
    Create a simple table with Business Service Name, Env and URI as columns.
    Create a SELECT DBAdapter to return the URI.
    Create a business service out of the DBAdapter files.
    Use the business service to fetch the URI, and finally
    use the URI override (ref: http://www.oracle.com/technetwork/middleware/service-bus/learnmore/index.html).
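    A hypothetical shape for that lookup table and the DBAdapter query (all names below are illustrative only):

        CREATE TABLE service_endpoints (
          service_name  VARCHAR2(200) NOT NULL,
          env           VARCHAR2(20)  NOT NULL,   -- e.g. DEV, PROD
          uri           VARCHAR2(500) NOT NULL,
          CONSTRAINT pk_service_endpoints PRIMARY KEY (service_name, env)
        );

        -- Returns the endpoint to use for the current environment
        SELECT uri
        FROM   service_endpoints
        WHERE  service_name = :service_name
        AND    env          = :env;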
