PL/SQL After submit process - best practice?

I have an after-submit process which fires a PL/SQL procedure. In this procedure I do some updates and would also like to generate some XML output and send it to the browser so that the user can save it to a file. What I'm asking is: what is the proper way to handle this?
I realize that starting the procedure from an "after submit process" is too late. If I understand correctly, the page is already rendered at that time, so the htp.p output from the PL/SQL procedure is not shown (but the procedure is executed). So I created a branch to the PL/SQL procedure (after the button is pressed). That way the procedure actually creates a new window and I can use the htp.p functions. Although now I have trouble closing the window, I hope I can manage that.
Is there some other, better way to do the export? Maybe a JavaScript popup and calling the procedure from there? Any suggestions?
Thanks!
Marko

How should I send this content to the user so that the browser recognizes it as a file (for opening or saving)?
Put that code in an onLoad process, similar to what Scott shows at http://spendolini.blogspot.com/2006/04/custom-export-to-csv.html
With this in place, when you issue a show request on that page, your generated content will be offered by the browser in an open/save dialog box.
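For reference, here is a minimal sketch of that pattern, assuming the XML has already been built into a CLOB. The item name is hypothetical, and the exact "stop rendering" call varies by APEX version (older releases used apex_application.g_unrecoverable_error := true instead of stop_apex_engine):

declare
  l_xml clob := :P1_XML_OUTPUT; -- hypothetical item holding the generated XML
begin
  -- tell the browser the response is a downloadable file
  owa_util.mime_header('application/octet-stream', false);
  htp.p('Content-Disposition: attachment; filename="export.xml"');
  owa_util.http_header_close;
  -- write the payload (loop in chunks for CLOBs larger than 32K)
  htp.prn(l_xml);
  -- prevent APEX from rendering the rest of the page after the download
  apex_application.stop_apex_engine;
end;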

Similar Messages

  • Popup a window in the after-submit process

    I have created a java script function in html header of the page as following:
    <script language="JavaScript" type="text/javascript">
    function myPopup (p_search) {
      var val1 = document.getElementById(p_search).value;
      var url;
      url = 'f?p=&APP_ID.:3:&APP_SESSION.::::P3_ENAME:' + val1;
      w = open(url,"winLov","Scrollbars=0,resizable=0,width=700,height=400");
      if (w.opener == null)
        w.opener = self;
      w.focus();
    }
    </script>
    I want to call this function in an after-submit process like this:
    begin
      select name into p_name
        from vehicle
       where id = 32;
      if SQL%ROWCOUNT = 1 then
        javascript:myPopup(p_name);
      end if;
    exception
      when others then
        null;
    end;
    When I applied the changes in Application Builder, I got this error:
    1 error has occurred
    ORA-06550: line 7, column 16: PLS-00103: Encountered the symbol "" when expecting one of the following: := . ( @ % ; The symbol ":=" was substituted for "" to continue.
    How do I call JavaScript functions in a page process? What did I do wrong?
    Any help will be greatly appreciated.
    June

    O.K. I think I got it working now. See
    http://htmldb.oracle.com/pls/otn/f?p=31517:50
    There is an on-submit process multiplying the number you enter by two and populating the target item. The process runs only if a number is entered. If you enter any other string, a pop-up window will show you an error message. The approach is simple (a sketch of step 1 follows at the end of this post):
    1. Create a process which does something on submit if the supplied parameter is O.K., and nulls the target if not.
    2. Create a branch to an empty page which fires on submit and is conditional, using the process exception(s).
    3. On the empty page there are two onload javascript calls:
    onload="javascript:popUp2('f?p=&APP_ID.:201:&SESSION.::&DEBUG.:::', 700, 700);javascript:redirect('f?p=&APP_ID.:50:&SESSION.::&DEBUG.::');"
    The first one opens the required pop-up page and the second one redirects back to where we started.
    I don't know if there are any security issues with this approach. It is not so clean, since you need to create an additional empty page. However, it saves some coding of extra scripts.
    What you are saying I also had in mind.
    Denes Kubicek
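    For step 1, a minimal sketch of what such a process might look like (item names are hypothetical; the linked demo page is the authoritative version):
    begin
      -- populate the target item only when the input is numeric
      :P50_RESULT := to_number(:P50_INPUT) * 2;
    exception
      when value_error or invalid_number then
        -- a non-numeric entry nulls the target; the conditional branch
        -- to the pop-up page can then fire on the failed process
        :P50_RESULT := null;
    end;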

  • Parallelizing after-submit processing

    I have a handful of after-submit processes on a page that have absolutely nothing in common. No data or state dependencies.
    But the way APEX accept processing works, it executes them sequentially. So, if process 1 takes 20 seconds and process 2 takes 20 seconds, the total time taken would be 40 seconds!
    Yes, the processes themselves have been optimized and tuned as much as they can be.
    Is there a way to fire off those 2 processes in parallel to save time? Something like the following construct in Unix Korn shell programming
    #!/bin/ksh
    echo Starting
    process1 &
    process2 &
    process3 &
    wait
    echo Done

    I guess I could invoke each process as a DBMS_JOB, but that makes it "too asynchronous". My page would come back instantly but the work would not be done. I would need to build some kind of polling system to check some status table for completion and stuff.
    Ideas? Thanks.

    Hm, yes, I am aware of all the non-Apex solutions I could use, but like I said, that makes my page "think" that the processing is complete, when it is really not. I would need to build additional stuff into my app to periodically poll to find out status, refresh the page, etc, etc.
    It starts to get really complicated.
    And, in my case, the bottom line benefit of all this is to reduce a page processing time from 40 seconds to 20 seconds. I wouldn't want to spend 20 hours to achieve that! ;)
    It would have been so sweet if the PL/SQL language natively offered the kind of shell programming construct I showed earlier! Sigh...
    Thanks for your response.
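    For what it's worth, a rough PL/SQL sketch of the ksh fork/wait idea using DBMS_JOB, as floated above. The procedure names are hypothetical, and DBMS_LOCK.SLEEP needs an explicit grant on DBMS_LOCK:
    declare
      l_job1  binary_integer;
      l_job2  binary_integer;
      l_count number;
    begin
      dbms_job.submit(l_job1, 'process1;');
      dbms_job.submit(l_job2, 'process2;');
      commit; -- the jobs only start once the submits are committed
      -- crude "wait": successfully completed jobs disappear from USER_JOBS
      loop
        select count(*) into l_count
          from user_jobs
         where job in (l_job1, l_job2);
        exit when l_count = 0;
        dbms_lock.sleep(1);
      end loop;
      -- note: a failed job stays in USER_JOBS, so a production version
      -- should also check FAILURES/BROKEN to avoid looping forever
    end;
    With this shape, the page process waits for the slowest job (about 20 seconds) instead of the sum (40 seconds), without the page returning before the work is done.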

  • How can I navigate to a new page when after-submit process running proc

    I have a long-running procedure and would like to provide the users with an animated gif to indicate progress, plus a description line that updates to show the current step in the process.
    Currently I have a couple of pages in this application that call a packaged stored procedure which performs a long-running process and updates its progress in a table. These processes take a single parameter. In those cases I call the procedure via the Job Scheduler as part of my after-submit process, and the page navigation jumps to another page which shows a graphical representation of % complete based on the progress updates in the table and refreshes itself every 5 seconds until the job is complete. This works fine.
    I am now running into an issue where I have a more complex set of processes, with a number of parameters. To resolve this I used the same approach as above; however, I first check to see if the process exists in the Job Scheduler and, if not, I create it. I then set all of the parameters and tell the Job Scheduler to execute the procedure. This should work similarly to the process I am running on the other pages. However, on this page, where I have multiple parameters and send an execute command rather than an execute immediate on job creation, the system runs the entire job prior to running the page branch; as a result the end user is stuck on a hanging page with no feedback for two minutes after pressing submit.
    I am looking for how to call the procedure and have the branch execute so a progress screen can be viewed. I am not committed to the use of the job scheduler if there is a better way.
    Any help is greatly appreciated.

    The process involves
    (1) a detail table filled with phone usage data, approximately 175,000 records per month.
    (2) a table that stores what various combinations of codes in the detail table translate to for types of calls or data transmissions
    (3) a summary table for the month's calls and billing
    (4) the E-Business Suite.
    (5) A GTT for temporary crossreference storage
    (6) A GTT for reporting data
    I have a parameter page where the user selects what data they are looking for and then submits it to generate the report.
    The generation of data is a four step process.
    (A) Retrieve the Code Combination ID's for the phone usage specified in the parameters from the summary billing table(3) into a GTT(5)
    (B) Query the department and Account Code Block Details from E-Business Suite(4) (using dblink) for the CCID's in step one and add to the GTT(5)
    (C) Run a query which uses the detail table(1), a function against the crossref table(2), and the crossreference GTT(5) to create the output in the report GTT(6)
    (D) An ApEx page process that counts the output and returns to a page without Export to Excel for over 65,000 records, or with Export for fewer.
    The parameter page is an ApEx page with some text fields, a couple of date fields, and some checkboxes. An after-submit process calls a packaged procedure which calls separate procedures for (A), (B), and (C); the page then branches to a page that shows an animated gif and the current step of the process (this is what is not working right now). Once the task completes, this page branches as per (D) to a page that shows the contents of the report GTT(6).
    The process works, with the exception that instead of going to a page that shows progress, the system simply hangs on the parameter page after Submit is pressed until the processing is done, and then goes to the processing page just long enough to branch to the report page.
    I am beginning to think that I should alter the design to not use the GTT, but include the username as a field in the output table with a binary index on it for speed so that I can use the job scheduler to run a separate session and hence enable the processing page. The processing page is important as the query can take anywhere from 2 minutes to 2 hours to generate the report depending upon the parameters.
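    One thing worth checking against this symptom: DBMS_SCHEDULER.RUN_JOB executes the job in the caller's session by default, which would make the after-submit process wait for the entire job before the branch fires. A sketch of the asynchronous variant (job and item names are hypothetical; note that a background session will not see the page session's GTT data, which ties into the redesign mentioned above):
    begin
      -- bind the report parameters to the pre-created job
      dbms_scheduler.set_job_argument_value('MY_REPORT_JOB', 1, :P10_START_DATE);
      dbms_scheduler.set_job_argument_value('MY_REPORT_JOB', 2, :P10_DEPARTMENT);
      -- use_current_session => false runs the job in a background session,
      -- so the page process returns immediately and the branch can fire
      dbms_scheduler.run_job('MY_REPORT_JOB', use_current_session => false);
    end;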

  • SQL Server 2008 / 2012 - Best practices document

    Hello Everyone
    Can anybody share SQL Server 2008 / 2012 best practices?
    Regards
    Prashanth
    SharePoint Administrator

    Take a look here:
    http://channel9.msdn.com/Series/Tuning-SQL-Server-2012-for-SharePoint-2013/Tuning-SQL-Server-2012-for-SharePoint-2013-01-Key-SQL-Server-and-SharePoint-Server-Integration-Conce (4 part video series)
    https://technet.microsoft.com/en-us/library/hh292622.aspx
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • After submit process vs. javascript

    I'd appreciate it if you all could help me with this problem. I've searched this forum and Google for a resolution but have not found one yet. I'm a newbie with APEX, so please bear with me. The issue is that I want a button - it could be a delete or a submit - to be disabled after it is clicked. Additionally, I want a process (on submit after computations and validations) to execute when the button is clicked. The dilemma is that the JavaScript disable-button function is called using a URL button (created as "a button in a region position"). My guess is that the on-submit process is not fired because the focus goes to the URL (the disable-button function). Is there any way to call JavaScript from a non-URL button (created as "a button displayed among this region's items")? Or perhaps get the process to run with a URL button? So my issue is that I need both the after-submit process and the JavaScript that disables the button to run when the button is clicked. Suggestions appreciated.
    Kirk

    Like I indicated, I'm a newbie to this web programming. At this point I'm just learning APEX to explore its possibilities, and because management wants a presentation on it at one of our upcoming meetings.
    After the branch to the URL - which, as I understand it, would be to the same page - it would come up without a record until the user selects one via some mechanism like an LOV. And with no record displaying, I'd want the Delete (and Update) buttons disabled until a record is selected. Now that I'm writing this, it has me thinking I need to approach this from a different angle: the page should come up with the Delete and Update buttons already disabled, and when a record is selected the buttons should (somehow) be enabled. Do you think this is a more viable approach?

  • Order Process Best Practice Suggestions?

    Hey CF World,
    I have to revamp an online order process. The process is broken into 4 steps.
    The app as it exists today was built by a different developer and for the life of me, I have wasted about 5 hours trying to figure out exactly what the person is doing in the code just so I can make some basic tweaks to the process.
    Could anyone offer what might be considered today's best practice for a step by step order process?
    The thought is: the user completes step 1; upon clicking Next, the data elements of the form are validated and the user is taken to step 2, and so on, until the end, where upon submission the order is written to the database and the next process is triggered internally.
    Should I have one page that, upon submit of step 1, cycles back to itself, processes the data, and then loads a separate div of info for step 2, or...?
    Any suggestions would be great.  Thank you so much in advance for your help, I sincerely appreciate it.
    Ciao'
    D.

    Hello,
    Thank you so much for that. Let me qualify a few things, as I probably should have in the first place (my apologies).
    ColdFusion 8
    SQL Server 2005
    There is no payment or credit card information being provided.
    The user comes online and goes through a basic order process for some work to be done. As mentioned, it is a multi-step process for gathering their information.
    Once the entire order is in and all the fields have been validated along the way to ensure they were populated where required, the order is written into the pending-orders table and an email is sent to the branch closest to the customer notifying them of the new order, with a link to the details. The branch then calls them directly to confirm the details of the order before activating it.
    So, the code I received is next to impossible to follow; for the life of me I cannot figure out what the former developer has done. I need to make some changes to the process, and if I cannot even follow the flow to figure out where to make my changes, that could pose a problem.
    I have not coded much in ColdFusion for the past two years but did so quite extensively before that. I totally agree on the CFTransaction suggestion. I guess what I was looking for is: are there any best practices for coding that I should be aware of, especially considering what I want to accomplish? Previously we used the "Fusebox" concept of coding and had most of our code in custom tags in a very reusable and easy-to-follow structure and flow.
    Any thoughts/suggestions would be great! Thank you very much!
    D.

  • Deadline Branch in Correlation Process - Best Practice

    Hello,
    I have an integration process with a correlation - there is an asynchronous send step which activates a correlation, and afterwards an asynchronous receive step that uses that correlation.
    Furthermore I have a deadline branch to cancel the process after 24 hours.
    My question now is:
    There could be (rare) cases where a message arrives later than 24 hours; according to my understanding, the received message will then block the inbound queue, as no active correlation can be found anymore. Is this correct? And how can I avoid this situation? I guess a blocked queue would also block other messages that are sent to the integration process?
    What would be best practice to handle such a scenario? I could leave the process instance open for 1 month; however, this might have a significant impact on system performance.
    Thank you for your advice.

    >> There could be (rare) cases where a message arrives later than 24 hours, so according to my understanding the received message will block the inbound queue as no active correlation can be found anymore
    A "no correlation found" error will occur only when the BPM instance is running and the message tries to enter the relevant receive step (not the first one).
    However, since in your case the process is cancelled, you need not worry about the message going into the queue and blocking the BPM queue.
    Regards,
    Abhishek.

  • SQL Server 2012 Infrastructure Best Practice

    Hi,
    I would welcome some pointers (direct advice or pointers to good web sites) on setting up a hosted infrastructure for SQL Server 2012. I am limited to using VMs on a hosted site. I currently have a single 2012 instance with DB, SSIS, SSAS on the same server.
    I currently RDP onto another server which holds the BI Tools (VS2012, SSMS, TFS etc), and from here I can create projects and connect to SQL Server.
    Up to now, I have been heavily restricted by the (shared tenancy) host environment due to security issues, and have had to use various local accounts on each server. I need to put forward a preferred environment that we can strive towards, which is relatively scalable, allows me to separate Dev/Test/Live operations, and utilises Windows Authentication throughout.
    Any help in creating a straw man would be appreciated.
    Some of the things I have been thinking through are:
    1. Separate server for Live Database, and another server for Dev/Test databases
    2. Separate server for SSIS (for all 3 environments)
    3. Separate server for SSAS (not currently using cubes, but this is a future requirement. Perhaps do not need dedicated server?)
    4. Separate server for Development (holding VS2012, TFS2012, SSMS etc). Is it worth having a local SQL Server DB on this machine? I was unsure where SQL Server Agent jobs are best run from, i.e. from the Live DB only, from another SQL Server instance, or utilising SQL Server Agent on all (Live, Test and Dev) SQL Server DB instances. Running from one place would allow me to have everything executable from one place, with centralised package reporting etc. I would also benefit from some licence cost reductions (Kingsway tools).
    5. Separate server to hold SSRS, Tableau Server and SharePoint?
    6. Separate Terminal Server or integrated onto Development Server?
    7. I need a server to hold file (import and extract) folders for use by SSIS packages, which will be accessible by different users.
    I know (and apologise that) I have given little info about the requirement. I have an opportunity to put forward my requirements for x months into the future, and there is a mass of info out there which is not distilled in a way I can utilise. It would be helpful to know what I should aim for in terms of separate servers for the different services and/or environments (Live/Test/Dev), and specifically best practice for where SQL Server Agent jobs should be run from, and perhaps a little info on how best to control deployment/change control. (Note my main interest is not in application development; it is in setting up packages to load/refresh data marts for reporting purposes.)
    Many thanks,
    Ken

    Hello,
    In all cases, consider that having a separate server may increase licensing or hosting costs.
    Please allow me to recommend Windows Azure for cloud services.
    Answers:
    1. This is always a best practice.
    2. Having SSIS on a separate server allows you to isolate import/export packages, but it may increase network traffic between servers. I don't know if your provider charges for incoming or outgoing traffic.
    3. SSAS on a separate server is certainly a best practice too. It contributes to better performance and scalability.
    4. SQL Server Developer Edition costs only about $50. Are you talking about centralizing job scheduling on an on-premises computer rather than having jobs enabled on a cloud service? Consider PowerShell to automate tasks.
    5. If you will use Reporting Services in SharePoint integrated mode, you should install Reporting Services on the same server where SharePoint is located.
    6. SQL Server can coexist with Terminal Services, with the exception of clustered environments.
    7. SSIS packages may be competing with users for access to files. Copying the files to a disk resource available to the SSIS server may be a better solution.
    A few more things to consider:
    Performance of the storage subsystem on the cloud service.
    How many cores? How much RAM?
    Creating a Domain Controller or using Active Directory services.
    These resources may be useful.
    http://www.iis.net/learn/web-hosting/configuring-servers-in-the-windows-web-platform/sql-2008-for-hosters
    http://azure.microsoft.com/blog/2013/02/14/choosing-between-sql-server-in-windows-azure-vm-windows-azure-sql-database/
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • SQL 2012 service accounts best practice

    I'm installing SQL Server 2012 for ConfigMgr 2012 R2 and I wonder what the best practice is for SQL service accounts.
    During the installation of SQL Server, in the Server Configuration / Service Accounts menu I'm allowed to configure the following service accounts: SQL Server Agent, SQL Server Database Engine, SQL Server Reporting Services, SQL Server Browser.
    Do I have to create separate domain user (not admin) accounts for each service and configure a Service Principal Name (SPN) for each of them?
    For example: a domain user account named SQLSA for SQL Server Agent, another domain user account SQLADBE for the SQL Server Database Engine, etc.

    During the installation of SQL Server 2012, the user is prompted to provide service account credentials. The default service accounts suggested vary depending on whether SQL Server 2012 is installed on a computer running Windows Vista or Windows Server 2008, or on a computer running Windows 7 or Windows Server 2008 R2. On computers running Windows Vista or Windows Server 2008 operating systems, the following default service accounts are used:
    - NETWORK SERVICE: Database Engine, SQL Server Agent, Analysis Services, Integration Services, Reporting Services, SQL Server Distributed Replay Controller, SQL Server Distributed Replay Client
    - LOCAL SERVICE: SQL Server Browser, FD Launcher (Full-Text Search)
    - LOCAL SYSTEM: SQL Server VSS Writer
    On computers running Windows 7 or Windows Server 2008 R2 operating systems, the following default accounts are used:
    - Virtual Account or Managed Service Account: Database Engine, SQL Server Agent, Analysis Services, Integration Services, Replication Services, SQL Server Distributed Replay Controller, SQL Server Distributed Replay Client, FD Launcher (Full-Text Search)
    - LOCAL SERVICE: SQL Server Browser
    - LOCAL SYSTEM: SQL Server VSS Writer
    For Windows 7 and Windows Server 2008 R2, you can use a Managed Service Account (MSA) or a Managed Local Account. The differences between these account types are as follows:
    - Managed Service Account (MSA): This special kind of domain account managed by a domain controller is assigned to a single member computer and used for running services. The MSA password is managed by the domain controller. MSAs can register a Service Principal Name (SPN) with Active Directory. MSAs use a $ name suffix; for example, CONTOSO\SQL-A-MSA$. You must create the MSA prior to running SQL Server Setup if you want to use an MSA with SQL Server services.
    - Virtual Accounts or Managed Local Accounts: These virtual accounts can access the network in a domain environment and are used by default for service accounts during SQL Server 2012 setup when run on Windows 7 or Windows Server 2008 R2. Such accounts use the NT SERVICE\<SERVICENAME> format. You don't need to specify a password when using virtual accounts with SQL Server 2012 because this is handled automatically by the operating system.
    You should run SQL Server services using the minimum possible user rights, and use an MSA or virtual account when possible. If you are manually configuring service accounts, use separate accounts for different SQL Server services. If it is necessary to change the properties of service accounts used for SQL Server 2012, use SQL Server tools such as SQL Server Configuration Manager. This ensures that all necessary dependencies are updated, which does not happen if you use only the Services console.
    Although you can configure domain accounts as service accounts, this strategy requires more effort because you must ensure that service account passwords are changed regularly. You must also manage SPNs, which are required for Kerberos authentication.
    Best regards
    P.Ceglie

  • Idoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems in the processing of inbound idocs.  The message type is SHPCON, and transaction volume is very high.  I am a functional consultant, not an ABAP developer, but will try my best to explain our current setup.
    1)     We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the Processing by Function Module setting.
    2)      For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    Almost every day we have instances of the RBDMANI2 job getting stuck and running for a very long period of time. We frequently have multiple SHPCON IDocs coming in containing the same material number, and IDocs frequently fail because the material in the IDoc has become locked. Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed IDocs begin processing. The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound IDocs such as this for maximum performance in a very high-volume system. I know that RBDAPP01 processes IDocs in status 64 and 66, and that RBDMANI2 is used to reprocess IDocs in all statuses. I have been told that setting the messages to trigger immediately in WE20 can result in poor performance. So I am wondering if the best practice is to:
    1)     Set messages in WE20 to Trigger by background program
    2)     Have a batch job running RBDAPP01 to process inbound idocs waiting in status 64
    3)     Have a periodic batch job running RBDMANI2 to try and clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than myself on this can confirm the best practice for this process and comment on the correct packet size in the program variant and whether or not parallel processing is desirable.  Because of the material locking issue, I felt that parallel processing was not desirable and may actually increase the material locking problem.  I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other discussions.  If this is not the correct area for this discussion, then I would be grateful if the moderator could re-assign this discussion to the correct area (if possible) or let me know the best place to post it.  Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but note 1333417 (Performance problems when processing IDocs immediately) does state that for high volumes, immediate processing is not a good option.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important if they're processed in the same sequence or not), otherwise it'd add another complexity level.
    In the past for the high volume IDoc processing we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs with errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually we specifically wouldn't want to parallel-process the errors to avoid running into a lock issue again). In short, your steps 1-3 are correct but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.

  • Re-engineering of an existing process / Best Practice (customization)

    Hi all of you,
    We are implementing SAP ECC 6.0 for one of our clients. The client is asking us to compare their existing business processes with the Best Practice / standard processes and, based on the result, to prepare a gap analysis between the existing processes and best practice for their business.
    SAP itself is a best practice in the respective domains / business processes. By implementing SAP ERP, the client will have best practice for his business processes, as far as I know. But the thing is, how can I explain to the client that SAP provides the best practice, on the basis of which the client can consider the SAP practice as Best Practice for the business?
    Please give me a solution
    Regards,
    Ramki

    f l,
    I'm not sure deleting keys from the registry is ever a best practice, however Xcelsius has listings in:
    HKEY_CURRENT_USER > Software > Business Objects > Xcelsius
    HKEY_LOCAL_MACHINE > SOFTWARE > Business Objects > Suite 12.0 > Xcelsius
    The current user folder holds temporary settings, such as how you've modified your interface.
    The local machine folder holds more important information.
    As always, it's recommended that you backup the registry and/or create a restore point before modifying or deleting any keys.
    As for directories, the only directory Xcelsius uses is the one you install to.  It also places some install logs in the temp directory, but they have no effect on the application.

  • After Installation of Best Practice.

    Hello,
    We have installed Best Practice for Retail on an ECC 6.0 server (only the baseline package, which comes as an add-on).
    How do we activate the scenarios and building blocks?
    Thanks in Advance.
    Warm Regards
    Sujith

    Hi,
    I also want to know what kind of monitoring scripts I can set up as cron jobs to monitor or detect any failures or problems.
    To monitor the cluster (OS level):
    I suggest you use a powerful tool, CHM, that already comes with the Grid Infrastructure product.
    What do you need to configure? Nothing... just use it.
    Cluster Health Monitor (CHM) FAQ [ID 1328466.1]
    See this example:
    http://levipereira.wordpress.com/2011/07/19/monitoring-the-cluster-in-real-time-with-chm-cluster-health-monitor/
    To monitor Database:
    PERFORMANCE TUNING USING ADVISORS AND MANAGEABILITY FEATURES: AWR, ASH, and ADDM and Sql Tuning Advisor. [ID 276103.1]
    The purpose of this article is to illustrate how to use the new 10g manageability features to diagnose and resolve performance problems in the Oracle Database. Oracle 10g has powerful tools to help the DBA identify and resolve performance issues without the hassle of analyzing complex statistical data and extensive reports.
    Hope this helps,
    Levi Pereira

  • SQL Server installation paths best practices

    In my company we're planning to set up a new (consolidated) SQL Server 2012 server (on Windows 2012 R2, VMware). The current situation is that there is a SQL Server 2000 instance, a few SQL Server 2008 Express instances and a lot of Access databases. For the installation I'm wondering what the best selections for the various installation paths are. Our infra colleagues (offshore) have the following standard partition setup for SQL Server servers:
    C:\ OS
    E:\ Application
    L:\ Logs
    S:\ DB
    T:\ TEMPDB
    And during the installation I have to make choices for the following:
    Shared feature directory: x:\Program Files\Microsoft SQL Server\
    Shared feature directory (x86): x:\Program Files\Microsoft SQL Server\
    Instance root directory (SQL Server, Analysis Services, Reporting Services): x:\Program Files\Microsoft SQL Server\
    Database Engine Configuration Data Directories:
    Data root directory: x:\Program Files\Microsoft SQL Server\
    User database directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    User database log directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Temp DB directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Temp DB log directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Backup directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Analysis Services Configuration Data Directories:
    User database directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    User database log directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Temp DB directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Temp DB log directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Backup directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Distributed Replay Client:
    Working Directory: x:\Program Files (x86)\Microsoft SQL Server\DReplayClient\WorkingDir\
    Result Directory: x:\Program Files (x86)\Microsoft SQL Server\DReplayClient\ResultDir\
    So I'd like some assistance on filling in the x drive letters. I understand it's best practice to separate the data files and the log files. But should that also be the case for tempdb? And should both the database and tempdb log files go to the same log partition then? What about the backup directories? Any input is very much appreciated!
    Btw, I followed the http://www.sqlservercentral.com/blogs/basits-sql-server-tips/2012/06/23/sql-server-2012-installation-guide/ guide for the installation (Test server now).

    You can place all installation libraries on the E:\ drive.
    >> So I'd like some assistance on filling in the x drive letters. I understand it's best practice to separate the data files and the log files. But should that also be the case for tempdb? And should both the database and tempdb log files go to the same log partition then? What about the backup directories? Any input is very much appreciated!
    You can place the tempdb data files on the T:\ drive, and I prefer to place the tempdb log and user database log files on the same drive, i.e. the L:\ drive.
    >> Backup directories
    If you are not using any third-party tool, then I would prefer to create a separate drive for backups.
    Refer to the link below for further reading:
    http://www.brentozar.com/archive/2009/02/when-should-you-put-data-and-logs-on-the-same-drive/
    --Prashanth

  • Call javascript function after submit process

    Hi,
    How can I call a JavaScript function after my submit process?
    Thanks.

    Hi Carl,
    You say that I must have an item or region that contains my JS, with conditional display. With this method I can execute my JS for a specific request value.
    My problem is that I tried to integrate the "save large value" workaround into my application with the APEX WYSIWYG editor (FCKeditor). To do that, I have to call the JavaScript function clob_submit on the Save button. This function saves the value of my field, but if I want to save the other item values, how can I do that?
    If I use your method, I will submit the page, save the other fields, and after that branch back to the same page. At that moment, I will execute my JavaScript.
    Is that what you said?
    Sylvain Michaud
    Homepage : http://www.insum.ca
    InSum Solutions' blog : http://insum-apex.blogspot.com
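    A minimal sketch of the conditional-display method described above: an on-load PL/SQL region or process whose condition is "Request = Expression 1" with a value such as SAVE, so that it only fires after the submit and the branch back to the page (the function name is hypothetical and would be defined in the page HTML header):
    begin
      htp.p('<script type="text/javascript">');
      htp.p('  myAfterSubmitFunction();');
      htp.p('</script>');
    end;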
