SQLDeveloper running on Citrix - Best Practice?

Dear all,
is there any kind of advice on how to configure SQLDeveloper when running on Citrix?
At the moment I am facing the problem that it works all right, but every time I log on to Citrix all my configuration is lost.
My Windows home is mapped as drive H: on the Citrix server, but it seems that SQLDeveloper does not use it.
Any ideas?
Any entries in conf-files?
Any startup-parameters?
Any help is very welcome.
Thank you
Carsten

The settings are stored in the sqldeveloper folder under the user's profile. Maybe your Citrix user doesn't have permissions to write there.
If you want the settings to be stored in your home, edit \sqldeveloper\bin\sqldeveloper.conf:
AddVMOption -Dide.user.dir=H:\sqldeveloper
Hope that helps,
K.

Similar Messages

  • Error while running eCATT for Best Practice HR- US

    HI All,
    I am executing BC sets through eCATT to implement the Best Practice for HR (US). While running eCATT I am getting two errors, and I am not in a position to find a solution. Can you guys please share your thoughts?
    Error messages are as follows:
    1) Error in eCATT command CHEVAR
        Condition not fulfilled
    2) Error in eCATT command ABAP
        LOCAL GENERATION LIMIT 36 SUBPOOL REACHED
    Looking forward to the response
    Thanks & Regards
    Shyam V

    Hi Shyam,
    I was wondering if this document might help with the query you were looking into:
    http://help.sap.com/saphelp_nw04/helpdata/en/43/2f34413f97f323e10000000a155106/frameset.htm
    Hope it helps you decode the error.
    Have a great day ahead.

  • Develop on Windows, run on Solaris - Best Practices?

    We're moving from Windows Server 2003 to a Solaris x86-64/SPARC environment - 10gR2 app server and db. Currently, since our servers are in a remote datacenter, we just VPN onto one of our Windows servers, which has the dev tools and a standalone OC4J running, so we can develop and test forms. Simple.
    First thing we noticed is there is not a solaris x86-64 version of the dev tools, so we assume we'll have to develop on our windows desktops, and then recompile on the solaris server. So, the question is how best to do this?
    Initial thinking is that each developer needs to have all the .olb,.pll,.jar,.fmb etc. files on their local machine compiled for windows, and then have to copy these to the solaris servers and recompile. This raises all kinds of concerns about how to keep the files on the developers machines in sync with the versions on the solaris servers.
    Seems we are missing something here and there may be better methods to setup this environment. Perhaps use sparc workstations so we can develop natively in solaris, but that is an expensive option. Any ideas?

    Hi,
    It's better to develop on Windows and port to Solaris. That's what we do for our implementations.
    All the developers have all the source code on their local desktops, and whenever they want to change or update a form, they download it from development/production on Solaris to the local desktop.
    Usually only one developer works on a form at a time, and it can then be put on the Solaris server and compiled there in order to test.
    Rajesh Alex

    What about the .pll, .jar, .olb files? Do your developers have to download them to their local machines each time they open Form Builder to get the latest versions? It seems a maintenance nightmare to keep local versions of these files up to date with what is on the servers.
    Perhaps we need to look at some source control software that can manage this. Any out there that work with forms?

  • Office 365 Best Practices Analyzer requirements

    Can you install the office 365 BPA from a windows 7 workstation, or do you need to install it on the Exchange Server itself, and run it from there?

    Hi cf090,
    This depends on which BPA you are trying to run. If you are running Office 365 Best Practices Analyzer for Exchange Server 2013 then this needs to be installed on your Exchange server (you can see the requirements here: http://community.office365.com/en-us/w/deploy/office-365-best-practices-analyzer-for-exchange-server-2013-requirements.aspx)
    There are other BPAs for O365 such as Office 365 Best Practices Analyzer for your PC but this is designed to see if your PC supports O365 not Exchange. More details here: http://community.office365.com/en-us/w/deploy/office-365-best-practices-analyzer-for-your-pc.aspx
    Hope this answers your question! 
    Mike 
    Mike Parker | MCSE - Messaging | www.cloudbusiness.com

  • Best practice to run Microsoft Endpoint Protection client in VDI environment

    We are using Citrix XenDesktop VDI environment. Symantec Endpoint Protection client (VDI performance optimised) has been installed on the “streamed to the clients” virtual machine image. Basically, all the files (in golden image) have been “tattooed” with
    Symantec signature. Now, when a new VM starts, the Symantec scan engine simply ignores the "tattooed" files and also randomises scan times. This is a rough explanation, but I hope you've got the idea.
    We are switching from Symantec to Microsoft Endpoint Protection, and I'm looking for any information and documentation regarding best practices for running Microsoft Endpoint Protection clients in a VDI environment.
     Thanks in advance.

    I see this post is a bit old, but the organization I'm with has a very large VDI deployment using VMware. We are also using SCEP 2012 for the AV.
    Did you find out what you were looking for, or did you elect to take a different direction?
    We install SCEP 2012 into the base image and manage the settings using GPO; definition updates come through the normal route.
    Our biggest challenge is getting alert messages from the client.
    Thanks

  • Best practices to share 4 printers on small network running Server 2008 R2 Standard (service pack 1)

    Hello, 
    I'm a new IT admin at a small company (10-12 PCs running Windows 7 or 8) which has 4 printers. I'd like to install the printers either connected to the server or as wireless printers (1 is old enough to require
    a USB connection to a PC, no network capability), such that every PC has access to each printer.
    Don't worry about the USB printer - I know it's not the best way to share a printer, but it's not a critical printer; I just want it available when its PC is on.
    I've read a lot about the best way to set up printers, including stuff about group policy and print server, but I am not a network administrator, and I don't really understand any of it. I'd just like to install
    the drivers on the server or something, and then share them. Right now each printer is set up a little differently: one is on a WSD port, two have a little "shared" icon, one has that icon but also a "network" icon... it's very confusing.
    Can anyone help me with a basic setup that I can do for each printer?
    p.s. they all have a reserved IP address.
    Thanks,
    Laura

    You may need to set up a print server. These links may be helpful:
    http://www.techiwarehouse.com/engine/9aa10a93/How-to-Share-Printer-in-Windows-Server-2008-R2
    http://blogs.technet.com/b/yongrhee/archive/2009/09/14/best-practices-on-deploying-a-microsoft-windows-server-2008-windows-server-2008-r2-print-server.aspx
    http://joeit.wordpress.com/2011/06/08/how-do-i-share-a-printer-from-ws2008-r2-to-x86-clients-or-all-printers-should-die-in-a-fire/
    Best,
    Howtodo

  • Best practice for running pcastconfig --sync_library

    Every so often pcastconfig --sync_library fails with a ruby method not found error (uid) and if I run it again it might fail the same way but in a different place or it might run to completion with no errors. I've taken to turning off Time Machine before running pcastconfig --sync_library and sometimes I turn off podcast producer and xgrid as well just to 'feel' safer.
    Does anyone know what the best practice is? --sync_library isn't in the man page for pcastconfig, and the docs don't mention turning anything off before running it.
    Another error I see sometimes is "database locked".
    Any ideas or tips appreciated.


  • Best practice for running multiple sites on 1 CF install?

    Hi-
    I'm setting up a new hosting environment (Windows Server 2008 Standard 64 bit VPS  configuration, MySQL, IIS 7, CF 9)
    Has anyone seen any docs or can anyone suggest best practices for configuring multiple sites in this environment? At this point I'm thinking simple is best, one new site in IIS for each client (domain) and point it to CF.
    Given this environment, is anyone aware of any gotchas within the setup of CF 9 on IIS 7?
    Thank you in advance,
    Rich

    There's nothing wrong with that approach. You can run as many IIS sites as you like against a single CF install.
    As for installing CF on IIS 7, I recommend that you do the following: install CF 9 without connecting it to IIS, then installing the 9.0.1 upgrade and any hotfixes, then connecting CF to IIS using the web server configuration utility. This will keep you from having to install the IIS 6 compatibility layer that's needed with CF 9 but not with CF 9.0.1.
    Dave Watts, CTO, Fig Leaf Software
    http://www.figleaf.com/
    http://training.figleaf.com/

  • After Vcenter best practice documentation - running as a VM

    Hi there,
    We are currently running our vCenter as a physical machine.  A few weeks back I saw something on Twitter saying VMware had changed their best practice to recommend running vCenter as a virtual machine.  We are looking into this as a way of running one less physical server.
    Can anyone point me in the direction of any (revised) good practice documentation from vmware?

    here's a couple things as well
    http://www.vmware.com/pdf/vsphere4/r40_u1/vsp_40_u1_esx_vc_installation_guide.pdf
    Although a little dated, it still applies
    http://www.vmware.com/pdf/vi3_vc_in_vm.pdf
    I would also consider setting the restart priority to HIGH for your vCenter VM.  If you run your vCenter DB instance on a VM, I would also consider setting up a DRS rule to keep them together for better performance, as well as setting a HIGH restart priority for your vCenter DB VM.

  • Running Best Practice Analyzer on remote 2008 R2 domain controllers

    Hello Powershell World,
    I'll start out by first mentioning that I am a PowerShell rookie, so I gladly welcome any input to help me improve or work more efficiently.  Anyway, I recently used PowerShell to run the Best Practices Analyzer for DNS on all of our domain controllers.
    The way I went about it was pretty tedious and inefficient but still got the job done through a series of one-liners, exporting the report to a UNC path as follows:
    Enable-PSremoting -Force (I logged into all of the domain controllers individually and ran this before running the one-liners below from my workstation)
    New-PSSession -Name <Session Name> -ComputerName <Hostname>
    Enter-PSSession -Name <Session Name>
    Import-Module bestpractices
    Invoke-BPAModel Microsoft/Windows/DNSServer
    Get-BPAResult Microsoft/Windows/DNSServer | Select ModelId,Severity,Category,Title,Problem,Impact,Resolution,Compliance,Help | Sort Category | Export-CSV \\server\share\BPA_DNS_SERVERNAME.csv
    I'm looking to do this again, but for the Directory Services Best Practices Analyzer, without having to individually enable remoting on the domain controllers, and also to provide a list of servers for the script to run against.
    Thanks in advance for all your help!

    What do you mean by "without having to individually enable remoting"?
    You cannot remote without enabling remoting.  You only need to enable remoting once; it is a configuration change.  If you have done it once you do not need to do it again.
    Here is how to run it from a list of DCs:
    $sb = {
        Import-Module BestPractices
        Invoke-BPAModel Microsoft/Windows/DNSServer
        Get-BPAResult Microsoft/Windows/DNSServer |
            Select-Object ModelId,Severity,Category,Title,Problem,Impact,Resolution,Compliance,Help |
            Sort-Object Category |
            Export-Csv "\\server\share\BPA_DNS_$env:COMPUTERNAME.csv"
        Invoke-BPAModel Microsoft/Windows/DirectoryServices
        # etc...
    }
    ForEach ($dc in $listofDCs) {
        Invoke-Command -ScriptBlock $sb -ComputerName $dc
    }
    ¯\_(ツ)_/¯

  • Best Practice: continuously running db procedure

    I've written a database procedure that pulls messages off an AQ, does some processing on them, and stores the result in a table. I'd like the procedure to run continuously. I also call the same procedure with different parameters, which determine which messages get pulled off. My questions are these:
    1. What is the best practice for keeping this procedure running continuously? If the client-side connection is eventually terminated, will the process keep running? Do I set a timeout somewhere for no timeout?
    2. How do I determine which procedure instances are running? I'm thinking I may need to create different schemas that have execute privilege for the different instances so that I can at least tell which process is which. Is there a better way to tell which is which if I need to kill one?
    thanks,
    dan

    > 1. what is the best practice for keeping this procedure running continuously? If the client
    side connection is eventually terminated will the process keep running? Set timeout
    somewhere for no timeout?
    DBMS_JOB or DBMS_SCHEDULER processes are ideal as these have no client part.
    As for a client.. when it dies, it usually takes its server process with it. As soon as Oracle notices that the client is gone (usually when it attempts to send data to it), it will terminate the dedicated server process that serviced the client, or it will clean up the virtual circuit of the shared server session that serviced that client.
    > 2. How to determine which procedure instances are running. I'm thinking I may need to
    > create different schemas that have execute privilege for the different instances so that
    > I can at least tell which process is which. Is there a better way to tell which is which if I
    > need to kill one?
    With DBMS_JOB/DBMS_SCHEDULER it is easy: you check the DBA_JOBS_RUNNING and DBA_SCHEDULER_RUNNING_JOBS views. Details on these are in the Oracle® Database Reference guide: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/toc.htm

  • Best practice to run BOBJ server

    Is it a best practice to install the BOBJ (BOE) server in the NetWeaver stack?
    It may or may not use BI.
    It may or may not use the NetWeaver Portal.
    Having said that, I would appreciate the best solution for running the BOBJ server.
    Thanks, Gopal


  • Long running job in SAP BI best practice installation

    Hi,
        I'm installing the SAP Best Practices for Financials. The job "Activating Business Content: InfoObject Catalogs" has been running for a long time (nearly 2 hours).
    Reference: B03 - General Settings for BI Integration.
    Can anyone tell me how long this job will run?
    Any help reg this is appreciated.
    Regards
    Santhosh kumar.N

    That depends on how many objects are being activated; also, the job log can tell you something about what is going on...
    Arun

  • LCM best practice to run on its own BOE installation?

    Is it a best practice to install/run the Life Cycle Manager on its own BOE installation? The LCM installation documents seem to suggest that LCM can run on an existing BOE installation. I assume that means it can run on a BOE installation that also provides webi/deski/PM/etc. services that users access.
    I am just curious whether it is better to have a separate host that only runs an instance of BOE and the LCM but does not host any other BOE reporting/dashboards.
    Also, is there a "LCM Best Practices" document floating around anywhere?
    Thanks,
    George

    Thanks for the reply!
    Do you know of any specifics as to why it should run on its own machine? I heard this was suggested by someone at a BO user conference, but had not heard any details as to why. Does the LCM use a lot of resources when promoting, or something?
    Thanks again.

  • Best practices for checked exceptions in Runnable.run

    Runnable.run cannot be modified to pass a checked exception to its parent, so it must deal with any checked exceptions that occur. Simply logging the error is inadequate, and I am wondering if there are any "best practices" on how to deal with this situation.
    Let me give a real-world example of what I'm talking about.
    When writing I/O code for a single-threaded app, I'll break the logic into methods, and declare these methods as throwing an IOException. Basically, I'll ignore all exceptions and simply pass them up the stack, knowing that Java's checked exception facility will force the caller to deal with error conditions.
    Some time later, I might try to improve performance by making the I/O code multithreaded. But now things get tricky because I can no longer ignore exceptions. When I refactor the code into a Runnable, it cannot simply toss IOExceptions to some future unnamed caller. It must now catch and handle the IOException. Of course, dealing with the problem by simply catching and logging the exception is bad, because the code that spawned the I/O thread won't know that anything went wrong. Instead, the I/O thread must somehow notify its parent that the exception occurred. But just how to do this is not straightforward.
    Any thoughts? Thanks.

    My suggestion: don't use Threads and Runnables like this.
    Instead implement Callable which can throw any Exception.
    Then use an ExecutorService to run that Callable.
    This will return a Future object which can throw an ExecutionException on get(), which you can then handle.
    This has the additional advantage that you can easily switch from a single-threaded serialized execution to a multi-threaded one by switching ExecutorService implementations (or even by tweaking the parameters of the ExecutorService implementation).
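    To make the suggestion above concrete, here is a minimal sketch of the Callable/ExecutorService pattern. The class name, task factory, and simulated IOException are illustrative, not from the original post; the point is only that the checked exception declared by the Callable resurfaces in the submitting thread, wrapped in an ExecutionException, when Future.get() is called.

    ```java
    import java.io.IOException;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class CallableDemo {

        // Unlike Runnable.run(), Callable.call() declares "throws Exception",
        // so the I/O code can pass checked exceptions up unchanged.
        static Callable<String> readTask(boolean fail) {
            return () -> {
                if (fail) {
                    throw new IOException("simulated I/O failure");
                }
                return "data";
            };
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            try {
                Future<String> ok = pool.submit(readTask(false));
                Future<String> bad = pool.submit(readTask(true));

                // The successful task's result comes back through get().
                System.out.println(ok.get());

                // The failed task's IOException resurfaces here as the
                // cause of an ExecutionException, so the parent thread
                // is forced to notice and handle the failure.
                try {
                    bad.get();
                } catch (ExecutionException e) {
                    System.out.println("caught: " + e.getCause().getMessage());
                }
            } finally {
                pool.shutdown();
            }
        }
    }
    ```

    Swapping `Executors.newFixedThreadPool(2)` for a single-threaded or direct executor changes the execution model without touching the error-handling code, which is the advantage mentioned above.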
