2 Essbase Servers On Same UNIX Server

Hi. I installed 2 Essbase servers on the same UNIX server. Both Essbase servers work, but pointing Planning and Business Rules at the 2nd Essbase server is where it gets difficult. In Essbase Administration Services I managed to add the 2nd Essbase server by indicating the port it uses, but with Planning and Business Rules a "server_name:port" style entry doesn't work; they don't understand such entries. I also tried another way, with a NODENAME in ESSBASE.cfg, but with no result either. So, can anyone suggest something, or has anyone managed this case? Best regards. Rzedinho, New HDN Member.

The only way to connect to the second server is to use client software that supports specifying a port at login. We are doing this, but you have to have the latest versions of Reports, Planning, etc. If you have those versions, then all you have to do to connect is give it a string like EssbaseServer2:2784.
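For the tools that do accept a port, the connection can be verified from the command line first. A minimal MaxL sketch (the host name, port, and credentials are hypothetical), assuming the Essbase client (essmsh) is installed:

```shell
# Hypothetical host and port for the 2nd Essbase agent; essmsh accepts
# host:port at login, just like the newer client tools.
ESS_HOST="EssbaseServer2"
ESS_PORT=2784
LOGIN_TARGET="${ESS_HOST}:${ESS_PORT}"
# Run only where the Essbase client is installed:
command -v essmsh >/dev/null 2>&1 && essmsh <<EOF
login admin password on ${LOGIN_TARGET};
display application all;
logout;
EOF
echo "would log in to ${LOGIN_TARGET}"
```

If this login succeeds but Planning still refuses the same string, the problem is in the Planning data-source configuration rather than in the Essbase agent itself.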

Similar Messages

  • Running 8i and 10g on same UNIX server

    Greetings,
    I am a developer playing DBA, so please forgive me if these are elementary questions. I have no DBA support.
    I have a Solaris 8 server currently running operational software using 8i. My requirement is to load and test 64bit 10g without interfering with the operational system, then test the s/w. Later, I need to switch from 8i to 10g with little or (preferably) no downtime. It would be nice run the two in parallel before the switchover.
    I have a duplicate setup in the development lab, which is what I am working on right now.
    I was able to get 10g loaded without any apparent problems. However, when I try to run 8i and 10g together, the listeners conflict.
    After reading through various forums this morning, I have a few questions that I am hoping someone can answer.
    1. Oracle 8i is owned by user oracle. I loaded 10g as user ora10g. Should I have loaded 10g as user oracle? If so, how do I keep it from overwriting critical 8i files that are in /usr/local/bin, etc?
    2. Oracle 8i’s ORACLE_HOME is /usr/opt/oracle. 10g’s is /export/ora10g. Does 10g need to be in /usr/opt/ora10g? If so, can I set up a link to simulate this, or do I need to reload?
    Thanks in advance,
    Linda

    Won't say much about the platform (Solaris) aspect here, but anyway...
    1. You could, and sometimes should, have a different Oracle software owner user for installing/managing each installation; either is OK. You also might want a different group to own the oraInventory (software catalog).
    You can always enter something other than the default when prompted by root.sh; why not, e.g., /home/ora10g/bin?
    2. I don't think it needs to be installed anywhere in particular, but /export is a bit unusual, isn't it? It is normally used for NFS exports and the like.
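    On the listener conflict mentioned in the question: one common approach is to give each ORACLE_HOME its own listener, each on its own port. A sketch with hypothetical paths and names (DEMO_HOME stands in for the real 10g ORACLE_HOME; the 8i listener keeps the default port 1521):

```shell
# Hypothetical sketch: a separately named 10g listener on port 1522 so it
# does not clash with the 8i listener on 1521. DEMO_HOME is a local
# stand-in for the real 10g ORACLE_HOME.
DEMO_HOME=./ora10g_home
mkdir -p "$DEMO_HOME/network/admin"
cat > "$DEMO_HOME/network/admin/listener.ora" <<'EOF'
LISTENER10G =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1522)))
EOF
# On the real server, as the ora10g user:
#   $ORACLE_HOME/bin/lsnrctl start LISTENER10G
```

    Each database then registers with its own listener, and clients pick the instance via the port in their connect descriptor.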

  • Oracle 11g on Unix server needs to write files (.csv) on Windows server

    Hi,
    Currently we are using Oracle 10g, which is installed on a Unix server; on the same server there is a directory to which some files are exported/downloaded by the db.
    We have a DEDICATED DB INSTANCE on a SHARED server, not a DEDICATED SERVER.
    Now we need to migrate from Oracle 10g to 11g, but due to a compliance issue we have been asked to create those directories on another server. We have identified a Windows server and can create the directories there.
    Could any expert please suggest how the db (on the Unix server) can export/write files to another Windows server?
    I read in a thread that the server (where the files should be exported) should be MOUNTED on the server where the db is physically installed.
    Please help me here.
    Edited by: 950010 on Jul 31, 2012 7:00 AM

    950010 wrote:
    As I wrote in my question, due to a compliance issue we have been asked to create the directory (currently on the same Unix server on which our db is physically installed) on another server (no matter whether Unix or Windows).
    And if that remote server is not available? Or if network connectivity to the remote server fails? Or if there is severe network congestion between the Oracle server and the remote server? What then?
    How is the process on the local server suppose to deal with errors when it attempts to create a CSV file on the remote server? Or deal with network bottlenecks that results in severe performance degradation when trying to create a CSV file? Or if there lacks freespace for creating the CSV file?
    What about security? How is the local Oracle server to authenticate itself with the remote server? How is the remote server to protect that directory share against unauthorised access?
    How is this remote server going to provide access to authorised s/w to these CSV files?
    Who (local or remote processes) is going to manage this directory share and ensure old CSV files are deleted and that there is sufficient freespace for new CSV files?
    There are a LOT of questions that need to be asked... BEFORE deciding on HOW technically to do it. As the technical decision will be based on the functional requirements and how to best meet these.
    Technically - there is Samba, NFS, FTP, SFTP, SCP, RDIST and a number of other methods that can be used. But without asking the above questions and getting proper business answers, selecting a specific technical method is very much premature.
    You are asking the wrong questions, and in the wrong forum. You need to determine the business requirements first.
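    If, once those business questions are answered, a mounted Windows share turns out to be the chosen route, the Samba/CIFS option might look like the sketch below. The host, share, mount point and account names are all hypothetical placeholders:

```shell
# Hypothetical CIFS mount so the database directory object can point at a
# Windows share. The mount itself must run as root, so it is only echoed.
WIN_HOST="winserver"
WIN_SHARE="csv_out"
MOUNT_POINT="/mnt/winshare"
MOUNT_CMD="mount -t cifs //${WIN_HOST}/${WIN_SHARE} ${MOUNT_POINT} -o username=svc_oracle,uid=oracle,gid=oinstall"
echo "$MOUNT_CMD"
# After mounting, in the database:
#   CREATE DIRECTORY csv_dir AS '/mnt/winshare';
```

    Note that this sketch does nothing about the error-handling, freespace, and security questions above; those still need answers regardless of the transport chosen.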

  • Configuring DB2 Connect for Business Objects XI on Unix server

    Hello,
    We have installed DB2 Connect on the same unix server where BO is running.
    We are trying to connect BO to DB2 on the Mainframe.
    We are able to connect from the db2 prompt on the Unix to the DB2 on Mainframe.
    We created the instance, and the LD_LIBRARY_PATH, CLASSPATH, PATH and DB2INSTANCE environment variables are also pointing to the right locations.
    Do I need to perform any additional setup for BO to connect to DB2?
    Does anyone know?
    Thanks in advance.
    -Shaista

    I recommend that you check out the IBM Pattern book from the support site.  You can do a search at support.businessobjects.com for 'Setup DB2 XIR2' if the link below doesn't work.
    http://technicalsupport.businessobjects.com/KanisaSupportSite/search.do?cmd=displayKC&docType=kc&externalId=patternbookIBM03192008pdf&sliceId=&dialogID=20064845&stateId=1%200%2020066363
    As part of the setup, the instructions recommend sourcing a db2profile from the boe user's .profile.  It also has some information on testing the connectivity to the database, and minimum patch requirements.
    HTH.
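    The sourcing step the pattern book recommends can be sketched as below; the instance path is a hypothetical placeholder, and the guard keeps a missing file from breaking the login shell:

```shell
# Source a DB2 profile only if it exists; intended for the boe user's
# ~/.profile. The real path depends on your DB2 instance owner.
source_db2_profile() {
    [ -f "$1" ] && . "$1"
}
# In ~/.profile you would call, e.g.:
#   source_db2_profile /home/db2inst1/sqllib/db2profile
# Local demonstration with a stand-in profile file:
echo 'DB2INSTANCE=db2inst1; export DB2INSTANCE' > ./fake_db2profile
source_db2_profile ./fake_db2profile
echo "DB2INSTANCE=$DB2INSTANCE"
```

    Once the profile is sourced in the boe user's environment, restart the BO services from that same shell so they inherit the DB2 variables.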

  • Creating a new Essbase Cluster on the same Solaris server

    Hi All,
    I have two servers:
    Server1: Foundation services, APS, EAS
    Server2: Essbase Server, Essbase Studio Server on epminstance_1
    Due to business requirements I need to "rename" the Essbase cluster from "EssbaseCluster-1" to something else. I know this is not possible directly; from the document below I understand that I need to create a new instance on Server2 and configure Essbase and Essbase Studio on it.
    "How to Rename Essbase Cluster (Doc ID 1434439.1)"
    Goal: In EPM System Release 11.1.2.x, is it possible to rename the Essbase instance and cluster names after they are configured?
    Solution: No, it is not possible to rename the Essbase cluster or instance names after the initial configuration. If you need to change the instance and cluster names, create new instance and cluster. Export the applications from the old cluster and import them into the new cluster.
    My doubt lies with configuring the 2nd Essbase server, as I am not clear how a single environment will behave with two standalone Essbase instances on the same physical Solaris server, each belonging to its own cluster. I know that they are independent clusters, and that the concepts of active/passive and active/active clustering apply to Essbase instances that are part of the same cluster.
    I plan to create a new epminstance_2 instance on Server2 and configure the 2nd Essbase server as follows: give it a *new* cluster name and not assign it to the existing Essbase cluster and deploy it in standalone mode.
    1. I plan to use the 1st instance only as a backup option. If for some reason the new instance were to fail, I would start up the services on the older Essbase instance. Is this possible without any additional configuration or changes to OPMN?
    2. Alternatively, say we want to remove the older instance. Kindly suggest ways in which I can safely "remove" the older cluster (other than uninstalling). Also, when users log in using Smart View they see both the older and the new cluster. Is there any way I can get rid of the older cluster without having to uninstall everything on Server2 and start fresh?
    Thanks!

    Thanks John!
    I have a doubt which I hope you can shed some light on.
    I created a new instance and configured Essbase Server and Studio Server on it. So now I have two EPM instances on the same physical server, both having Essbase Server and Essbase Studio Server, with the two Essbase servers belonging to different Essbase clusters. In the EPM docs I do not find any mention of whether we can have multiple instances of Essbase Studio on the same server. Kindly correct me if I am wrong.
    In the deployment report I see two identical entries for the Essbase Studio server of the older EPM instance, but I do not see the Essbase Studio of the 2nd instance.
    epmsystem_1 (Server2)
                    Essbase Studio Server - 9080
    epmsystem_1 (Server2)
                    Essbase Studio Server - 9080
    After I started up the 2nd Essbase Studio server, I tried connecting to it from Essbase Studio Console and got the error "Read permission denied to object folder:\'system'."
    Similarly, when I run start_BPMS_bpms1_CommandLineClient.sh and issue the reinit command, I get the same error.
    So I stopped the 2nd Essbase Studio server on epmsystem_2 and started up the first Essbase Studio server on epmsystem_1. I was able to connect to it fine from Essbase Studio Console, and reinit worked too. I ran a drill-through report, which worked fine.
    -> So is it that we can have only one instance of Essbase Studio on a single server?
    Also, say I want to use the Essbase Studio on the 2nd instance: could I re-configure just Essbase Server on epmsystem_1, and re-configure Essbase Server and Essbase Studio Server on epmsystem_2?
    Thanks,
    Kent

  • Installing all MDM servers on same physical host/server

    Dear All
    For the production MDM environment we have installed all MDM servers (MDIS, MDSS, MDS and MDLS) on the same physical server.
    Will this have any impact on performance? We are using MDM 7.1 on HP-UX.
    Another question: if we have multiple repositories vs. a single repository with multiple main tables (Vendor, Customer and Materials), why should that choice have performance impacts at the server level?
    Thanks, Ravi

    Ravi,
    I don't mean to muddy the waters here, but performance tuning and system sizing is a slightly more complex exercise than simply determining how many records your repository will contain and how many fields it will have.  There are many, many considerations that need to be taken into account.
    For example, the MDM server's performance can be dramatically improved or worsened depending on things like sort indices, accelerator files, data types and fill rates, validation logic, remote key generation, number of users, number of Java connections, web service connections, etc.  The list goes on and on, and that's not even taking into consideration the hardware (multi-processor, RAM, the physical disk configuration, etc.)
    With regards to the Import Server and Syndication Server you have to take into account things like map complexity: are you doing free-form searching to filter records in maps?  Are your maps designed for main tables, qualified lookup tables, or reference data?  How often do imports / syndications occur?  What keys are you matching on when importing?  Do you plan on importing by chunks?  What is the import volume, etc?
    Once again, I don't want to scare you, but I also wanted to bring up a few topics for you to think about.  There is a reason why SAP and other vendors charge a fee for doing system sizing.
    These are just a few small examples, but the list goes on and on.  I hope this helps to get you thinking in the right direction when designing your architecture and system landscape.  Good luck!
    Edited by: Harrison Holland on Dec 10, 2010 2:34 AM

  • Server 2012 - Problem with multiple instances of RDS looking to the same license server. Server Pools requiring the wrong servers.

    Hi All,
    I have two domains, prod.local and test.local. On prod.local, I have a license server living on the connection broker. I have created a two-way trust between the domains so that the RDS instance on test.local can use the license server on prod.local.
    When I first set up the RDS instance on test.local and point it to use the license server on prod.local, everything appears to work; all servers recognize the license server.
    However, when I log off and back on and click the RDS dashboard in test.local, the Remote Desktop Services overview states I need to add every server from prod.local to the server pool. This is bothersome because Remote Desktop Services on test.local is still functioning, but I am no longer able to manage it. When I add the servers from prod.local to the server pool I am shown the setup for prod.local instead of test.local.
    Is there a way around this? Can I use the same RDS license server key multiple times?
    tl;dr: Using a different domain's license server causes my RDS instance in Server Manager to display the wrong RDS farm.

    Hi Nathan,
    Thank you for posting in Windows Server forum.
    Please check that the required ports are open on the firewall.
    - To issue RDS Per User CALs to users in other domains, there must be a two-way trust between the domains, and the license server must be a member of the Terminal Server License Servers group in those domains.
    - To restrict the issuance of RDS CALs, you can add RDS Host Servers into Terminal Server Computers group on RDS Licensing servers.
    - Configure RDS licensing server on all RDS Host Servers in each domain/forest. You can do it through RDS host configuration snap-in or through a group policy. 
    - Add administrators group of each domain/forest in the local administrators of RDS licensing server. This way, you’ll not get a prompt to enter your credentials when you’ll open RDS host configuration snap-ins in trusted domains/forests.
    More information:
    Best practices for setting up Remote Desktop Licensing (Terminal Server Licensing) across Active Directory Domains/Forests or Workgroup
    http://support.microsoft.com/kb/2473823
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • I have multiple accounts using the same SMTP server; I can only reply with one, as the settings won't allow me two servers

    I have tried to set up email replies on my iPhone 5 for two accounts, both of which use the same SMTP server. I can't change the settings on the other account, as it will only allow one mail account. Can I add multiple accounts with the same SMTP server?

    I am not using Gmail; I use a separate company for my mail, and both accounts use the same SMTP server. The problem is that when I set up my first account it uses those details and passwords, and when I create the second account it says the SMTP server is already in use by the primary account. I therefore can't set the password.

  • Restore BizTalk Servers on the same SQL Server

    Hi
    If I need to recover the BizTalk databases to a point in time before a hardware failure, on the same SQL Server, how do I achieve this? I have seen that the only recommended method is to use log shipping, but is that for restoring databases onto another SQL Server? If I just want to restore all databases onto the same server, how would I achieve this?
    Thanks in advance

    The only supported backup/restore scenario for BizTalk Server is using the SQL Agent job:
    Backup BizTalk Server (BizTalkMgmtDb)
    If you had this job running before the hardware failure, it will have backed up your databases, both Full and Log (differential).
    Use these backup files for restore, just as you normally would when restoring SQL Server databases.
    Morten la Cour

  • Essbase migration procedure from Unix to NT

    Does anyone have a procedure to migrate Essbase from Unix to Windows NT? I know there is a procedure in the installation guide, Ch. 10, but I think it is just for NT to NT.

    We did this last year:
    1. Export the data from the databases to text files; use level 0 if you only load at level 0.
    2. After installing Essbase on NT, add apps and dbs with the same names on NT.
    3. Connect to the Unix and NT servers in Application Manager and copy the outlines/dbs from Unix to NT.
    4. Load the data from your exports.
    5. Copy all the object files (e.g. .csc, .rpt, etc.) from the Unix server.
    6. Reset all server, app and db settings, including the cfg file, and recalc.
    We also recreated security IDs, groups and filters.
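    The export/import part of this can be sketched as a MaxL script; the app/db names, hosts and credentials below are hypothetical. The script is written to a local file here; on a machine with the Essbase client you would run it with essmsh migrate_unix_to_nt.msh:

```shell
# Hypothetical MaxL migration script: export level-0 data on the Unix
# server, then load and recalc on the NT server. App/db names, hosts and
# credentials are placeholders.
cat > ./migrate_unix_to_nt.msh <<'EOF'
login admin password on unixhost;
export database Sample.Basic level0 data to data_file 'sample_lvl0.txt';
logout;
login admin password on nthost;
import database Sample.Basic data from data_file 'sample_lvl0.txt'
    on error abort;
execute calculation default on Sample.Basic;
logout;
EOF
```

    The outline copy, object files and security still have to be handled separately, as in the steps above.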

  • Is it possible to share an ODBC.INI across Essbase servers?

    h5. Summary
    Our planned production environment may have as many as 15 Essbase servers so it would be useful if the odbc.ini file could be maintained centrally. Is this possible?
    h5. Problem
    The servers are implemented as a high-availability cluster. The infrastructure team tell me that a side effect of this is that the Essbase install builds the server node name into the path for EPM_ORACLE_HOME. Because this path has to be hard-coded in the odbc.ini Driver specification (see below), this apparently forces you to maintain a separate odbc.ini for each Essbase server.
    Driver=/htesb1/u03/Oracle/Middleware/EPMSystem11R1/common/ODBC-64/Merant/6.0/lib/ARora24.so
    h5. Requirement
    As part of the landscape we will have an OCFS shared file system mounted across all the Essbase servers. What I'd like to be able to do is place a single odbc.ini file on this file system and for that file to be used by all the Essbase nodes. I've tried using an environment variable in the Driver path specification. I've also tried using a symbolic link.
    Does anybody know of a way to avoid the hard-coded paths in the odbc.ini?
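    One hedged workaround, given that environment variables and symlinks in the Driver path didn't work: keep a single template on the shared OCFS mount and render the node-specific path at deploy or startup time. The paths and the @NODE@ token below are hypothetical, and local directories stand in for the shared and per-node locations:

```shell
# Render a node-specific odbc.ini from a shared template by substituting
# the node name into the hard-coded Driver path. Local dirs stand in for
# the OCFS mount and the per-node config directory.
mkdir -p ./sharedfs ./local_etc
cat > ./sharedfs/odbc.ini.template <<'EOF'
[OracleDSN]
Driver=/@NODE@/u03/Oracle/Middleware/EPMSystem11R1/common/ODBC-64/Merant/6.0/lib/ARora24.so
EOF
NODE=$(hostname)
sed "s|@NODE@|${NODE}|g" ./sharedfs/odbc.ini.template > ./local_etc/odbc.ini
grep Driver ./local_etc/odbc.ini
```

    The template stays the single maintained artefact; only the rendered copy differs per node.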
    h5. Alternative
    Another possibility is to use OCI instead, though I saw in another thread that this doesn't work with parallel loads. Assuming that problem can be worked around, the connection information would be consistent regardless of the Essbase server, and the odbc.ini maintenance problem would go away. The catch is that the connection information is then embedded in the rules files, meaning we would have to update them whenever they are moved between environments (e.g. dev to test) and whenever an environment's connection details change. I tried using substitution variables, but these only apply to ODBC connection names. Does anyone know if there is some way to parameterise the OCI connection information?
    h5. Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit
    Edited by: blackadr on 20-Jun-2012 16:45

    I had the same question some time ago. Perhaps you can modify the suggestion given to me to meet your needs? Here's the thread:
    http://discussions.apple.com/thread.jspa?threadID=342589&tstart=0

  • Too Many Files Open (Errno: 24) - HP Unix server

    Hi All,
    We have many processes (nearly 30) running on an HP-UX server. Of these 30 processes, 10 run every 15 minutes and transfer files between an NT server and the Unix server.
    These 10 processes are all the same process; the only difference is that each one transfers files from/to a different NT server. During the file transfers we open and close the streams, but we do not close the file objects. Instead of closing them, we assign null to the file objects after use; I have read in many places that assigning null is recommended so that the objects are removed by the garbage collector.
    Coming to the issue: after our processes run for a day or two, we get the error "java.io.FileNotFoundException: Too many open files (errno:24)" and the corresponding process goes down.
    It seems the limit on open file objects on the Unix server needs to be increased. But since we are closing all the streams, I don't think increasing the setting on Unix will solve the problem.
    Could any of you throw some light on this and let me know if you have come across this type of issue before?
    One more doubt I have: I read that the file object limit set on Unix is the sum of open file objects and open sockets. Can anyone tell me what else is counted as part of this file object count? For example, are connections to the DB from Unix also counted?
    Thanks in advance and looking for your reply....
    Kamal

    Under Unix, sockets, named and unnamed pipes etc. all use file descriptors, the same resource as open files.
    Unix systems usually provide the /proc pseudo file system.
    ls -l /proc/[process id]/fd
    might yield something interesting.
    total 31
    lr-x------ 1 oracle oinstall 64 Jan 3 12:01 0 -> /dev/null
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 1 -> /opt/oracle/product/AS/10g/R2/opmn/logs/OC4J~ebank~default_island~1
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 10 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/application-deployments/dev/ebank_default_island_1/application.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 11 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/application-deployments/dev-sme/ebank_default_island_1/application.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 12 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/application-deployments/dev-sme-fun/ebank_default_island_1/application.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 13 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/application-deployments/smea-dev/ebank_default_island_1/application.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 14 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/application-deployments/dev-noncash/ebank_default_island_1/application.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 15 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/application-deployments/fun-noncash/ebank_default_island_1/application.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 16 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/application-deployments/act-trambulin/ebank_default_island_1/application.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 17 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/application-deployments/ebank-dev/ebank_default_island_1/application.log (deleted)
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 18 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/log/ebank_default_island_1/default-web-access.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 19 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/log/ebank_default_island_1/jms.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 2 -> /opt/oracle/product/AS/10g/R2/opmn/logs/OC4J~ebank~default_island~1
    lrwx------ 1 oracle oinstall 64 Jan 3 12:01 20 -> socket:[2833421]
    lrwx------ 1 oracle oinstall 64 Jan 3 12:01 21 -> socket:[2833419]
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 22 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/log/ebank_default_island_1/rmi.log
    lrwx------ 1 oracle oinstall 64 Jan 3 12:01 23 -> socket:[2833423]
    lrwx------ 1 oracle oinstall 64 Jan 3 12:01 24 -> socket:[2833424]
    lrwx------ 1 oracle oinstall 64 Jan 3 12:01 25 -> socket:[2833425]
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 26 -> /opt/oracle/product/AS/10g/R2/j2ee/home/velocity.log
    lrwx------ 1 oracle oinstall 64 Jan 3 12:01 27 -> socket:[5553789]
    lrwx------ 1 oracle oinstall 64 Jan 3 12:01 28 -> socket:[5786563]
    lr-x------ 1 oracle oinstall 64 Jan 3 12:01 3 -> pipe:[2833350]
    lr-x------ 1 oracle oinstall 64 Jan 3 12:01 32 -> /proc/24214/stat
    lr-x------ 1 oracle oinstall 64 Jan 3 12:01 34 -> /dev/random
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 4 -> pipe:[2833350]
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 5 -> /opt/oracle/product/AS/10g/R2/j2ee/home/velocity.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 6 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/log/ebank_default_island_1/server.log
    lr-x------ 1 oracle oinstall 64 Jan 3 12:01 7 -> /proc/uptime
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 8 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/log/ebank_default_island_1/global-application.log
    l-wx------ 1 oracle oinstall 64 Jan 3 12:01 9 -> /opt/oracle/product/AS/10g/R2/j2ee/ebank/application-deployments/PaymentGatewayTestDevel/ebank_default_island_1/application.log
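    To see how close a process is to its limit, the descriptor count can be compared against ulimit. A small sketch (Linux-style /proc; on HP-UX you would reach for lsof or glance instead), using the current shell's PID as a stand-in for the leaking Java process:

```shell
# Count open descriptors for a process and show the per-process limit.
# Substitute the PID of the suspect Java process for $$ on a real system.
PID=$$
OPEN_FDS=$(ls /proc/$PID/fd 2>/dev/null | wc -l)
FD_LIMIT=$(ulimit -n)
echo "process $PID: $OPEN_FDS open descriptors (limit $FD_LIMIT)"
```

    Watching this count grow between transfer cycles confirms a descriptor leak regardless of what the code appears to close.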

  • Is there any statement to transfer the data from unix server to another

    Hi All,
    Is there any statement or function module available in SAP to transfer data from one Unix server to another Unix server, apart from FTP (File Transfer Protocol)?
    My requirement: I need to fetch data from one Unix server into another SAP server. I have the option of FTP, but I need to transfer the Unix data into an internal table on the other server.
    I need to move the Unix data into another SAP server's internal table.
    Regards
    Raja

    Not sure what your exact requirement is, but:
    If both servers are in the same system group, you could potentially just mount the Unix directory from the source machine on the target with an NFS mount or something similar.
    If not, you could consider a remote function call: create a remote-enabled function module on the source machine to read the data into an internal table, and call that function module from the target machine. This requires creating entries in the RFCDES table (I think via SM59?). On the target machine, you would call function y_whatever destination xyz, where xyz is the RFC destination you set up. It works great.
    Dave
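    The NFS option above, sketched with hypothetical host, export and mount-point names (the mount itself must run as root on the target host, so it is only echoed here):

```shell
# Hypothetical NFS sketch: after exporting the directory on the source
# host (e.g. via /etc/exports), mount it on the SAP target host.
SRC_HOST="unixsrc"
SRC_EXPORT="/data/outbound"
MOUNT_POINT="/mnt/unixsrc_data"
echo "mount ${SRC_HOST}:${SRC_EXPORT} ${MOUNT_POINT}"   # run as root
# An ABAP program on the target can then OPEN DATASET on files under
# the mount point and read them into an internal table.
```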

  • Ways to Configure Which UNIX Server a PC Client Application Communicates With

    We have several different MS VC++ "fat client" applications that we want to run on the same NT 4.0 PC. Each application uses the Tuxedo 7.1 client to communicate with Tuxedo services located on a UNIX server. Each application needs to communicate with a different UNIX server (e.g., application A1 needs Tuxedo service T1 located on UNIX server S1, and application A2 needs Tuxedo service T2 located on UNIX server S2). We'd like to load the Tuxedo 7.1 client software in such a way that each individual application controls which server it uses. One way to do that is through registry entries specific to each application.
    We are looking for documentation or tips on other/better ways to configure which server the PC application communicates with. We are also looking for documentation or tips on how best to configure an application that needs to subscribe to services from several different servers (e.g., application A needs Tuxedo service T1 on server S1 and Tuxedo service T2 on server S2). Thanks.

    Matt,
    This sounds quite unusual, and I am not sure why you want to do things this way. Generally, I would expect that the services would be distributed on the server side over different boxes as you describe, but the location would be transparent to a client app, which would tpinit once, and Tuxedo would route the requests appropriately. Maybe that's not how you want to do things because the apps are all logically independent? I'm not sure about that though, since you describe needing services on different servers in individual clients... Can you do the integration at the back end?
    To do what you describe, however, you need to control the value of the WSNADDR environment variable before you call tpinit(); it is the network address in this variable that tells the client libraries which server to connect to. Simply set the value (from a command-line parameter, the registry, an INI file or wherever) with the tuxputenv() API before you call tpinit().
    In Tuxedo 7.1 and higher, it is also possible to connect to multiple different servers simultaneously by calling tpinit multiple times and having multiple contexts in the client.
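    The WSNADDR approach can be sketched as below; the hosts, ports and client binary names are hypothetical. Each client process simply sees a different WSNADDR in its environment before tpinit() runs:

```shell
# Per-application server selection via WSNADDR. Hosts, ports and the
# client binaries are hypothetical placeholders.
A1_ADDR="//unixserver1:8001"   # S1, serving T1
A2_ADDR="//unixserver2:8001"   # S2, serving T2
# Launch each client with its own address (the client library reads
# WSNADDR at tpinit time):
#   WSNADDR="$A1_ADDR" ./app_A1
#   WSNADDR="$A2_ADDR" ./app_A2
echo "$A1_ADDR $A2_ADDR"
```

    Setting the variable per process, rather than machine-wide, is what lets the applications coexist on one PC.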
    I hope that helps.
    Regards,
    Peter.

  • Preview Data in EAS across different essbase Servers

    Hello All,
    We are on 11.1.2.3 of EPM. In our Prod EAS I signed in using my AD user ID and added the two Essbase servers (one prod, one dev) to the list of Essbase servers. I was able to add both successfully. Now when I preview data on a cube in dev, it gives me the error "Cannot connect to olap service. Cannot connect to Essbase Server. Error: Essbase Error(1051293): Login fails due to invalid login credentials". This cannot be true, because if the credentials were wrong I would not be able to expand the Essbase servers list and drill down to the database in order to preview it. Also, I am able to see the outlines and calc scripts just fine, and I am able to preview data on the prod cube just fine.
    The same happens in reverse when I try to preview data on a prod Essbase cube from Dev EAS. In other words, I am able to preview data only on dev Essbase server cubes from dev EAS; when I try to preview data on a prod cube from Dev EAS, I get the same invalid login credentials error.
    Could someone please help me understand if this is a bug or if this is how it is intended to work? We just upgraded from 11.1.1.4, where we were able to preview data across any Essbase server no matter which EAS service we used.
    No SSL or single sign-on is used, just Active Directory credentials. It happens only with an AD ID and not with a native directory ID. Please suggest how this can be fixed.
    Thanks,
    Ted.

    I logged this with Oracle and the following is their response copy/pasted
    The issue which you have reported in this SR is desired behavior. This behavior is seen only in the latest versions, 11.1.2.x.
    In earlier versions you could preview the data of any Essbase server added to any EAS console, but from 11.1.2.x this behavior has changed.
    We checked with the development team on this earlier, and according to them it is behavior by design: as the dev environment and prod environment have different registries, preview data works only if the dev Essbase server is added to the dev EAS. The same holds for prod and test environments.
    As the same issue has been reported before, we in support raised a bug with the development team after reproducing it, but the development team did not accept it as a bug since the behavior is by design. We later changed the bug to an enhancement request asking the development team to change this behavior:
    Bug 14622693 - PREVIEW DATA ON REMOTE ESSBASE HOST FAIL WHEN CONNECTED FROM EAS
    If this enhancement request is approved by the development team, it will be incorporated in a future release.
    Thanks,
    Ted.
