[XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

Business Objects XI (3.1) - SP3.
Running on Red Hat Enterprise Linux OS.
7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
All reports connecting to Oracle 10g databases.
==================
In the past, all of this infrastructure ran on Windows Server and provided database access via a named ODBC connection (e.g., "APP_DATA").
This made it easy to manage, as all the Report Developers had a standard System DSN called "APP_DATA", and the System DSN name was the same on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
When we wanted to move/promote a *.rpt file from DEV to PROD, we did not have to change any "Database Connection" info, as it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
Now that hardware is moving from Windows to Red Hat Linux, and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts:
1.) Oracle Native connection
2.) ODBC connection
3.) JDBC connection
Here's what we have determined so far:
1a.) An Oracle Native connection should be the most efficient method of passing the SQL query to the DB, with the fewest issues and the best speed. [PRO]
1b.) An Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
1c.) Using Oracle Native would require special handling of the *.rpts, at either the source-file or the CMC level, to change them from the DEV -> TEST -> PROD connection. This would result in a lot more Developer/Admin overhead than we are currently used to. [CON]
2a.) A 3rd-party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to keep a Developer/Admin overhead similar to what we are used to. [PRO]
2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with the results or speed of our queries. [CON]
3a.) JDBC appears to be the "de facto standard" when running Oracle SQL queries from Linux. [PRO]
3b.) There may be issues with the results or speed of our queries when using JDBC. [CON]
3c.) Using JDBC requires the explicit address of the Oracle server to be defined for each connection (see the sketch after this list). This would require special handling of the *.rpts at the source-file level (and NOT the CMC level) to change them from the DEV -> TEST -> PROD connection. This would result in a lot more Developer/Admin overhead than we are currently used to. [CON]
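To make 3c concrete, here is a minimal sketch of a thin-JDBC connection from Java; the host, port, SID, and credentials are hypothetical, and the point is only that the physical server is baked into the URL itself rather than resolved from a shared name the way a System DSN is:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcProbe {
    public static void main(String[] args) throws Exception {
        // The thin-driver URL embeds the physical server (host:port:SID),
        // so DEV, TEST, and PROD each need a different URL in the report.
        // Requires the Oracle JDBC driver (ojdbc) on the classpath.
        String url = "jdbc:oracle:thin:@devdb01.example.com:1521:APPDATA";
        try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT sysdate FROM dual")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));  // sanity check: DB time
            }
        }
    }
}
```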
==================
We would appreciate some advice from anyone who has been down this road before.
What were your Best Practices?
What can you add to the Pros and Cons listed above?
How do we find the "sweet spot" between quality/performance/speed of the reports and low overhead for the Admins and Developers?
As always, thanks in advance for your comments.

Hi,
I just saw this thread and I would like to add some information.
First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server behind a specific name that stays the same across all environments.
The database name will then be resolved differently depending on the environment and will therefore reach a different database.
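As an illustration, a hedged sketch of what such a tnsnames.ora entry might look like (host and service names are hypothetical); each environment keeps its own copy of the file, but the alias the reports reference never changes:

```
# tnsnames.ora on the DEV server. The alias "APP_DATA" is identical on
# TEST and PROD; only HOST (and possibly SERVICE_NAME) differs per box.
APP_DATA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = devdb01.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = appdata)
    )
  )
```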
A second option is to change the connection in the .rpt files in an automated way, for example with the Schedule Manager. This tool is an additional web application to deploy that can change the connection settings of thousands of .rpt reports in a few clicks. You can find it here:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
The last option is to do it with a small SDK script; for this purpose, a few lines of code can change all the reports in one pass (see the sketch below).
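As a rough illustration of such a script, here is a sketch using the BusinessObjects XI 3.1 Java SDK. Treat it as a starting point rather than a finished tool: the CMS name, credentials, query limit, and the new server/database values are all placeholders, and the IReport/IReportLogon calls should be verified against your exact SDK version before running it against 7,000 reports:

```java
import com.crystaldecisions.sdk.framework.CrystalEnterprise;
import com.crystaldecisions.sdk.framework.IEnterpriseSession;
import com.crystaldecisions.sdk.occa.infostore.IInfoObjects;
import com.crystaldecisions.sdk.occa.infostore.IInfoStore;
import com.crystaldecisions.sdk.plugin.desktop.report.IReport;
import com.crystaldecisions.sdk.plugin.desktop.report.IReportLogon;

public class RepointReports {
    public static void main(String[] args) throws Exception {
        // Placeholders: CMS host:port, credentials, and the new DB target.
        IEnterpriseSession session = CrystalEnterprise.getSessionMgr()
                .logon("Administrator", "password", "cmsHost:6400", "secEnterprise");
        IInfoStore store = (IInfoStore) session.getService("InfoStore");

        // Fetch the Crystal Reports objects from the repository.
        IInfoObjects reports = store.query(
                "SELECT TOP 10000 * FROM CI_INFOOBJECTS WHERE SI_KIND='CrystalReport'");

        for (Object o : reports) {
            IReport report = (IReport) o;
            // A report can hold several logons, one per data source.
            for (Object l : report.getReportLogons()) {
                IReportLogon logon = (IReportLogon) l;
                logon.setServerName("APP_DATA");   // new server / TNS alias
                logon.setDatabaseName("APPDATA");  // new database name
            }
        }
        store.commit(reports);  // write the changes back to the CMS
        session.logoff();
    }
}
```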
After several implementations on Linux against Oracle databases, I would also prefer the native connection. In my experience, ODBC and JDBC are second-choice ways to connect to the database; you can use the DataDirect connectors, which are quite good, but at higher volumes you will see the difference.

Similar Messages

  • Want the best practices docs on Oracle database Admin provided by Oracle

    Hi there,
    I have looked everywhere and cannot find a best-practices document on Oracle database administration, especially on creating databases in Oracle 10g. I can find bits and pieces here and there, but nothing that brings it all together in one place. Could somebody point me to, or provide, such a document?

    OK, I'm not looking for the Oracle-provided manuals to find out the best practices in the DB field. I'm looking for the best practices when creating a database in Oracle, for example. I can read all the manuals on Oracle's Tahiti documentation site, but I don't know the most common and practical things to do when creating a database. If I had to define the best practices for creating a database, here is my checklist.
    Here are the best practices when creating Oracle 10g DBs:
    1. Create a meaningful database name
    2. Create the directory structure following the Optimal Flexible Architecture (OFA)
    a. Give redo logs the suffix .LOG
    b. Give data files the suffix .DBF
    c. Give control files the suffix .CTL
    3. Enable password complexity
    4. Enable ARCHIVELOG mode
    5. Use Oracle Managed Files
    6. Create separate tablespaces for data files and indexes
    7. Put archive logs on multiple different drives
    8. Multiplex the redo log files and control files
    etc.
    I want to see what other DB gurus consider the best practices in the field. (A short example for items 4 and 8 follows below.)
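    To make items 4 and 8 concrete, here is a minimal SQL*Plus sketch; the control-file paths are hypothetical and assume an OFA-style layout:

    ```
    -- Item 4: switch to ARCHIVELOG mode (run as SYSDBA; needs a clean mount).
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;

    -- Item 8: multiplex the control file across two drives
    -- (takes effect at the next restart).
    ALTER SYSTEM SET control_files =
      '/u01/oradata/PRODDB/control01.ctl',
      '/u02/oradata/PRODDB/control02.ctl'
      SCOPE = SPFILE;
    ```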

  • Quick question regarding best practice and dedicating NICs for traffic separation

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc. I get that it's best practice to try to separate traffic where you can, especially for things like FT; however, I just wondered if there is a preferred method of achieving this. What I mean is:
    - Is it OK to have everything on one switch but give each respective port group a primary and a failover NIC (i.e., FT, iSCSI, and all the others each fail over)? This would sort of give you a backup in situations where you have limited physical NICs.
    - Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal; however, I wondered if there are any golden 100% rules - for example, must FT absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously, I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • What are the best practices to migrate VPN users for inter-forest migration?

    What are the best practices to migrate VPN users for an inter-forest migration?

    It depends on various factors. There is no "generic" solution or best-practice recommendation. Which migration tool are you planning to use?
    Quest (QMM) has a VPN migration solution/tool.
    With ADMT, you can develop your own service-based solution if required. I believe this was mentioned in my blog post.
    Santhosh Sivarajan | Houston, TX | www.sivarajan.com
    ITIL,MCITP,MCTS,MCSE (W2K3/W2K/NT4),MCSA(W2K3/W2K/MSG),Network+,CCNA
    Windows Server 2012 Book - Migrating from 2008 to Windows Server 2012
    This posting is provided AS IS with no warranties, and confers no rights.

  • Best practice standard user access test for WIN2012 AD

    What is the best-practice standard user access test for WIN2012 AD?

    Hello,
    as before, add a computer to the domain and log on to it with a domain user account.
    From the client machine you should be able to open the shared folders on the DCs, with either:
    \\DCName\sysvol
    \\DCName\netlogon
    or
    \\NetBiosDomainName\sysvol
    \\NetBiosDomainName\netlogon
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://blogs.msmvps.com/MWeber
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Best practice to deploy Oracle WebCenter Suite for an enterprise

    I have a lead with an enterprise client, and we need to propose to this client a best practice for deploying high availability in a cluster environment containing the following components:
    - Oracle WebCenter Content: it will be used as the WebCenter Portal (Spaces) repository for the extranet portal, as well as to build an internet website using WCM
    - Oracle WebCenter Portal: to build the intranet portal
    - Oracle Access Manager for single sign-on authentication
    - Oracle Web Tier for the HTTP server and web cache.
    I reviewed the enterprise deployment guide, "http://docs.oracle.com/cd/E23943_01/core.1111/e12037/intro.htm", which contains rich information on the configuration.
    However, my question is: could you provide us a best practice for deploying the above components in a high-availability cluster environment (a Linux environment preferred), supported and tested for around 20k users? By the way, the client already has an Oracle Exadata 11g server, and it will be used for this deployment.

    AW,
    One way is creating EJBs. Please refer to the threads below for that:
    https://forums.sdn.sap.com/click.jspa?searchID=2936002&messageID=1082087
    You can also create a JavaBean and import that as a model.
    Check the following project, which will generate a JavaBean (MaxDB):
    https://www.sdn.sap.com/irj/sdn/softwaredownload?download=/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/business_packages/a1-8-4/simple_javabean_generator_project.zip
    This project will generate a JavaBean out of the tables in MaxDB.
    You can follow either of the above.
    Regards, Anilkumar

  • ESS MSS Best Practice Methods

    When implementing Employee Self-Service and Manager Self-Service, what are the best practices for creating IDs? Do most use Active Directory, or employee IDs, or generated numbers?
    Or would it be beneficial to use employee IDs so that ABAP programming could make updates automatically?
    I would like to know what methods others recommend.
    Thanks.

    Thanks for clarifying!
    Most companies I have observed use the AD alias, which is also the SMTP name of the e-mail address, and is easier to associate with the IT0105 PERNR via the employee's first and last name, which appear in the user master address data as well.
    The first 7 characters are the last name, and the last character is the first character of the first name, etc. => 'BUSSCHEJ' (see the small sketch below).
    But then again, if your AD name is a generated number or a cryptic value, then why not call yourself S123456789 like here at SDN, or R2D2 for that matter.
    Using the personnel number is another option, but you should first check where else it is used. Perhaps it is like the US Social Security Number, which is meant to be kept "top secret" like a password...
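    To make that alias rule concrete, a trivial sketch (the class and method names are made up):

    ```java
    public class AdAlias {
        // "First 7 characters are the last name, last character is the first
        // character of the first name" => alias("Bussche", "Jan") -> "BUSSCHEJ"
        static String alias(String lastName, String firstName) {
            String stem = lastName.replaceAll("[^A-Za-z]", "").toUpperCase();
            if (stem.length() > 7) {
                stem = stem.substring(0, 7);  // keep only the first 7 letters
            }
            return stem + Character.toUpperCase(firstName.charAt(0));
        }

        public static void main(String[] args) {
            System.out.println(alias("Bussche", "Jan"));  // prints BUSSCHEJ
        }
    }
    ```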

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult, and we need some help. We want to be able to compare actual measures side by side with their corresponding objectives/targets. Sounds simple. But our objectives are static (not able to be aggregated), with multi-dimensionality and multiple levels. We need some best-practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions used in the criteria, and regardless of the level.
    Here are some more details:
    Example of the existing objective table:

    Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
    NULL         NULL         NULL         .99    1.8    1Q13
    DIM1VAL1     NULL         NULL         .99    2.4    1Q13
    DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
    DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
    DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
    NULL         DIM2VAL1     NULL         .97    2.2    1Q13
    NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
    NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure, they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (the highest grain) and get 99%. With our existing structure, if we were to add a new dimension to the mix, the possible combinations would grow dramatically. (Not flexible.)
    - We would like our final solution to be flexible enough that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data-model structure, with the proper repository joins, to show side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering the DB - I mean, any chance (by changing config files) that the data doesn't go to that Reports schema and instead goes to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming in; for some reason or other the data is not getting posted. I have used an ESB with a routing service based on the schema which I am monitoring. Can anyone help?

  • Best Practices to configure the connectivity with Bank and EBS

    Hi All,
    We are working on a requirement where the middleware (SOA) has to read a file from the Oracle E-Business Suite server and send it to the BANK.
    For the connectivity setup between Oracle EBS and the middleware (SOA), and between the middleware (SOA) and the BANK, do we need to use "oracle" as the user, or can we use any user to communicate between EBS and the BANK?
    Could you please share best practices/documents, if you have any, on how we should establish the connectivity and set up the workflow with the BANK and EBS.
    Thanks & Regards
    Narendra

    Hi Narendra,
    The Oracle user and the FTP user are 2 different users.
    I'm assuming you'll be reading the file from R12 through the File Adapter and writing it to the BANK using the FTP Adapter.
    The Oracle user is able to log in to R12, do some operations, submit concurrent programs/requests based on responsibilities, and generate the file to be transferred (in my case this was done by running a concurrent request). The file so generated should be placed at a location from which the File Adapter can read it within the BPEL process. Now, the user that is used to read the file is a SOA server user (again, different from the R12 user). This is the same user you use to log in to your SOA server's physical box. Hence, to be readable, your file should have appropriate privileges (we set them to 777) so that it can be read by the SOA process (using the SOA user).
    The FTP user, on the other hand, is the user that allows connection to the BANK's FTP server. This has absolutely no connection with R12. The BANK that hosts the FTP server must give you the FTP user details that you'll use inside your FTP JNDI configuration on WebLogic. When you deploy and run your process (you don't deploy the adapter), it picks up the connection details from the FTP JNDI properties that you defined in WebLogic.
    Hence both users can be different, and I don't think any best practices are required, or exist, for this.
    Regards,
    Neeraj Sehgal

  • Best Practice: A J2EE Blue-Print for a Typical Web App

    Consider a typical synchronous Struts-based web application that does a simple DB search and post. What are some of the main patterns and components that should be used if following the "industry best practices"?
    Does the following flow seem accurate?
    A Struts Action creates a TransferObject and passes it to a Business Delegate. The Delegate finds the appropriate BusinessObject, the BusinessObject uses the Data Access Object, the CRUD operation happens, and the result is sent back to the Action in the same TransferObject.
    Which of these components need an interface?
    What's the best way for these components to interact with each other (factory, etc.)?

    There are 3 tiers in a Java EE application (Presentation, Business, Integration).
    The BusinessDelegate in this scenario would be a Presentation-tier business delegate. It would interact with a Session Facade, which lives on the Business tier. The SessionFacade is the abstraction on the Business tier, and the Business Delegate is the abstraction on the Presentation tier. It is these two that communicate directly. This design enables low coupling between the actual implementations of each area. If done properly, you could go from EJB to Web Service to POJO business models without ever having to change anything in the Presentation tier.
    These object-oriented design patterns are primarily for enterprise applications with extensive quality-of-service requirements.
    In your scenario, the Presentation tier would contain an MVC-based web application, i.e., Struts. The business model and business/domain requirements would be implemented in the Business tier. A compact sketch follows the tier list below.
    Presentation Tier - Struts Web Application
    Business Tier - (EJB | POJO | WEB SERVICES) Application
    Integration Tier - (Relational Database | File System | XML Database | EIS)
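    As promised, a compact sketch of the delegate/facade seam (all type names here are hypothetical). The Action only ever sees the delegate interface, so the facade behind it can move from EJB to POJO to web service without the Presentation tier noticing:

    ```java
    // Transfer Object carried between the tiers.
    class CustomerTO {
        final String id;
        final String name;
        CustomerTO(String id, String name) { this.id = id; this.name = name; }
    }

    // Business-tier abstraction (in an EJB world, the session bean's interface).
    interface CustomerFacade {
        CustomerTO findCustomer(String id);
    }

    // Presentation-tier abstraction: the only business entry point the
    // Struts Action is allowed to touch.
    interface CustomerDelegate {
        CustomerTO findCustomer(String id);
    }

    // Delegate implementation: hides lookup/remoting details from the Action.
    class CustomerDelegateImpl implements CustomerDelegate {
        private final CustomerFacade facade;  // e.g., obtained via JNDI or a factory
        CustomerDelegateImpl(CustomerFacade facade) { this.facade = facade; }
        public CustomerTO findCustomer(String id) {
            return facade.findCustomer(id);  // could also translate remote exceptions
        }
    }
    ```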

  • Best practice to use Time Capsule for backup of 3 different products (MBP 15 OS X Lion, MBP 13 OS X Lion, and MBA 13 OS X Mountain Lion)? Only the MBP 15 is backed up regularly.

    When I want to save data from the MBA 13 on Mountain Lion (wirelessly) with the Time Capsule, is there any best practice to follow?
    After that, assuming the data are backed up, can we easily differentiate the data in the Time Capsule belonging to the MBP 15/13 from that of the MBA 13?

    Unfortunately, Apple left off the Ethernet port - the most important port in networking - on the MBA, so your first backup of the entire Mac will need to be done over wireless.
    That may take a day or two, unless your MBA has a Thunderbolt port, in which case you could add a Thunderbolt-to-Ethernet adapter and connect the MBA to the Time Capsule for the first backup using an Ethernet cable. It will probably only take 3-4 hours or less doing it this way.
    Once you have the first complete backup done, subsequent backups can be done over wireless, since they will only take a few minutes on average.
    Both Macs will back up to the Time Capsule using Time Machine automatically. Backups will be kept completely separate, so one Mac will normally only be able to "see" its own backups.

  • Best Practices in use of ABAP for SRM and/or CRM Configuration

    I was wondering if there is a document that defines best practices for the use of ABAP in the installation and customization of SRM and/or CRM - such as the amount of ABAP coding typically required, and best practices around the use of ABAP for customization and configuration.
    Thanks.

    Hi Johnson,
    Sorry, please don't mind, but this is not the right place to ask a question like this.
    Please read "The Forum Rules of Engagement" before posting! HOT NEWS!!
    Thanks and Regards,
    Faisal

  • Best Practices? Rendering to Flash for streaming web....

    I am always impressed with the Flash-based videos I see streaming on YouTube, FastCompany.Tv and other sites....
    My question: can you please either explain, or point me in the right direction regarding, streaming-video best practices? Specifically, I am looking for info on the best settings to produce the Flash video (codecs and/or FCP render settings), and on what people use as a Flash player on their websites to show the end result.
    My goal is to create internal instructional videos for corporate training and then host them on my site (or stream them from Akamai). I would like people to be able to watch them in a Flash player embedded on my site (and have them look good even if they click on a full-screen button) or download them to their iPod.
    Examples of what I like, but I don't know how to do:
    http://www.fastcompany.tv/video/getting-government-work
    Thank you in advance for your expertise and insight.
    -Steven

    I would like people to be able to watch it in a flash player embedded on my site (and have it look good even if they click on a full screen button) or download to their iPod.
    Use the H.264 setting for iPod in Compressor. The H.264 file will play in a JW Flash Player, and it can also be downloaded for iPod viewing.

  • Best practice in Infoprovider & Query design for access by BO Universe

    Hello Experts,
    Are there any best practices, identified by practitioners or suggested by SAP, for the development of InfoProviders and queries that will be accessed by a BO Universe?
    Best practices should be from the perspective of performance, design simplicity, adaptability to change, etc.
    Appreciate your help.
    Regards,
    Pritesh.

    Thanks Suresh.
    My project plan is to build InfoCubes and queries, which will then be used to build a Universe on top of them. Thus I am looking for the do's and don'ts of designing InfoCubes and queries so that there won't be any issues (performance or otherwise) when they are accessed by the Universe built on them.
    Hope I have made it more clear now.
    Regards,
    Pritesh.

  • Best practice DNS in VPN environment for Lync2013 clients

    I have site-to-site VPNs connecting the small branch offices to the main office. Internal DNS makes sure that the branch offices can access all the servers/services in the main office via their domain.local namespace.
    In such a scenario, will the Lync 2013 clients connect through the VPN to the internal sites, since both lyncdiscover and lyncdiscoverinternal are available?
    Wouldn't it put far less burden on the VPN routers if clients simply went out to the internet and connected from the external side, so that the Lync traffic does not have to be stuffed through the VPN pipe? I don't see the point in encrypting the traffic once more.
    Thanks for your suggestions about best practices!
    HST

    Hi,
    When users connect to the corporate network using a VPN client, Lync media traffic is sent through the VPN tunnel. This configuration can create additional latency and jitter, because the media traffic must pass through an additional layer of encryption and decryption. The issue is compounded when the VPN concentrator is busy.
    If you want to connect to the Lync server from the public network, you need to deploy an Edge Server.
    A solution that forces Lync media traffic to bypass the VPN tunnel through the Edge Servers, while still allowing external Lync clients to connect through the VPN, is described in the "Solution Configuration" part of the link below:
     http://blogs.technet.com/b/nexthop/archive/2011/11/15/enabling-lync-media-to-bypass-a-vpn-tunnel.aspx
    Best Regards,
    Eason Huang
    TechNet Community Support
