NetApp direct connect to UCS best practices

Folks,
I have installed many FlexPods this year, but all involved either Nexus 5Ks or 7Ks with vPC. This protects the NFS and LUN connections from both network outages and UCS Fabric Interconnect outages. What is not clear to me is the best practice for connecting a NetApp directly to the UCS via appliance ports. Appliance ports seem like a great idea, but they seem to add issues to designs, both in VMware and in the network.
Does anyone have a configuration example covering the NetApp, UCS, and VMware sides?
I thought I would ask the group their opinion.
Cheers,
David Jarzynka

Hi David
Can you clarify your last posting a little bit?
I have never installed direct-attached NetApp storage on UCS either.
One drawback I see: if your NetApp system is connected to the FIs in active/standby mode, then, for example, all servers connect through FI A to the NetApp system (the NIC carrying the VMkernel port is active on FI A for all servers). If the NetApp link fails and traffic switches over to the standby link on Fabric B, all the traffic will go from the server to FI A, to the uplink switch, to FI B, and then to the NetApp system, because the server is not aware that the NetApp system did a failover. Not all companies will have 10G uplink switches, which would cause a bottleneck in that case, as the sketch below shows.
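To make the bottleneck concrete, here is a toy Python model of the post-failover path (illustrative only; the link speeds, including the 1G uplink-switch worst case, are assumptions):

links = {
    ("server", "fi_a"): 10,      # Gbps; converged link to Fabric A
    ("fi_a", "uplink_sw"): 1,    # the 1G uplink switch from the worst case above
    ("uplink_sw", "fi_b"): 1,
    ("fi_b", "netapp"): 10,
}

def bottleneck(path):
    """Minimum link speed along a hop-by-hop path."""
    return min(links[hop] for hop in zip(path, path[1:]))

# After the NetApp fails over to Fabric B, the one-hop appliance-port
# path becomes four hops, and the uplink switch caps the throughput:
path = ["server", "fi_a", "uplink_sw", "fi_b", "netapp"]
print(f"{len(path) - 1} hops, bottleneck = {bottleneck(path)} Gbps")

With 10G uplink switches the path is merely longer; with 1G uplinks, every NFS datastore behind the failed link drops to 1 Gbps.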
What other things do you see? I agree completely with you: everyone says it is a wonderful feature on the slides, but I don't think it's that smart in practice.
Thanks for a short reply.
Cheers
Patrick

Similar Messages

  • Direct connect to UCS FI from EMC VNX 5300

    Hi,
    I'm looking for any configuration recommendations or best practices for configuring a VNX 5300 unified for direct connect to UCS Fabric Interconnects.
    The direct connect will be 10Gb on both the file and block side.
    Are there any guides or recommendations anyone may have?
    I've looked and cannot find much.

    Hi Manuel,
    Two things:
    1) Take a look at the following matrix for the supported versions:
    http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix8.html
    2) Currently, directly attached FC is supported only in topologies in which the zoning database is provided by an upstream Cisco MDS 9000, Nexus 5000, or Nexus 5500 switch. Hence you would still have to connect an MDS or N5K to the FIs for the zoning information.
    ./Abhinav

  • Dell MD3620i connect to vmware - best practices

    Hello Community,
    I've purchased a Dell MD3620i with 2 x 10GBase-T Ethernet ports on each controller (2 controllers).
    My VMware environment consists of 2 x ESXi hosts (each with 2 x 1GBase-T ports) and an HP LeftHand storage array (also 1GBase-T). The switches I have are Cisco 3750s, which have only 1GBase-T Ethernet ports.
    I'm going to replace this HP storage with the Dell storage.
    As I have never worked with Dell storage, I need your help to answer my questions:
    1. What is the best practice to connect the VMware hosts to the Dell MD3620i?
    2. What is the process to create LUNs?
    3. Can I create multiple LUNs on only one disk group, or is it best practice to create one LUN per disk group?
    4. How do I get the iSCSI 10GBase-T ports working on a 1Gbps switch?
    5. Is it best practice to connect the Dell MD3620i directly to the VMware hosts, without a switch?
    6. The old iSCSI on the HP storage is in a different network. Can I vMotion all virtual machines from one iSCSI network to another and then change the iSCSI IP addresses on the VMware hosts without virtual machine interruption?
    7. Can I bundle two iSCSI ports into one 2Gbps interface and connect it to the switch? I'm using two switches, so I want to connect each controller to each switch by bonding its interfaces to 2Gbps. My question is: would the controller fail over to the other controller if the Ethernet link fails on the switch (in case one switch is rebooting)?
    Thanks in advance!

    TCP/IP basics: a computer cannot connect to 2 different (isolated) networks (e.g. 2 directly-attached cables between the server and a SAN's iSCSI ports) that share the same subnet.
    Data corruption is highly unlikely if you were to share the same VLAN for iSCSI; however, performance and overall reliability would be impacted.
    With an MD3620i, here are a few setup scenarios using the factory default subnets (for the direct-attached setup I had to add 4 additional subnets):
    Single switch (not recommended, as the switch becomes your single point of failure):
    Controller 0:
    iSCSI port 0: 192.168.130.101
    iSCSI port 1: 192.168.131.101
    iSCSI port 2: 192.168.132.101
    iSCSI port 3: 192.168.133.101
    Controller 1:
    iSCSI port 0: 192.168.130.102
    iSCSI port 1: 192.168.131.102
    iSCSI port 2: 192.168.132.102
    iSCSI port 3: 192.168.133.102
    Server 1:
    iSCSI NIC 0: 192.168.130.110
    iSCSI NIC 1: 192.168.131.110
    iSCSI NIC 2: 192.168.132.110
    iSCSI NIC 3: 192.168.133.110
    Server 2: same as Server 1, but with addresses ending in .120
    All ports plug into that 1 switch (obviously).
    If you only want to use 2 NICs for iSCSI, have server 1 use the 130 and 131 subnet, and server 2 use 132 and 133, server 3 then uses 130 and 131 again. This spreads the IO load between the iSCSI ports on the SAN.
    Dual switches (one VLAN for all the iSCSI ports on that switch though):
    NOTE: Do NOT link the switches together. This helps prevent issues that occur on one switch from affecting the other switch.
    Controller 0:
    iSCSI port 0: 192.168.130.101 -> To Switch 1
    iSCSI port 1: 192.168.131.101 -> To Switch 2
    iSCSI port 2: 192.168.132.101 -> To Switch 1
    iSCSI port 3: 192.168.133.101 -> To Switch 2
    Controller 1:
    iSCSI port 0: 192.168.130.102 -> To Switch 1
    iSCSI port 1: 192.168.131.102 -> To Switch 2
    iSCSI port 2: 192.168.132.102 -> To Switch 1
    iSCSI port 3: 192.168.133.102 -> To Switch 2
    Server 1:
    iSCSI NIC 0: 192.168.130.110 -> To Switch 1
    iSCSI NIC 1: 192.168.131.110 -> To Switch 2
    iSCSI NIC 2: 192.168.132.110 -> To Switch 1
    iSCSI NIC 3: 192.168.133.110 -> To Switch 2
    Server 2: same as Server 1, but with addresses ending in .120
    Same note about using just 2 NICs per server for iSCSI. In this setup each server will still use both switches so that a switch failure should not take any of your servers' iSCSI connectivity down.
    Quad switches (or 2 VLANs on each of the 2 switches above):
    Controller 0:
    iSCSI port 0: 192.168.130.101 -> To Switch 1
    iSCSI port 1: 192.168.131.101 -> To Switch 2
    iSCSI port 2: 192.168.132.101 -> To Switch 3
    iSCSI port 3: 192.168.133.101 -> To Switch 4
    Controller 1:
    iSCSI port 0: 192.168.130.102 -> To Switch 1
    iSCSI port 1: 192.168.131.102 -> To Switch 2
    iSCSI port 2: 192.168.132.102 -> To Switch 3
    iSCSI port 3: 192.168.133.102 -> To Switch 4
    Server 1:
    iSCSI NIC 0: 192.168.130.110 -> To Switch 1
    iSCSI NIC 1: 192.168.131.110 -> To Switch 2
    iSCSI NIC 2: 192.168.132.110 -> To Switch 3
    iSCSI NIC 3: 192.168.133.110 -> To Switch 4
    Server 2: same as Server 1, but with addresses ending in .120
    In this case using 2 NICs per server means the first server uses the first 2 switches and the second server uses the second set of switches.
    Direct attach:
    Controller 0:
    iSCSI port 0: 192.168.130.101 -> To server iSCSI NIC 1 (on an example IP of 192.168.130.110)
    iSCSI port 1: 192.168.131.101 -> To server iSCSI NIC 2 (on an example IP of 192.168.131.110)
    iSCSI port 2: 192.168.132.101 -> To server iSCSI NIC 3 (on an example IP of 192.168.132.110)
    iSCSI port 3: 192.168.133.101 -> To server iSCSI NIC 4 (on an example IP of 192.168.133.110)
    Controller 1:
    iSCSI port 0: 192.168.134.102 -> To server iSCSI NIC 5 (on an example IP of 192.168.134.110)
    iSCSI port 1: 192.168.135.102 -> To server iSCSI NIC 6 (on an example IP of 192.168.135.110)
    iSCSI port 2: 192.168.136.102 -> To server iSCSI NIC 7 (on an example IP of 192.168.136.110)
    iSCSI port 3: 192.168.137.102 -> To server iSCSI NIC 8 (on an example IP of 192.168.137.110)
    I left controller 1 on the "102" IPs for easier future changing back to just 4 subnets.
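    A quick way to sanity-check a layout like this is to verify that every controller-port/NIC pair shares a subnet and that no subnet is reused across paths. Here is a minimal Python sketch of that check (the addresses are controller 0's direct-attach pairs from above; the /24 mask is an assumption, since the post doesn't state the netmask):

    import ipaddress

    # Controller 0's direct-attach pairs from the example above. The /24
    # prefix is an assumption; the original post does not state the netmask.
    PREFIX = 24
    paths = [
        ("192.168.130.101", "192.168.130.110"),
        ("192.168.131.101", "192.168.131.110"),
        ("192.168.132.101", "192.168.132.110"),
        ("192.168.133.101", "192.168.133.110"),
    ]

    seen = set()
    for san_ip, nic_ip in paths:
        san_net = ipaddress.ip_network(f"{san_ip}/{PREFIX}", strict=False)
        nic_net = ipaddress.ip_network(f"{nic_ip}/{PREFIX}", strict=False)
        # Each SAN port and its partner NIC must share one subnet...
        assert san_net == nic_net, f"{san_ip} / {nic_ip}: different subnets"
        # ...and no two paths may reuse a subnet (the isolated-networks rule).
        assert san_net not in seen, f"subnet {san_net} used by two paths"
        seen.add(san_net)

    print(f"OK: {len(paths)} paths, each on its own isolated subnet")

    Running it prints OK for the layout above; moving any NIC into the wrong subnet trips the first assert.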

  • SAP BI4 SP2 Patch 7 Webi Connection to BW Best Practice

    We are working with version 4.0 SP2 Patch 7 of BI4 and developing some reports with Webi, and we are wondering which is the best method to access BW data.
    At the moment we are using BICS, because we have read in more than a few places that this is the best method to consume BW data since it brings improvements in performance, hierarchies, etc., but I don't know if this is really true.
    Is BICS the best method to access BW data? Is this the way recommended by SAP?
    In the filter panel of a Webi document we can't use the "OR" clause. Is it really not possible to use this clause?
    When we work with hierarchies and change the hierarchy for the dimension value, or vice versa, the report throws an AnswerPrompts API error (30270).
    When we work with BEx queries containing variables and try to merge such a variable with a report prompt (from another query), executing the queries shows an error indicating that one prompt has no value.
    Has anyone experienced these problems too? Has anyone found a solution to these issues?
    Best Regards
    Martin.

    Hi Martin
    In BI 4.0, BICS is the method to access BW, not universes. .UNV universes based on BW are there for legacy support.
    Please look at this forum thread with links on best practices for BI 4.0 and BW; if you do a search in SDN you can find many threads on this topic.
    How to access BEx directly in WEBI 4.0
    Regards
    Federica

  • Is construction of webi directly in production a best practice?

    With BEx queries and universes well consolidated and tested by an IT group,
    can building Webis directly in production, without going through test and quality environments, be considered a Business Objects best practice?
    Is it possible to allow end users (non-IT personnel) to build these Webis?
    Is there a document of good practices in which SAP makes this recommendation?
    Thanks in advance for the answer.
    Ramón Mediero

    If the universe and everything else have been tested and signed off, and the end users are familiar with Webi report development and want their own ad-hoc reports instead of a pre-developed report set, there is no issue with allowing end users to develop Webi reports in production. However, there are a few points to take care of:
    > You need to check whether report creation in production's public folders is feasible or not. If yes, how: do you need to create separate folders for individual users, or something else? If not, what is the alternative, e.g. creating reports in their Favorites folder?
    > You also need to keep control over the number of reports that users create; otherwise users may create many reports with huge data refreshes, and production will face performance issues.
    There can be many such considerations to take into account.
    Hope this gives you some idea...
    Vills

  • Unity Connection 7.x - Best Practice for Large Report Mailboxes?

    Good morning. We have 150 mailboxes that nurses use to give shift reports. The mailbox quota is 60MB and the message aging policy is on: deleted messages are purged after 14 days. The message aging policy is system-wide, and increasing the quota would cause storage issues. Is there a way to keep the message aging policy but reduce it for one group of users? Is there a way to bulk-administer the mailbox quota changes?
    Version 7.1.3ES9.21004-9
    Thanks

    As for UC 8x, you're not alone.  I don't typically recommend going to an 8.0 release (no offense to Cisco).  Let things get vetted a bit and then start looking for the recommended stable version to migrate to.
    As for bulk changes to mailbox store configurations for users, Jeff (Lindborg) may be able to correct me if I am wrong here.  But with the given tools, I don't think there is a way to bulk edit or update the mailbox info for users (i.e., turn on/off Message Aging Policy).  No access to those values via Bulk Edit and no associated fields in the BAT format either.
    Now, with that said - no one knows better than Lindborg when it comes to Unity.  So I defer to him on that point.
    Hailey
    Please rate helpful posts!

  • Best Practices? Rendering to Flash for streaming web....

    I am always impressed with the flash based videos I see streaming on YouTube, FastCompany.Tv and other sites....
    My question... can you please either explain or point me in the right direction for streaming video best practices? Specifically, I am looking for info on best settings to produce the flash video (codecs and/or FCP render settings) and then what do people use as a flash player on their websites to show the end result.
    My goal is to create internal instructional videos for corporate training and then host them on my site (or streaming from Akamai). I would like people to be able to watch it in a flash player embedded on my site (and have it look good even if they click on a full screen button) or download to their iPod.
    Examples of what I like, but I don't know how to do:
    http://www.fastcompany.tv/video/getting-government-work
    Thank you in advance for your expertise and insight.
    -Steven

    I would like people to be able to watch it in a flash player embedded on my site (and have it look good even if they click on a full screen button) or download to their iPod.
    Use the H.264 setting for iPod in Compressor. The H.264 file will play in a JW Flash Player, and it can be downloaded for iPod viewing.

  • Best practice for sqlldr -- direct to core or to stage first?

    We want to begin using SQL*Loader to load simple (but big) tables that have, up to this point, been loaded via Perl and its DBI connection to Oracle. The target tables typically receive 10-20 million rows per day (parsed log data from many thousands of machines) and at any one time can hold more than a billion total records PER TABLE. These tables are pretty simple (typically 5-10 columns, 2- or 3-part primary keys). They are partitioned BY MONTH (DAY is always one of the primary key columns) and set up on very large SAN disk arrays, striped, etc. I can use sqlldr to load the core tables directly, OR I could use sqlldr to load a staging table on a daily basis, then use PL/SQL and SQL*Plus to move data from the staging table to the core. My instinct tells me that the second route is SAFER, that is, there is less chance that something catastrophic could corrupt the core table, but obviously it would (a) take more time to develop and (b) reduce our overall throughput.
    If I go the first route, loading the core directly with sqlldr, what is the worst thing that could possibly happen? That is, in anyone's experience, can a sqlldr problem corrupt a very large table? Does the likelihood of a catastrophic problem increase in proportion to the number of rows already in the target table? Are there strategies that will mitigate potential catastrophes besides going to staging and then to core via PL/SQL? For example, if my core is partitioned by month, might I limit potential damage to the current month only? Are there any known pitfalls to using sqlldr directly in this fashion?
    Thanks
    matthew rapaport
    [email protected]

    Wow, thanks everyone!
    1. External tables: I'd thought of this, but in our development group we have no direct access to the DBMS server, so we'd need some workflow to move the data files to the DBMS server and then write the merge. If SQL*Loader will do the job directly (to the core) without risk, then that seems to be the most straightforward way to go.
    2. The data in the raw files is very clean, that being done in the step that parses the raw logs (100-500MB each) into the "insert files" (~20MB each), and there would be no transformations in moving data from staging to core, so again that appears to argue for direct-to-core loading.
    3. The data is collected by DAY, but reported on mostly by MONTH (e.g., select day, sum(col), count(col) from TABLE where day between A and B group by day order by day, where A and B are usually the first and last day of the month), and that is why the tables are partitioned by month, but perhaps this is not the best practice (???). I'm not the DBA, but I can make suggestions... What do you think?
    4. Time to review my sqlldr docs! I haven't used it in a couple of years, and I'm keeping my fingers crossed that it can handle the particular delimiter used in these files: pipe-tab-pipe, expressed in Perl as "|\t|". If I recall correctly it can, but I'm not sure how to express the tab... (see the sketch below)
    Meanwhile, thank you very much, you have all been a BIG help... Strange no one asked me how it was that a Microsoft company was using Oracle :-) ... I work for DANGER INC (was www.danger.com if anyone interested) which is now owned (about 9 months now) by Microsoft, and this is the legacy reporting system... :-)
    matthew rapaport
    [email protected]
    [email protected]

  • Best Practice for Networking in UCS required

    Hi
    We are planning to deploy UCS in our environment. Fabric Interconnects A and B will need to connect to a pair of Catalyst 4900M switches. What is the best practice for connecting them? How should the 4900 switches be configured? Can I do port channels in UCS?
    Appreciate your help.
    Regards
    Kumar

    I highly recommend you review Brad Hedlund's videos regarding UCS networking here:
    http://bradhedlund.com/2010/06/22/cisco-ucs-networking-best-practices/
    You may want to focus on Part 10 in particular, as this talks about running UCS in end-host mode without vPC or VSS.
    Regards,
    Matt

  • Best Practices for Connecting to WebHelp via an application?

    Greetings,
    My first post on these forums, so I apologize if this has already been covered (I've done some limited searching without success). I'm developing a .NET application which accesses my organization's RoboHelp-generated WebHelp. My organization's RoboHelp documentation team is still new with the software, so it's been up to me to chart the course for establishing the workflow for connecting to the help from the application. I've read up on Peter Grange's 'calling webhelp' section on his blog, but I'm still a bit unclear about what might be the best-practices approach for connecting to WebHelp.
    To date, my organization has been delayed in letting me know their TopicIDs or MapIDs for their various documented topics. However, I have been able to acquire the relative paths to those topics (I achieved this by manually browsing their online help and extracting the paths). And I've been able to create the link by constructing a URL (using the syntax "<root URL>?#<relative URI path>", alternating with "<root URL>??#<relative URI path>"). It strikes me, however, that this approach is somewhat of a hack, since RoboHelp provides other approaches to linking to documentation via TopicID and MapID.
    What is the recommended/best-practices approach here? Are they all equally valid, or are there pitfalls I'm missing? I'm inclined to use the URL methodology I've established above since it works for my needs so far, but I'm worried that I'm not seeing the forest for the trees...
    Regards,
    Brett
    contractor to the USGS
    Lakewood, CO
    PS: we're using RoboHelp 9.0

    I've been giving this some thought over the weekend and this is the best answer I've come up with from a developer's perspective:
    (1) Connecting via URL is convenient if (#1) you have an established naming convention that works for everyone (as Peter mentioned in his reply above)
    (2) Connecting via URL has the disadvantage that changes to the file names and/or folder structure by the author will break connectivity
    (3) Connecting via TopicID/MapID has the advantage that if there is no naming convention or if it's fluid or under construction, the author can maintain that ID after making changes to his/her file or folder structure and still maintain the application connectivity.  Another approach to solving this problem if you're working with URLs would be to set up a web service that would match file addresses to some identifier utilized by the developer (basically a TopicID/MapID coming from the other direction).
    (4) Connecting via TopicID has an aesthetic appeal in the code since it's easy to provide a more English-readable identifier. As a .NET developer, I find it easy and convenient to construct an enum that matches my TopicIDs and to utilize that enum to construct my identifier when it comes time to make the documentation call.
    (5) Connecting via URL is more convenient for the author, since he/she doesn't have to worry about maintaining IDs
    (6) Connecting via TopicIDs/MapIDs forces the author to maintain those IDs and allows the documentation to be more easily used in the future by other applications, worked on by developers who might have their own preference, in one direction or another, as to how they make their connection.
    Hope that helps for posterity.  I'd be interested if anyone else had thoughts to add.
    -Brett
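    For posterity, the identifier-to-URL mapping described in points (3) and (4) is only a few lines in any language. Here is a minimal Python sketch of the idea (the topic names, paths, and root URL are hypothetical; in Brett's case this would be a C# enum in the .NET application):

    from enum import Enum

    # Hypothetical TopicIDs mapped to relative paths; all names and URLs
    # here are invented for illustration.
    class Topic(Enum):
        GETTING_STARTED = "getting_started.htm"
        IMPORT_DATA = "data/import_data.htm"

    WEBHELP_ROOT = "https://example.org/help/index.htm"

    def help_url(topic: Topic) -> str:
        # The "<root URL>?#<relative URI path>" form from the question above.
        return f"{WEBHELP_ROOT}?#{topic.value}"

    print(help_url(Topic.IMPORT_DATA))
    # https://example.org/help/index.htm?#data/import_data.htm

    If the author later renames a file, only the enum value changes and every call site is untouched, which is the maintainability argument in point (3).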

  • SQL Server Best Practices Architecture UCS and FAS3270

    Hey there,
    We are moving from an EMC SAN and physical servers to a NetApp FAS3270 and a virtual environment on Cisco UCS B200 M3 blades.
    Traditionally, best practices for SQL Server databases are to separate the following files onto separate LUNs and/or volumes:
    Database data files
    Transaction log files
    TempDB data files
    I have also seen additional separations for system data files (Master, Model, MSDB, Distribution, Resource DB, etc.) and indexes.
    Depending on the size of the database and its I/O requirements, you can add multiple files per database. The goal is to provide optimal performance, and the method of choice is to separate reads and writes (random and sequential activities). If you have 30 disks, is it better to separate them, or is it better to leave the files in one continuous pool? For example:
    12 drives RAID 10 (data files)
    10 drives RAID 10 (log files)
    8 drives RAID 10 (TempDB)
    Please don't get too caught up on the numbers used in the example; the focus is on whether (using a FAS3270) it is better practice to separate or consolidate drives/volumes for SQL Server databases.
    Thanks!

    Hi Michael,
    It's a completely different world with NetApp! As a rule of thumb, you don't need separate spindles for different workloads (like SQL databases and logs): you just put them into separate flexible volumes, which can share the same aggregate (i.e. a grouping of physical disks).
    For more detailed info about SQL on NetApp have a look at this doc:
    http://www.netapp.com/us/system/pdf-reader.aspx?pdfuri=tcm:10-61005-16&m=tr-4003.pdf
    Regards,
    Radek

  • Limitations associated with Direct Connecting arrays to UCS FIs.

    I understand that in order to direct-connect an array to UCS, the FIs have to be put into FC switch mode (NPIV). But once the FIs are in switch mode, is it impossible to attach other SAN switches to the fabric interconnects?
    It is my understanding that you could accomplish this, because it wouldn't be much different from an extended SAN fabric in which an array's traffic has to travel through 2 FC switches to arrive at a host. But perhaps I'm missing something.
    Thanks in advance.

    You can have directly attached storage and FC switches at the same time; in fact, until recently it was required, since UCS did not do zoning.
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-infrastructure-ucs-manager-software/116082-config-ucs-das-00.html
    The question is whether you really want to do it that way; I think for 95% of deployments, if not more, end-host mode is recommended.
    If you're already planning to attach FC switches, best to place storage on them. 
    Have a look at the FlexPod (or Vblock) designs.

  • FC port channels between MDS and UCS FI best practice?

    Hi,
    We would like to create FC port channels between our UCS FI's and MDS9250 switches.
    At the moment we have 2 separate 8Gbps links to the FI's.
    Are there any disadvantages or reasons to NOT do this?
    Is it a best practice?
    Thanks.

    As Walter said, having port-channels is best practice.  Here is a little more information on why.
    Let's take your example of two 8Gbps links, not in a port-channel ( and no static pinning ) for Fibre Channel connectivity:
    Hosts on the UCS get automatically assigned ( pinned ) to the individual uplinks in a round-robin fashion.
    (1)If you have some hosts that are transferring a lot of data, to and from storage, these hosts can end up pinned to the same uplink and could hurt their performance. 
    In a port-channel, the hosts are pinned to the port-channel and not individual links.
    (2) Since hosts are assigned to an individual link, if that link goes down, those hosts have to log back into the fabric over the remaining working link. Now you would have all hosts sharing a single link. The hosts will not get re-pinned to a link until they leave and rejoin the fabric; to get them load-balanced again would require taking them out of the fabric and adding them back, again via log out, power off, reload, etc...
    If the links are in a port-channel, the loss of one link will reduce the bandwidth of course, but when the link is restored, no hosts have to be logged out to regain the bandwidth.
    Best regards,
    Jim
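    The two failure modes can be illustrated with a toy model. The sketch below is purely illustrative (real pinning is done by the fabric interconnect, not by application code): it round-robins hosts onto individual uplinks and shows what a single link failure does to the distribution.

    from collections import Counter

    hosts = [f"host{i}" for i in range(8)]
    uplinks = ["fc1/1", "fc1/2"]   # two hypothetical 8Gbps uplinks

    # Individual links: hosts get pinned round-robin at fabric login.
    pinned = {h: uplinks[i % len(uplinks)] for i, h in enumerate(hosts)}
    print("initial:", Counter(pinned.values()))        # 4 hosts per link

    # fc1/1 fails: its hosts log back in over the surviving link and stay
    # there even after fc1/1 returns (no automatic re-pinning).
    for h, link in pinned.items():
        if link == "fc1/1":
            pinned[h] = "fc1/2"
    print("after failure:", Counter(pinned.values()))  # all 8 on fc1/2

    # In a port-channel, hosts are pinned to the bundle rather than to a
    # member link, so a member failure only reduces bandwidth; when the
    # link returns, full bandwidth is restored without any host logouts.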

  • Error while Connecting report Best Practices v1.31 with SAP

    Hello experts,
    I'm facing an issue while trying to connect some of my reports from Best Practices for BI with SAP.
    It only happens with InfoSets; the other reports, which use SAP tables, go smoothly without a problem.
    The most interesting part is that I already have one of the reports connected to SAP InfoSets.
    I have already verified the document of steps for creating the additional database that comes with the BP pack. They seem OK.
    Here goes what Crystal Reports throws to me after changing the data source to SAP:
    For report "GL Statement" one of the Financial Analysis one which uses InfoSet: /KYK/IS_FIGL_I3:
    - Failed to retrieve data from the database; - click ok then...
    - Database connector error: It wasn't indicated any variant for exercise (something like this after translating) - click ok then
    - Database connector error: RFC_INVALID_HANDLE
    For report "Cost Analysis: Planned vs. Actual Order Costs" one of the Financial Analysis one which uses InfoSet: ZBPBI131_INFO_ODVR and ZBPBI131_INFO_COAS; and also the Query CO_OM_OP_20_Q1:
    - Failed to retrieve data from the database; - click ok then...
    - Database connector error: check class for selections raised errors - click ok then
    - Database connector error: RFC_INVALID_HANDLE
    Obs.: Those "Z" infosets are already created in SAP environment.
    The one that works fine is one of the Purchasing Analysis reports:
    - Purchasing Group Analysis -> InfoSet: /KYK/IS_MCE1
    I'm kind of lost to solve this, because I'm not sure if it can be in the SAP JCO or some parameter that was done wrongly in SAP and I have already check possible solutions for both.
    Thanks in advance,
    Carlos Henrique Matos da Silva - SAP BusinessObjects BI - Brazil.

    I re-checked step 3.2.3 - Uploading Crystal User Roles (transaction PFCG) - of the manual, where it talks about the CRYSTAL_ENTITLEMENT and CRYSTAL_DESIGNER roles. I noticed in the Authorizations tab that the status said the profile hadn't been generated and showed a yellow sign, so I generated it, as the manual says.
    Both statuses now say "Authorization profile is generated" and the sign on the tab is green.
    I had another issue in the User tab (it was yellow, as the Authorizations tab was before generating): all I needed to do to change it to green was to compare users (User Comparison button).
    After all that, I tried once more to refresh the Crystal report, and the same error messages are still thrown.
    There's one more issue in one of the tabs of the PFCG transaction: the Menu tab shows a red sign, but the manual says nothing about it. I just have a folder called "Role menu" with nothing in it.
    Could this be the reason why I'm facing errors when connecting the reports to SAP InfoSets? (Remember, one of my reports, which is connected to an InfoSet, works fine.)
    Thanks in advance,
    Carlos Henrique Matos da Silva - SAP BusinessObjects BI - Brazil.

  • Best practices for creating Discoverer database connections - public vs. private

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I log in as portal and created a private connection. From Oracle Portal I then create a portlet and add a Discoverer worksheet using the private connection that I created as the portal user. This works fine: users access the portal and can see the worksheet. When they click the analyze link, however, the users are prompted to enter a password for the private connection. The following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue.
    I originally created a public connection and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal, and when users click the analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when a user browses to http://host:port/discoverer/viewer they enter their SSO information, and then any user with an SSO account can see the public connection... very insecure! When private connections are used, no connection information is displayed to SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    Overall, I want roughly 40 users to have access to my Portal page group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want the ability to display the analyze link. When an SSO user clicks on this, they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data; there is no need to restrict access based on SSO username. One database user will be set up in either the public or private connection.

    You can make this happen by creating a private connection for the 40 users via a capi script, and by selecting the second option in the Users Logged In section when creating the portlet. With this, the portlet uses each user's own private connection every time the user logs in, so it won't ask for a password.
    Another thing: there is an option to require entering a password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks
    Kiran
