Database creation best practices.

Hi,
We are planning to set up a new database, Oracle 10g, on Sun and AIX. It is a data warehouse environment.
Can anyone please share documents describing the best practices to follow during database creation/setup? I googled and found some documents but was not satisfied with them, so I thought of posting this query.
Regards,
Yoganath.

YOGANATH wrote:
Anand,
Thanks for your quick response. I went through the link, but it seems to be a brief one. I need a crisp summary document for my presentation, which covers:
1. Initial parameter settings for a data warehouse to start with, like block_size, db_file_multiblock_read_count, parallel servers etc.
2. Memory parameters, SGA, PGA (say for a server with 10GB RAM).
3. How to split tablespaces, e.g. large vs. small.
If someone has just a crisp outline document covering the above points, I would be grateful.
Regards,
Yoganath

You could fire up dbca, select the 'data warehouse' template, walk through the steps, and at the end do not select 'create a database' but simply select 'create scripts', then take a look at the results, especially the initialization file. Since you chose a template instead of 'custom database' you won't get a CREATE DATABASE script, but you should still get some generated files that will answer a lot of the questions you pose.
You could even go so far as to let dbca create the database. Nothing commits you to actually using that DB. Just examine it to see what you got, then delete it.
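For a rough sense of what to compare dbca's output against, here is a minimal sketch of warehouse-leaning memory/IO parameters for a 10GB-RAM server; the values are illustrative assumptions, not recommendations, and note that db_block_size is not settable this way at all since it is fixed at CREATE DATABASE time (16K or 32K being common for warehouses):

    -- Hypothetical starting values; compare with what dbca generates.
    ALTER SYSTEM SET sga_target = 4G SCOPE = SPFILE;
    ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = SPFILE;
    ALTER SYSTEM SET db_file_multiblock_read_count = 64 SCOPE = SPFILE;
    ALTER SYSTEM SET parallel_max_servers = 16 SCOPE = SPFILE;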
Edited by: EdStevens on Feb 10, 2009 10:41 AM

Similar Messages

  • Developers access to apps database user (Best Practice)?

    Hello all,
    I'm trying to understand a developer's need to have access to the "apps" database user. The argument can be made that the need is there for all development efforts, but is that really the case? Should all updates/changes by the "apps" user be executed by the apps DBA even in the development environment? I'm trying to get a better understanding of how other organizations are set up. Our shop currently allows developers free rein with the apps user, but there are known security risks in doing so, with "apps" being such a powerful user of the database environment.
    Thanks in advance; any recommendations (best practices) will be greatly appreciated.

    Hi,
    We usually give developers access to the APPS schema on the development instances. The access to this schema (and other schemas) is revoked on UAT/PROD (and other instances where the data is not sanitized). When giving such access we are not worried about the data as much as we are about the objects, but this is not a big issue as we can recover those objects from other instances. Some organizations do not provide their developers with access to the APPS schema on the development instances and all the tasks are done by the Apps DBA team. Another approach would be creating a read-only APPS schema (search the forum for details) to allow developers to view the objects/data without bugging the DBAs with such routine tasks.
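    If it helps, the read-only schema approach is essentially just grants plus synonyms; a minimal sketch (APPS_RO and the FND_USER example are only illustrations, the forum threads mentioned cover the full setup):

        -- Hypothetical read-only account mirroring APPS objects.
        CREATE USER apps_ro IDENTIFIED BY a_password;
        GRANT CREATE SESSION, CREATE SYNONYM TO apps_ro;
        -- As a DBA, repeat per APPS object the developers need to see:
        GRANT SELECT ON apps.fnd_user TO apps_ro;
        CREATE SYNONYM apps_ro.fnd_user FOR apps.fnd_user;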
    Thanks,
    Hussein

  • Database Administration - Best Practices

    Hello Gurus,
    I would like to know various best practices for managing and administering Oracle databases. To give you an example of what I am thinking about: if you join a new company and would like to see if all the databases conform to some kind of standard/best practices, what would you look for? For instance: are the control files multiplexed, is there more than one member for each redo log group, is the temp tablespace using a TEMPFILE or otherwise... something of that nature.
    Do you have something in place which you use on a regular basis? If yes, I would like to get your thoughts and insights on this.
    Appreciate your time and help with this.
    Thanks
    SS

    I have a template that I use to gather preliminary information so that I can at least get a glimmer of what is going on. I have posted the text below... it looks better as a spreadsheet.
    System Name:
    System Description:
    Contacts (Name / Phone / Pager):
        System Administrator:
        Security Administrator:
        Backup Administrator:
    --- Below this line, filled out for each server in the system ---
    Server Name:
    Description (Application, Database, Infrastructure, ...):
    ORACLE version/patch level:
    CSI:
    Logins (with next password expiry):
        Server Login:
        Application Schema Owner:
        SYS:
        SYSTEM:
    Locations:
        ORACLE_HOME:
        ORACLE_BASE:
        Oracle User Home:
        Oracle SQL scripts:
        Oracle RMAN/backup scripts:
        Oracle BIN scripts:
        Oracle backup logs:
        Oracle audit logs:
        Oracle backup storage:
        Control File 1:
        Control File 2:
        Control File 3:
        Archive Log Destination 1:
        Archive Log Destination 2:
        Datafiles Base Directory:
    Backups (Type / Day / Time / Est. Time to Complete / Approx. Size):
        archive log:
        full backup:
        incremental backup:
    As for "best" practices, I think you know the basics from your posting, but a lot of it will also depend on the individual system and how it is integrated overall.
    Some thoughts I have for best practices:
    Backups ---
    1) Nightly if possible
    2) Tapes stored off site
    3) Archive logs backed up throughout the day
    4) To disk, then to tape; leave the backup on disk until the next backup
    Datafiles ---
    1) Depending on the hardware used:
    a) separate datafiles from indexes
    b) separate high-I/O datafiles/indexes onto dedicated disks/LUNs/trays
    2) file names representative of usage (similar to the tablespace name)
    3) keep them to a reasonable size, < 2 GB (again, system architecture dependent)
    Security ---
    At least meet DOD - DISA standards where/when possible
    http://iase.disa.mil/stigs/stig/database-stig-v7r2.pdf
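    A few of the checks mentioned in the original post can be scripted straight off the data dictionary; a quick sketch using standard views:

        -- Control files multiplexed?
        SELECT name FROM v$controlfile;
        -- More than one member per redo log group?
        SELECT group#, COUNT(*) AS members FROM v$logfile GROUP BY group#;
        -- Temp tablespace using TEMPFILEs?
        SELECT tablespace_name, file_name FROM dba_temp_files;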
    Hope that gives you a start
    Regards
    tim

  • ASM and Database Instances (best practices)

    Hello
    Platform AIX 5 TL8,
    Oracle 10.2.0.4
    Context RAC ASM (2 nodes)
    We have 25 Oracle databases running under the same ASM instance on our RAC. I think this is too much, and that splitting them, at least by creating a new ASM instance in another RAC environment, would be better.
    Any comments or advice?
    Bests Regards
    Den

    user12067184 wrote:
    We have 25 Oracle databases running under the same ASM instance on our RAC. I think this is too much...
    Hi Den ,
    It is not advisable to have 25 databases in a single RAC cluster. Instead of separate databases, you can also think of creating different schemas in the same database.
    For ASM best practices, please see:
    ASM Technical Best Practices [ID 265633.1]
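    If you want to see what that single ASM instance is actually serving before deciding to split, the standard views give a quick picture (run against the ASM instance):

        -- Which database instances are connected to this ASM instance?
        SELECT db_name, instance_name, status FROM v$asm_client;
        -- Disk group capacity at a glance.
        SELECT name, total_mb, free_mb FROM v$asm_diskgroup;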
    Regards
    Rajesh

  • Mapping creation best practice

    What is the best practice when designing OWB mappings?
    Is it better to have a smaller number of complex mappings or a larger number of simple mappings, particularly when accessing a remote DB to extract the data?
    A simple mapping might have a smaller number of source tables, while a complex mapping might have more source tables and more expressions.

    If you're an experienced PL/SQL (or other language) developer then you should adopt similar practices when designing OWB mappings i.e. think reusability, modules, efficiency etc. Generally, a single SQL statement is often more efficient than a PL/SQL procedure therefore in a similar manner a single mapping (that results in a single INSERT or MERGE statement) will be more efficient than several mappings inserting to temp tables etc. However, it's often a balance between ease of understanding, performance and complexity.
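    To illustrate the single-statement point above: the shape to aim for is one MERGE rather than several mappings staging through temp tables. A minimal sketch with hypothetical staging/dimension tables:

        MERGE INTO dim_customer d
        USING stg_customer s
        ON (d.customer_key = s.customer_key)
        WHEN MATCHED THEN
          UPDATE SET d.customer_name = s.customer_name
        WHEN NOT MATCHED THEN
          INSERT (customer_key, customer_name)
          VALUES (s.customer_key, s.customer_name);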
    Pluggable mappings are a very useful tool to split complex mappings up, these can be 'wrapped' and tested individually, similar to a unit test before testing the parent mapping. These components can then also be used in multiple mappings. I'd only recommend these from 10.2.0.3 onwards though as previous to that I had a lot of issues with synchronisation etc.
    I tend to have one mapping per target and where possible avoid using a mapping to insert to multiple targets (easier to debug).
    From my experience with OWB 10, the code generated is good and reasonably optimised, the main exception that I've come across is when a dimension has multiple levels, OWB will generate a MERGE for each level which can kill performance.
    Cheers
    Si

  • [iPhone SDK] Database/SQLite Best Practices

    Taking a look at the SQLiteBooks example, the idea of "hydrating" objects on-the-fly doesn't seem like a best practice to me. If you have a large number of books, for example, and they all need to be hydrated at the same time, I would want to lump the hydrating statements into one large query instead of each Book doing a separate query for its own data.
    I'm imagining these two scenarios:
    1. A large query that loops through the rows of its result set and creates Book objects from them, adding them to an array.
    2. A query to get the primary keys, creating Book objects with only that data. When a book hydrates, it queries the rest of its data specifically for itself. (SQLiteBooks example)
    I can see how the hydrating of method 2 would make it easier to manage the memory used by each Book object, but I'm really concerned about the speed when hundreds of Books have to hydrate themselves.

    If you know you're going to need many of the objects hydrated at the same time you may want to create some mechanism that allows you to do a single query and hydrate everything at once. In the case of SQLiteBooks, the hydrate method is only called when the user clicks through to the details view for a particular item, so hydrating objects en masse is not necessary.
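    In SQL terms the trade-off is one set-based query versus N keyed lookups; a sketch (column names assumed, the actual SQLiteBooks schema may differ):

        -- Scenario 1: hydrate every Book in one pass.
        SELECT pk, title, author, copyright FROM book;
        -- Scenario 2 (SQLiteBooks-style): keys first, then one lookup per Book.
        SELECT pk FROM book;
        SELECT title, author, copyright FROM book WHERE pk = ?;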
    I'd be very careful to avoid premature optimization here. We're working with very limited resources on these devices, and it is definitely important to optimize your code, but if you're dealing with a large number of items you could easily shoot yourself in the foot by hydrating a bunch of items at once and using up all your memory. If you dehydrate your objects when you receive a memory warning (as I believe the SQLiteBooks example does) you could end up thrashing - reading all your data from the DB, hydrating your objects, receiving a memory warning, dehydrating, and repeating. As the previous reply states, indexed lookups in SQLite are extremely fast. Even on the iPhone hardware you can probably do hundreds of lookups per second (I haven't run a test, but that'd be my guesstimate).

  • Vendor Creation Best Practices?

    Should someone in the Purchasing dept. (one who is able to issue POs) also have the ability to create vendors via MK01 (local) or XK01 (central)? I'm trying to understand the risks when one who can issue POs can also set up vendors.
    Thanks in advance!
    Best, Michael

    Hi,
    This depends totally on the organisational set-up. Each company has its own checks for curtailing malpractice. There are also international standards available for controlling this.
    There is nothing wrong with the same person creating a vendor master and also issuing a PO, so long as the requisite checks and controls (e.g. a release strategy or authorisation control, to name a few) are in place.
    If the organisation and the responsible/competent authorities are satisfied that a foolproof control mechanism is in place to prevent malpractice, it is fine for the same person to create the vendor master as well as the PO (e.g. a small organisation may not have the luxury of keeping 2 different people for this activity).
    I hope you are getting my point. SAP is just an enabler for business to run. The checks and controls have to be there anyway.
    Regards,
    Rajeev

  • File Creation - Best Practice

    Hi,
    I need to create a file daily based on a single query. There's no logic needed.
    Should I just spool the query in a unix script and create the file like that or should I use a stored procedure with a cursor, utl_file etc.?
    The first is probably more efficient but is the latter a cleaner and more maintainable solution?

    I'd be in favour of keeping code inside the database as far as possible. I'm not dismissing scripts at all - they have their place - I just prefer to have all code in one place.
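    A minimal sketch of the in-database version, assuming a directory object (EXPORT_DIR here is hypothetical) has already been created and granted:

        -- Prerequisite (as a DBA): CREATE DIRECTORY export_dir AS '/some/path';
        DECLARE
          f UTL_FILE.FILE_TYPE;
        BEGIN
          f := UTL_FILE.FOPEN('EXPORT_DIR', 'daily_extract.csv', 'w');
          FOR r IN (SELECT col1, col2 FROM some_table) LOOP
            UTL_FILE.PUT_LINE(f, r.col1 || ',' || r.col2);
          END LOOP;
          UTL_FILE.FCLOSE(f);
        END;
        /

    The daily run then becomes a DBMS_SCHEDULER (or DBMS_JOB on older releases) job instead of a cron entry.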

  • Question about database structure - best practice

    I want to create, display, and maintain a data table of loan rates. These rates will be for two loan categories - Conforming and Jumbo. They will be for two loan terms - 15-year and 30-year. Within each term, there will be a display of points (0, 1, 3) - rate - APR.
    For example -
    CONFORMING
    30 year
    POINTS   RATE    APR
    0        6.375   6.6
    1        6.125   6.24
    3        6.0     6.12
    My first question is -
    Would it be better to set up the database with 5 fields (category, term, points, rate, apr), or 13 fields (category, 30_zeropointRate, 30_onepointRate, 30_threepointRate, 30_zeropointAPR, 30_onepointAPR, 30_threepointAPR, 15_zeropointRate, 15_onepointRate, 15_threepointRate, 15_zeropointAPR, 15_onepointAPR, 15_threepointAPR)?
    The latter option would mean that my table would only contain two records - one for each of the two categories. It seems simpler to manage in that regard.
    Any thoughts, suggestions, recommendations?
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources
    ==================

    Thanks, Pat. I'm pretty sure that this is a dead-end expansion. The site itself will surely expand, but I think this particular need will be informational only....
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources
    ==================
    "Pat Shaw" <[email protected]> wrote in message
    news:[email protected]...
    > But if the site ever wants to expand on its functionality etc. it can be very difficult to get round a de-normalised database. You can find that you have tied yourself in knots and the only solution is to go back and redesign the database, which often includes major redesigning of the front-end too.
    >
    > If you are confident that this will not be the case then go with your initial thoughts, but don't be too lenient just in case. Leave yourself a little scope. I always aim for 3rd normal form as this guarantees a robust database design without being OTT.
    >
    > Pat.
    >
    > "Joris van Lier" <[email protected]> wrote in message news:[email protected]...
    >> "Murray *ACE*" <[email protected]> wrote in message news:[email protected]...
    >>> [Murray's original question, quoted in full above]
    >>
    >> In my opinion, normalizing is not necessary with small sites; for example, the uber-normalized database design I did for the telcost compare matrix (http://www.artronics.nl/telcostmatrix/matrix.php) proved to be totally overkill.
    >>
    >> Joris
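    For concreteness, the 5-field option Pat is endorsing sketches out like this (column types are assumptions):

        -- One row per category/term/points combination; 12 rows cover the example.
        CREATE TABLE loan_rate (
          category VARCHAR(20) NOT NULL,  -- 'CONFORMING' or 'JUMBO'
          term     INT         NOT NULL,  -- 15 or 30 (years)
          points   INT         NOT NULL,  -- 0, 1, or 3
          rate     DECIMAL(6,3),
          apr      DECIMAL(6,3),
          PRIMARY KEY (category, term, points)
        );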

  • Windows Azure SQL Database Statistics Best Practices

    On SQL Azure, is it a good practice to have statistics auto-update disabled, or otherwise?
    Please do not compare Azure to the on-premise SQL engine... those who have worked on Azure know what I mean.
    It is a pain to maintain the indexes, especially if they have BLOB-type columns; no online index rebuilds are allowed in Azure. I was targeting statistics as I see the data being frequently updated, so maybe I can have the developers update the stats as soon as they do a major update/insert, or I can have a job that does it on a weekly basis if I turn off the auto stats update.
    I execute a stats FULLSCAN update every week, but I think it is overwritten by the auto-update stats. So, back to my question: does anyone have any experience with turning off stats on Azure (any benefits)?

    You can't disable auto stats in WASD. They're on by default and have to stay that way.
    Rebuilding indexes is possible, but you have to be careful how you approach it. See my blog post for rebuilding indexes:
    http://sqltuna.blogspot.co.uk/2013/10/index-fragmentation-in-wasd.html
    As a rule I wouldn't have LOB columns as part of an index key - is that what's causing you issues?
    Statistics work the same as on-premises, in that they are triggered when a certain threshold of changes is reached (or some other triggers). That's not a bad thing though, as it means they're up to date. Is there any reason you think this is causing you issues?
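    For reference, the weekly manual pass described in the question is per-table T-SQL along these lines (dbo.SomeTable is a placeholder):

        -- Manual full-scan refresh; auto-update stays on in WASD and may later
        -- refresh the same statistics with default sampling once the change
        -- threshold is reached.
        UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;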

  • Lync backend databases and Lync local databases placement best practices

    We are deploying Lync 2013 for 30,000 Lync users across 2 pools in 2 datacenters, with a dedicated SQL Server for Lync to host all databases for the FE, PC and monitoring roles.
    Can anyone provide guidance around disk drives for the SQL databases and local databases?
    Lync backend databases
    Planning for Lync Server 2013 requires critical thinking about the performance impact that the system will have on your current infrastructure. A point of potential contention in larger enterprise deployments is the performance of SQL Server and placement of database and log files to ensure that the performance of Lync Server 2013 is optimal, manageable, and does not adversely affect other database operations.
    http://blogs.technet.com/b/nexthop/archive/2012/11/20/using-the-databasepathmap-parameter-to-deploy-lync-server-2013-databases.aspx
    Is it recommended to place all Lync DBs on one drive and logs on another, or to separate them onto multiple drives using DatabasePathMap?
    Lync 2013 local databases
    In the Capacity Planning guidance, Microsoft describes that the disk requirement for a Lync 2013 front end server, given our usage model, is eight disks configured as two drives:
    One drive will use two disks in RAID 1 to store the Windows operating system and the Lync 2013 and SQL Express binaries
    One drive will use six disks in RAID 1+0 to store databases and transaction logs from the two SQL Express instances (RTCLOCAL and LYSS)
    Is this enough, or should we separate the local DBs and logs onto separate drives as well? Also, how big do the local DBs get?

    During the planning and deployment of Microsoft SQL Server 2012 or Microsoft SQL Server 2008 R2 SP1 for your Lync Server 2013 Front End pool, an important consideration is the placement of data and log files onto physical hard disks for performance.
    The recommended disk configuration is to implement a 1+0 RAID set using 6 spindles. Placing all database and log files that are used by the Front End pool and associated server roles and services (that is, Archiving and Monitoring Server, Lync Server Response Group service, Lync Server Call Park service) onto the RAID drive set using the Lync Server Deployment Wizard will result in a configuration that has been tested for good performance. The database files and what they are responsible for is detailed in the following table.
    http://technet.microsoft.com/en-us/library/gg398479.aspx
    Microsoft's TechNet recommendation contradicts the blog recommendation for Lync 2010.

  • IS-Retail: Site Creation Best Practice

    Hi !
    I have 3 basic questions on this topic:
    1. In IS-Retail is a Site considered 'config' or 'master data'?
    2. Do all sites need to exist in the golden config client, or is it sufficient to have only the DCs (and not the Stores) in it?
    3. After go-live, is it ok to create new stores directly in production? (I hope the answer to this q is yes!)
    Thanks,
    Anisha.

    Hi, my answers to your questions are as follows:
    1) In IS-Retail a Site is master data, but to create a Site/DC/Store you need to do some config in SPRO, like number ranges for sites, account groups, and creating DC and site profiles and assigning account groups to them. Then you create the DC or site using WB01; at that point it is treated as site master data.
    2) Yes, you should have all sites in the golden client (I am not clear on this question).
    3) After go-live it is better to create new stores in development and then export them (using WBTI/WBTE) to production; if any future changes are required on a store (like assigning/deleting a merchandise category), you can then do it easily and avoid data mismatch in production.
    regards
    satish

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database originally designed in 2000 that now has an SQL back-end. The database has not yet been converted to a higher format such as Access 2007 since at least 2 users are still on Access 2003. It is fine if suggestions will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution. We have thousands of products, each with a single identifier. There are customers who provide us regular sales reporting for what was sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important. This reporting may or may not include all of our product identifiers. The reporting is typically based on calendar-defined timing, although we have some customers who have their own calendars which may not align to a calendar month or calendar year, so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products per report. Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named. The product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week. Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand, returns, on-hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time period, sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain the sales reporting information indefinitely.
    I welcome any suggestions, best practices or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
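    For what it's worth, the shape a designer would likely aim for is one long, normalized reporting table rather than one sheet per customer/division/period; a sketch with hypothetical names, one row per customer/product/period/measure:

        CREATE TABLE sales_report (
          customer_id   INT           NOT NULL,
          product_id    VARCHAR(30)   NOT NULL,  -- the single product identifier
          period_start  DATE          NOT NULL,  -- handles non-calendar customer periods
          period_end    DATE          NOT NULL,
          measure_name  VARCHAR(50)   NOT NULL,  -- e.g. 'on_hand', 'sell_through_units'
          measure_value DECIMAL(18,2) NULL,
          PRIMARY KEY (customer_id, product_id, period_start, measure_name)
        );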
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that sells tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K records to 300K records.
    What is the best practice to design a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Table columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]
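    For reference, the usual alternative to table-per-device is a single table keyed by the device, with an index matching the report queries; a sketch with assumed column types, since the post doesn't give them:

        -- One table for all devices; DeviceID replaces the per-device table name.
        CREATE TABLE dbo.DeviceMessage (
          MessageID    BIGINT IDENTITY(1,1) PRIMARY KEY,
          DeviceID     INT      NOT NULL,
          MessageLat   FLOAT    NULL,
          MessageLong  FLOAT    NULL,
          MessageSpeed FLOAT    NULL,
          MessageTime  DATETIME NULL
          -- ... remaining Message* columns as in the existing tables
        );
        -- Serves per-unit, time-ranged report queries.
        CREATE INDEX IX_DeviceMessage_Unit_Time
          ON dbo.DeviceMessage (DeviceID, MessageTime);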

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I've come across in my 9 months of work at the company (working as a software engineer, but I am planning to take over database maintenance since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate a report of the previous month for all their cars; some clients have 100+ cars, and some have few. This is when the real issue starts: they are pulling their data from our server over the internet while we have 2,000 units sending data to our server, and they keep getting read timeouts since SQL Server prioritizes the inserts and holds up all the select commands. I solved it temporarily in the code using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is the one who wrote the C# app used hard-coded SQL statements, AND the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now talking about reports: there are summary reports, stops reports, zone reports, etc.; most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I'd better use stored procedures and views rather than hard-coded select statements; what difference will this bring when talking about performance?
    Thanks in advance.
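    On the snapshot question above: once row versioning is enabled at the database level, readers use it automatically with no query changes. A minimal sketch (T-SQL; note this requires SQL Server 2005 or later - there was no "SQL Server 2003" release, so check the actual version in use):

        -- Readers see the last committed row version instead of blocking
        -- behind the inserts; no application changes needed.
        ALTER DATABASE YourTrackingDb SET READ_COMMITTED_SNAPSHOT ON;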

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION - PUBLIC VS. PRIVATE

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I log in as portal and created a private connection. I then, from Oracle Portal, create a portlet and add a Discoverer worksheet using the private connection that I created as the portal user. This works fine... users access the portal and they can see the worksheet. When they click the analyze link, the users are prompted to enter a password for the private connection. The following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue.
    I originally created a public connection... and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal; when users click the analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when a user browses to http://host:port/discoverer/viewer they enter their SSO information, and then any user with an SSO account can see the public connection... very insecure! When private connections are used, no connection information is displayed to SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal page group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want to have the ability to display the analyze link. When the SSO user clicks on this they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data... there is no need to restrict access based on SSO username... 1 database user will be set up in either the public or private connection.

    You can make it happen by creating a private connection for the 40 users via a capi script, and when creating the portlet select the 2nd option in the Users Logged In section. With this, the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Another thing: there is an option for entering a password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    thanks
    kiran
