Database Administration - Best Practices

Hello Gurus,
I would like to know various best practices for managing and administering Oracle databases. To give you all an example of what I am thinking about - for example, if you join a new company and would like to see if all the databases conform to some kind of standard/best practices, what would you look for - for instance - are the control files multiplexed, is there more than one member for each redo log group, is the temp tablespace using a TEMPFILE or otherwise... something of that nature.
Do you guys have something in place which you use on a regular basis? If yes, I would like to get your thoughts and insights on this.
Appreciate your time and help with this.
Thanks
SS

I have a template that I use to gather preliminary information so that I can at least get a glimpse of what is going on. I have posted the text below... it looks better as a spreadsheet.
System Name
System Description

Contacts (Name / Phone / Pager)
    System Administrator
    Security Administrator
    Backup Administrator

Below this line, filled out for each server in the system:

Server Name
Description (Application, Database, Infrastructure, ...)
Oracle version/patch level     CSI

Logins (with next password expiration)
    Server Login
    Application Schema Owner
    SYS
    SYSTEM

Locations
    ORACLE_HOME
    ORACLE_BASE
    Oracle User Home
    Oracle SQL scripts
    Oracle RMAN/backup scripts
    Oracle BIN scripts
    Oracle backup logs
    Oracle audit logs
    Oracle backup storage
    Control File 1
    Control File 2
    Control File 3
    Archive Log Destination 1
    Archive Log Destination 2
    Datafiles Base Directory

Backup Schedule (Backup Type / Day / Time / Est. Time to Complete / Approx. Size)
    archive log
    full backup
    incremental backup
As for "Best" practices, well I think that you know the basics from your posting but a lot of it will also depend on the individual system and how it is integrated overall.
Some thoughts I have for best practices:
Backups ---
1) Nightly if possible
2) Tapes stored off-site
3) Archive logs backed up throughout the day
4) Back up to disk, then to tape, and leave the backup on disk until the next backup
Datafiles ---
1) Depends on the hardware used:
a) separate datafiles from indexes
b) separate high-I/O datafiles/indexes onto dedicated disks/LUNs/trays
2) File names representative of usage (similar to the tablespace name)
3) Keep them a reasonable size, < 2 GB (again, system-architecture dependent)
Security ---
At least meet DoD/DISA standards where/when possible
http://iase.disa.mil/stigs/stig/database-stig-v7r2.pdf
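Quick checks ---
A minimal sketch of queries for the items in your original post (standard dictionary/V$ views; adjust for your release):
SELECT name FROM v$controlfile;                         -- control files multiplexed?
SELECT group#, members FROM v$log;                      -- more than one member per redo log group?
SELECT file_name, tablespace_name FROM dba_temp_files;  -- temp tablespace using TEMPFILEs?
SELECT log_mode FROM v$database;                        -- ARCHIVELOG mode enabled?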
Hope that gives you a start
Regards
tim

Similar Messages

  • Developers access to apps database user(Best Practice)?

    Hello all,
    I'm trying to understand a developer's need to have access to the "apps" database user. The argument can be made that the need is there for all development efforts, but is that really the case? Should all updates/changes by the "apps" user be executed by the apps DBA even in the development environment? I'm trying to get a better understanding of how other organizations are set up. Our shop currently allows developers free rein with the apps user, but there are known security risks in doing so, with "apps" being such a powerful user of the database environment.
    Thanks in advance and any recommendations(Best Practices) will be greatly appreciated.

    Hi,
    We usually give developers access to the APPS schema on the development instances. The access to this schema (and other schemas) is revoked on UAT/PROD (and other instances where the data is not sanitized). When giving such access we are not as worried about the data as we are about the objects, but this is not a big issue as we can recover those objects from other instances. Some organizations do not provide their developers with access to the APPS schema on the development instances, and all the tasks are done by the Apps DBA team. Another approach would be creating a read-only APPS schema (search the forum for details) to allow developers to view the objects/data without bugging the DBAs with such routine tasks.
    Thanks,
    Hussein
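    A minimal sketch of the read-only APPS schema idea mentioned above (the APPS_RO name, password, and sample object are illustrative; a real script would generate grants and synonyms for all APPS objects from DBA_OBJECTS):
    -- Illustrative read-only account
    CREATE USER apps_ro IDENTIFIED BY "change_me";
    GRANT CREATE SESSION TO apps_ro;
    -- Repeat per APPS object; generate these statements from DBA_OBJECTS
    GRANT SELECT ON apps.fnd_user TO apps_ro;
    CREATE SYNONYM apps_ro.fnd_user FOR apps.fnd_user;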

  • Cisco Administration Best Practice - TACACS+ or RADIUS

    I'm new to Cisco and currently building a midsize environment, and I wanted to know: what are the best practices for administration management of Cisco equipment?
    Thanks!

    Using TACACS+ with ACS especially gives you all of the AAAs (authentication, authorization, and accounting) - this is better/best practice for management access to Cisco devices, imho.
    Bilal

  • Scheduled Tasks - Administrator Best Practices

    Hi all,
    I've gotten assistance this week with a couple of scripts and scheduling them as tasks. I actually have well over a dozen running on our Exchange server using a special user with a complex password. This user is not used for logging into any machine, but it is a member of the 'Administrators' group and can be used for tasks requiring elevated privileges.
    What I am interested in learning is what the best practice is for running scheduled tasks. We have several, such as querying AD for members of select OUs or users who meet certain criteria. We also have automated emails regarding certain mailbox metrics, etc. You get the idea.
    Despite the complex credentials, this account is still discoverable and could be used in nefarious ways. Is it possible to run tasks on Server 2008 R2 (possibly 2012) without administrator credentials? Are there certain restrictions on the tasks (e.g., is a scheduled reboot allowed for a standard account, but not querying Active Directory)?
    I have also noticed a checkbox labeled 'Run with highest privileges' and do not fully understand what it means.
    When I try to run the task as a regular user (no remote permissions), it says 'Logon failure: the user has not been granted the requested logon type at this computer.'
    In short, can I safely remove our special user account from 'Administrators' and place it into regular users without breaking all of our tasks?

    Hi KSI-IT,
    Firstly, based on my research, if you want to run a scheduled task under a user account, the account must have the corresponding permissions; in other words, you should also be able to run the script manually as that user.
    1.  For the error you posted, 'Logon failure: the user has not been granted the requested logon type at this computer', please make sure the task account has the 'Log on as a batch job' privilege.
    To add the privilege to the account, please go to
    [Local Security Policy\Local Policies\User Rights Assignment]
    - Log on as a batch job.
    Add the domain\username account and any others you may need and retry.
    2.  For the setting 'Run with highest privileges', this means the task runs with the highest privileges available to that user. This is different from the context menu's 'Run As Admin'.
    It generates the highest-privilege token for the specified user; however, it cannot run as a different user. For a standard user with no elevated permissions, 'Run with highest privileges' does not do anything.
    Reference: What effect does "run with highest privileges" in Task Scheduler have on PowerShell scripts?
    I hope this helps.

  • Database creation best practices.

    Hi,
    We are planning to set up a new database, Oracle 10g, on Sun and AIX. It is a data warehouse environment.
    Can anyone please share documents on the best practices to be followed during database creation/setup? I googled and got some documents but was not satisfied with them, so I thought of posting this query.
    Regards,
    Yoganath.

    YOGANATH wrote:
    Anand,
    Thanks for your quick response. I went through the link, but it seems to be a brief one. I need a sort of crisp/summary document for my presentation, which speaks about:
    1. Initial parameter settings for a data warehouse to start with, like block size, db_file_multiblock_read_count, parallel servers, etc.
    2. Memory parameters, SGA, PGA (say, for a server with 10 GB RAM).
    3. How to split tablespaces, like large, small.
    If someone has just a crisp/outline document covering the above points, I would be grateful.
    Regards,
    Yoganath
    You could fire up dbca, select the 'data warehouse' template, walk through the steps, and at the end do not select 'create a database' but simply select 'create scripts', then take a look at the results, especially the initialization file. Since you chose a template instead of 'custom database' you won't get a CREATE DATABASE script, but you should still get some generated files that will answer a lot of the questions you pose.
    You could even go so far as to let dbca create the database. Nothing commits you to actually using that DB. Just examine it to see what you got, then delete it.
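    For a feel of what dbca's data warehouse template generates, here is a hedged sketch of the parameter areas listed above, written as ALTER SYSTEM statements (the values are placeholders, not recommendations; take real numbers from the generated initialization file):
    -- Memory for a 10 GB server (illustrative split only)
    ALTER SYSTEM SET sga_target = 6G SCOPE=SPFILE;
    ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE=SPFILE;
    -- Typical warehouse I/O and parallelism knobs (verify against dbca output)
    ALTER SYSTEM SET db_file_multiblock_read_count = 32 SCOPE=SPFILE;
    ALTER SYSTEM SET parallel_max_servers = 16 SCOPE=SPFILE;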

  • ASM and Databases Instances (best practices)

    Hello
    Platform AIX 5 TL8,
    Oracle 10.2.0.4
    Context RAC ASM (2 nodes)
    We have 25 Oracle databases working under the same ASM instance on our RAC. I think that this is too much, and that splitting them, at least by creating a new ASM instance on another RAC environment, would be better.
    Any comments or advice?
    Best Regards
    Den

    user12067184 wrote:
    We have 25 Oracle databases working under the same ASM instance on our RAC....
    Hi Den,
    It is not advisable to have 25 databases in a single RAC. Instead of separate databases, you can also think of creating different schemas in the same database.
    For ASM best practices, please follow:
    ASM Technical Best Practices [ID 265633.1]
    Regards
    Rajesh
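    Before splitting, it may help to see exactly what is attached to the ASM instance; V$ASM_CLIENT, queried from the ASM instance, lists each connected database per disk group (a standard view on 10.2):
    -- One row per database instance per mounted disk group
    SELECT group_number, instance_name, db_name, status
    FROM v$asm_client
    ORDER BY db_name;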

  • [iPhone SDK] Database/SQLite Best Practices

    Taking a look at the SQLiteBooks example, the idea of "hydrating" objects on-the-fly doesn't seem like a best practice to me. If you have a large number of books, for example, and they all need to be hydrated at the same time, I would want to lump the hydrating statements into one large query instead of each Book doing a separate query for its own data.
    I'm imagining these two scenarios:
    1. A large query that loops through the rows of its result set and creates Book objects from them, adding them to an array.
    2. A query to get the primary keys, creating Book objects with only that data. When the book hydrates, it queries the rest of the data specifically for itself. (SQLiteBooks example)
    I can see how the hydrating in method 2 would make it easier to manage the memory stored in each Book object, but I'm really concerned about how long it would take hundreds of Books to hydrate themselves.

    If you know you're going to need many of the objects hydrated at the same time you may want to create some mechanism that allows you to do a single query and hydrate everything at once. In the case of SQLiteBooks, the hydrate method is only called when the user clicks through to the details view for a particular item, so hydrating objects en masse is not necessary.
    I'd be very careful to avoid premature optimization here. We're working with very limited resources on these devices, and it is definitely important to optimize your code, but if you're dealing with a large number of items you could easily shoot yourself in the foot by hydrating a bunch of items at once and using up all your memory. If you dehydrate your objects when you receive a memory warning (as I believe the SQLiteBooks example does) you could end up thrashing - reading all your data from the DB, hydrating your objects, receiving a memory warning, dehydrating, and repeating. As the previous reply states, indexed lookups in SQLite are extremely fast. Even on the iPhone hardware you can probably do hundreds of lookups per second (I haven't run a test, but that'd be my guesstimate).
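    As a concrete illustration of the two scenarios (the book table and its columns are assumptions modeled on the SQLiteBooks sample):
    -- Scenario 1: one bulk query; build fully hydrated Book objects in one pass
    SELECT pk, title, author, copyright FROM book;
    -- Scenario 2 (SQLiteBooks style): keys first, then hydrate each row on demand
    SELECT pk FROM book;
    SELECT title, author, copyright FROM book WHERE pk = ?;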

  • Question about database structure - best practice

    I want to create, display, and maintain a data table of loan rates. These rates will be for two loan categories - Conforming and Jumbo. They will be for two loan terms - 15-year and 30-year. Within each term, there will be a display of -
    points (0, 1, 3) - rate - APR
    For example -
    CONFORMING
    30 year
    POINTS RATE APR
    0 6.375 6.6
    1 6.125 6.24
    3 6.0 6.12
    My first question is -
    Would it be better to set up the database with 5 fields (category, term, points, rate, apr), or 13 fields (category, 30_zeropointRate, 30_onepointRate, 30_threepointRate, 30_zeropointAPR, 30_onepointAPR, 30_threepointAPR, 15_zeropointRate, 15_onepointRate, 15_threepointRate, 15_zeropointAPR, 15_onepointAPR, 15_threepointAPR)?
    The latter option would mean that my table would only contain two records - one for each of the two categories. It seems simpler to manage in that regard.
    Any thoughts, suggestions, recommendations?
    Murray --- ICQ 71997575
    Adobe Community Expert

    Thanks, Pat. I'm pretty sure that this is a dead-end expansion. The site itself will surely expand, but I think this particular need will be informational only....
    Murray --- ICQ 71997575
    Adobe Community Expert
    "Pat Shaw" wrote:
    > But if the site ever wants to expand on its functionality etc., it can be very difficult to get around a de-normalised database. You can find that you have tied yourself in knots and the only solution is to go back and redesign the database, which often includes major redesigning of the front-end too.
    > If you are confident that this will not be the case then go with your initial thoughts, but don't be too lenient just in case. Leave yourself a little scope. I always aim for 3rd normal form as this guarantees a robust database design without being OTT.
    > Pat.
    "Joris van Lier" wrote:
    >> In my opinion, normalizing is not necessary for small sites; for example, the uber-normalized database design I did for the telcost compare matrix (http://www.artronics.nl/telcostmatrix/matrix.php) proved to be totally overkill.
    >> Joris
  • Windows Azure SQL Databases Statistics Best practices

    On SQL Azure, is it a good practice to have statistics auto-update disabled, or otherwise?
    Please do not compare Azure to the on-premise SQL engine. Those who have worked on Azure know what I mean.
    It is a pain to maintain the indexes, especially if they have BLOB-type columns; no online index rebuilds are allowed in Azure. I was targeting statistics as I see the data being frequently updated, so maybe I can have the developers update the stats as soon as they do a major update/insert, or I can have a job that does it on a weekly basis if I turn off the auto stats update.
    I execute a stats FULLSCAN update every week, but I think it is overwritten by the auto update stats. So, back to my question: does anyone have experience with turning off stats on Azure (any benefits)?

    You can't disable auto stats in WASD.  They're on by default and have to stay that way.
    Rebuilding indexes is possible, but you have to be careful how you approach it.  See my blog post on rebuilding indexes:
    http://sqltuna.blogspot.co.uk/2013/10/index-fragmentation-in-wasd.html
    As a rule I wouldn't have LOB columns as part of an index key - is that what's causing you issues?
    Statistics work the same as on-premises, in that updates are triggered when a certain threshold of changes is reached (or by some other triggers).  That's not a bad thing, though, as it means they're up to date.  Is there any reason you think this is causing you issues?
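    For the weekly job described above, a minimal T-SQL sketch (dbo.MyTable is a placeholder; the staleness query assumes a service version that exposes sys.dm_db_stats_properties):
    -- Manual full-scan statistics update on one table
    UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
    -- Check how stale each statistic is before deciding to update
    SELECT s.name, sp.last_updated, sp.rows, sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID('dbo.MyTable');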

  • Lync backend databases and Lync local databases placement best practices

    We are deploying Lync 2013 for 30,000 Lync users across 2 pools in 2 datacenters. Dedicated SQL server for Lync to host all databases for FE, PC and monitoring roles.
    Can anyone provide guidance around disk drives for SQL databases and local databases?
    Lync backend databases
    Planning for Lync Server 2013 requires critical thinking about the performance impact that the system will have on your current infrastructure. A point of potential contention in larger enterprise deployments is the performance of SQL Server and the placement of database and log files, to ensure that the performance of Lync Server 2013 is optimal and manageable and does not adversely affect other database operations.
    http://blogs.technet.com/b/nexthop/archive/2012/11/20/using-the-databasepathmap-parameter-to-deploy-lync-server-2013-databases.aspx
    Is it recommended to place all Lync DBs on one drive and logs on another, or to separate them onto multiple drives using DatabasePathMap?
    Lync 2013 local databases
    In the Capacity Planning guidance, Microsoft describes that the disk requirement for a Lync 2013 front end server, given our usage model, is eight disks configured as two drives:
    One drive will use two disks in RAID 1 to store the Windows operating system and the Lync 2013 and SQL Express binaries.
    One drive will use six disks in RAID 1+0 to store databases and transaction logs from the two SQL Express instances (RTCLOCAL and LYSS).
    Is this enough, or should we separate the local DBs and logs onto separate drives as well? Also, how big do the local DBs get?

    During the planning and deployment of Microsoft SQL Server 2012 or Microsoft SQL Server 2008 R2 SP1 for your Lync Server 2013 Front End pool, an important consideration is the placement of data and log files onto physical hard disks for performance.
    The recommended disk configuration is to implement a 1+0 RAID set using 6 spindles. Placing all database and log files that are used by the Front End pool and associated server roles and services (that is, Archiving and Monitoring Server, Lync Server Response Group service, Lync Server Call Park service) onto the RAID drive set using the Lync Server Deployment Wizard will result in a configuration that has been tested for good performance. The database files and what they are responsible for are detailed in the table at the link below.
    http://technet.microsoft.com/en-us/library/gg398479.aspx
    Microsoft's TechNet recommendation contradicts the blog recommendation for Lync 2010.

  • Best practices of BO/BW SSO SAP Authentication transports

    Hi Friends,
    We are going to integrate a BW system with BO (SAP authentication). All the queries are built through BICS connections, and we have various reporting tools to implement SSO SAP authentication (Webi, Crystal, Dashboards, Design Studio, etc.).
    As per the process, there are certain activities which have to be performed at the BW level,
    e.g. -- BW role creation (PFCG -- Crystal role enablement) and assignment to BO users.
    Once that is created in BW, we have to do the integration at the BO level (in the CMC application) by selecting authentication and roles import, followed by Groups, Users, folders and access levels...
    My questions here are:
    Transports of BW objects for BO SSO (SAP) authentication (such as roles created for users, keystore certificate, uploads): will these objects be transported by the BW team, or will they separately download or upload the certificate in the different systems (like QAS, PROD)?
    At the BO level, once I integrate BO SSO, do I need to do the manual integration in the QAS and Production systems as well, or can it be transported with the promotion management of the BO tool?
    Can this SSO (SAP) authentication be applied to all tools in BI Launchpad, such as Design Studio, Webi, Web application, Crystal, etc., as all users are required to have SSO to all BO tools?
    Regarding the LUMIRA tool, can we do SSO authentication?
    Please share your thoughts and experience.
    It would be great if I could get a BO administration best practices document for BW BO SSO and user and group management in CMC.

    Hi,
    Please find my answers below:
    1. The roles will be created in BW and should automatically appear under BO CMC Authentication SAP roles if there is connectivity set up between BO and BW, irrespective of the SSO. The roles are transported by the BW security team.
    2. Every environment will have a unique connection to the corresponding SAP BW environment. For example, SAP BW DEV will be mapped to BO DEV, and SAP BW PROD will be mapped to BO PROD. So these settings cannot be migrated through Promotion Management.
    3. This authentication can be applied to all tools; the SSO does not depend on the tool, it depends on the integration between the two systems, which in this case are BO and SAP BW.
    As mentioned earlier, after integration all tools can have SSO.
    You can refer to a lot of help documents on this site which will help you to set up the integration between SAP BW and SAP BO.
    Kind Regards,
    Priyanka

  • Best practice for tracking database changes...?

    Dear Oracle gurus,
    I'm still relatively new to database administration, and recently I ran into a situation for which I'm not sure there's some textbook scenario analysis or practice.
    I find it hard to track all the database changes across different servers. Our company develops software that uses the Oracle database, so we have development and test servers set up here and there, with really minimal control over them. The problem arises when we make rapid design changes to our system, which require multiple and rapid changes to the databases. I find it really hard to keep track of everything, because sometimes I can't patch a server because people are still using it for development/testing/investigation/etc.
    So, is there some kind of good practice for tracking database changes (which we even write patches for), monitoring schema modifications, or maybe even versioning database objects? I've tried to find some information, but I think I did not look in the right places or ask the right questions.
    Any help is appreciated.
    Best regards,
    Peter Tung

    The first thing I would start with is:
    Find a version control system that will allow you to store files and version them (PVCS for example). You could for example, store all the sql scripts. Whenever a change is needed, the user could check the program out from the version control tool and make changes and check it back in. Besides sql scripts, you could also store binary files or any type of source code files in a version control system. This would at least put some things in order. In a version control system, you could associate a number or a string with all the files within a patch.
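    On the "monitoring schema modifications" part of the question, a common complement to version-controlling the scripts is a server-side DDL trigger that logs every change to an audit table; a minimal Oracle sketch (object names are illustrative):
    -- Audit table for DDL activity
    CREATE TABLE ddl_audit (
        event_time   DATE,
        os_user      VARCHAR2(100),
        event_type   VARCHAR2(30),
        object_owner VARCHAR2(128),
        object_name  VARCHAR2(128)
    );
    -- Fires after any CREATE/ALTER/DROP in the database
    CREATE OR REPLACE TRIGGER trg_ddl_audit
    AFTER DDL ON DATABASE
    BEGIN
        INSERT INTO ddl_audit
        VALUES (SYSDATE,
                SYS_CONTEXT('USERENV', 'OS_USER'),
                ora_sysevent,        -- CREATE, ALTER, DROP, ...
                ora_dict_obj_owner,
                ora_dict_obj_name);
    END;
    /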

  • Want the best practices docs on Oracle database Admin provided by Oracle

    Hi there,
    I looked everywhere and didn't find best practices docs on Oracle database administration, especially on creating DBs in Oracle 10g. I can find bits and pieces here and there, but I didn't find it all incorporated in one place. Could somebody direct me to / provide me with this?

    ok, I'm not looking for the Oracle-provided manual to find out the best practices in the DB field. I'm looking for the best practices when creating the DB in Oracle, for example. I can read all the Oracle manuals on Oracle's Tahiti documentation site. But I don't know the most-used and practical things to do when creating the DB. If I have to define what the best practices should be when creating the DB, here is the checklist (a couple of the items are sketched as commands below):
    Here are the best practices when creating Oracle 10g DBs:
    1. Create a meaningful database name
    2. Create the directory structure following the Optimal Flexible Architecture (OFA)
    a. Use the suffix .LOG for redo logs
    b. Use the suffix .DBF for data files
    c. Use the suffix .CTL for control files
    3. Enable password complexity
    4. Enable ARCHIVELOG mode
    5. Use Oracle Managed Files
    6. Create separate tablespaces for data files and indexes
    7. Put archive logs on multiple different drives
    8. Multiplex redo log files and control files
    etc.
    I want to see what other DB gurus consider the best practices in the field.
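    A couple of the checklist items expressed as commands, as a hedged sketch (the file paths are placeholders for your own OFA layout; run from SQL*Plus as SYSDBA):
    -- 4. Enable ARCHIVELOG mode (requires a mounted, not open, database)
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    -- 8. Multiplex the redo logs: add a second member to each group
    ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo01b.log' TO GROUP 1;
    -- 8. Multiplex the control file (takes effect at the next restart)
    ALTER SYSTEM SET control_files =
        '/u01/oradata/ORCL/control01.ctl',
        '/u02/oradata/ORCL/control02.ctl' SCOPE=SPFILE;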

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that makes tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice to design a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Table columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months of work at the company (working as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate a report of the previous month for all their cars; some clients have 100+ cars, and some have a few. This is when the real issue starts: they are calling their data from our server over the internet while we have 2000 units sending data to our server, and they keep getting read timeouts since SQL Server gives priority to inserts and holds all select commands. I solved it temporarily in the code using "Read Uncommitted" once I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes lots of time when selecting 100+ units. That's what I want to solve. The problem is that the one who wrote the C# app used hard-coded SQL statements, AND the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now talking about reports: there are summary reports, stops reports, zone reports, etc.; most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots, or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case, since our database size is 78 GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures and views rather than hard-coded select statements; what difference will this make when talking about performance?
    Thanks in advance.
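    On the snapshot question above: a hedged sketch, assuming an upgrade to at least SQL Server 2005 is possible, is to turn on READ_COMMITTED_SNAPSHOT, after which ordinary SELECTs read row versions automatically (no code change needed), and to fold the 2500 identical tables into one table keyed by device (all names below are illustrative):
    -- Readers stop blocking on writers; SELECTs use row versions automatically
    ALTER DATABASE TrackingDb SET READ_COMMITTED_SNAPSHOT ON;
    -- One table for all devices instead of one table per device
    CREATE TABLE dbo.DeviceMessage (
        DeviceID     INT      NOT NULL,  -- replaces the per-device table name
        MessageID    INT      NOT NULL,
        MessageTime  DATETIME NOT NULL,
        MessageSpeed FLOAT    NULL,
        -- ... remaining Message* columns as in the existing tables ...
        CONSTRAINT PK_DeviceMessage PRIMARY KEY (DeviceID, MessageID)
    );
    -- Monthly reports filter by device and time range, so index to match
    CREATE INDEX IX_DeviceMessage_Time ON dbo.DeviceMessage (DeviceID, MessageTime);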

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I logged in as portal and created a private connection. Then, from Oracle Portal, I create a portlet and add a Discoverer worksheet using the private connection that I created as the portal user. This works fine... users access the portal and can see the worksheet. When they click the analyze link, the users are prompted to enter a password for the private connection. The following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue.
    I originally created a public connection... and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal, and when users click the analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when a user browses to http://host:port/discoverer/viewer they enter their SSO information, and then any user with an SSO account can see the public connection... very insecure! When private connections are used, no connection information is displayed to SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the Worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal page group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want the ability to display the analyze link. When the SSO user clicks on this they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data... there is no need to restrict access based on SSO username... 1 database user will be set up in either the public or private connection.

    You can make it happen by creating a private connection for the 40 users via a CAPI script and, when creating the portlet, selecting the 2nd option in the Users Logged In section. The portlet then uses their own private connection every time a user logs in, so it won't ask for a password.
    Another thing: there is an option for entering a password (or not) in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran
