External Table Authorization Best Practices

Hi,
I am working on OBIEE external table authorization. I have implemented it successfully for one project (catalog). The fields of the authorization table (AuthTable) are:
Windows_ID, Employeeid, Name, EmpEMail, GroupName, Process_ID, Process_Name, Portal_Path
As per the requirement, a user should see data only for certain processes, so I added a Process_ID column and created an initialization block in the repository with a query like:
SELECT 'PROCESS_ID', AuthTable.Process_ID
FROM AuthTable
WHERE UPPER(AuthTable.AD_ID) = UPPER(':USER')
Then, for the user groups, I applied data filters on all the tables; e.g. for every logical table I applied the filter:
Dim_Process."Process ID" = VALUEOF(NQ_SESSION."PROCESS_ID")
I checked the data and everything is correct. But my question is:
We have many projects/catalogs, and the filter criteria will be different for each. Should we add a new column for each criterion to the same AuthTable, or is there another, better way to maintain this? If we keep one table for all the projects/catalogs it will get very messy, so I would prefer separate tables for different projects/catalogs, since their data marts are different.
The problem is that for all the other session variables we can use different initialization blocks, and hence different tables, but for PORTALPATH there should be only one initialization block. So do we need to keep everything in the same table just for the sake of PORTALPATH?
Please tell me if my understanding is wrong somewhere, or if there is a better way to do this.
Regards
Saurabh
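
For what it's worth, one common way to avoid adding a new column to AuthTable for every project/catalog is a single row-wise initialization block over a generic name/value table. The sketch below is only an illustration of that idea: the AUTH_VARIABLES table and its columns are hypothetical, not part of your existing model.

-- Hypothetical consolidated table, one row per user/variable/value:
--   AUTH_VARIABLES(WINDOWS_ID, VARIABLE_NAME, VARIABLE_VALUE)
-- A row-wise initialization block over it can populate PROCESS_ID,
-- PORTALPATH and any future per-catalog variables without schema changes:
SELECT AV.VARIABLE_NAME, AV.VARIABLE_VALUE
FROM AUTH_VARIABLES AV
WHERE UPPER(AV.WINDOWS_ID) = UPPER(':USER')

Each catalog's filters can keep referencing VALUEOF(NQ_SESSION."PROCESS_ID") and so on, while the rows in AUTH_VARIABLES can be loaded from separate per-project tables if you prefer to keep the data marts apart.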

Hi,
Please refer to this link; Kumar explained it very clearly:
http://obieeblog.wordpress.com/category/obiee/obiee-security/
Please award points if this is helpful.
Regards,
Sarat Nallapati

Similar Messages

  • User Authorization, best practices for this custom application requirement?

    JDeveloper 12c (12.1.2)
    We want to use external LDAP (active directory) with ADF security to authenticate and authorize users.
    One of our custom application requirements is that there is a single page with many interactive components. It has probably about 15 tables and each table would need to have the following buttons (or similar components):
    - delete: (if certain row is selected) to delete it
    - edit: (if certain row is selected) takes user to 'edit page' where changes can be made
    - create: to create new record for this particular VO (table)
    So that would be 3 x 15 = 45 different actions that a single user could possibly perform. Not all users have the same 'powers', i.e. some users can only edit certain tables and delete from one or two, while most users can create and edit most VOs.
    Back when this application was originally developed (I believe with JDeveloper 10g and UIX), the way it was done was that we maintained a table in the database with 'user credentials' stored as Y or N flags.
    For example: DEL_VO1, EDIT_VO1, ADD_VO1....
    So when a user was authenticated, we would pull all these credentials from the DB table and load them into session variables, and then use EL to render or not render certain buttons on the page. For example: rendered="#{sessionScope.appDelVo1 == 'Y'}"
    Moving forward into the latest ADF technology, what would be the best practice to achieve the described functionality?

    Hi,
    ADF BC lets you add permissions at the entity level (including remove and update). So you can create permissions for the entity, since for data security it doesn't matter how the data is accessed: if a user is not allowed to change a database table, that applies to both tables and forms. You can then use EL to check the permission, so there is no need to keep the privileges in the database.
    If a user is allowed to update an entity, you can check this using EL in the UI:
    <af:inputText value="#{bindings.DepartmentName.inputValue}"
                  readOnly="#{!bindings.DepartmentName.hints.updateable}"/>
    Watch this for full coverage of ADF Security: Oracle ADF Security Overview - Oracle JDeveloper 11g R1 and R2
    Frank

  • Varying table columns, best practices

    I've been wondering about this for quite some time now. JTable is very complex, but it has a lot of functionality that hints at reusable models. The separation of TableModel and ColumnModel seems to hint at being able to reuse a TableModel that stores some sort of objects and apply different ColumnModels to view the data in different ways. Which is really cool.
    However, who is in charge of managing the columns? The default implementation is usually good enough, but it doesn't do anything special to the columns, like assigning renderers or editors. Should the column model be in charge of this? But then you have to swap entire column models out when you want to change the look of the table. What if you just want to vary the renderer on a column, or remove one column? Would you build a whole new ColumnModel for this?
    Should the JTable be in charge of setting itself up in these matters? But that seems to impose the view's representation on the model. What if you change views in some way that affects your model's structure?
    Should there be some external controller in charge of this?
    Sometimes you don't plan for these things, and it hurts you when you need to reuse models but modify them in some way. What are your best practices?
    charlie

    The practice you described is what I'm doing right now, and I feel that it is cumbersome for reuse.
    What I'm wondering is whether anyone has come up with a very elegant way to organize their classes' responsibilities around who populates the column model. I know I can subclass and fill it in within the subclass, but it seems that I might not need to subclass at all: I could use the default and have another class (maybe the JTable or a controller) populate the ColumnModel.
    Then I can get better reuse between TableModels and ColumnModels.
    charlie

  • Transfer iPhoto library to external hard drive. Best practice?

    Need to transfer the iPhoto library to an external hard drive due to space issues. Best practice?

    Moving the iPhoto library is safe and simple: quit iPhoto and drag the iPhoto library, intact as a single entity, to the external drive. Then hold down the Option key and launch iPhoto, using the "select library" option to point to the new location on the external drive. Fully test it, and then trash the old library on the internal drive (test one more time prior to emptying the trash).
    And be sure that the external drive is formatted Mac OS Extended (Journaled) (iPhoto does not work with drives in other formats) and that it is always available prior to launching iPhoto.
    And back up soon and often: having your iPhoto library on an external drive is not a backup, and if you are using Time Machine you need to check and be sure that TM is backing up your external drive.
    LN

  • Help with external table authorization

    Hi Every One,
    I am using OBIEE 11.1.1.6.
    I have set up MSAD authentication through the RPD, and every user is able to log in to Analytics.
    There is an external table in the database that holds all the users and their groups (all MSAD users are in this table).
    I have created a session variable called GROUP to hold these user groups for authorization.
    I have created the groups in the front end with exactly the same names as in the external table.
    But I can't set up the required privileges:
    every user is seeing all the reports and subject areas.
    Do I need to create application roles in the RPD with exactly the same names as the group names?
    Do I need to create groups in the WebLogic console?
    Please help me in this regard.

    Hi,
    I have created the groups in the front end with exact names that are in the external table.
    Do I need to create the application roles with exact names as group names in the RPD?
    No, there is no need to create any groups or application roles in the RPD.
    Test the authorization init block properly.
    Create application roles under the console; these are nothing but the groups in your external table. Apply security to the dashboards accordingly.
    Regards,
    Srikanth
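    As a point of reference, a minimal sketch of what the authorization init block behind the GROUP variable might look like is below. The USER_GROUPS table and its columns are assumptions rather than your actual schema, and the block would use row-wise initialization so that a user can belong to several groups:
    -- Row-wise population of the GROUP session variable (names are placeholders):
    SELECT 'GROUP', UG.GROUP_NAME
    FROM USER_GROUPS UG
    WHERE UPPER(UG.USER_ID) = UPPER(':USER')
    The group names returned here should match the application roles created in the console, so that the catalog and dashboard privileges applied to those roles take effect.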

  • Table partitioning best practices

    Just looking for some ideas.
    We have a large information warehouse table that we are looking to partition on 'service_period' id. We have been requested to partition on every month of every year, which right now will create approximately 70 partitions. The other problem is that this is a rolling or dynamic partitioning scheme, meaning we will have a new partition value with each new month. I understand that 11g has rolling partition (interval partitioning) functionality, but we are not there yet.
    So right now we are looking for a best practice for this scenario. We are considering: creating a partition for each year and indexing on the service period within each partition; hash partitioning on the service period id (although that does not seem to keep the service periods distinct within each partition); or creating the partitions dynamically via PL/SQL (creating the table with a basic partition and then running ALTER TABLE statements to create the proper number of partitions within a list partition).
    I am also wondering if there is a point at which a table has too many partitions. I am thinking 70 may be a little extreme, but I am not sure. We are going to do some performance testing, but it would be nice to hear from the community. We have 5,000,000 records over approximately 70 partitions, giving us approximately 70,000 records per partition. The other option would be to partition by year and then apply an index on the service period to reduce the number of partitions.
    Thanks in advance,
    Bruce Carson

    This is not a lot of data, so the effort of partitioning may not be worth the benefit you receive. 70 partitions is not unreasonable. Do you have performance problems? Do the majority of your queries reference service_period? Do you have a lot of full table scans?
    Partitioning strategies depend on the queries you plan to run, your data distributions, and your purge / archival strategy.
    Think about whether you should pre-create partitions for years in advance. Think about whether you should put every partition in a separate tablespace for easy purging and archival. Think about what indexing you will use (can you use local indexes, or do you need global ones?). Think about what data changes are happening: are they all on the newest data? Can you compress older partitions with PCTFREE 0 to improve performance?
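    To make the trade-off concrete, here is a rough sketch of the yearly-range-partition-plus-local-index option discussed above; the table, column and partition names are invented for illustration, and the syntax avoids anything that needs 11g interval partitioning:
    -- One range partition per year instead of one per month:
    CREATE TABLE service_fact (
      service_period_id NUMBER NOT NULL,
      service_date      DATE   NOT NULL,
      measure_value     NUMBER
    )
    PARTITION BY RANGE (service_date) (
      PARTITION p2006 VALUES LESS THAN (DATE '2007-01-01'),
      PARTITION p2007 VALUES LESS THAN (DATE '2008-01-01'),
      PARTITION p2008 VALUES LESS THAN (DATE '2009-01-01')
    );
    -- A local index keeps service-period lookups selective inside each yearly
    -- partition without multiplying the partition count:
    CREATE INDEX service_fact_period_ix ON service_fact (service_period_id) LOCAL;
    Rows with dates beyond the last partition are rejected, so partitions for future years have to be added ahead of time with ALTER TABLE ... ADD PARTITION, which is why pre-creating them well in advance (as suggested above) matters.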

  • Authorization best practices in AS Java

    I have been assigned the responsibility to create an authorization structure on the java stack.
    We would like to create groups with corresponding roles for developers and system administrators.
    Are there any best practices out there regarding this subject?
    I have currently started by looking at the standard actions and roles available in EP and will go from there; any other ideas?

    Dear Colleague,
    SAP NetWeaver Application Server (AS) Java includes the [identity management|http://help.sap.com/saphelp_nw70ehp1/helpdata/en/48/5069e9d6253912e10000000a42189b/frameset.htm] application for administration of users, groups, and roles. This [section|http://help.sap.com/saphelp_nw70ehp1/helpdata/en/48/ad6a169eff35b7e10000000a42189d/frameset.htm] lists administrative tasks, general and specific, for the management of users, groups, and roles.
    Regards
    Alvaro Raminelli

  • Azure table design best practice

    What's the best practice for designing tables in Azure Tables to optimize query performance?

    Hi Raj,
    When we design an Azure table, we need to consider the scalability of the table,
    and selecting the PartitionKey is very important to scalability.
    Basically, we have two options, each with its own advantages and disadvantages:
    First option: have a single partition, by using the same PartitionKey value for all entities.
    Second option: have a unique PartitionKey value for every entity.
    For more information about how to get the most out of Windows Azure tables, please refer to the link below:
    http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
    There is also a detailed article which explains how to design a scalable partitioning strategy for Windows Azure Storage; please refer to the link below:
    http://msdn.microsoft.com/en-us/library/hh508997.aspx
    Best Regards,
    Kevin Shen

  • External table authorization

    I have done external table authentication by creating user-related details in the database, but I'm unable to get user-specific data (row-level data security), i.e. external table authorization, to work. I have not used user groups. It is showing details pertaining to all users.
    Looking forward to your valuable suggestions.

    Hi,
    Please refer to this link; Kumar explained it very clearly:
    http://obieeblog.wordpress.com/category/obiee/obiee-security/
    Please award points if this is helpful.
    Regards,
    Sarat Nallapati
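    For anyone landing here later, the usual row-level pattern combines a session variable populated by an init block with a data filter applied to a group or application role; the sketch below uses an invented USER_REGION table and REGION column purely for illustration:
    -- Initialization block query; ':USER' is replaced with the login name at session start:
    SELECT 'REGION', UR.REGION
    FROM USER_REGION UR
    WHERE UPPER(UR.USER_ID) = UPPER(':USER')
    -- Data filter entered on the group/role for the logical table (not executed as SQL):
    -- "Dim Region"."Region" = VALUEOF(NQ_SESSION."REGION")
    The session variable alone does not restrict any rows; without the filter step every user will keep seeing all the data, which matches the symptom described above.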

  • OBIEE 11g 1.5 External Table Authorization

    Hi,
    I have integrated LDAP for authentication.
    But for roles I have created an external table, and I am populating them dynamically via an initialization block.
    Caching is disabled in the NQSConfig file, and 'Cache' is unchecked in the initialization block as well.
    The roles populate properly under My Account --> Roles and Catalog Groups.
    But my problem is:
    If user1 logs in first and has access to the AP subject areas, he is able to see them.
    But when user2, who has the AR subject areas, logs in afterwards, he sees the AP subject areas rather than AR.
    Could anybody help me?
    Thanks,
    Sunil
    Edited by: 990324 on Mar 5, 2013 11:28 PM

    Hi Sunil,
    Couple of questions here.
    1. If you do not assign the roles through the external table and instead still manage them through EM, do you still see this issue?
    2. If user2 can still see the AP subject area, then there is a high chance of an issue with your roles and their relationships.
    3. How do user2's roles relate to user1's roles? I mean, does user2 by any chance belong to a role which has access to the AP subject area as well? Did you try explicitly setting a 'No Access' restriction for the role in your dashboard security?
    4. What happens if user2 logs in first? Does he see the AR dashboard, and if user1 logs in later, does he too see the AR dashboard?
    Thank you,
    Dhar

  • Authorization best practices

    Hi all,
    I need to set up authorization checks within a BSP application, but I'm not really sure how to do this in detail. Would the best way be to create new authorization objects and user roles? Or would it be sufficient to manage user rights with simple customizing tables?
    Does anybody have some helpful documents about this issue or experience in the management of user rights?
    Thanks in advance for any posts to this topic!
    Best regards
    Dominik

    We felt it was better to stick with the approach of authorization objects and user roles. In our model classes (or any place else you put application logic) we also have calls to authorization checks coded in ABAP. Likewise, as we call BAPIs, many of these already have authorization checks within them.
    To add to this, we have also set up authorization checks on the service node in transaction SICF. This is similar to setting the security on the transaction code within the SAP GUI environment: it is the gateway that determines whether you can even launch an application. This uses authorization object S_ICF. In SICF you can also set the error level on a failure:
    The values have the following meanings:
    1 = an A message is dispatched (program abort)
    2 = an X message is dispatched (program error)
    Any other value in this field causes an E message to be created in the case of an error.

  • Internal vs. external directory services best practices

    Hello everyone,
    We have two distinct directory services here where I work, one that supports 'internal' needs, and one that is used for external clients, the people who use our web-facing applications. We are limited by the separation of the directory services. E.g., our internal users cannot use the external directory service to look up email addresses.
    I have been asked to look into design options and best practices. Is it common to have distinct services like this? Or are those external users usually part of the same service as the internal users? Is my online banking account information in the same directory service (assuming it is in a directory service at all) as the employees at my bank? Does it make sense to run separate services like this? What are some alternatives?
    Part of the integration problem is AD vs. Sun Directory Server. The external service is in Sun Directory Server and predates AD. The AD service is obviously here for the Windows environment. Some organizations I have worked with in the past used Sun LDAP as the authoritative source of data, and synced in one way or another into AD.
    Any feedback is appreciated,
    Mark

    No, what I am looking for is architectural input regarding the use of AD and a separate LDAP server. In my case I am talking about AD and the SJS Directory Server, but this would apply to any environment that has AD plus some other LDAP server.
    I need to be able to reasonably answer the general question: Why should we keep the SJS Directory Server, when we could just put all our LDAP data into AD?
    I also need to answer the more specific question: Given our LDAP data is external users only (customer, partners), does it make sense to keep them there? Again, why not just put these "external" entities into AD?
    I'm not trying to figure out how to get AD and LDAP to work together. I'm trying to figure out why I have two directories, and why I should or should not keep two directories. I've found nothing online dealing with what should be a very common scenario.
    Mark

  • Premiere Pro + External Hard Drive Best Practices

    For best performance when editing in Premiere, is it recommended to keep raw files and project files on separate drives? Does this make workflow and response time quicker? What are the most efficient and safest options? Thanks

    I see this repeated over and over, but I have not found it to be true. I have found over and over again that a single dedicated 7200 rpm internal drive for media will work just fine. This is real-time editing employing mainly DV, HDV, Canon XF and Sony XDCAM up to 50 Mbps, including multicamera projects with up to four cameras.
    Since 2002 I have set up at least 10 separate Premiere-based NLE PCs and have witnessed hundreds of projects across these many systems edited with Premiere 6.0, 6.5, Pro 1.5, Pro 2.0, CS4, CS5.x, and CS6. The only time hard disks became an issue was a misadventure using USB external drives.
    I experimented, early in the DV editing days, with advising editors to save their graphics, music, and project files on a separate internal media drive. However, I could never get consistent compliance with that, so for file management's sake I began having users keep all the scratch disk settings checked to "Same as Project". That way they just needed to be certain to create their project folder on a media drive and save their project into that folder.
    Things have been smooth with that. I think that over the years I've had enough testing to say with confidence that anyone will be fine with it for general use with codecs up to 50Mbps per stream.
    The only place I've found some benefit is in targeting a separate internal drive for exports. This can speed up exports a bit, but I haven't found wild differences.
    So a typical system I might set up or use would map like this:
         C: System (usually a raid0)
         D: internal 7200 rpm Media 1 (Active Premiere Projects)
     G: USB/FireWire external drive storage (inactive Premiere projects and offline file storage)
          Additional video edit space set up as:
         E: internal 7200 rpm Media 2 (Active Premiere Projects)
    Again, lots of experience with this set up and no drive performance issues.

  • Setup internal and external DNS namespaces best practice

    Are an external namespace (e.g. companydomain.com) and an internal namespace (e.g. corp.companydomain.com or companydomain.local) able to run on the same DNS server (using Microsoft Windows DNS servers)?
    MS said it is highly recommended to use a subdomain to handle the internal namespace - say corp.companydomain.com if the external namespace is companydomain.com. How should this be set up? Should I create my AD DS domain as corp.companydomain.com directly, or as companydomain.com and then create a subdomain corp?
    Thanks in advance.
    William Lee
    Hong Kong

    Are an external namespace (e.g. companydomain.com) and an internal namespace (e.g. corp.companydomain.com or companydomain.local) able to run on the same DNS server (using Microsoft Windows DNS servers)?
    Yes, it is technically feasible. You can have both of them running on the same DNS server(s); just make sure that only your public DNS zone is published for external resolution.
    MS said it is highly recommended to use a subdomain to handle the internal namespace - say corp.companydomain.com if the external namespace is companydomain.com. How should this be set up? Should I create my AD DS domain as corp.companydomain.com directly, or as companydomain.com and then create a subdomain corp?
    What is recommended is to avoid a split-DNS setup (where your internal and external DNS names are the same), because it introduces extra complexity and confusion when managing it.
    My own recommendation is to use .local for internal zone and .com for external one.

  • External Portal - Security Best Practice

    We will be initiating an external portal for ESS access. For those using ESS from home, what type of additional security access is anyone using if the person happens to lock themselves out of their ESS account? Do you have a security question built into ESS? Are you using a security grid to reset their password? I'm looking to see what other alternatives people are using.
    Thanks
    Pam Major

    Hi Tim: Here's my basic approach for this. I create either a portal dynamic page or a stored procedure that renders an HTML parameter form. You can connect to the database and render whatever sort of drop-downs, check boxes, etc. you desire. To tie everything together, just make sure that when you create the form, the names of the fields match those of the page parameters created on the page. This way, when the form posts to the same page, it appends the values of the page parameters to the URL.
    By coding the entire form yourself, you avoid the inherent limitations of the simple parameter form. You can also use advanced JavaScript to dynamically update the drop-downs based on the values selected, or have the form submitted and the other drop-downs updated from the database if desired.
    Unfortunately, it is beyond the scope of this forum to give you full technical details, but that is the approach I have used on a number of portal sites. Hope it helps!
    Rgds/Mark M.
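    To illustrate the approach above, here is a bare-bones sketch of a stored procedure that renders an HTML parameter form with the PL/SQL Web Toolkit; the procedure name, the departments table, the p_department field and the target page URL are all placeholders that would need to match your own page and its page parameter:
    CREATE OR REPLACE PROCEDURE render_param_form IS
    BEGIN
      -- Post back to the portal page; the field name must match the page parameter.
      htp.p('<form method="post" action="/portal/page/my_page">');
      htp.p('<select name="p_department">');
      -- Build the drop-down from the database.
      FOR r IN (SELECT department_id, department_name
                FROM departments
                ORDER BY department_name)
      LOOP
        htp.p('<option value="' || r.department_id || '">' ||
              r.department_name || '</option>');
      END LOOP;
      htp.p('</select>');
      htp.p('<input type="submit" value="Apply">');
      htp.p('</form>');
    END render_param_form;
    /
    The same pattern extends to check boxes or cascading drop-downs driven by JavaScript, as described above.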
