Access rights, best approach?

One of our business rules restricts updates to 3 fields in a table for one type of user, and to 5 fields in that same table for another type of user. I could use 2 views to implement these restrictions, but is there another approach? I'd like to create a single Oracle Form for all users and avoid the complexity of conditionally choosing among several views during UPDATE.
With object privileges, is there a way to restrict UPDATE/INSERT/DELETE on particular columns in a table, while not restricting other columns, without the use of views?
-Thanks
cf
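
For what it's worth, Oracle can express this without views: GRANT accepts a column list for the UPDATE and INSERT object privileges (not for DELETE or SELECT). A minimal sketch, with made-up table and role names:

grant update (col1, col2, col3) on mytable to role_three_cols;
grant update (col1, col2, col3, col4, col5) on mytable to role_five_cols;

A user who holds only one of these grants gets ORA-01031 (insufficient privileges) as soon as an UPDATE touches a column outside the list, so a single Form can serve both user types.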

An extension of this question:
We have a user who is allowed to insert a record into a table, but may only supply data for 5 of its 10 fields; we want default values inserted into the 'unavailable' fields.
I tried to test this out by starting with the following, but I couldn't get data to load:
SQL> create table chuck
2 (myname varchar2(10),
3 age number default 26);
Table created.
SQL> insert into chuck values ('Chuck');
insert into chuck values ('Chuck')
ERROR at line 1:
ORA-00947: not enough values
SQL> insert into facet.chuck values ('Chuck',1);
1 row created.
I wouldn't ask if I could figure out how to test the concept myself.
-Chuck
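
The missing piece in the test above is a column list on the INSERT: name only the columns you are supplying and Oracle fills the rest from their DEFAULT clauses. Continuing with the same CHUCK table:

SQL> insert into chuck (myname) values ('Chuck');
1 row created.

SQL> select * from chuck;
MYNAME            AGE
---------- ----------
Chuck              26

Combined with column-level INSERT grants, this gives the restricted user their 5 columns while the remaining columns take their defaults.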

Similar Messages

  • Concurrent-access logging; best approach

    I have a script that runs on multiple machines at once, and I would like to provide some ongoing single-source logging of these activities. As a general rule I think the chance of two machines touching the log at once is slim, but when the script could be running on hundreds of machines at once, it's an issue I think I need to at least be cognizant of.
    Currently I am logging to a separate txt file for each machine using the basic Out-File cmdlet, but for a common log I am thinking an XML file would be better, as I could provide current info keyed by machine name, which could then be reviewed in Excel, for example. My question is, is this a viable option? Perhaps with some code that checks to see if the file is already being written to and waits? I am thinking System.IO.FileStream.Lock() may be an option.
    Or do I really need to look at either a database approach, or perhaps individual files for each machine and some sort of external aggregator utility to update the XML? My test environment is not hundreds of machines, so I want to make sure I get this right from the get-go, rather than going down a road that works for me and then blows up in a larger environment.

    OK, it seems I was right in my initial interpretation, that Collectors must be running a Server OS, and I am certain I am not going to have access to a server for my purposes. I'll be lucky to get folks to actually use a VM running regular Windows. But I like the event log idea. Not least because I recently tried testing file locks and they don't seem to work across a network: one machine can lock a file, but it appears unlocked to another machine.
    So I started looking at remote events, and I would think that this would work:
    write-eventlog -logName:PxTools -source:MachineContext -computerName:Px_Server -message:'Testing' -id:111
    I am currently logged in on one machine as an admin user, and that user has rights on the remote machine. The event log already exists on the remote machine as well. But when I run the script (using Run as Administrator) I get an error that the path can't be found. However, I can ping the remote machine with no problems.
    I looked into Jobs, but it seems you can't easily run remote jobs without enabling remoting. And I found a Scripting Guy article that talks about using Invoke-Command -asJob, but this failed with no meaningful error information.
    So, am I barking up another empty tree? Or am I missing something? I do understand that I could enable remoting on all machines, but it's that level of prep work that I can't depend on. And again, I am looking at logging updates from hundreds of machines, but the vast majority of the time there will likely be minutes between logging activity. I just want to account for the possibility of concurrency.
    FWIW, the kludge I am looking at is to pass each machine the name of the log file, and when ready to log, the script looks to see if there is a file there with that name. If there is, it renames the file with the machine name on the end, writes its log info, and then renames the file back. If it doesn't find the file there, but it does find a file that has been renamed, then it waits a few seconds and tries again. It seems like a kludge, but if remote events can't be made to work within the constraints I have, that seems like a viable solution unless someone sees a problem I don't?

  • What is the best approach to store "dynamic" user accessibility?

    Hi all,
    We are implementing security in our ADF BC + Faces application. There is always a requirement to hide/disable functionality that a user is not allowed / authorized to access.
    Usually we do this at development time, based on what role the user is in. Using this approach, there is no way to change that, or give access to a new role, at runtime (after deployment). This is what I call "static accessibility".
    In our apps, we need to give / revoke access to some functionality at runtime. This is what I call "dynamic accessibility".
    One approach that comes to my mind is:
    We define the accessibility of each function that we want to protect (hide/unhide) in database tables. Then every time a user enters a page, we read these tables through JDBC calls and store the data in a managed bean.
    Has anybody here implemented this "dynamic accessibility"?
    Is there a better approach?
    Thank you very much,
    xtanto

    Saeed,
    SRDemo uses a managed bean that checks whether the user is in a role when called and returns true or false. Another, more elegant, approach is the use of a security property resolver as available at
    http://jsf-security.sourceforge.net
    Regarding dynamic permissions, the use of JAAS seems to be a good solution. ADF Security uses JAAS permissions to assign component access to users.
    E.g. if the user role 'manager' has access to edit the salary column, then the security constraint added to the update button could be
    #{!bindings.<attribute binding>.updateable}
    Note that ADF Security sets the updateable flag on an attribute.
    Or you use
    #{bindings.<iterator binding>.permissionInfo.create}
    #{bindings.<attribute binding>.permissionInfo.update}
    #{bindings.permissionInfo['pageDefName'].view}
    etc. to determine what a user can or can't do.
    Note that I haven't tested whether the permissions are cached for a specific application or whether they are checked each time again. If they are checked each time, then this would be a performance penalty, but it allows permissions to be set dynamically for user groups, as obviously needed in your application.
    No, we don't have a tutorial for this. But an Oracle By Example for an end-to-end security implementation is on my collateral plan for JDeveloper 11 (just need to find a doc writer ;-) )
    Frank

  • What is the best approach to setup intranet and internet sites in SharePoint 2013?

    I am planning to set up internet and intranet websites for one of our clients. What is the best approach to set up this kind of environment?
    Some of the users (registered users) from the internet should be able to access information in the intranet site. I have created two web applications, one for intranet and one for internet. Is this the right way to go forward?
    Thanks in advance! :)
    LM

    Hi Laemon,
    Creating two separate web applications, one for the Internet site and the other for the Intranet, is the right thing to do.
    1. Properly planning the creation of your web application, site collections and websites is of utmost importance to ensure you build your site in a professional and recommended way. Go through this article from TechNet that will help you plan your site in SharePoint 2013:
    https://technet.microsoft.com/en-us/library/cc263267.aspx
    2. Planning and choosing the right authentication type is also a very important decision. I recommend you go through the article below if you have not already done so.
    Plan for user authentication methods in SharePoint 2013
    3. Plan licensing for your SharePoint 2013 Internet-facing website.
    Licensing Internet Sites Built on SharePoint 2013
    SharePoint 2013 licensing for Internet facing sites
    4. To grant registered users access to the Intranet site (as you mentioned in the question): if you created both web applications in the same farm (same domain), then granting access is easy using Site Permissions with Windows Authentication enabled for both web applications. If the web applications are created on different domains and there is a two-way trust in place, and the SharePoint servers have the necessary port access to the remote domain's Domain Controller, then it is automatic. If it is a one-way trust, then you need to follow these directions:
    http://technet.microsoft.com/en-us/library/cc263460(v=office.12).aspx
    If there is no domain trust in place, then you either need to create one or look at alternative technologies, such as ADFS.

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but then we also have to maintain all the data in another schema. We would need to update two schemas in a given session, because we maintain one schema per user and another schema for all data, so there may be updating problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be many more records than in the previous case.
    Which is the best case?
    Please give your valuable ideas.
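    One common pattern for option 2, sketched here with made-up names (a single data schema owns the tables, per-user accounts own nothing, and access flows through a role and synonyms):
    create role app_user_role;
    grant select, insert, update on app_owner.orders to app_user_role;
    create user alice identified by "ChangeMe1";
    grant create session to alice;
    grant app_user_role to alice;
    create public synonym orders for app_owner.orders;
    With ~1300 users, a connection pool on the Java side (so the number of real database sessions stays far below the user count) matters more than the schema layout.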

    That is true, but I want a solution from you all: I want you to tell me how to fix my friend's car.

  • Best approach for IDOC - JDBC scenario

    Hi,
    In my scenario I am creating a sales order (ORDERS04) in the R/3 system, which needs to be replicated to a SQL Server system. I am sending the order to XI as an IDoc and want to use JDBC to send the data to SQL Server. I need to insert data into two tables (header & details). Is it possible without BPM? Or what is the best approach for this?
    Thanks,
    Sri.

    Yes, this is possible without BPM.
    Just create the corresponding data type for the insertion.
    If the records to be inserted are different, then there will be 2 different data types (one for header and one for detail).
    Do a multimapping, where your source is mapped into the header and detail data types, and then send using the JDBC receiver adapter.
    For the structure of your data type for insertion, just check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    To access any database from XI, you will have to install the corresponding driver on your XI server:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3867a582-0401-0010-6cbf-9644e49f1a10
    Regards,
    Bhavesh

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite. I'll paste a snippet from the error log in at the bottom that shows the error; I've highlighted the description of the problem in red.
    I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like what we're running. Here's the link to that thread: Re: Calendar crashes on open. For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff). Any thoughts on the "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
    2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
    2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
    2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • Best approach - Tab-based ADF Tree left-side navigation with Dynamic Regions without UI Shell

    Hi,
    Can somebody help with the best approach to implement the following requirement?
    Req: When the user selects a node in the ADF Tree left-side navigation menu, each menu item should open as one of multiple tabs (dynamic tabs) in the right-side content area, without the UI Shell template.
    I completed the
    Step-1: From the Model project, I am able to render the ADF Tree using views and view links. I get an ADF tree with 3 menu items, each menu item having 2 sub-menus.
    I took each menu item as one (1) taskflow; each taskflow has two (2) fragments.
    In total I have 3 task flows for the menu items and 6 fragments for the sub-menus.
    Step-2: My question is, how do I implement tab-based ADF tree navigation (from the left-side area to dynamic regions in the content area) through dynamic regions? Please provide the steps for the view layer.

    Thanks for your response.
    This works fine for ADF Tree navigation with dynamic regions if the taskflow has only one fragment; if the taskflow has more than one fragment, it does not work. The conditions below always resolve to one page fragment of either the "employees" or the "departments" task flow. If the "employees" task flow has 2 page fragments, it doesn't work even if you pass parameters through routers.
    public TaskFlowId getDynamicTaskFlowId() {
        if (currentTaskFlowID == null ||
            currentTaskFlowID.equalsIgnoreCase("employees")) {
            return TaskFlowId.parse(employeetaskFlowId);
        }
        if (currentTaskFlowID != null &&
            currentTaskFlowID.equalsIgnoreCase("departments")) {
            return TaskFlowId.parse(departmetaskFlowId);
        }
        return TaskFlowId.parse(employeetaskFlowId);
    }
    My question is about the same use case with dynamic tabs, when the user clicks on any ADF tree node.

  • Best approach... TiledView or regular ViewBean

    I'm wondering what approach would be best with JATO and the display I'm trying to build on a JSP page. My page layout will look something like this...
    Checked Out Items: (none)
    ToDo Items: 3 InfoDocs
    2 Sun Alerts
    Draft Items: (none)
    My Docs: 345 InfoDocs
    28 Sun Alerts
    18 SRDB's
    Now, this might get confusing, so please ask questions if anything I state is not clear. The values after the colons (:) (i.e. 3 InfoDocs, 2 Sun Alerts, etc.) are dynamic. These values will be generated from queries on a database. There could be more listings, or there could be the value (none). As far as setting this up through the JATO framework, I'm trying to determine the best way to do this. Right now I have a VoyagerHome.jsp that will represent the layout displayed above. I also have a VoyagerHomeViewBean.java. At first I was thinking of registering a TiledView in this ViewBean, but then I felt it didn't make sense and wouldn't work out the way I wanted it to. So now I'm thinking of registering the labels (Checked Out Items:, ToDo Items:, Draft Items:, and My Docs:) as StaticTextFields within the VoyagerHomeViewBean.
    Then I think I would also need to make the values (3 InfoDocs, 2 Sun Alerts, etc.) children and make them HREF jato types. But I'm not sure how I can do this when the numbers are going to be dynamic. Does this make sense? One question I have is, should I (or do I have to) register each document type (InfoDoc, Sun Alert, SRDB, etc.) for each label grouping (Checked Out Items:, ToDo Items:, etc.)? So for example, in the end, I will need placeholders for the following:
    Checked Out Items: xxx InfoDocs
    Checked Out Items: xxx Sun Alerts
    Checked Out Items: xxx SRDBs
    Checked Out Items: xxx Cobalt Assets
    Checked Out Items: xxx iPlanet Assets
    ToDo Items: xxx InfoDocs
    ToDo Items: xxx Sun Alerts
    ToDo Items: xxx SRDBs
    ToDo Items: xxx Cobalt Assets
    ToDo Items: xxx iPlanet Assets
    same for Draft Items and My Docs....
    This may seem confusing, so please ask questions. In other words, is there a way to re-use the HREF tags and run them through a tiled view or something, even though the values for the HREF tags are going to be run off of different database tables?
    What I'm trying to do is kinda difficult to explain, but I hope you get some idea of what I'm trying to accomplish.
    Thanks
    - Billy -

    The best approach I can think of is a TiledView with HREFs for InfoDocs, Sun Alerts, etc. You can use a StaticTextField for the numbers in the TiledView. ToDo Items, Draft Items, etc. you can just label in your jsp itself; no need to register them as children.
    Senthil

  • Best approach - using materialized views

    Hi
    We are using materialized views for structuring complex business data.
    These views get refreshed every night. However, the refresh job runs slowly in some cases. In such scenarios, the job would still be running while client applications (JDBC) try to access data from the views. This results in client calls waiting for a long time or timing out. I would like to know the best approach in such scenarios to ensure data availability and performance.
    Thanks
    RC

    See
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14226/repmview.htm#i31171
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14226/repmview.htm#sthref491
    and
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/basicmv.htm#sthref521
    (which also lists the Restrictions)
    You can use a Fast Refresh if it meets the restrictions. You need to have a MATERIALIZED VIEW LOG created on the source table(s).
    Hemant K Chitale
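    To illustrate the fast-refresh suggestion, a minimal sketch with made-up table and column names:
    create materialized view log on orders
      with sequence, rowid (customer_id, amount)
      including new values;

    create materialized view orders_by_customer_mv
      build immediate
      refresh fast on demand
    as
    select customer_id,
           count(*)      cnt,
           count(amount) cnt_amount,   -- required for fast refresh of sum(amount)
           sum(amount)   total_amount
    from   orders
    group  by customer_id;

    exec dbms_mview.refresh('ORDERS_BY_CUSTOMER_MV', 'F');  -- 'F' = fast (incremental)
    Because a fast refresh applies only the changes recorded in the MV log, the refresh window shrinks and clients are much less likely to catch the view mid-refresh.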

  • Best approach to create a security environment in Java

    I need to create a desktop application that will run third-party code, and I need to prevent the third-party code from exporting, by any means (web, clipboard, file I/O), information from the application.
    Something like:
    public class MyClass {
        private String protectedData;

        public void doThirdPartyTask() {
            String unprotectedData = unprotect(protectedData);
            ThirdPartyClass.doTask(unprotectedData);
        }

        private String unprotect(String data) {
            // decode/decrypt the data before handing it to the third-party code
            return data;
        }
    }

    class ThirdPartyClass {
        public static void doTask(String unprotectedData) {
            // Do task using unprotected data.
            // Malicious code may try to externalize the data.
        }
    }
    I'm reading about SecurityManager and AccessController, but I'm still not sure what the best approach is to handle this.
    What should I read about to do this implementation?

    Whilst code without any permissions (as supplied through the ProtectionDomain by the class's ClassLoader) cannot access the network, the file system or the system clipboard, this does not mean it is entirely isolated.
    Even modern cryptographic systems are surprisingly vulnerable to side-channel attacks.
    Where an untrusted agent has access to sensitive data, it isn't very feasible to stop every escape of that data. Sure, you can block off overt posting of the data, but you cannot reasonably block off all covert channels.
    Steganographic techniques are a particularly obvious way to covertly send sensitive data out amongst intended publications.

  • Best approach for performing DMLs using stored procedures

    Hi,
    I have a really general question and would like to hear your thoughts on this.
    I want my application to manipulate and read data using stored procedures (or packages, for that matter) and not directly using queries against the DB.
    Let's say I have a table with many columns:
    create table test (pkid number(10), col1 varchar2(30), col2 number(10), col3 date, ...);
    For such a DML procedure, is it best to do something like
    procedure do_update(i_pkid IN number, i_col1 IN varchar2, i_col2 IN number, i_col3 IN date, ...) as
    begin
       update test
       set col1 = i_col1,
           col2 = i_col2,
           col3 = i_col3, ...
       where pkid = i_pkid;
       commit;
    end;
    Or do a selective update, meaning update only a certain column each time, given that only 1 column actually changes? (and how to do that - separate procedures for each column? [columns can be null])
    Also, is it better to work with test.col1%type instead of specifying the data type in the procedure?
    And one last question - if I have a table with 100 columns and I don't want to create a procedure with 100 parameters - would the best approach be to use a record?
    I just need to be set on the way I start implementing things in order to do it well from the start.
    Many thanks.
    Edited by: Pyrocks on Nov 17, 2010 1:58 PM
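    For the record question, a minimal sketch using %ROWTYPE (assuming TEST has just the four columns shown above):
    create or replace procedure do_update(p_row in test%rowtype) as
    begin
       update test
       set    col1 = p_row.col1,
              col2 = p_row.col2,
              col3 = p_row.col3
       where  pkid = p_row.pkid;
    end;
    /
    declare
       l_row test%rowtype;
    begin
       select * into l_row from test where pkid = 1;  -- fetch the current row
       l_row.col1 := 'new value';                     -- change only what changed
       do_update(l_row);
       commit;
    end;
    /
    One %ROWTYPE parameter replaces a hundred scalar parameters, and fetching the row first sidesteps the "update only the changed column" problem.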

    Pyrocks wrote:
    One last clarification (although it may be more related to C/C++ developers - maybe one of you will know):
    We are working with C++ and VB against a SQL Server, and my part is to translate all the existing procedures to Oracle in order to migrate the application to work with an Oracle DB.
    The existing procedures use an IN parameter for each column in the table, and I would really like to use rowtype like you mentioned.
    Since I'm not a C/C++/VB developer - is there an easy way of working with such types, or even User-Defined Types, from C/C++/VB (as in passing a rowtype record from C++ to a SP)?
    I'm not looking for the actual way - I just want to know how hard it would be and how much code needs to be changed in order to be able to convince the developers that this is the RIGHT way to work.
    PS. None of our developers have experience with Oracle, so they wouldn't know the answer...
    Not actually an Oracle server-side (SQL language or PL/SQL language) question - but a client one. And it has been a long time since I wrote a fat client using C/C++ or Delphi.
    The OCI (Oracle Call Interface) supports advanced (user-defined) SQL data types, and has since Oracle 8i. So in that respect, yes, the client can support custom SQL data types.
    How well it does depends entirely on that client language's features wrt OCI integration. For example, Delphi 4 was released around Oracle 8i and supported custom SQL types. I would expect that most languages today (like Java and C#) provide support for it.
    As for using %ROWTYPE - this is a PL/SQL clause as far as I know. Unsure whether it is supported by the OCI. What could support it is a pre-compiler like Pro*C. These enable you to mix pseudo-SQL source code with client language source code. The pre-compilation step then replaces the pseudo-SQL code with native client language calls to the OCI. The code is then compiled by that client language's compiler. Pre-compilers can pull all kinds of interesting "tricks" with their pseudo-SQL code support.
    The best would be to consult the applicable client language's manuals that describe the interface it supports (via OCI) to Oracle.

  • What is the best approach to converting LV7.1 tags to LV2012 shared variables in multiple VIs?

    What is the best approach to upgrading from LV7.1/DSC tags to LV2012/DSC shared variables, in multiple VIs running on multiple platforms? Our system is composed of  about 5 PCs running Windows 2000/LV7.1 Runtime, plus a PLC, and a main controller running XP/SP3/LV2012. About 3 of the PCs publish sensor information via tags across the LAN to the main controller. Only the main controller is currently being upgraded. Rudimentary questions:
    1. Will the other PCs running the 7.1 RTE (with tags) be able to communicate with the main controller running 2012 (shared variables)?
    2. Is it necessary to convert from tags to shared variables, or will the deprecated legacy tag VIs from LV7.1 work in LV2012?
    3. Will all the main controller VIs need to be incorporated into a project in order to use shared variables?
    4. Is the only way to do this is to find all tag items and replace them with shared variable items?
    Thanks in advance with any information and advice!
    lb

    Hi lb,
    We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite.
    The RTE needs to match the version of DSC you're using. Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012. Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables. The change from tags to shared variables took place in the transition to LabVIEW 8. The KnowledgeBase article Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables.
    Hope this gets you headed in the right direction. Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • What's the best approach to work with Excel, csv files

    Hi gurus, I have a question for you. In your experience, what's the best approach to work with Excel or csv files that have to be uploaded through Data Services to your data warehouse?
    Let's say your end user, who is not a programmer, creates a group of 4 Excel files with different calculations on a monthly basis, so that a set of reports can be generated from the data warehouse once the files have been uploaded to tables in your DWH. The calculations vary from month to month. The user doesn't have a front-end to upload the Excel files directly to Data Services. The end user needs to keep track of which person uploaded the files for a given month.
    1. The end user places their 4 Excel files in a shared directory that is visible to Data Services.
    2. Data Services executes a scheduled job that reads the four files and uploads them to the data warehouse at a set time, let's say at 9:00pm.
    It makes me wonder... what happens if the user needs to present their reports immediately and can't wait until 9:00pm? Is it possible for the end user to execute some kind of action (outside of the Data Services environment) so Data Services "could know" that it has to process those files right now, instead of waiting for the night schedule?
    Is there a way that DS will track who was the person who uploaded those files?
    Would it be better to build a front-end for the end user so they can upload their four files directly to the data warehouse?
    Waiting for your comments to resolve this dilemma.
    Best Regards
    Erika

    Hi,
    There are functions in DS that capture the input files automatically. You could use the file_exists() or wait_for_file() option to do that. Schedule the job to run every few minutes and run it if the file exists. This could be done by using a certain file name with a date and timestamp etc., or by moving the old files to an archive after each run so that DS waits for new files to show up.
    Check this - Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
    Hope this helps.
    Arun

  • Best Approach to create a Security / Authorization Scheme for an APEX App

    Hi,
    I am planning to create a security / authorization scheme for an APEX application.
    I just want to know the best approach to create the security feature in APEX, so that it can be re-used in other APEX applications too.
    I am looking for the following features...
    1. Users LOGIN and then the user's name is stored in APEX_USER...
    2. Based on the user, I want to restrict the application at the following levels:
    - TABS
    - TABS - Page1 (Report)
    - Page2 (Form)
    - Page2 (Region1)
    - Page2 (Region1, Button1)
    - Page2 (Region1, Items, ...)
    And so on... basically, depending on the user, he will have access to certain TABS, Pages, Regions, Buttons, Items...
    I know we have to create Authorization Schemes for this and then attach these Authorization Schemes at the different levels we want.
    My question is, what should the TABLE structure be to capture this info for each user... where we say "this USER has the following access"... and then we create the Authorization Schemes from this table...
    Also, what should the FRONT end be for entering these details?
    So I'm wondering - a lot of people may already have implemented this feature... so if you guys can provide the BEST approach (re-usable for other APEX applications)... that would be really nice.
    Thanks,
    Deepak

    Hi Raghu,
    thanks for the detailed info.
    So that means I should have 2 tables...
    master table (2 columns - username, password):
    username    password
    --------    --------
    user1       xxxx
    user2       xxxx
    2nd table (2 columns - username, chq_disp_option):
    - In this table, we don't have the Y/N flag you mentioned.
    - Do we have to enter all the regions/tabs/pages in the application here, or just those regions/tabs/pages that are conditionally displayed?
    - So that means for all the pages/regions/tabs/items in the entire application, we have to call the conditional display...
    - Suppose we have 3 tabs, 5 pages, 6 regions, 15 items... that means in this table we have to enter (3+5+6+15) = 29 records for each individual user:
    username    chq_disp_option
    --------    ---------------
    user1       re_region1
    user1       re_region2
    user1       tb_main
    user1       Page1
    user1       Page5
    ----        ----
    - How are you defining a unique name for regions - I mean in the static ID or the title?
    - Is the unique name for a tab & item the same as the tab name (T_HOME) & item name (P1_ITEM1), or are you defining it somewhere else?
    Thanks,
    Deepak
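    A minimal sketch of the two tables being discussed (all names are assumptions), plus the kind of query an APEX authorization scheme of type "Exists SQL Query" would run:
    create table app_users (
       username  varchar2(30) primary key,
       password  varchar2(60)    -- store a hash, never the plain password
    );

    create table user_access (
       username         varchar2(30) references app_users (username),
       chq_disp_option  varchar2(60),  -- unique name of the tab/page/region/item
       primary key (username, chq_disp_option)
    );

    -- authorization scheme, type "Exists SQL Query":
    select 1
    from   user_access
    where  username        = :APP_USER
    and    chq_disp_option = 're_region1';
    Each protected tab/page/region/item then references its scheme, so the table data drives what each user sees.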
