Trigger validation for update event at row level

Hi,
I have an emp table with 100 columns. I wrote a trigger for insert and update like below:
create or replace trigger t_emp
AFTER INSERT OR UPDATE ON emp
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
  if inserting then
    insert into emp_temp1 values (:new.col1, :new.col2, ...);
  elsif updating then
    insert into emp_temp2 values (:new.col1, :new.col2, ...);
  end if;
end;
For insert and update I am able to capture the records correctly. My problem is that if I open a record in emp and re-save a column with the same value, I still get a row in emp_temp2. I don't want these duplicates. If I write something like
IF :OLD.col1 <> :NEW.col1 OR :OLD.col2 <> :NEW.col2 ... then I don't get the duplicate, but there are hundreds of columns. Is there any alternative for record-level validation?
Note: there is no primary key on this table.

Either compare all of your columns, or create a primary key, a unique constraint, or a unique index.
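If you do end up comparing every column, you don't have to type the predicate by hand. Here is a minimal sketch (assuming the table is called EMP and is owned by the current schema) that generates a NULL-safe "column changed" test per column from the data dictionary; paste the generated lines into the trigger's IF condition, dropping the trailing OR from the last line:

-- Generates one NULL-safe comparison per column of EMP for use in the
-- row-level trigger's IF condition. Run it once and paste the output.
select '(:old.' || lower(column_name) || ' <> :new.' || lower(column_name)
    || ' or (:old.' || lower(column_name) || ' is null and :new.' || lower(column_name) || ' is not null)'
    || ' or (:old.' || lower(column_name) || ' is not null and :new.' || lower(column_name) || ' is null)) or'
  from user_tab_columns
 where table_name = 'EMP'
 order by column_id;

With that condition wrapped around the insert into emp_temp2, an update that re-saves every column with the same values no longer produces an audit row.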

Similar Messages

  • Validation for transient VO/table rows

    Hi,
    I have 2 pages:
    1. First page is an entry page with an editable table like this (It's backed by a transient VO which I initialize with some blank lines.) and a submit button:
    Part Type (dropdown)          Part num (text box)          Quantity (text box)     
    2. Second page is a read only table that shows results from the same transient VO with additional info.
    Part Type          Part num          quantity     price
    When hitting the submit button on the first page, I'd like to run a custom validator that validates each row in the entry page table.
    Since it's a transient VO and has no entity object, I didn't know how to do it in the model. The validation should use the part type and part code values and call a database API to do the validation, and then display the message from the API in the part code text box.
    I got it to (sort of) work using a validator method in a backing bean.
    This is how I implemented the part validation (on first page).
         - Make part type autosubmit = true.
         - Make the editable table bind to the backing bean - it basically has a getTable1() and setTable1 method.
         - Add validator to Part num column.
         - In the validator code in the backing bean, use
          public void it3_validator(FacesContext facesContext,
                                    UIComponent uIComponent, Object object) {
              FacesCtrlHierNodeBinding q = (FacesCtrlHierNodeBinding) this.getTable1().getRowData();
              Row currentRow = q.getRow();
              // call database API exposed on the AM
              OperationBinding method = getBindings().getOperationBinding("validateItem");
              method.getParamsMap().put("item_number_in", (String) object);
              method.getParamsMap().put("item_type_in", (String) currentRow.getAttribute("PartType"));
              HashMap map = (HashMap) method.execute();
              if (map.get("valid_item") != null &&
                  map.get("valid_item").toString().compareToIgnoreCase("Y") != 0) {
                  String msgS = (String) map.get("error_msg");
                  FacesMessage message = new FacesMessage(FacesMessage.SEVERITY_ERROR, msgS, msgS);
                  facesContext.addMessage(uIComponent.getClientId(facesContext), message);
                  ((RichInputText) uIComponent).setValid(false);
              }
          }
         It works, but once a part num in a row becomes invalid it doesn't recognize changes in the part type in that row. How can I set it up so that when I change the part type, the validation recognizes that I've changed it? So if I enter a correct part num but the wrong part type, and then change the part type to the correct value, it should not throw the error message again.
    Should I even be doing this in the backing bean? Is there some way I can just implement it in the model so I don't have to worry about getting row level info from getTable1?
    I'd posted on this before, but the solution I tried didn't really work at all (the implementing-custom-generic-plsql uses entity object to execute the db api)
    Re: Validation using a DB function
    I'm using JDev 11.1.1.2.0
    thanks,
    Kalp

    Thank you Timo, that helps me understand the validation behavior.
    I added this in the AM and it's calling the validation:
            Row[] rows = vo.getAllRowsInRange();
            for (Row row : rows) {
                row.validate();
            }
    and I added more code to the validation:
        @Override
        public void validate(){
            System.out.println("Validate in transient vo");
            System.out.println("getPartType from rowimpl:"+getPartType());
            System.out.println("getPartCode from rowimpl:"+getPartCode());       
            HashMap map = new HashMap();
            map = validateItem(getPartCode().toString(),getPartType().toString());
            if (map.get("valid_item")!= null && map.get("valid_item").toString().compareToIgnoreCase("Y") != 0)
                throw new JboException(map.get("error_msg").toString());
            super.validate();
        }
    The message then appears as a popup which shows the error message for the first invalid row, and it also doesn't stop the navigation to the next page.
    So I changed to:
       @Override
        public void validate(){
            System.out.println("Validate in transient vo");
            System.out.println("getPartType from rowimpl:"+getPartType());
            System.out.println("getPartCode from rowimpl:"+getPartCode());       
            HashMap map = new HashMap();
            map = validateItem(getPartCode().toString(),getPartType().toString());
            if (map.get("valid_item")!= null && map.get("valid_item").toString().compareToIgnoreCase("Y") != 0)
                FacesContext fctx = FacesContext.getCurrentInstance();
                String msgS = (String)map.get("error_msg");
                FacesMessage message = new FacesMessage(FacesMessage.SEVERITY_ERROR, msgS, msgS);
                fctx.addMessage(null, message);
            super.validate();
        }
    This is better, but I still don't know how to tie the error message to a UI component. This shows the list of validation messages in the popup, but I want to tie the corresponding error message to the part code cell of the invalid row in the editable table UI.
    I want it to have the same look/feel/behavior as when I add something like a Compare validator on an attribute in the VO. If I add a validator like Compare to 10, then the message is tied to the specific cell
    in the UI table and it stops navigation.
    How can I mimic this "seeded" type of behavior?
    Thanks for your help on this!
    thanks,
    Kalp

  • Getting iteration number in row level trigger

    Hi. Is there a way to get the iteration number in a row level trigger? Or to access data inserted by a statement level trigger from a row level trigger? (Statement level triggers are supposed to execute before row level triggers, but I cannot access their data.)
    I'm using Oracle 10g.

    My oracle version:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    And business problem is like this:
    I need two log tables for some tables in my database:
    The first log table is a statement level log. After an insert, update or delete it should get one new row recording the date, time, SID, query type and some additional information.
    The second table should include all columns from the logged table, plus the date, time, SID and operation type.
    The problem is that I need exactly the same date and time for each row in both log tables.
    Someone said that SYSDATE should return the same value for the duration of the query execution, but that has nothing to do with the triggers fired by that query.
    So you could say that I'm after getting exactly the same date and time in the one statement level trigger execution and in each execution of the row level trigger.
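    One way to get this (a rough sketch only; trg_state, some_table, stmt_log and row_log are made-up names, not objects from your schema) is to fix the timestamp and a statement counter in package variables from the statement level trigger and reuse them in the row level trigger, since the BEFORE statement trigger always fires before the row level triggers of the same statement:
        create or replace package trg_state as
          g_stamp date;          -- one value per triggering statement
          g_seq   number := 0;   -- statement counter ("iteration number")
        end trg_state;
        /
        -- statement level trigger: runs once, fixes timestamp and counter,
        -- and writes the statement level log row
        create or replace trigger t_log_stmt
        before insert or update or delete on some_table
        begin
          trg_state.g_stamp := sysdate;
          trg_state.g_seq   := trg_state.g_seq + 1;
          insert into stmt_log (log_seq, log_date, sid)
          values (trg_state.g_seq, trg_state.g_stamp, sys_context('userenv', 'sid'));
        end;
        /
        -- row level trigger: reuses exactly the same values for every affected row
        create or replace trigger t_log_row
        after insert or update or delete on some_table
        for each row
        begin
          insert into row_log (log_seq, log_date, sid)
          values (trg_state.g_seq, trg_state.g_stamp, sys_context('userenv', 'sid'));
        end;
        /
    Both log tables then see the same date, time and statement number for a single SQL statement. On 11g and later a compound trigger gives the same result with less plumbing, but on 10.2 the package approach is the usual workaround.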

  • Validation for issue for production not working

    Hi all,
    I have designed a database validation for Issue for Production.
    By default SAP gives an error if the in-stock quantity is less than the quantity to be issued.
    My client's requirement is that while issuing an item for production, the row level item quantity should not be greater than the in-stock quantity of the warehouse selected for that item at row level.
    The SAP default validation considers the in-stock quantity of the item across all warehouses, whereas I want to consider only the stock of the warehouse selected for the item at row level.
    My validation is as follows, but it gives the error in all cases.
    IF @transaction_type in ('A','U') AND @object_type = '60'
    BEGIN
    declare @line as numeric(4)
    declare @dline as numeric(4)
    declare @ritem as nvarchar(50)
    declare @bref as nvarchar(50)
    select @bref= T1.[BaseRef] from OIge T0 left outer JOIN
    Ige1 T1 ON T0.DocEntry = T1.DocEntry
    left outer join oitm t2 on t2.itemcode=t1.itemcode
    left outer join  OITW T3 on t3.whscode=t1.whscode and t3.itemcode = t1.itemcode
    where T1.docentry=@list_of_cols_val_tab_del
    set @dline=0
    select @line= isnull(max(t1.linenum),0)
    from OIge T0 left outer JOIN
    Ige1 T1 ON T0.DocEntry = T1.DocEntry
    left outer join oitm t2 on t2.itemcode=t1.itemcode
    left outer join  OITW T3 on t3.whscode=t1.whscode and t3.itemcode = t1.itemcode
    where t1.docentry= @list_of_cols_val_tab_del
    while @dline<=@line
    begin
    select @ritem= t1.itemcode from OIge T0 left outer JOIN Ige1 T1 ON
    T0.DocEntry = T1.DocEntry
    left outer join oitm t2 on t2.itemcode=t1.itemcode
    left outer join OITW T3 on t3.whscode=t1.whscode and t3.itemcode = t1.itemcode
    where T0.DocEntry = @list_of_cols_val_tab_del and
    t1.linenum=@dline
    if exists (SELECT T0.DocEntry
    from OIge T0 left outer JOIN Ige1 T1 ON T0.DocEntry = T1.DocEntry
    left outer join oitm t2 on t2.itemcode=t1.itemcode
    left outer join OITW T3 on t3.whscode=t1.whscode and t3.itemcode = t1.itemcode
    where T0.DocEntry = @list_of_cols_val_tab_del and
    t1.linenum=@dline and t1.itemcode in (@ritem) and t3.onhand < t1.quantity )
    BEGIN
    SELECT @Error = 1, @error_message = 'less quantity'
    END
    set @dline=isnull(@dline,0)+1
    set @ritem=' '
    end
    END
    Please suggest a solution...

    I'm sorry. The error was that the tables contain the data after the transaction. So try this:
    IF @object_type = '60' and @transaction_type= 'A'
    and (Select top 1 j.BaseType from IGE1 j
          Where j.DocEntry = @list_of_cols_val_tab_del)=202
    BEGIN
    if exists
    (Select i.DocEntry from IGE1 i
       Where i.DocEntry = @list_of_cols_val_tab_del
        and (select w.OnHand from OITW w
            where w.ItemCode=i.ItemCode and w.WhsCode=i.WhsCode)<0 )
    Begin
    set @error =1
    set @error_message = 'Less quantity in the warehouse !'
    End
    END

  • Setting up Row Level Security in EPM 11.1.1.3

    I have been following the Administration guide but failed to set up row level security in EPM 11.1. Please advise which of my steps is wrong. (Note: I am using MS SQL Server for the EPM Shared Services and Workspace database, everything under a Windows environment.)
    i) Enable row level security in Workspace.
    Step 1) Define an ODBC Data Source named "EPM_WS" in Windows. The ODBC Data Source points to the MS SQL Server database of EPM Workspace, since it contains the 3 tables (BRIOSECG, BRIOSECP, BRIOSECR) related to row-level security.
    Step 2) Login to workspace, select "Administer"->"Configuration Console". Edit "Interactive Reporting Data Access Services" and add a data source with ODBC->MS SQL Server -> "EPM_WS" as the name of datasource. Restart "Interactive Reporting Data Access Services".
    Step 3) Login to workspace, select "Administer"->"Row Level Security". Check "Enable Row Level Security", Choose ODBC->MS SQL Server-> fill in "EPM_WS" as Data Source Name"-> Provide correct user name and password. Click "Save Properties"
    Step 4) It always prompts "Server error setting the Connectivity. Recommended Action: Logoff and logon again. If problem persists contact your local security administrator."
    Any log I can inspect for the connectivity error?
    ii) Configure Row Level Security setting
    I know that for Hyperion IR there is a file, row_level_security.bqy, that comes with the installation. Users can use this bqy file to configure the actual row level security settings. However, I cannot locate this bqy file in the EPM 11.1 installation. What is the proper procedure for setting up the row level security configuration?
    thank you very much.


  • Row level validations for JTable

    Is there a way to validate the row data of an editable JTable before moving out of that row? If invalid data exists in that row, we need to display an error message and set the focus back to that row.
    Basically the user should not be able to move out of that row until he enters valid data for it.
    I was able to do the validations at the cell level, by validating in the editor of that particular cell or in the setValueAt() method. The reason I need row level validation is, for example, when a required cell is left blank (if we don't tab into the cell then neither the editor is started nor setValueAt() is called to do the validation), or when a cell's validation depends on another cell's value in the same row.
    I can override the valueChanged() method of the JTable itself to trap the row change event. But the valueChanged() method only does the repainting for the new selection; the selectedRow index is changed even before valueChanged() is called. So in the valueChanged() method the selectedRow index is the row you are moving into rather than the row you are moving out of, and the latter is the row that needs validation. ListSelectionEvent has two methods, getFirstIndex() and getLastIndex(); one of these returns the row number you are moving out of, so based on this I can validate the row I am leaving and set the focus back to it if invalid data exists. (The valueChanged event is fired again when you reset the focus.)
    But the problem is when the row you are moving into and the row you are moving out of both have invalid data. Then it becomes an infinite loop, as the valueChanged event is fired when you set the focus back.
    Any Suggestions ?
    Thanks

    DrClap:
    I need an editable JTable, so your suggestion doesn't work.
    Jaredth:
    Here is the code. I have overridden the valueChanged method of JTable and called my validateRow from it. I am just displaying an error message for now; I am not setting the focus back to the row that has invalid data. This code doesn't work for cases where the row you are moving into and the row you are moving out of both have invalid data.
    public void valueChanged(ListSelectionEvent e) {
        super.valueChanged(e);  // keep JTable's default selection handling/repaint
        if (this.getSelectedRow() == -1) return;
        if (!e.getValueIsAdjusting()) {
            int selRow = this.getSelectedRow();
            int firstIndex = e.getFirstIndex();
            int lastIndex = e.getLastIndex();
            // the index that is NOT the new selection is the row being left
            int rowToValidate = selRow == lastIndex ? firstIndex : lastIndex;
            if (!isValidateRow(rowToValidate)) {
                JOptionPane.showMessageDialog(null, "Invalid Data at row: " + rowToValidate,
                    "Row validation failed", JOptionPane.ERROR_MESSAGE);
                return;
            }
        }
    }
    kenny_lee:
    I have already done what you explained above. That works fine for cases where you need to validate individual cells in a row. But what happens when the validation of a cell depends on the value in a different cell in the same row, for which you haven't entered any value yet? In these cases you need to do the validation when you are moving out of the row. So I need to know: is there a way to handle this efficiently? There is a way by overriding the valueChanged method of the JTable as mentioned above, but it will not work for cases where the row you are moving into and the row you are moving out of both have invalid data.
    Thanks for your response.
    Let me know if you find a solution to this problem.

  • OA JDeveloper 9i (RUP5) 9.0.3.5 (Build 1453) - Row level validation issue

    Hi everyone,
    Can anyone help me with the following:
    Version of OA JDeveloper 9i (RUP5) 9.0.3.5 (Build 1453)
    I have an XML page that is based on a master-detail relationship.
    The master table (Advancetable EO) contains the employee details and the detail VO (Advancetable) contains the allowances an employee is entitled to.
    How do I stop a user from selecting (via radio button) the next employee in the advanced table if the records they entered at the detail level are not valid (row level validation)? My validation method works fine when they try to commit the record.
    But I need to stop them from navigating to the next employee record in the table when the records they entered at the detail level are not valid.
    How can I do this?
    I added the same validation method I used for the "APPLY" event (to validate the records entered at detail level) to the "EmployeeSelect" event, but the error message showed only after the new employee had been selected. Thus the error message that is displayed does not relate to the employee record selected at that stage.

    Time for you to make a trip to the OA Framework Forum: http://forums.oracle.com/forums/forum.jspa?forumID=210

  • How to set a default start and/or end date for New Events based on trigger date.

    I'm using the CalendarActivityListener to get current row when clicking on an existing event. As per previous posts this listener gives you access to event detail including Start Date, End Date, etc.
    However, what I want to do is to default the start (and end) dates for New Events based on the trigger date.
    I've tried the CalendarListener and can grab the Trigger Date from it - however, I can't see a way to pass this directly to the popup/dialog I'm using to create the new event.
    At present I'm putting the TriggerDate into the ADFContext session scope e.g. ADFContext.getCurrent().getSessionScope().put("TriggerDate",calendarEvent.getTriggerDate());
    Then I've tried multiple approaches to "get" the TriggerDate from session scope and drop it into my new calendar event; basically, I'm trying to default the InputField(s) associated with the Start Date using the value from the session. I've tried:
    1. Setting the default value for the InputField in the jspx using a binding expression, i.e. value="#{sessionScope.TriggerDate}" - this actually sets the value appropriately when the jspx is rendered but, when I go to create, I get an NPE and I can't debug it. I assumed that it might be a date type issue - it would appear that CalendarListener provides a date of type java.util.Date and that the StartDate attribute of my VO/EO/table is a DATE and therefore requires oracle.jbo.domain.Date, so I tried casting it - to no effect.
    2. Using a Groovy expression (StartDate==null ? adf.context.sessionScope.TriggerDate : StartDate) in my calendar's EventVO to default the Start Date - with the same result.
    Any thoughts or ideas?

    John,
    Thanks for that suggestion - could not get it to work. However, I did manage a different approach. I finally determined the sequence of events in terms of how the various events and listeners fire (I think).
    Basically, the CalendarActivityListener fires, followed by the listener associated with the Calendar object's Create facet, followed finally by the CalendarEventListener - the last is where the TriggerEvent is available - and then control is passed to the popup/dialog in the Create facet. So my approach of trying to set/get the TriggerDate in the user's HTTP session was doomed to failure because it was being read before it had been set :(
    Anyway, I ended up adding a bit of code to the CalendarEvent listener - it grabs the current BindingContext, navigates through the DCBindingContainer to derive an Iterator for the ViewObject which drives the calendar and then grabs the currently active row. I then do a few tests to make sure we're working with a "new" row because I don't want to alter start & end dates associated with an existing calendar entry and then I define the Start and End dates to be the Trigger Date.
    Works just fine. Snippet from the listener follows
    BindingContext bindingContext = BindingContext.getCurrent();
    if ( bindingContext != null ) {
        DCBindingContainer dcBindings = (DCBindingContainer) bindingContext.getCurrentBindingsEntry();
        DCIteratorBinding iterator = dcBindings.findIteratorBinding("EventsView1Iterator");
        Row currentRow = iterator.getCurrentRow();
        if ( currentRow.getAttribute("StartDate") == null )
            currentRow.setAttribute("StartDate", calendarEvent.getTriggerDate());
        if ( currentRow.getAttribute("EndDate") == null )
            currentRow.setAttribute("EndDate", calendarEvent.getTriggerDate());
    }

  • Capturing value in after insert or update row level trigger

    Hi Experts,
    I'm working on EBS 11.5.10 and database 11g. I have trigger A on the wip_discrete_jobs table and trigger B on the wip_requirement_operations table. Whenever I create a discrete job, it inserts records into both wip_discrete_jobs and wip_requirement_operations.
    Note: the 2 tables have a master-child relation.
    Trigger A: after insert, row level trigger on wip_discrete_jobs
    Trigger B: after insert, row level trigger on wip_requirement_operations
    In trigger A:
    I'm capturing wip_entity_id and holding it in a global variable.
    package.variable := :new.wip_entity_id
    In trigger B:
    I'm using the above global variable.
    Issue: let's say I have created a discrete job whose wip_entity_id is 27, but the global variable is holding the previous wip_entity_id (26), not the current value. It looks like trigger B fires before trigger A's work is complete; I think this could be the reason the current wip_entity_id is not yet stored in the global variable.
    I need your help getting the current value into the global variable so that I can use it in trigger B.
    Awaiting response at the earliest.
    Thanks

    My head hurts just thinking about how this is being implemented.
    What's stopping you from creating a nice and simple procedure to perform all this magic?
    Continue with the global/trigger magic at your own peril, as you can hopefully already see ... nothing good will come from it.
    Cheers,

  • Mutating error : row level BEFORE UPDATE trigger

    Hi,
    I had an issue with a mutating table error (I was trying to write a row level BEFORE UPDATE trigger); however, I got it resolved after referring to Tom Kyte's web site. I thought I would share it with everyone, as it might be helpful for a few... Thanks!
    I will be more than happy to learn about further, better ways of resolving this issue.
    Below I have posted the trigger that was causing this error and the workaround for the issue. I created a package and three other triggers to replace the row level BEFORE UPDATE trigger:
    Trigger that was causing the issue:
    CREATE OR REPLACE TRIGGER C_F_BI
    BEFORE INSERT ON CONTACT_FUNCTION FOR EACH ROW
    declare
    cursor function_code_cur ( cur_contact_id CONTACT.Contact_Id%type,
    cur_function_type_code CONTACT_FUNCTION.Function_Type_Code%type)
    is
    select
    cft.function_type_code,
    cft.multiples_permitted
    from
    CONTACT_FUNCTION cf,
    CONTACT c,
    CONTACT_FUNCTION_TYPE cft
    where
    c.acct_id = (select acct_id from contact where contact_id = cur_contact_id)
    and c.contact_id = cf.contact_id
    and cf.function_type_code = cft.function_type_code
    and cft.function_type_code = cur_function_type_code;
    v_function_type_code contact_function_type.function_type_code%type;
    v_multiples_permitted contact_function_type.multiples_permitted%type;
    E_Multiples_Not_Permitted Exception;
    begin
    if not function_code_cur%isopen then
    open function_code_cur(:new.contact_Id,:new.function_type_code);
    end if;
    loop
    fetch function_code_cur into v_function_type_code, v_multiples_permitted;
    exit when not function_code_cur%found;
    end loop;
    -- if the fetch returns a v_multiples_permitted of 'Y' or no record is found, then it is
    -- ok to add the record. Otherwise raise an error because that function_type is already
    -- being used at the current acct.
    if v_multiples_permitted = 'N' then
    raise E_Multiples_Not_Permitted;
    end if;
    close function_code_cur;
    EXCEPTION
    when E_Multiples_Not_Permitted then
    raise_application_error( -20001,'Multiples not allowed for function type ' ||
    v_function_type_code || '. sqlerrm - ' || sqlerrm );
    when others then
    raise;
    end;
    Solution for the above issue:
    create or replace package state_pkg
    is
    type ridArray_rec is record(rid rowid, cont_id number, cont_fn_type varchar2(20));
    type ridArray is table of ridArray_rec index by binary_integer;
    newRows ridArray;
    empty ridArray;
    end state_pkg;
    create or replace trigger contact_function_bu1
    before update on contact_function
    begin
    state_pkg.newRows := state_pkg.empty;
    end;
    create or replace trigger contact_function_bu2
    before update ON CONTACT_FUNCTION FOR EACH ROW
    DECLARE
    I NUMBER:=0;
    begin
    I := state_pkg.newRows.count+1;
    state_pkg.newRows( I ).rid := :new.rowid;
    state_pkg.newRows( I ).cont_id := :new.contact_id;
    state_pkg.newRows( I ).cont_fn_type := :new.function_type_code;
    end;
    create or replace trigger contact_function_bu
    after update ON CONTACT_FUNCTION
    declare
    cursor function_code_cur ( cur_contact_id CONTACT.Contact_Id%type,
    cur_function_type_code CONTACT_FUNCTION.Function_Type_Code%type,
    rid2 rowid)
    is
    select
    cft.function_type_code,
    cft.multiples_permitted
    from
    CONTACT_FUNCTION cf,
    CONTACT c,
    CONTACT_FUNCTION_TYPE cft
    where
    c.acct_id = (select acct_id from contact where contact_id = cur_contact_id)
    and c.contact_id = cf.contact_id
    and cf.function_type_code = cft.function_type_code
    and cft.function_type_code = cur_function_type_code
    and cf.rowid = rid2;
    v_function_type_code contact_function_type.function_type_code%type;
    v_multiples_permitted contact_function_type.multiples_permitted%type;
    E_Multiples_Not_Permitted Exception;
    begin
    for i in 1 .. state_pkg.newRows.count loop
    if not function_code_cur%isopen then
    open function_code_cur(state_pkg.newRows(i).cont_id, state_pkg.newRows(i).cont_fn_type, state_pkg.newRows(i).rid);
    end if;
    loop
    fetch function_code_cur into v_function_type_code, v_multiples_permitted;
    exit when not function_code_cur%found;
    end loop;
    -- if the fetch returns a v_multiples_permitted of 'Y' or no record is found, then it is
    -- ok to add the record. Otherwise raise an error because that function_type is already
    -- being used at the current acct.
    if v_multiples_permitted = 'N' then
    raise E_Multiples_Not_Permitted;
    end if;
    close function_code_cur;
    end loop;
    EXCEPTION
    when E_Multiples_Not_Permitted then
    raise_application_error( -20001,'Multiples not allowed for function type ' ||
    v_function_type_code || '. sqlerrm - ' || sqlerrm );
    when others then
    raise;
    end;
    /

    It seems you could have solved the issue otherwise:
    CREATE OR REPLACE TRIGGER c_f_bi
       BEFORE INSERT
       ON contact_function
       FOR EACH ROW
    DECLARE
       v_function_type_code        contact_function_type.function_type_code%TYPE;
       v_multiples_permitted       contact_function_type.multiples_permitted%TYPE;
       e_multiples_not_permitted   EXCEPTION;
    BEGIN
       BEGIN
          SELECT cft.function_type_code, cft.multiples_permitted
            INTO v_function_type_code, v_multiples_permitted
            FROM contact c, contact_function_type cft
           WHERE c.acct_id = (SELECT acct_id
                                FROM contact
                               WHERE contact_id = :NEW.contact_id)
             AND c.contact_id = :NEW.contact_id
             AND cft.function_type_code = :NEW.function_type_code;
       EXCEPTION
          WHEN NO_DATA_FOUND
          THEN
             v_function_type_code := :NEW.function_type_code;
             v_multiples_permitted := '?';
       END;
    -- if the query returns a v_multiples_permitted of 'Y' or no record is found, then it is
    -- ok to add the record. Otherwise raise an error because that function_type is already
    -- being used at the current acct.
       IF v_multiples_permitted = 'N'
       THEN
          RAISE e_multiples_not_permitted;
       END IF;
    EXCEPTION
       WHEN e_multiples_not_permitted
       THEN
          raise_application_error (-20001,
                                      'Multiples not allowed for function type '
                                   || v_function_type_code
                                   || '. sqlerrm - '
                                    || SQLERRM);
       WHEN OTHERS
       THEN
          RAISE;
    END;
     /
     :p

  • FMS required for Row Level in Marketing document for Dimensions vs BP

    Dear All,
    I need some help regarding the FMS in SAP Business One. I am trying to use dimensions at the row level of my marketing documents. Dimension 2 has the description Regions and Dimension 3 has the description Area.
    Now in Cost Accounting I have setup the Profit Centers in Dimension 2 Region as under :
    Factor Code  Factor Description
    CD0201         Region 1_Asia
    CD0202         Region 2_Middle East
    Also I have setup the Profit Centers in Dimension 3 Area as under :
    Factor Code   Factor Description
    CD0201A       India
    CD0201B       Pakistan
    CD0201C       China
    CD0202A       Syria
    CD0202B       Saudi Arabia
    I have created a UDF in the Business Partner header named U_DCostRegion, wherein I have set the following Valid Values for Field, from which the user will pick:
    Value            Description
    CD0201         Region 1_Asia
    CD0202         Region 2_Middle East
    In the Business Partner Territory, the BP has been defined as per the Dimension 3 Area; for example, BP C0001's territory is India, C0002's territory is Pakistan, and so on.
    Now my requirement :
    I want that when the user creates a Sales Quotation, or a Sales Order from the Sales Quotation, the row level dimension columns Regions and Area are automatically populated through FMS according to what is set in the BP master data. For example, the user creates a Sales Quotation for C0001, whose Region (U_DCostRegion) is set to CD0201 Region 1_Asia and whose Territory is set to India; then the Sales Quotation row columns Region and Area should automatically show CD0201 Region 1_Asia and CD0201A India.
    I think this requirement can be fulfilled by FMS, but I am not able to do it on my end. Please advise what the FMS should be.
    Regards,
    Depika

    Dear Rahul,
    I am able to put the Region from the Business Partner UDF into the document row's Region column with the FMS SELECT $[OCRD.U_DCostRegion], as in U_DCostRegion I had set the Valid Values for Field as:
    Value    Description
    CD0201 Region 1_Asia
    CD0202 Region 2_Middle East
    As the dimension in the marketing document takes the Factor Code, e.g. CD0201, I am able to handle it with the above FMS.
    But for the Area dimension I am not able to create the FMS, because in the marketing document it takes the Factor Code (e.g. CD0201A) and that is not linked to the BP territory table OTER.
    I want an FMS that is also linked with the OTER table, such that the BP's territory (available in the base product under BP > General Tab > Territory, where it is defined as India for BP C0001) links to CD0201A, which is the Factor Code whose Factor Description is India.
    please advise in this regard.
    Regards,
    depika

  • After update row level trigger help

    Hello,
    I have to update some data in a table. I need to be able to undo the update in case something goes wrong after the update is committed. I decided to keep track of updated rows by inserting the new and old values into an audit table using an UPDATE row level trigger. Everything is working as I wished, but I am having a problem separating each bulk update with a unique ID... I am not talking about any primary key or auto-generated sequence key here.
    The audit table stores these values: the primary key of the table, the old value before the update, the new value after the update, and a timestamp. Now I want to add one more field to the audit table that identifies each bulk UPDATE operation... I tried to use the session ID; it works fine, but sometimes I may update two or three times, maybe around the same time, on the same day, from the same session (the timestamp doesn't help me). In that case each UPDATE operation inserts the same session ID into the audit table, and I won't know which update operation populated which row of the audit table.
    Can somebody tell me how I can resolve this situation? Again, this has to be inside the trigger. Are there any other IDs that I can use? I would appreciate your help. Thanks,

    Can you add a table level trigger in addition to your row level trigger? If so, you could do what you want in there. In the new trigger, formulate a unique value (such as session id || sysdate) and store it using dbms_application_info.set_module and set the MODULE to that value. Then, in your row level trigger code, execute dbms_application_info.read_module and pull the MODULE value and insert it into your audit table.
    The use of session id || sysdate should be fine (and unique) in this context. You'd just have to know at what time the UPDATE batch occurred that you wanted to undo.
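    A rough sketch of that idea (my_table, my_audit and the column names below are placeholders, not your real objects):
        -- statement level trigger: runs once per UPDATE and stamps a batch id
        -- into the session's MODULE field (keep it short, MODULE length is limited)
        create or replace trigger my_table_bu_stmt
        before update on my_table
        begin
          dbms_application_info.set_module(
            module_name => sys_context('userenv', 'sessionid') || '-' ||
                           to_char(sysdate, 'YYYYMMDDHH24MISS'),
            action_name => 'AUDIT_BATCH');
        end;
        /
        -- row level trigger: reads the batch id back and stores it with each
        -- audited row, so every row touched by one UPDATE shares the same id
        create or replace trigger my_table_bu_row
        before update on my_table
        for each row
        declare
          v_module varchar2(64);
          v_action varchar2(64);
        begin
          dbms_application_info.read_module(v_module, v_action);
          insert into my_audit (pk_value, old_value, new_value, change_ts, batch_id)
          values (:old.id, :old.some_col, :new.some_col, systimestamp, v_module);
        end;
        /
    A later "undo" can then target exactly one batch_id instead of guessing by session id and time.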
    By the way, you could use LogMiner to do what I think you're trying to build with your trigger code and table entries. Recall that Oracle keeps the undo and redo data for every row that is updated in the redo/archive logs. Using LogMiner, you could find and undo any change from any time. Just like with your method, you'd have to know when the "bad" thing occurred in order to find the correct log and "mine" it, but all the functionality is there. There's a kinda old, but very good, article by Arup Nanda at http://www.oracle.com/technology/oramag/oracle/05-jul/o45dba.html that reviews how it all works. You may want to look at it to see if you can avoid re-inventing the wheel to meet your needs. Just a thought....
    Karen

  • Row level trigger updating the entire table instead of affected rows

    I am using Oracle 8.1.7. My problem is that I have a row level trigger that should fire only once per affected row (and insert a row into my auditing table), but it is doing it for the entire table. This only happens when I have more than two columns in an UPDATE clause. Has anyone run into this problem before? Any help would be highly appreciated.
    thanks,
    dinesh

    create or replace trigger contact_audit
    before update or delete on contact
    for each row
    declare
    v_audit_type char(1);
    v_audit_item varchar2(64) := 'CONTACT';
    v_acct_seqid number;
    Begin
    if inserting then
         v_audit_type := 'I';
    elsif updating then
    v_audit_type := 'U';
    elsif deleting then
    v_audit_type := 'D';
    end if;
    select acct_seqid into v_acct_seqid
    from account_contact
    where email = :old.contact_email;
    insert into audit_event ( id, audit_item, audit_type, audit_date, acct_seqid, col_1, col_2, col_3, col_4, col_5, col_6, col_7, col_8,
    col_9, col_10, col_11, col_12, col_13, col_14, col_15, col_16, col_17, col_18, col_19, col_20)
    values (audit_event_sq.nextval, v_audit_item, v_audit_type, sysdate, v_acct_seqid, :old.contact_email, :old.contact_type, :old.contact_last, :old.contact_first,
    :old.title, :old.address1, :old.address2, :old.address3, :old.city, :old.state, :old.zip_code5, :old.zip_code4, :old.zip_code_barcode, :old.country, :old.phone_no,
    :old.phone_ext, :old.receive_info_email_yn, :old.pwd_encrypted, :old.pwd_question, :old.pwd_answer);
    End;

  • Row Level Security not working for SAP R/3

    Hi Guys
    We have an environment where the details are as mentioned below:
    1. Crystal Reports are created using Open SQL driver to extract data from SAP R/3 using the SAP Integration Kit.
    2. The SAP roles are imported in Business Objects CMC.
    3. Crystal Reports are published on the Enterprise as well.
    4. Authorization objects are created in SAP R/3 and added as required for row level security, as mentioned in the SAP installation guide. The aim is that when the user logs into InfoView and refreshes the report, he should only see the data he is meant to see through the authorization objects. The data security works fine when the reports are designed directly on the tables, but when the reports are built on a Business View it doesn't work, and the user is able to see all data.
    Any help in this issue is greatly appreciated.
    Thanks and Regards
    Kamal

    Hi,
    In order for row level security to work for you using the OpenSQL driver, you need to configure the Security Definition Editor on your SAP server. This is a server-side tool which the Integration solution for SAP provides as a transport.
    This tool defines which tables are to be restricted based on authorizations.
    However, since you are seeing the issue on reports based on Business Views, you need to identify whether the Business View is configured in such a way that the user refreshing the report is the user logging into InfoView. If the connection to your SAP server is always established with the same user when the BV is used, then your security definition is pointless.
    You can confirm this by tracing your SAP server to identify which user is being used to log on to SAP to refresh the reports.
    thanks
    Mike

  • How to avoid seeing data for a particular member in the row level

    Hello,
    I am using Essbase. In it I have members like "task n/a" and "demand n/a". The purpose of these members is that if I load data for demand, task won't have data, so I map it to "task n/a", and the same for the task loading. I am using Alphablox for reporting and a report script for retrieving the data. So if I select "task n/a" and "demand n/a", some value will exist for them. Is there any possibility to remove these from the report? I mean that when viewing the report, it should not retrieve these members and their respective values for display, and it should also avoid using this data for any manipulation. I know RESTRICT works for column level data. Is there anything for row level restrictions?
    Please help me on this.
    Regards
    R.Prasanna

    Hi,
    Into the rule containing the Color metadata, you can restrict the values of the list.
    In "Has restricted list and pane", list the values 1, 2 and 3 (new line for each value) :
    1
    2
    3
    Romain.
