Best approach to add MVs in OBIEE

Hi Gurus,
Is there a best practice for adding Materialized Views to the RPD?
Rgds,
Amit

Hi,
Take a look at this; it's about MVs: http://gerardnico.com/wiki/database/oracle/materialized_view?s[]=materialized&s[]=views
Thanks,
Srikanth
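
For anyone landing here, a minimal sketch of the database side (the table and column names below are made up; only the MV clauses matter). With ENABLE QUERY REWRITE the optimizer can redirect matching queries to the MV without any RPD change; alternatively, the MV can be imported into the rpd's physical layer and mapped as an aggregate logical table source.

    -- Hypothetical star-schema names; adjust to your own model.
    CREATE MATERIALIZED VIEW sales_by_month_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND   -- FAST refresh would need MV logs on the base tables
      ENABLE QUERY REWRITE         -- lets the optimizer rewrite matching queries to the MV
    AS
    SELECT t.month_key,
           p.product_key,
           SUM(f.revenue) AS revenue
    FROM   sales_fact f
           JOIN time_dim    t ON t.time_key    = f.time_key
           JOIN product_dim p ON p.product_key = f.product_key
    GROUP  BY t.month_key, p.product_key;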

Similar Messages

  • Best approach to add Z custom field to IC Agent Inbox search and results view

    Hi Experts,
    We have a requirement to add a Z custom field to the IC Agent Inbox search and results view. I found multiple forum threads and ideas, but I am looking for the best approach for handling this. I am sure you experts have already done this.
    Thanks in advance.
    Regards
    Siva

    Hi Sivakumar,
    AET is by far the best way to create a custom field in this area. It is easy and simple.
    Also, once a field is added to one business object, it can be used in other objects as well.
    There is also a demo of AET available on SDN.
    Please let me know if any more help is required.
    Thanks,
    Bhushan

  • Best approach to add Task interaction processes using BeanShell in the ExecuteScript operation

    I am wondering whether the following is the best way (I suspect not) to accomplish the goal of determining the possible routes from a Task (the Task ID is known) using BeanShell in the ExecuteScript operation of a short-lived LiveCycle process (taken from the API docs and tweaked slightly). The code does work; this is just a question of what would be optimal for building similar processes that reach into more detail. I would like to know the best practice before building more such processes.
    import java.util.*;
    import com.adobe.idp.dsc.clientsdk.ServiceClientFactory;
    import com.adobe.idp.dsc.clientsdk.ServiceClientFactoryProperties;
    import com.adobe.idp.taskmanager.dsc.client.task.TaskInfo;
    import com.adobe.idp.taskmanager.dsc.client.task.TaskManager;
    import com.adobe.idp.taskmanager.dsc.client.*;

    // Connection properties for the LiveCycle EJB endpoint on JBoss
    Properties connectionProps = new Properties();
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_DEFAULT_EJB_ENDPOINT, "jnp://servername:1099");
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_TRANSPORT_PROTOCOL, ServiceClientFactoryProperties.DSC_EJB_PROTOCOL);
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_SERVER_TYPE, "JBoss");
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_USERNAME, "administrator");
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_PASSWORD, "password");
    ServiceClientFactory myFactory = ServiceClientFactory.createInstance(connectionProps);
    TaskManager taskManager = TaskManagerClientFactory.getTaskManager(myFactory);

    // Look up the task and read the routes available from it
    long taskId = patExecContext.getProcessDataLongValue("/process_data/@taskId");
    TaskInfo taskInfo = taskManager.getTaskInfo(taskId);
    String[] routeNames = taskInfo.getRouteList();

    // Copy the route names into the /process_data/routes list variable
    List routeNameList = patExecContext.getProcessDataListValue("/process_data/routes");
    for (int i = 0; i < routeNames.length; i++) {
        routeNameList.add(routeNames[i]);
    }
    patExecContext.setProcessDataListValue("/process_data/routes", routeNameList);


  • What is the best approach to add new fieds to an extractor

    I am working with an InfoCube where I need to get the day of the week based on the created-on date. I need the number, 1-7, and then tie this to another table to get the actual name, like Monday, Tuesday, etc.
    I was planning on creating an update rule which would find the day of the week and the name of the day. I am not sure of the best approach as far as loading the data and the InfoObject. Should I load the text once, because that will never change (like Monday, etc.), or do I load it in the start routine? Also, do I create one InfoObject with text, or do I create two separate ones?

    You can create a new InfoObject with text and manually create the 7 records.
    In the update rules you can use function module DATE_COMPUTE_DAY or DAY_IN_WEEK to get the numeric value of the day of the week.
    Hope it helps.
    Regards

  • Best approach to develop office add-in

    Hello,
    I'm a .NET programmer and I've developed add-ins for Outlook, Word, and Excel with VS 2008 + .NET 3.5 + VSTO.
    There were many problems initially with the VSTO add-ins, and I faced a lot of difficulty solving them, especially when working with both Outlook and Word.
    My client has Adobe Acrobat installed on every machine, and no doubt the performance of the Acrobat add-in in Office applications is superb. On the other hand, VSTO add-ins take time to load, and performance is especially low when there is interaction between VB6 and an MS Office application.
    Many times the Outlook add-in gets disabled if Outlook is not running and VB6 code creates a new mail item to send and display to the user. Many times the mail window freezes, and there are many such problems.
    Now the question I have is: can anyone tell me the best approach to developing an Office add-in, i.e. how and in what language is the Acrobat add-in developed? Or what is the best practice for developing it with VSTO, and which is best, and why?
    Thanks,
    Hemant

    Hi,
    In many cases, if your structure is very complex, you cannot get directly nested XML after content conversion. In that case we need to handle the generation of the nested structure in the mapping, so you use Java or XSLT mapping, etc., if it is not possible via graphical mapping. You can also do this in an adapter module.
    Here is a good example of a generic structure:
    /people/sravya.talanki2/blog/2005/08/16/configuring-generic-sender-file-cc-adapter
    Also, file content conversion limitations:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50061bd9-e56e-2910-3495-c5faa652b710
    Rgds,
    Moorthy

  • What is the best approach to handle multiple FKs with a single table?

    If two tables are joined with each other in more than one way, for example:
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If the above two objects are imported with FKs into an EUL and Discoverer Plus is used to create the above report, then on first inclusion of a person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join, it will never allow you to pick the person modifier name.
    One solution is to create a custom folder with a query like:
    select m.col1, m.col2, ... m.coln,
           pc.name as creator_name, pc.address as creator_address, ... pc.phone as creator_phone,
           pm.name as modifier_name, pm.address as modifier_address, ... pm.phone as modifier_phone
    from main m,
         person pc,
         person pm
    where m.person_creator_id = pc.person_id
    and m.person_modifier_id = pm.person_id
    The second solution is to import the PERSON folder twice into the EUL (optionally naming one PERSON_CREATOR and the other PERSON_MODIFIER) and manually define one join per folder, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with PERSON_MODIFIER using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
    The question is: which approach is better, or is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is the EUL design overhead of including the same object multiple times and then manually defining all the joins (or deleting unwanted ones), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It gets more complicated if the person table is further linked to other tables and users want to see that information too (for instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator address and the modifier address, you will now have to create multiple LOCATION folders).
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed. This will work perfectly for the above requirement, but a downside is that the report will run slower if filters are needed on person names (the function will then be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (let's say the person table contains 50 attributes; then it's not a good idea to register 50 functions).
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion, the best approach (although by all means not the only approach; see below) would be to load the object as two folders, with one join going to the first folder and the second join to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation, and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or create a complex folder. Either will work; in both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael
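
    For the database-view alternative Michael mentions, a minimal sketch (table and column names follow the example above; the point is that aliasing resolves both joins once, so Discoverer sees a single folder with no join ambiguity):

        -- A view that resolves both person joins up front.
        create or replace view main_with_persons as
        select m.col1,
               m.col2,
               pc.name  as creator_name,   -- creator attributes aliased...
               pc.phone as creator_phone,
               pm.name  as modifier_name,  -- ...so they don't collide with modifier ones
               pm.phone as modifier_phone
        from   main   m
               join person pc on pc.person_id = m.person_creator_id
               join person pm on pm.person_id = m.person_modifier_id;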

  • Best approach to publish new table or new column on existing table on MDW?

    Hi,
    I'm referring to Olite R3 without any patches. I don't use the Java API; I use MDW.
    If I have a new table, or a new column on an existing table, what's the best approach to publishing it?
    I'm asking because I've tried lots of approaches, and the only solution I found was, step by step:
    1) On MDW, drop the publication item
    2) Add again the publication item
    3) Associate the publication item to the publication
    4) Save everything
    5) File / Deploy (if I don't do this, it does not work)
    6) Tools / Package... (that's where the problem is: if I don't remove the app and create it again, it does not work!)
    7) On the client side, perform an msync with "force refresh"
    That's the only way I found to publish new items reliably. Any other action does not push the new table or column to the client's embedded DB.
    Any comments?
    Regards,
    Maurício Américo Vernaschi.

    I do not use MDW, rather a mix of Java and the final publish step you use, but:
    Adding new PIs should be easy: just add them and re-publish (no need to drop anything).
    For changes: if you just have new columns and the SQL statement is 'select * from', then you should only need to make the changes in the base schema objects and run the publish with no changes; the updates should be picked up. If you are selecting specific columns, then update and re-publish.
    When using MDW, at the end you can save the application as a jar file and then use this jar file to publish in the Mobile Manager; this is the best way to publish.
    Have a look at this jar file in WinZip and you will find it contains a web.xml file. This is the XML definition of the publication items, and for simple changes it is possible to just edit this file and republish via the Mobile Manager.

  • Best approach to dealing with someone else's spaghetti code with no documentation

    Hello LabVIEW gurus,
    I have just been given a few software tools to add functionality to and rewrite, each of which is a big spaghetti mess; each tool has 100+ VIs, all spaghetti. These tools control a very complex machine, talking via serial, parallel, Ethernet, 485, etc., and there is barely any documentation of the logic or the implementation of the source code / what the subVIs do.
    What would be my best approach to understanding this mess and recreating it in a structured way faster? It has a lot of old sequence structures and just plain bad programming style.
    any help is highly appreciated
    Thanks all

    And do not forget about using the VI Analyzer Toolkit! It can reveal several obvious places to clarify code that "stinks". A lot of skull sweat went into that framework, and it has significant value!
    Norbert_B wrote:
    If your task is only to ADD things, you might be interested in Steve's recommendation here.
    Norbert
    (Inside joke ahead)
    Ah, that explains the TDMS File Viewer!
    You really should run that through the VIA... :smileymad
    It can be done fairly quickly.
    How do you unspoiler? Ah well, I'll hope a moderator can leave only the first comment "spoiled".
    Note the quote from the link: "The code we inherited might have been 'richly obfuscated.'" "Richly obfuscated" was a code-review term used for code written by your boss... The VIA would call it something else.
    Jeff

  • Best approach for roll over in BPC?

    Hi All,
    We are currently looking for the best approach in SAP BPC for the roll-forward of closing balances into opening balances from the previous year to the current period.
    Our current approach takes the closing balance account lines from the previous year and copies them into specific opening-year members (f_init and ob_init) using business transformation rules. Then every month there are business transformation rules which take these values in local and base currency to calculate the FX gain/loss, and also copy the closing balance at the historic rate into the opening balance of the current period. This approach takes both input data and journal data into account.
    We also now need to take into account the fact that we need to pull through any journals which were posted to adjustment companies and some (but not all) legal entities, for traditional lines which do not have typical opening balance accounts (e.g. cash, stock, accruals, etc.). The approach above can work, but we need to add the relevant opening balance accounts.
    Please could you advise whether there is a better approach than this?
    Kind Regards,
    Fiona

    I normally prefer saving images in the LocalFolder and saving just the file name in a database table. I prefer this because storing only the file name keeps the SQLite database small, so it will load faster.
    Gaurav Khanna | Microsoft .NET MVP | Microsoft Community Contributor

  • Best approach to replicate the data.

    Hello Every one,
    I want to know the best approach to replicate data.
    First let me clarify the scenario: I have one Oracle 10g Enterprise Edition database and 20 Oracle 10g Standard Edition databases. The Enterprise Edition will run at the center, and the other 20 Standard Edition databases will run at the sites.
    The data will move from center to sites and vice versa, not between sites. There is only one schema to replicate, with more than 500 tables.
    What will be best for replication (updatable MVs, Oracle Streams, or anything else)? It's urgent, please.
    Thanks in advance.

    Hello,
    Yes, MVs and Oracle Streams are the common ways to replicate data between databases.
    I think that in your case (you have to replicate a whole schema) Oracle Streams is interesting (it's not so easy to maintain 500 MVs).
    But you must take care with the edition type: I'm not sure that Standard Edition allows the Advanced Replication features. It seems to me (but I may be wrong) that updatable MVs are an EE feature, while Streams seems to be available even in SE.
    Please find enclosed some links about it:
    [http://www.oracle.com/database/product_editions.html]
    [http://www.oracle.com/technology/products/dataint/index.html]
    Hope it can help,
    Best regards,
    Jean-Valentin
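
    For reference, the updatable-MV route mentioned above looks roughly like this (a sketch only; object names such as orders and center_db are illustrative, and the licensing caveat about Advanced Replication on SE still applies):

        -- On the master (center) database: an MV log so the sites can fast-refresh.
        create materialized view log on orders with primary key;

        -- On each site database, over a database link to the center.
        -- FOR UPDATE allows local changes, which are pushed back to the master
        -- when the MV is refreshed as part of a replication/refresh group.
        create materialized view orders_mv
          refresh fast
          for update
        as select * from orders@center_db;

    Multiply that by 500 tables per site and the maintenance concern above becomes clear.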

  • Best approach for uploading documents using a custom web part: Client OM or REST API

    Hi,
    I am using my custom upload Visual Web Part for uploading documents into my document library with a lot of metadata.
    The columns include single line of text, drop-down list, lookup columns, and managed metadata (taxonomy) columns.
    So I would like to know which is the best approach for uploading.
    Currently I am trying to use the traditional SSOM (server object model). I would like to know the best approach for uploading files into document libraries.
    I have hundreds of sub-sites with 30+ doc libs within those sub-sites. Currently it takes a few minutes to upload the files in my dev environment. I am just wondering what would happen if the number of sub-sites reaches a hundred!
    I am looking at this from the performance perspective.
    My thought process is:
    1) Implement Client OM
    2) REST API
    Has anyone tried these approaches before, and which approach provides better performance?
    If anyone has sample source code or links, please share them.
    And are there any restrictions on the size of the file uploaded?
    Any suggestions are appreciated!

    Try below:
    http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
    http://stackoverflow.com/questions/9847935/upload-a-document-to-a-sharepoint-list-from-client-side-object-model
    http://www.codeproject.com/Articles/103503/How-to-upload-download-a-document-in-SharePoint
    public void UploadDocument(string siteURL, string documentListName,
        string documentListURL, string documentName, byte[] documentStream)
    {
        using (ClientContext clientContext = new ClientContext(siteURL))
        {
            // Get the document library
            List documentsList = clientContext.Web.Lists.GetByTitle(documentListName);
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content byte[], i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of an existing document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = siteURL + documentListURL + documentName;
            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            // Update the metadata for a field named "DocType"
            uploadFile.ListItemAllFields["DocType"] = "Favourites";
            uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }
    }
    If this helped you resolve your issue, please mark it Answered

  • Best approach for validation

    I am trying to implement validation in my Struts application.
    My application prompts the user to enter a username and password and to select a database name from a drop-down menu to connect to. The JSP page is tied to an ActionForm and an Action class. The Action class calls a getConnection function in the Connection class.
    Everything works given a correct username/password pair. I am now trying to handle the case when the user provides an invalid pair. What's the best approach to facilitate this?
    I've been fiddling with client-side validation (checking whether the user provides any input) but realised that even if I get that working, I'd eventually have to implement server-side validation anyway, since the user can enter a username/password pair which passes client-side validation but crashes the system on the server side when it tries to connect with the invalid credentials and throws an exception.
    Intuitively, I think I need to do something in the catch(SQLException ex) block of my connection code, but I am not sure how. Has anyone implemented this (or something similar)? Please assist.
    Thanks

    In the catch block for the SQLException, create an ActionError object with the appropriate error message, add it to an ActionErrors object, store this ActionErrors object in the request, and forward the request to the same page. In the JSP, have an <html:errors/> tag at the top of the page, and it will print out the errors stored in the request by the Action class.
    ActionErrors errors = new ActionErrors();
    try {
        // connect to the database
    } catch (SQLException ex) {
        // log the exception if you need to
        ActionError error = new ActionError(<error_message>);
        errors.add(Globals.ERROR_KEY, error);
        // merge with any ActionErrors already stored in the request
        ActionErrors oldErrs = (ActionErrors) request.getAttribute(Globals.ERROR_KEY);
        if (oldErrs == null) {
            request.setAttribute(Globals.ERROR_KEY, errors);
        } else {
            oldErrs.add(errors);
        }
        // forward the request back to the input page
    }

  • Do you know the best approach with data...?

    I am considering the best approach for returning a result set from an EJB to my JSP page, but I don't know which approach is best. Please comment. (As a ResultSet cannot be serialized, returning it directly won't be considered.)
    Approach A: Make a custom class with get/set variables representing each column value in the result set, and use that class in the JSP. However, I find this tedious because whenever I add to the select statement, I have to add class variables too.
    Approach B: Manually manipulate the data in the result set, put it into a Vector, then return the Vector to the JSP.
    Approach C: Use RowSets instead and return the RowSet to the JSP.
    Many thanks to you all.

    Hello,
    Approach A is not recommended: you would have to leave the result set open, and so leave the connection to the database open.
    Approach B is better.
    Approach C: well, RowSets are a new thing in 1.4 which I have not tried yet. They look useful, but is your app running on 1.4?

  • Best method to add new rows

    Hi,
    I am new to APEX and would like your suggestions as to the best method to add a new row to a table. I do not want to use the wizard because there are many tables in the DB. From reviewing this forum, the suggested method is to create a report with a form: the user clicks the 'Create' button and it opens a new form for data entry, with a 'Submit' button to commit the changes to the table. When/how do I create the PK? Is it at page render, when the new form is opened, or in a page process, when all the fields are committed? I tried adding the 'insert into table...' in the page render to create the new PK, but I received an Oracle error. Am I missing a step? Thanks very much.
    Judy

    Good Morning,
    I have a second question about inserting rows. I was successful with the SQL statement for adding a row to the parent table. Now I need to insert a row into the related child table. There are approx. 5 child tables where I need to be able to add rows.
    My questions as to the proper sequence:
    1. Do I execute the insert for the parent table first?
    2. Then have the user click a button to add info for the child table, and then click another button that does the insert into the child table? At this point, is this where I reference the parent table (FAC_SEQ.currval)?
    INSERT INTO CONTACT(cnt_pk, fac_fk, fullname, street1, city, state, zip, phone, title)
    VALUES (CNT_SEQ.nextval, FAC_SEQ.currval,
    NVL(:P36_FULLNAME, 'No Data'),
    NVL(:P36_STREET1,'No Data'),
    NVL(:P36_CITY, 'No Data'),
    NVL(:P36_STATE,'CA'),
    NVL(:P36_ZIP,'00000'),
    NVL(:P36_PHONE,'555-1212'),
    NVL(:P36_TITLE,'No Data'));
    3. Is it proper procedure to have separate insert statements, or is it better to have the two insert statements together?
    INSERT INTO FACILITY(fac_pk, fac_type, fac_name, street1, city, state, zip, state_or_tribe, tribe_yn)
    VALUES (FAC_SEQ.nextval,
    NVL(:P211_FAC_TYPE, 'N'),
    NVL(:P211_FAC_NAME,'No Data'),
    NVL(:P211_STREET1,'No Data'),
    NVL(:P211_CITY, 'No Data'),
    NVL(:P211_STATE,'CA'),
    NVL(:P211_ZIP,'00000'),
    NVL(:P211_STATE_OR_TRIBE,'CA'),
    NVL(:P211_TRIBE_YN,'N'));
    INSERT INTO WELL(wel_pk, fac_fk, cnt_fk, geo_fk, well_state_uk, name,site, high_priority_yn,aqui_exempt_yn, well_in_swa)
    VALUES (WEL_SEQ.nextval, FAC_SEQ.currval, CNT_SEQ.currval, GEO_SEQ.currval,
    NVL(:P47_WELL_STATE_UK, '09DI'),
    NVL(:P47_NAME, 'No Data'),
    NVL(:P47_SITE,'No Data'),
    NVL(:P47_HIGH_PRIORITY_YN, 'N'),
    NVL(:P47_AQUI_EXEMPT_YN, 'N'),
    NVL(:P47_WELL_IN_SWA,'U'));
    **I am confused about how to approach the process of adding rows to a child table. Please clarify for me, or direct me to a link that has detailed documentation.
    Many thanks,
    Judy
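
    For what it's worth, a minimal sketch of the usual ordering (sequence, table, and item names follow Judy's statements above, with the column lists trimmed for brevity). Within one session, FAC_SEQ.currval returns the value that FAC_SEQ.nextval just generated, so running the parent insert first makes the child FK line up:

        BEGIN
           -- Parent first: nextval generates the new facility PK.
           INSERT INTO facility (fac_pk, fac_name)
           VALUES (fac_seq.NEXTVAL, NVL(:P211_FAC_NAME, 'No Data'));

           -- Child second: currval re-reads that same value for the FK.
           INSERT INTO contact (cnt_pk, fac_fk, fullname)
           VALUES (cnt_seq.NEXTVAL, fac_seq.CURRVAL, NVL(:P36_FULLNAME, 'No Data'));
        END;

    Both statements can sit in a single APEX page process so they commit together.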

  • What is the best approach for combining events?

    When I work on a wedding, my current workflow involves creating a compound clip for each section of the video (e.g. reception, ceremony, dancing, etc.). Then I add the compound-clip 'sequences' to a single project to add the chapter markers and export a single master file.
    I like the idea of managing each section in a project rather than a compound clip now that projects are part of the library in 10.1, but is there a good way to combine multiple projects (one per section) into a single master project, or would I still need to copy the contents of each project and paste them into the master project?
    Maybe it is best to continue with my current workflow.

    Just saw the discussion title; it should have said "What is the best approach for combining projects?"
