VC table use, best practices?

Hi,
I'm updating a table in the back end with an RFC. I would like to send only the rows I've modified or added on the VC client to the RFC and not the whole table. Is this possible?

Hey Joel,
Add a flag column (say, a check box) to the table, and have the user select each row he modifies or adds.
When sending the values to the RFC, set a guard condition so that a row is valid only when its check box is selected.
Regards,
Pradeep
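
Visual Composer expresses this declaratively with the guard condition above. Purely to illustrate the same delta-only idea outside VC, here is a hedged sketch in plain Java using SAP JCo 3; the destination name, RFC name, and field names are hypothetical placeholders, not taken from this thread:

import java.util.List;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoTable;

public class DeltaRfcCall {
    // Client-side row state, mirroring the check-box flag from the reply above.
    record Row(String key, String value, boolean modified) { }

    static void sendChangedRows(List<Row> rows) throws JCoException {
        JCoDestination dest = JCoDestinationManager.getDestination("BACKEND"); // hypothetical destination
        JCoFunction fn = dest.getRepository().getFunction("Z_UPDATE_TABLE");   // hypothetical RFC
        JCoTable items = fn.getTableParameterList().getTable("IT_ROWS");       // hypothetical table parameter
        for (Row r : rows) {
            if (!r.modified()) continue;   // the "guard": untouched rows never enter the table
            items.appendRow();
            items.setValue("KEY", r.key());
            items.setValue("VALUE", r.value());
        }
        fn.execute(dest);                  // only the delta crosses the wire
    }
}

The point is simply that the flag travels with each row, and whatever makes the call filters on it before the RFC ever sees the table.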

Similar Messages

  • Use Best Practice to Import Master Data

    Hi,
    I am an SAP beginner and would be glad if someone could guide me on my issue. How can I use best practices to import 1000+ material master records into the SAP system? I have already prepared the data in an Excel spreadsheet. Can anyone guide me through the steps?
    Thanks.

    Hi,
    LSMW is a very good tool for master data upload. The tool is rich in features but also complex; being a beginner, you should check with a consultant to learn how to use LSMW to upload your 1000+ records. The tool itself is quite intuitive. After entering the LSMW transaction you create the project, subproject, and the object you are going to work on. On the next screen you see several radio buttons; typically every upload requires all of the features behind those radio buttons, and in the same sequence. It is not possible to give the details of each of these steps in this forum, so please get a consultant's help in your vicinity.
    thanx
    Bala

  • How to use best practices installation tool?

    Hello!
    Can anyone share some useful links/docs that explain how to use the Best Practices installation tool (/SMB/BBI)?
    any responses will be awarded,
    Regards,
    Samson

    hi,
    will you please share the same ?
    thanks in advance

  • Varying table columns, best practices

    I've been wondering about this for quite some time now. JTable is very complex, but it has a lot of functionality that hints at reusable models. The separation of TableModel and ColumnModel seems to hint at being able to reuse a TableModel that stores some sort of objects and apply different ColumnModels to view the data in different ways. Which is really cool.
    However, who is in charge of managing the columns? The default implementation is usually good enough, but it doesn't do anything special to the columns, like assigning renderers or editors. Should the column model be in charge of this? But then you have to swap whole column models out when you want to change the look of the table. What if you just want to vary the renderer on one column, or remove one column? Would you build a whole new ColumnModel for this?
    Should the JTable be in charge of setting itself up in these matters? But that seems to impose the view's representation on the model. What if you change views in some way that affects your model's structure?
    Should there be some external controller in charge of this?
    Sometimes you don't plan for these things and it hurts you when you need to reuse models, but maybe modify them in some way. What are your best practices?
    charlie

    The practice you described is what I'm doing right now, and I feel that it is cumbersome for reuse.
    What I'm wondering is whether anyone has come up with a very elegant way to organize their classes' responsibilities around who populates the column model. I know I can subclass and fill it in the subclass, but it seems that I might NOT need to subclass: use the default, and have another class (maybe the JTable or a controller) populate the ColumnModel.
    Then I can get better reuse between TableModels and ColumnModels.
    charlie
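
    For illustration, here is a minimal sketch of that "external configurator" idea: a small class that populates a DefaultTableColumnModel (columns, headers, renderers) separately from the TableModel, so one shared model can be shown through different column setups. The column indices, headers, and renderer choices are hypothetical placeholders.

    import javax.swing.JTable;
    import javax.swing.table.DefaultTableCellRenderer;
    import javax.swing.table.DefaultTableColumnModel;
    import javax.swing.table.TableColumn;
    import javax.swing.table.TableColumnModel;
    import javax.swing.table.TableModel;

    // Populates column models independently of the data model, so the same
    // TableModel can be reused under different views without subclassing.
    public class ColumnConfigurator {
        // Compact view: only model column 0.
        public static TableColumnModel compactColumns() {
            DefaultTableColumnModel cm = new DefaultTableColumnModel();
            TableColumn name = new TableColumn(0, 200);   // model index, width
            name.setHeaderValue("Name");
            cm.addColumn(name);
            return cm;
        }

        // Detailed view: columns 0 and 1, the second with its own renderer.
        public static TableColumnModel detailedColumns() {
            TableColumnModel cm = compactColumns();
            TableColumn price = new TableColumn(1, 80);
            price.setHeaderValue("Price");
            price.setCellRenderer(new DefaultTableCellRenderer()); // swap per view
            cm.addColumn(price);
            return cm;
        }

        // The two-argument JTable constructor turns off auto-created columns,
        // so the supplied column model is used as-is.
        public static JTable compactTable(TableModel shared) {
            return new JTable(shared, compactColumns());
        }
    }

    With this, changing the look of a table means swapping (or editing) a TableColumnModel; the TableModel never changes.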

  • Table partitioning best practices

    Just looking for some ideas.
    We have a large information-warehouse table that we are looking to partition on a 'service_period' id. We have been asked to partition on every month of every year, which right now would create approximately 70 partitions. The other problem is that this is a rolling or dynamic partitioning scheme, meaning we will have a new partition value with each new month. I understand 11g has rolling (interval) partition functionality, but we are not there yet.
    So right now we are looking for a best practice for this scenario. We are considering: creating a partition per year and indexing on the service period within each partition; hash partitioning on the service period id (although that does not seem to group the service periods distinctly within each partition); or somehow creating the partitions dynamically via PL/SQL (creating the table with a basic partition and then running an ALTER TABLE to create the proper number of partitions within a list partition).
    I am also wondering if there is a point at which a table has too many partitions. I am thinking 70 may be a little extreme, but I'm not sure. We are going to do some performance testing, but it would be nice to hear from the community. We have 5,000,000 records over approximately 70 partitions, giving us about 70,000 records per partition. The other option would be to create the partitions based on year and then apply an index on the service period, to reduce the number of partitions.
    Thanks in advance,
    Bruce Carson

    This is not a lot of data, so the effort of partitioning may not be worth the benefit you receive. 70 partitions is not unreasonable. Do you have performance problems? Do the majority of your queries reference service_period? Do you have a lot of full table scans?
    Partitioning strategies depend on the queries you plan to run, your data distributions, and your purge/archival strategy.
    Think about whether you should pre-create partitions for years in advance. Think about whether you should put every partition in a separate tablespace for easy purging and archival. Think about what indexing you will use (can you use local indexes, or do you need global ones?). Think about what data changes are happening: are they all on the newest data? Can you compress older partitions with PCTFREE 0 to improve performance?
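
    To make the "partition by year, index locally" option concrete, here is a hedged JDBC sketch; the table name, column names, connection details, and the YYYYMM encoding of service_period_id are assumptions for illustration, not taken from this thread.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PartitionSetup {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/WH", "wh_user", "secret");
                 Statement st = con.createStatement()) {

                // Range-partition by year instead of by month: far fewer
                // partitions, and a new one is only needed once a year.
                st.execute(
                    "CREATE TABLE service_fact (" +
                    "  service_period_id NUMBER(6) NOT NULL, " +   // e.g. 200805
                    "  amount            NUMBER" +
                    ") PARTITION BY RANGE (service_period_id) (" +
                    "  PARTITION p2007 VALUES LESS THAN (200801)," +
                    "  PARTITION p2008 VALUES LESS THAN (200901)," +
                    "  PARTITION pmax  VALUES LESS THAN (MAXVALUE)" +
                    ")");

                // A LOCAL index keeps month-level lookups fast inside each
                // yearly partition and disappears with the partition on purge.
                st.execute("CREATE INDEX service_fact_period_ix " +
                           "ON service_fact (service_period_id) LOCAL");
            }
        }
    }

    Pre-creating a few years of partitions up front, as suggested above, just means more PARTITION clauses in the same DDL.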

  • Job (C) use best practices

    Experts,
    This question is in regard to best practices/common ways that various companies ensure the proper use of the Job (C) object in HCM systems. For example, if there are certain jobs in the system that should only be assigned to a position when that position is in certain areas of the business (i.e. belongs to specific organizational areas), how is this type of restriction maintained? Is it simply through business processes? Is there a way/relationship that can be assigned? Are there typical customizations and/or processes that are followed?
    I'm looking to begin organizing jobs into job families, and I'm currently trying to determine and maintain the underlying organization of our company's jobs in order to ensure this process is functional.
    Any insight, thoughts, or advice would be greatly appreciated.
    Best regards,
    Joe

    Hi Joe,
    You can embed the business-area info into the job description; this would be part of a best practice.
    What I mean is that:
    e.g. In your company you have 4 managers:
    HR Manager
    IT Manager
    Procurement Manager
    Production Manager
    Then, as part of the Best practice of SAP, you will have 4 positions (1 position per person)
    My advice is you should also have 4 jobs that describe the positions.
    Then, in order to group all managers, you may have one job family "Managers" and assign all four jobs to that family.
    This way you can report on all the managers as well as area-specific managers (e.g. the HR Manager).
    As far as I know, there is no standard relationship that holds business area info.
    For further info check table T778V via SM31.
    Regards,
    Dilek

  • Implementing a "login" using best practices

    I have a little bit of time now for my project where I'd like to refactor it a bit and take the opportunity to learn about the best practices to use with JSP/Servlets but I'm having some trouble thinking about what goes where, and how to organize things.
    Here's my current login functionality. I have not separated my "business logic" from my "presentation logic", as you can see in this simple starting example.
    index.html:
    <html>
    <body>
    <form action="login.jsp" method="post">
        <h1>Please Login</h1>
        User Name:    <input type="text" name="login"><br>
        Password: <input type="password" name="password"><br>
        <input type=submit value="Login">
    </form>
    </body>
    </html>
    login.jsp:
    <jsp:useBean id="db" type="database.DatabaseContainer" scope="session"/>
    <%
    if (session.getAttribute("authorized") == null || session.getAttribute("authorized").equals("no") || request.getParameter("login") != null) {
        String login = request.getParameter("login");
        String password = request.getParameter("password");
        if (login != null && db.checkLogin(login, password)) {
            // Valid login
            session.setAttribute("authorized", "yes");
            session.setAttribute("user", login);
        } else {
            // Invalid login
            session.setAttribute("authorized", "no");
            %><jsp:forward page="index.html"/><%
        }
    } else if (session.getAttribute("authorized").equals("no")) {
        // System.out.println("Refresh");
        %><jsp:forward page="index.html"/><%
    } else {
        System.out.println("Other");
    }
    %>
    <html>
    <body>
    <h1>Welcome <%= " "+session.getAttribute("user") %></h1>
    <!-- links to other JSPs are here -->
    </body>
    </html>
    What should I be doing instead? Should I make the form action a Servlet rather than a JSP? I don't want to be writing HTML in my servlets, though. Do I do the authentication in a servlet that I make the form action, and then have the servlet forward to some standard HTML page?

    Ok, so I'm starting things off simply by just converting what I have to use better practices. For now I just want to get the basic flow of how I transition from page to servlet to page.
    Here's my index.html page:
    <html>
    <body>
    <form action="login" method="post">
        <h1>Please Login</h1>
        Phone Number:    <input type="text" name="login"><br>
        Password: <input type="password" name="password"><br>
        <input type=submit value="Login">
    </form>
    </body>
    </html>
    I have a mapping that says login goes to LoginServlet, which is here:
    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;
    import db.DatabaseContainer;
    public class LoginServlet extends HttpServlet {
        public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
            HttpSession session = request.getSession();
            if (session.getAttribute("authorized") == null || session.getAttribute("authorized").equals("no") || request.getParameter("login") != null) {
                String login = request.getParameter("login");
                String password = request.getParameter("password");
                DatabaseContainer db = (DatabaseContainer) session.getAttribute("db");
                if (login != null && db.checkLogin(login, password)) {
                    // Valid login
                    session.setAttribute("authorized", "yes");
                    session.setAttribute("user", login);
                    // forward to home page
                } else {
                    // Invalid login
                    session.setAttribute("authorized", "no");
                    // forward back to login page
                }
            } else if (session.getAttribute("authorized").equals("no")) {
                // forward back to login page
            }
        }

        public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
            doGet(request, response);
        }
    }
    If I'm not logged in, I want to simply forward back to the login page for now. If I am logged in, I want to forward to my home page. If my home page is a simple HTML page though, then what's to stop a person from just typing in the home page URL and getting to it? I would think it would need to be a JSP page, but then the JSP page would have to have code in it to see if the user was logged in, and then I'd be back to where I was before.
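
    One common way to resolve that last worry (a hedged sketch, not the only approach): register a servlet Filter over the protected pages, so the login check lives in one place and the pages themselves stay free of scriptlet checks. The url-pattern and redirect target are hypothetical; the "authorized" session attribute matches the code above.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Mapped in web.xml to the protected pages (e.g. url-pattern /home.jsp),
    // this runs before each matching request, so the pages themselves never
    // need their own login check.
    public class AuthFilter implements Filter {
        public void init(FilterConfig config) throws ServletException { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            HttpSession session = request.getSession(false);   // don't create one
            boolean loggedIn = session != null
                    && "yes".equals(session.getAttribute("authorized"));
            if (loggedIn) {
                chain.doFilter(req, res);                      // let the request through
            } else {
                response.sendRedirect(request.getContextPath() + "/index.html");
            }
        }

        public void destroy() { }
    }

    Anyone typing the home page URL directly then gets bounced to the login form, and the home page can stay a plain JSP or HTML file.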

  • Azure table design best practice

    What's the best practice for designing tables in Azure Tables to optimize query performance?

    Hi Raj,
    When we design an Azure table, we need to consider the scalability of the table, and selecting the PartitionKey is the most important factor for scalability.
    Basically, we have two options, each with its own advantages and disadvantages:
    Option one: use the same PartitionKey value for all entities, so everything lives in a single partition.
    Option two: use a unique PartitionKey value for every entity.
    More information about how to get the most out of Windows Azure tables is available at the link below:
    http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
    There is also a detailed article which explains how to design a scalable partitioning strategy for Windows Azure Storage; please refer to the link below:
    http://msdn.microsoft.com/en-us/library/hh508997.aspx
    Best Regards,
    Kevin Shen
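
    To make the two options above concrete, here is a small SDK-free sketch of PartitionKey/RowKey schemes; the order/customer naming is a hypothetical example, not something from this thread. The PartitionKey decides which queries become single-partition scans and how widely load spreads.

    // Key-building helpers for a hypothetical "orders" table.
    public class OrderKeys {
        // Option one taken to the extreme: one PartitionKey for everything.
        // Every query hits a single partition, so throughput caps at one server.
        static String singlePartitionKey() {
            return "ORDERS";
        }

        // Option two taken to the extreme: a unique PartitionKey per entity.
        // Writes scale well, but range queries must fan out across partitions.
        static String perEntityPartitionKey(String orderId) {
            return orderId;
        }

        // A common middle ground: partition by customer, row key by reversed
        // timestamp, so "latest orders for customer X" is one partition scan
        // that returns the newest rows first.
        static String[] byCustomer(String customerId, long orderMillis, String orderId) {
            String rowKey = String.format("%019d_%s", Long.MAX_VALUE - orderMillis, orderId);
            return new String[] { customerId, rowKey };
        }
    }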

  • Using Best Practices personalization tool

    hello,
    We wish to use the Best Practices personalization tool for customer-specific data for the Baseline Package.
    I do not understand from the documentation whether it is possible to use it after installing the Baseline, or whether it has to be done simultaneously (meaning the personalized data has to be ready in the files before starting the Baseline installation)?
    Thank you
    Michal

    Hi,
    Please Ref:
    http://help.sap.com/bp_bl603/BL_IN/html/index.htm
    Your personalized files need to be prepared before implementation, as you will be using them during the installation process.
    The XML file and the TXT files you create with the personalization tool are used to upload the scenario to the system; otherwise the defaults will be uploaded.
    Also refer to note 1226570 (here I am referring to IN); you can check the same for other country versions as well.
    Thanks & Regards,
    Balaji.S

  • External Table Authorization Best practices

    Hi,
    I am working on OBIEE external table authorization. I was able to implement it successfully for one project (catalog). The fields of the authorization table (AuthTable) are:
    Windows_ID, Employeeid, Name, EmpEMail, GroupName, Process_ID, Process_Name, Portal_Path
    Here, as per the requirement, a user should see data for only a few processes. So I added a column for Process_ID and created an init block in the repository with a query like:
    SELECT 'PROCESS_ID', AuthTable.Process_id
    FROM AuthTable
    WHERE UPPER(AuthTable.AD_ID) = UPPER(':USER')
    Then, for the user groups, I applied filters on all the tables, e.g. for every logical table I applied the filter:
    Dim_Process."Process ID" = VALUEOF(NQ_SESSION."PROCESS_ID")
    I checked the data and everything is correct. But my question is:
    We have many projects/catalogs for which the filter criteria will be different, so should we insert a new column for each criterion in the SAME AuthTable, or is there another, better way to maintain this? If we maintain one table for all the projects/catalogs it will be very messy; I would prefer to keep different tables for different projects/catalogs, as their data marts are different.
    The problem is that for all other session variables we may use different init blocks and hence different tables, BUT for PORTALPATH there should be only one init block. So for PORTALPATH's sake alone, do we need to keep everything in the same table?
    Tell me if I am wrong somewhere in my understanding, or if there is a better way to do it.
    Regards
    Saurabh

    Hi,
    Pls refer to this link. Kumar explained it very clearly
    http://obieeblog.wordpress.com/category/obiee/obiee-security/
    Pls award points, if helpful
    Regards,
    Sarat Nallapati

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all cost. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say it simply cannot be that easy for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.

  • Is it a best practice to have a template with one master page?

    I am a newbie FM 11 writer and am cleaning up some unorganized books. Should I copy one set of master pages to all files in the book? Currently my TOC and certain other files have unique master pages. I would like to set up our books using best practices and would like input from the community. Thanks.

    There are two schools of thought on this. The specific sub-template approach or the "kitchen sink" approach.
    In the "kitchen sink" (i.e. everything, including the...) approach, the FM template is loaded with everything required for the project in a single file. It's simple to deploy, import it to all files and you're good to go. However, the author may have to deal with all sorts of superfluous tags and page layouts in some specific file types, like the cover pages, TOC, Index and other generated files. The onus is on the author to select the correct items to use from the multitude of choices.
    The sub-template approach is a modular approach where one creates the various components in separate template files, e.g. paragraph and character tags, tables, page layouts, etc., and combines them to create specific templates for the various book components. These component-combined templates contain only the minimum that is required for each type of document component. This is a lego-like approach and it provides more flexibility (IMHO) for modifying, updating and creating new templates. It is easier (perhaps less intimidating would be a better term) for the author to use, as their choices are much more limited in any given context. However, they do have to apply the correct templates to the specific book components.
    In all cases, you need to document the usage of all components in the template(s), so authors will know the intent of each and every tag, table, style, page layout, etc.

  • EFashion sample Universes and best practices?

    Hi experts,
    Do you all think that the eFashion sample Universe was developed based on the best practices of Universe design? Below is one of my questions/problems:
    A universe is designed to hide technical details and answer all valid business questions (queries/reports); for nonsense questions, it will show 'incompatible' etc. In the eFashion sample, I tried to compose a query to answer "for a period of time, e.g. from 2008.5 to 2008.9, for each week and each product (article), its MSRP (sales price), sold price, margin, quantity sold and promotion flag". I grabbed Product.SKUnumber, week from Time period, Unit Price MSRP from Product, Sold at (unit price) from Product, Promotions.promotion, and Margin and Quantity sold from Measures into the Query Panel. It gives me an 'incompatible' error message when I try to run it.
    I think the whole sample (from the database data model to the universe schema structure/joins) is flawed. In the Product_promotion_facts table, it seems that if a promotion lasts for more than one week, the weekid will be the starting week and the duration will indicate how long it lasts. With this design, answering "what promotions run in what weeks" is not easy, because you need to join Product_promotion_facts with the Time dimension using "time.weekid between p_prom.weekid and p_prom.weekid+duration" (assuming weekid is in sequence) instead of a simple "time.weekid=p_prom.weekid". The weekid joins between Shop_facts, Product_promotion_facts and Calendar_year_lookup are very confusing, because one is about "the week the sales happened" and the other "the week the promotion started". No tool can be smart enough to resolve this ambiguity automatically.
    Then there is the shortcut join between Shop_facts and Product_promotion_facts: it is based on the article id alone. Obviously the two have to be joined on both article and time (using between/and, not the simple weekid=weekid in this design); otherwise the join doesn't make sense (a sale of one article on one day joins to all the promotions for that article of all time?).
    What do you think?
    thanks.
    Edward

    You seem to have the idea that finding out whether a project uses "best practices" is the same as finding out whether a car is blue. Or perhaps you think there is a standards board somewhere which reviews projects for the use of "best practices".
    Well, it isn't like that. The most cynical viewpoint is that "best practices" is simply an advertising slogan used by IT consultants to make them appear competent to their prospective clients. But basically it's a value judgement. For example using Hibernate may be a good thing to do in many projects, but there are projects where it would not be a good thing to do. So you can't just say that using Hibernate is a "best practice".
    However it's always a good idea to keep your source code in a repository (CVS, Subversion, git, etc.) so I think most people would call that a "best practice". And you could talk about software development techniques, but "best practice" for a team of three is very different from "best practice" for a team of 250.
    So you aren't going to get a one-paragraph description of what features you should stick in your project to qualify as "best practices". And you aren't going to get a checklist off the web whereby you can rate yourself for "best practices" either. Or if you do, you'll find that the "best practice" involves buying something from the people who provided the checklist.

  • Best practice in implementation of SEM-CPM

    Does someone have experience implementing SEM-CPM using Best Practices? And if so, does it reduce implementation time?

    We should be able to adopt the best practices when the software finally gets integrated into NetWeaver.
    Ravi Thothadri

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBE's
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling the spare parts. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock and set MRP to "Reorder Point Planning". This way you can avoid petty-cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned as well as unplanned maintenance will not be delayed.
    3. By doing GI based on reservation, qty can be tracked against the order & equipment.
    As this question is MM & WM related, they can give better clarity on this.
    Regards,
    Maheswaran.
