Best practice need

Hello,
I need to build a report with 10 key figures, analyzed by period, showing for the selected month (from a variable) the plan, the fact, and the deviation between plan and fact; the same three columns cumulated from the 1st month through the selected month; and the previous year's fact for the same period.
The report should be built on SAP Query.
Plan and fact values are stored in the characteristic KFTYPEID as Plan and Fact.
                 Month                       From 1st till selected month    Previous year
                 Plan   Fact   Deviation     Plan   Fact   Deviation         Fact
Key Figure 1
Key Figure 2
Key Figure 3
How can I build it?
Do you have any ideas?
If I move the key figures into columns, the report becomes very long, but all the data is included in it.
If I choose a crosstab, I can't add columns after KFTYPEID, and I can't get the previous-year data or the data for the "from 1st till selected month" columns.
Regards,
Romano
Edited by: Roman Safaryants on Mar 12, 2009 3:52 PM

see
http://msdn.microsoft.com/en-us/library/bb669066(v=vs.110).aspx
Visakh

Similar Messages

  • Best Practice Needed: Global Application Properties...

    Hi All,
    When developing a web-based application that reads certain configurable parameters from .properties files, I usually put the appropriate code in a static block in the appropriate Java class, storing the property in a static final constant. For example, a DBConnection class might have a static block that reads the driver, username, and password from a properties file.
    My question is, what are some "best practice" techniques for accessing and storing such parameters for use in an application? Are all global properties initialized in one class? at the same time? only when first needed?

    Overall, I would say that your approach is fine. Personally, I load properties through a single class, something like PropertyReader, and have the different classes initialize their static fields via a get method on that class, like getProperty("db.user"). I prefer to load them via a single class because I can place all of my I/O trapping in one location; it is easier to implement new security measures and, if necessary, easier to support internationalization.
    I initialize all properties once, at startup, into a wrapper object, typically ResourceBundle or Properties (although Hashtable or anything else would be suitable). I believe that it is best to initialize all properties at the same time, at startup, because the cost of storing properties that may not be used is going to be less than the cost of making multiple I/O calls to load properties as needed. In other words, you will almost always take a bigger performance hit by loading properties only when a request for a key is received, rather than loading them all at once.
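    The load-once approach described above can be sketched as follows. The class name PropertyReader, the resource name app.properties, and the keys are illustrative assumptions, not code from the original discussion:

    ```java
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    // Hypothetical PropertyReader: all properties are read once, in a static
    // initializer, so every caller shares the same in-memory snapshot and no
    // further I/O happens per lookup.
    public final class PropertyReader {
        private static final Properties PROPS = new Properties();

        static {
            // "/app.properties" is an assumed classpath resource name.
            try (InputStream in =
                    PropertyReader.class.getResourceAsStream("/app.properties")) {
                if (in != null) {
                    PROPS.load(in);
                }
            } catch (IOException e) {
                // Fail fast at startup rather than on first use.
                throw new ExceptionInInitializerError(e);
            }
        }

        private PropertyReader() { }

        public static String getProperty(String key) {
            return PROPS.getProperty(key);
        }

        public static String getProperty(String key, String fallback) {
            return PROPS.getProperty(key, fallback);
        }
    }
    ```

    A class can then initialize its constants with something like `private static final String DB_USER = PropertyReader.getProperty("db.user");`, keeping all the I/O trapping in one place as described above.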

  • Oil and gas best practices - need information

    Colleagues!
    Help me find some information about SAP oil and gas best practices.
    I am interested in .ppt presentations and Word documents that in some way describe best practices for the oil and gas industry.
    Thanks in advance for your help.

    Hi,
    Can you please check this link http://www.sap.com/industries/oil-gas/index.epx.
    Hope this helps you.
    Rgds
    Manish

  • Best Practices needed -- question regarding global support success stories

    My customer has a series of Go Lives scheduled throughout the year and is now concerned about an October EAI (Europe, Asia, International) go live.  They wish to discuss the benefits of separating a European go Live from an Asia/International go live in terms of support capabilities and best practices.  The European business is definitely larger and more important than the Asia/International business and the split would allow more targeted focus on Europe.  My customer does not have a large number of resources to spare and is starting to think that supporting the combined go live may be too much (i.e., too much risk to the businesses) to handle.
    The question for SAP is regarding success stories and best practices.
    From a global perspective, do we recommend this split? Do most of our global customers split a go-live in Europe from a go-live in Asia/International (which is Australia, etc.)? Can I reference any of these customers? If the EAI go-live is not split, what is absolutely necessary for success? For example, if a core team member plus local support is required in each location, then this may not be possible with the resources they have.
    I would appreciate any insights, best practices, success stories, or "war" stories you might be aware of.
    Thank you in advance and best regards,
    Barbara

    Hi, this is purely based on customer requirements.
    I have a friend in an organization that went live in 38 centers at the same time.
    With the latest networking technologies, distance does not make any difference.
    The organization where I currently work has global business locations. In my current organization the go-live was in phases. They went live first in the region where the business was greatest, because this region was their largest and most important as far as revenue was concerned. Then, after stabilizing this region, a group of consultants went to the rest of the regions for the go-live there.
    Both of the companies referred to above are running SAP successfully and are leading SAP partners. Unfortunately I am not authorized to give you the names of the organizations as references, as you requested.
    But in your case, if you have a shortage of manpower, you can do it in phases: first go live in the European market, then go live in the other regions in phases.
    Warm Regards

  • Best practice needed to use PreparedStatements

    Dear Experts;
    I am using plain JDBC to interact with the database. It works fine, but what I am doing is creating a Connection and PreparedStatement in every method (wherever I need DB interaction) and closing the Connection, ResultSet, and PreparedStatement instances.
    What I want is to not have to create/close connections myself. If I want to insert a row into the DB I can achieve this as below (but here I am using Statement, not PreparedStatement; what will the case be with PreparedStatement?)
    public static boolean insert(String query) {
        // create a Connection
        // create a Statement
        // execute statement.executeUpdate(query)
        // if the insert succeeds, set returnValue = true
        // close the Connection and Statement
    }

    public void insertRecord(User user) {
        if (insert("insert into user values (" + user.getUserId() + "," + user.getUserId() + ")")) {
            // ...
        } else {
            // ...
        }
    }
    What practices are you using with PreparedStatement?
    I want to make this insert method generic for every table (and I want to use PreparedStatements), and similarly the fetch methods too.
    thanks in advance
    - Tahir

    "I am using plain JDBC to interact with the database. It works fine, but what I am doing is creating a Connection and PreparedStatement in every method (where I need DB interaction) and closing the Connection, ResultSet, and PreparedStatement instances."
    Refactor into utility classes and methods, or consider third-party DAO/ORM tools such as Hibernate; they will take care of that for you.
    "What I want is to not have to create/close connections myself. If I want to insert a row into the DB I can achieve this as above (but there I am using Statement, not PreparedStatement; what will the case be with PreparedStatement?)"
    Not closing them is bad practice. Consider connection pooling and reusable SQL query strings for PreparedStatement.
    "I want to make this insert method generic for every table (and I want to use PreparedStatements), and similarly the fetch methods too."
    One word again: refactor.
    If your application is getting bigger, then it is really worth taking a look at a solid ORM solution. Hibernate is great.
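    For the insert question itself, a minimal sketch of the PreparedStatement version might look like this. The UserDao class, table, and column names are assumptions for illustration, and in a real application the Connection would come from a pool:

    ```java
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Hypothetical DAO sketch: the SQL string is a reusable constant with ?
    // placeholders, and values are bound with setXxx() calls instead of string
    // concatenation (which also protects against SQL injection).
    public class UserDao {
        static final String INSERT_USER =
                "INSERT INTO users (user_id, user_name) VALUES (?, ?)";

        private final Connection connection; // obtain from a pool in practice

        public UserDao(Connection connection) {
            this.connection = connection;
        }

        public boolean insertUser(int userId, String userName) throws SQLException {
            // try-with-resources closes the statement even when execution fails
            try (PreparedStatement ps = connection.prepareStatement(INSERT_USER)) {
                ps.setInt(1, userId);
                ps.setString(2, userName);
                return ps.executeUpdate() == 1;
            }
        }
    }
    ```

    Because the SQL string is a constant, a pooling driver can reuse the prepared plan across calls; only the bound values change per insert.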

  • Best practice needed: how to dynamically change the rowset for a dataTableModel

    Hello creator folk,
    I need an advice on the following problem.
    I started from the insertUpdateDelete tutorial, and I stuck to the very first part: creation of the first page with a dropdown and a table.
    Now I add a second dropdown to add another control level on my table, on tripType for example. Simple; it works without problems.
    My problem: my dropdowns have an "off" value, that is, a value indicating that filtering on this value should be disabled. For example, I want to filter the displayed data by person, by tripType, or by both.
    As a result, we now have three different requests: one with personId = ?, one with tripTypeId = ?, and the last one with both. But the displayed table is the same.
    I have already done such a page by using the "rendered" option: my JSP contains the same table three times, each with a dedicated rowset, but only one is rendered at a time. But I don't like this solution; it is hell to maintain, and I don't want to imagine what happens if my client asks for a third dropdown!
    Another possibility: create a separate page for each combination. Well, much the same as the previous one.
    Is it possible at runtime to change the command associated with a rowset, and therefore with the linked RowSetDataModel? I tried the following way:
    In the constructor of the page:
        if (isPersonAndTripType()) {
            myRowSet.setCommand(REQUEST_PERSON_TRIPTYPE);
            myDataTableModel.setObject(1, this.getSessionBean1().getPersonId());
            myDataTableModel.setObject(2, this.getSessionBean1().getTripTypeId());
        } else if (isTripTypeOnly()) {
            ewslive_lasteventIlotRowSet.setCommand(REQUEST_TRIPTYPE);
            myDataTableModel.setObject(1, this.getSessionBean1().getTriptypeId());
        } else {
            // the default rowset, no change
            myDataTableModel.setObject(1, this.getSessionBean1().getPersontId());
        }
        myDataTableModel.execute();
    And in each dropdown_processValueChange, after updating tripId or personId:
        if (isPersonAndTripType()) {
            myRowSet.setCommand(REQUEST_PERSON_TRIPTYPE);
            myDataTableModel.setObject(1, this.getSessionBean1().getPersonId());
            myDataTableModel.setObject(2, this.getSessionBean1().getTripTypeId());
        } else if (isTripTypeOnly()) {
            ewslive_lasteventIlotRowSet.setCommand(REQUEST_TRIPTYPE);
            myDataTableModel.setObject(1, this.getSessionBean1().getTriptypeId());
        } else {
            myRowSet.setCommand(REQUEST_PERSON);
            myDataTableModel.setObject(1, this.getSessionBean1().getPersontId());
        }
        myDataTableModel.execute();
    On the first run (one person selected by default), everything is OK. But when I change a dropdown I get an exception: the page constructor is called, all OK. dropdown_processValueChange is called, the correct request is linked to the dataTableModel, and the function returns normally; then the exception occurs:
    Exception Details:  javax.faces.el.EvaluationException
      javax.faces.FacesException: java.sql.SQLException: [OraDriver] Not on a valid row.
    Possible Source of Error:
       Class Name: com.sun.faces.el.ValueBindingImpl
       File Name: ValueBindingImpl.java
       Method Name: getValue
       Line Number: 206
    Help needed!!!

    I've done something similar in my current app, the only difference I see being that I retrieve the value from the dropdown directly rather than going through the sessionbean as I don't need to save the selection.
    I've managed to iron out all the bugs and it works well now. Not near my development machine or I'd post the code. I do have a couple of questions:
    Why do you have the if/else setup in the constructor? If the page is being called for the first time I don't see why you need it.
    Why do you use ewslive_lasteventIlotRowSet.setCommand(REQUEST_TRIPTYPE); instead of myRowSet.setCommand(REQUEST_TRIPTYPE);?
    I think this is causing your problem, as you haven't shown where you set the data cache for myDataTableModel to ewslive_lasteventIlotRowSet instead of myRowSet.
    You can also set all of your dropdowns to use the same event handler, which cuts down on the duplicate code :)
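    One way to share a single handler is to extract the three-way branch into one place and have the constructor and every processValueChange call it. Sketched framework-neutrally below; QuerySelector, its method names, and the command values are hypothetical, not Creator APIs:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical helper: decides which query command to run and which
    // parameters to bind, based on which dropdowns are active, so the branch
    // is written once instead of duplicated in every event handler.
    public class QuerySelector {
        // Names mirror the REQUEST_* constants in the post; values are made up.
        public static final String REQUEST_PERSON = "personQuery";
        public static final String REQUEST_TRIPTYPE = "tripTypeQuery";
        public static final String REQUEST_PERSON_TRIPTYPE = "personAndTripTypeQuery";

        public static String selectCommand(boolean personOn, boolean tripTypeOn) {
            if (personOn && tripTypeOn) {
                return REQUEST_PERSON_TRIPTYPE;
            } else if (tripTypeOn) {
                return REQUEST_TRIPTYPE;
            }
            return REQUEST_PERSON; // default query filters by person, as in the post
        }

        public static List<Object> selectParams(Object personId, Object tripTypeId,
                                                boolean personOn, boolean tripTypeOn) {
            List<Object> params = new ArrayList<>();
            if (personOn && tripTypeOn) {
                params.add(personId);
                params.add(tripTypeId);
            } else if (tripTypeOn) {
                params.add(tripTypeId);
            } else {
                params.add(personId);
            }
            return params;
        }
    }
    ```

    The shared handler would then call setCommand with the selected command, bind each element of selectParams with setObject, and execute once, always against the same rowset, which avoids mixing two different rowsets in different branches.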

  • Best practice followed on CONTRACT

    Hello All
    What best practices need to be followed when changing material master data?
    Step 1: A contract is created for a material and a PO is released against the contract. Then, some time later:
    Step 2: The material master team changes some important piece of data on the MATERIAL, such as the order unit, a deletion flag, or the material group.
    Step 3: We do the same for the materials in the contract: we deactivate the corresponding line and create a new line item, so that the new line item picks up the data from the material master.
    What other changes might the material master team make to the material master, so that I can inform the contract team to do the same?
    Which actions by the material master team on MATERIAL are relevant for contract data? I want to alert both the CONTRACT and MATERIAL master teams so that communication is seamless and everything stays perfectly in sync.
    Muthu

    Please check these answered links:
    Contract best practices
    good practices in SAP Value Contract
    Best Practice while creating Contract, Purchase Requisition, Purchase Order
    Best Practice unit of measurement usage in CONTRACT.

  • Best Practices for APO Production Support

    What best practices need to be adopted for APO Production Support in the DP, SNP, PP/DS, CIF, APO-Basis, and APO-ABAP areas?

    I know this is not the standard way to reach you, but you answered a question long back about /SAPAPO/AMON1 - Alert Monitor. Can this alert monitor be configured to show up in the Universal Worklist (UWL) in the portal? If so, would these alerts show up in the alert inbox of the UWL just as other SAP alerts do? (The other alerts I speak of are those referenced here: http://help.sap.com/saphelp_nw04s/helpdata/en/3f/81023cfa699508e10000000a11402f/frameset.htm.)
    Or are these alerts two totally different types of functionality with the same name (ALERT)?
    Thanks
    Ted Smith

  • Best Practices in SLD

    What best practices need to be followed in the SLD?
    Currently our SLDs are maintained by Basis, and our development team has only read access. Is this normal?
    Thanks

    Though it is very irritating for an XI developer, from a best-practice perspective it is better if not everyone in development has access to modify the SLD.
    SLD configuration is generally a one-time activity; changes are only made when you need to add or delete systems in your system landscape, which should be done by the Basis team.
    Cheers

  • Need advice on best practice when using TopLink with external transactions

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions, so we can perform database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to help comment on whether these recommendations are indeed valid and in line with best practice. And for folks that have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes, the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As is generally the case with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Need best practice when accessing UCM content after it has been transferred

    Hi All,
    I have a business requirement where I need to auto-transfer the content to another UCM when this content expires in the source UCM.
    This content needs to be deleted after it spends a certain duration in the target UCM.
    Can anybody advise me the best practice to do this in the Oracle UCM?
    I have set up an expiration date and am trying to auto-replicate the content to the target UCM once the content reaches the expiration date.
    What is the best practice for accessing the content once it is in the target UCM?
    Any help in this case would be greatly appreciated.
    Regards,
    Ashwin

    SR,
    Unfortunately temp tables are the way to go. In Apex we call them collections (not the same as PL/SQL collections) and there's an API for working with them. In other words, the majority of the legwork has already been done for you. You don't have to create the tables or worry about tying data to different sessions. Start your learning here:
    http://download.oracle.com/docs/cd/E14373_01/appdev.32/e11838/advnc.htm#BABFFJJJ
    Regards,
    Dan
    http://danielmcghan.us
    http://sourceforge.net/projects/tapigen
    http://sourceforge.net/projects/plrecur

  • Best practice for frequently needed config settings?

    I have a command-line tool I wrote to keep track of (primarily) everything I eat and drink in the course of the day.  Obviously, therefore, I run this program many times every day.
    The program reads a keyfile and parses the options defined therein.  It strikes me that this may be awfully inefficient to open the file every time, read it, parse options, etc., before even doing anything with command-line input.  My computer is pretty powerful so it's not actually a problem, per se, but I do always want to become a better programmer, so I'm wondering whether there's a "better" way to do this, for example some way of keeping settings available without having to read them every single time.  A daemon, maybe?  I suspect that simply defining a whole bunch of environment variables would not be a best practice.
    The program is written in Perl, but I have no objection to porting it to something else; Perl just happens to be very easy to use for handling a lot of text, as the program relies heavily on regexes.  I don't think the actual code of the thing is important to my question, but if you're curious, it's at my github.  (Keep in mind I'm strictly an amateur, so there are probably some pretty silly things in my code.)
    Thanks for any input and ideas.

    There are some ways around this, but it really depends on the type of data.
    Options I can think of are the following:
    1) read a file at every startup as you are already doing.  This is extremely common - look around at the tools you have installed, many of them have an rc file.  You can always strive to make this reading more efficient, but under most circumstances reading a file at startup is perfectly legitimate.
    2) run in the background or as a daemon which you also note.
    3) similar to #1, save the data in a file, but instead of parsing it each time, save it as a binary.  If your data can all be stored in some nice data structure in the code, in most languages you can just write the block of memory occupied by that data structure to a file; then on startup you just transfer the contents of the file to a block of allocated memory.  This is quite doable - but for the vast majority of situations this would be a bad approach (IMHO).  The data have to be structured in such a way that they occupy one continuous memory block, and depending on the size of the data block this in itself may be impractical or impossible.  Also, you'd need a good amount of error checking, or you'd simply have to "trust" that nothing could ever go wrong in your binary file.
    So, all in all, I'd say go with #1, but spend time tuning your file read/write procedures to be efficient.  Sometimes a lexer (gnu flex) is good for this, but oftentimes it is also overkill, and a well-written series of if(strncmp(...)) statements will be better*.
    Bear in mind, though, this is from another amateur.  I code for fun - and some of my code has found use - but it is far from my day job.
    edit: *note - that is a C example, and the flex library is easily used from C.  I'd be surprised if there are no Perl bindings for flex, but I very rarely use Perl. As an afterthought, I'd be surprised if flex is even all that useful in Perl, given Perl's built-in regex abilities.  After-afterthought, I would not be surprised if Perl itself were built on some version of flex.
    edit2: also, I doubt environment variables would be a good way to go.  That seems to require more system calls and more overhead than just reading from a config file.  Environment variables are a handy way for several programs to be able to access/change the same setting - but for a single program they don't make much sense to me.
    Last edited by Trilby (2012-07-01 15:34:43)
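    Option #3 above can be sketched as follows (in Java rather than Perl, purely for illustration; the ConfigCache class and the key=value format are assumptions): parse the text keyfile once, then save the parsed map as a binary cache that later startups can load without re-parsing.

    ```java
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.HashMap;

    // Hypothetical sketch: a parsed key=value config is serialized to a binary
    // cache file; subsequent runs deserialize it instead of re-parsing the text.
    public class ConfigCache {
        // Stand-in for the real keyfile parser: one "key = value" pair per line.
        public static HashMap<String, String> parse(String text) {
            HashMap<String, String> map = new HashMap<>();
            for (String line : text.split("\n")) {
                int eq = line.indexOf('=');
                if (eq > 0) {
                    map.put(line.substring(0, eq).trim(),
                            line.substring(eq + 1).trim());
                }
            }
            return map;
        }

        public static void saveBinary(HashMap<String, String> map, Path cache)
                throws IOException {
            try (ObjectOutputStream out =
                    new ObjectOutputStream(Files.newOutputStream(cache))) {
                out.writeObject(map);
            }
        }

        @SuppressWarnings("unchecked")
        public static HashMap<String, String> loadBinary(Path cache)
                throws IOException, ClassNotFoundException {
            try (ObjectInputStream in =
                    new ObjectInputStream(Files.newInputStream(cache))) {
                return (HashMap<String, String>) in.readObject();
            }
        }
    }
    ```

    As the answer warns, the binary cache still needs error checking: for example, fall back to re-parsing the text file whenever deserialization fails or the text file is newer than the cache.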

  • Captivate 4.0 - Need best practice tips on panning, templates, resolution, etc.

    Hi all,
    I've been searching all over in case someone has already published best practices for Captivate 4.0 (particularly the new/enhanced features), but I'm coming up empty. My client's in-house graphics person has been tasked with creating a template for us to use for a number of software simulations. He's running into these challenges:
    - Our SWF needs to be 800 x 600 and he created the template for this size, but the app won't fit.
    - He's found the panning feature to produce very choppy, disappointing results. Tips?
    - His suggestion is we capture at 1024 x 768, then resize to 800 x 600, then copy/paste the slides into his 800 x 600 template. Would we be better off recreating the template at 1024 x 768 and not resizing until the final output is generated for each tutorial? His concern is that our template will then become larger, making it harder for us to send back and forth as changes are made, etc.
    Any other suggestions for how to deal with the resolution issue, how best to take advantage of templates, etc.?
    Thanks,
    Katie Carver
    Senior Technical Writer
    Docs-to-You, LLC

    Hi there
    In my own opinion, Panning is a nice attempt, but just doesn't cut the mustard. It's nowhere near as good as the panning one sees with Camtasia Studio.
    I might suggest combining Camtasia with Captivate for the ultimate development set. There are aspects Captivate shines in when compared with Camtasia, and there are aspects Camtasia shines in when compared with Captivate. So I say if you can afford it, go for it!
    Now I know that both packages are sort of pricey and not everyone can afford both. In that case you might want to try Jing, which is free to use. I've not looked very deeply at it, but it may offer some of what Camtasia does. You could then use that for your panning and enhance Captivate that way.
    Cheers... Rick
    Helpful and Handy Links
    Captivate Wish Form/Bug Reporting Form
    Adobe Certified Captivate Training
    SorcerStone Blog
    Captivate eBooks

  • Need best practice configuration document for ISU CCS

    I am working on an ISU CCS project. I need a best-practice configuration document for:
    Contract management
    Collections management
    Invoicing
    Work Management as it relates to ERP Billing.
    Thanks
    Priya
    priyapandey.sapcrmatgmailcom

    Which version are you setting up and what are the requirements? IF you are discussing the use of NIC bonding for high availability beginning in 11.2.0.2 there is a concept of "High Availability IP" of HAIP as discussed in the pre-installation chapters,
    http://docs.oracle.com/cd/E11882_01/install.112/e22489/prelinux.htm, section 2.7.1 Network Hardware Requirements.
    In essence, using HAIP eliminates the need to use NIC bonding to provide for redundancy.

  • Expert opinion needed: Best practices to handle huge rowsets on UI

    Hi All,
    I need to know what are the best practices from Oracle to handle huge rowsets on the UI.
    My ADF 11g app is a custom monitoring cum reporting tool for a highly active integration solution.
    The user can give me selection criteria, say, show transactions between yesterday and tomorrow, and our highly active transactional system may return up to 5000 records.
    I am showing these records in a tabular format, and since pagination is not there we are depending on auto-scrolling, which is kind of slow.
    So please advise me what options come to your minds for showing such rowsets and informing users about them.
    I am aware that ideally the UI should not have more than a couple hundred records, but our use case does not adhere to that.
    Thanks

    "since pagination is not there"
    I'm not sure what you mean by this; the ADF Faces table does pagination when you scroll. So if your business service has 5000 records but the rows property of your table is set to 25, you'll just fetch 25 records to the client.
    When you scroll down you'll fetch another 25.
    This type of thing is automated for ADF BC data controls - and you can control the range set.
    We also generate the code needed for EJB Facades to do this with JPAs.
    If you have your own Java class as a data source, you'll need to implement this pagination on the business-service side; see example 37 here: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html
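    When the data source is your own Java class rather than an ADF BC data control, the pagination the reply describes amounts to asking the database for one page at a time. A minimal sketch follows; PageQuery is a hypothetical name, and the OFFSET/FETCH syntax assumes Oracle 12c or another database that supports it:

    ```java
    // Hypothetical sketch of server-side range paging: instead of fetching all
    // 5000 rows, each scroll event asks the database for the next page only.
    public class PageQuery {
        public static String pageSql(String baseQuery, int pageSize, int pageIndex) {
            // pageIndex is zero-based; the base query needs a stable ORDER BY
            // so that consecutive pages don't overlap or skip rows.
            return baseQuery
                    + " OFFSET " + (pageIndex * pageSize) + " ROWS"
                    + " FETCH NEXT " + pageSize + " ROWS ONLY";
        }
    }
    ```

    With a range size of 25, page 2 becomes `... OFFSET 50 ROWS FETCH NEXT 25 ROWS ONLY`, so each scroll transfers 25 rows instead of the full result set, which is the effect the data-control pagination gives you automatically.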
