Backend design

Hi,
The scenario is that we have the backend design done in two ways.
Let me explain both structures completely.
1. We have 2 layers. In the 1st layer we have 3 ODS objects with full update, and on top of them 3 InfoCubes with delta update from the underlying ODS objects. The data from the cubes in the lower layer then goes to the 3 cubes in the upper layer, and a MultiProvider is built on these upper-layer cubes, on which reporting is done.
2. Here also we have 2 layers. In the lower layer we have 3 ODS objects with full update, then another layer with 3 ODS objects with delta update, and these 3 ODS objects feed 3 cubes in the upper layer. The rest remains the same.
In short, we have cubes with delta update in the first structure and ODS objects with delta update in the second structure.
Can someone please explain which of these is better, and why?
Please reply.
Regards,
Suchitra

Hi,
As per your scenario, the cubes receive the delta (with reporting via a MultiProvider) in the first case, and the ODS objects receive the delta in the second case. The differences between the two cases can be categorized in two ways:
1. Architecture-wise
2. Reporting-wise
Architecture-wise:
We will have the following differences.
One major difference is the manner of data storage. In an ODS, data is stored in flat tables; by flat we mean ordinary transparent tables. A cube, in contrast, is composed of multiple tables arranged in a star schema, joined by SIDs. The purpose is to do multi-dimensional reporting.
Another difference: in an ODS you can update an existing record, given the key. In cubes there is no such thing: they accept duplicate records and, during reporting, sum the key figures up. There is no editing of previous record contents, only adding. With an ODS, the procedure is: update if existing (based on the table key), otherwise add the record.
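The two update behaviours can be sketched in a few lines of illustrative Python (not BW code; the document/item key and quantity field are made-up names):

```python
# ODS-style load: one record per key - update if existing, otherwise add.
def load_ods(ods, records, key_fields):
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        ods[key] = rec  # the last record loaded for a key wins
    return ods

# Cube-style load: every record is kept; key figures are summed at query time.
def load_cube(cube, records):
    cube.extend(records)
    return cube

def report_cube_qty(cube, key_fields):
    totals = {}
    for rec in cube:
        key = tuple(rec[f] for f in key_fields)
        totals[key] = totals.get(key, 0) + rec["qty"]
    return totals

records = [
    {"doc": "4711", "item": 10, "qty": 5},
    {"doc": "4711", "item": 10, "qty": 8},  # same key loaded again
]

ods = load_ods({}, records, ["doc", "item"])
cube = load_cube([], records)
print(ods[("4711", 10)]["qty"])                              # 8 - overwritten
print(report_cube_qty(cube, ["doc", "item"])[("4711", 10)])  # 13 - summed
```

The same two records end up as a single row with value 8 in the "ODS" but report as 13 in the "cube", which is exactly why repeating a full load into a cube double-counts while an overwriting ODS stays correct.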
Reporting-wise:
Basically, you use ODS objects to store the data at document/item/schedule-line level, whereas in the cube you will have only more aggregated data (on material, customer, ...). So you can do your reporting on the already aggregated data and, if necessary, do detailed reporting on the ODS object. Additionally, ODS objects will provide you a delta in case your DataSource doesn't provide one: just use overwrite mode for all fields (characteristics and key figures) in the update rules, and the ODS will take care of the rest.
InfoCubes are multi-dimensional objects that contain fact tables and dimension tables, whereas an ODS is not a multi-dimensional object: there are no fact tables and dimension tables, it consists of flat transparent tables.
In InfoCubes there are characteristics and key figures, but in an ODS there are key fields and data fields; we can keep non-key characteristics in the data fields.
Sometimes we need detailed reports, which we can get through the ODS. ODS objects are used to store data in granular form, i.e. the level of detail is higher; the data in the InfoCube is in aggregated form.
From a reporting point of view, the ODS is used for operational reporting, whereas InfoCubes are used for multi-dimensional reporting.
ODS objects can be used to merge data from one or more InfoSources; InfoCubes do not have that facility.
The default update type for an ODS object is overwrite; for an InfoCube it is addition. ODS objects are used to implement the delta in BW: data is loaded into the ODS object as new records, or existing records are updated in the change log, or existing records are overwritten in the active data table, based on 0RECORDMODE.
You cannot load data using the IDoc transfer method into an ODS, but you can into an InfoCube.
You cannot create aggregates on an ODS; you cannot create InfoSets on an InfoCube.
ODS objects can be used:
when you want the overwrite facility, i.e. if you want to overwrite non-key characteristics and key figures;
if you want detailed reports;
if you want to merge data from two or more InfoSources.
They also allow you to drill down from an InfoCube to the ODS through the RRI interface.
Moreover, to conclude: reporting-performance-wise, cubes are better than ODS objects.
As per your requirement, you can build your model, with the various advantages and disadvantages mentioned above.
Note: all this information is available in various threads; if you had searched thoroughly you would have found it.
With Regards,
Prafulla Singh.

Similar Messages

  • Performance testing of bookmarks

    I have been given some bookmarks and asked to do a performance check, i.e. to look at the backend design behind each query (whether the design needs to be changed, whether the InfoProvider design is good, etc.). They asked me to prepare a report so that they can make changes accordingly. What all do I need to do for this? Please give me detailed steps to be carried out. It's a very urgent requirement and I will be thankful if you can come up with some answers... Waiting for the BW gurus' answers...

    Hi,
    Do you want to do a performance study of your project, from data modeling to the reporting layer?
    If yes, then I can forward you the document.
    Mail me at [email protected]

  • Non-conforming dimension filters

    Hi,
    Here is how my join looks:
    FACT1 >> product(dim) << FACT2 >> ledger(dim) >> RPT(dim)
    Here RPT(dim) is a non-conforming dimension for FACT1, and product(dim) is common to both FACT1 and FACT2.
    I have set the content level (LTS) only for the conformed dimension (to detail).
    When I drag FACT1 and RPT(dim), my result is fine,
    but when I drag only FACT1 and apply a filter (USING filter) on RPT(dim) columns,
    I get the error: No fact table exists at the requested level of detail: [,,,,,,,,,,,,,,,,,,,,].
    FYI: Re: obiee 11g non-conforming dimension filters
    Can I know how to set the content level for a non-conformed dimension? Will that solve the problem?
    Regards
    Sabeer

    I resolved this issue a while ago, but I hadn't posted back the results. Here is what happened...
    I talked to Oracle support about my design and issues. They created a patch for me that would re-enable the 10g functionality so that my model/queries would work again. From their perspective, 10g's non-conforming code design was flawed and 11g was fixed to correct the flaw. This flaw was what allowed my model and queries to work in 10g.
    NOTE: I have not tested the patch (bug# 11850704)
    The only way to get things working on 11g without the patch is to find a common fact that all dimensions can join. After some consideration and testing, I decided that it would be possible to join all the dimensions to one fact with some backend design changes. I could have proceeded with the patch, but I doubt that the 10g functionality will be maintained in the coming releases. Therefore, I would only be delaying the inevitable redesign.
    I have to admit that the final results have been positive and I have eliminated many of the 10g issues that I had with the design.

  • BW implementation for SAP HCM.

    Dear Experts
    I am on a new project where they need to do HR Data Migration from PeopleSoft (Legacy System) into SAP ECC. The Client wants me to implement SAP BW (New Implementation) doing the following Functional Areas for SAP HCM:
    1. Personnel Administration
    2. Learning Solution Online
    3. Occupational Health & Safety
    4. Leave
    5. Time
    6. Allowances
    7. Position Management
    8. Organizational Structures
    9. Delegations
    10. Competencies
    11. Payroll
    12. CATS
    13. Business Intelligence
    This is the scope of the project.
    They were using just over 600 reports from the Legacy System.
    Please let me know how long it takes to develop an average of 25 reports in each of the above-mentioned areas (area-wise), including the ETL (backend) design in SAP BW. We are also looking at using Business Objects reporting tools going forward.
    What is the best order of implementation of above Functional Areas?
    I am to start off with a SAP BW Solution Blueprint with As-Is, To-Be (Solution and Approach). Please let me know if there are better ideas.
    Your time is very much appreciated.
    Kind regards,
    Chandu

    Hi Chandra,
    You first need to understand the business requirement, because even if there are 600 reports in the legacy system, you need not presume that you have to develop 600: thanks to the multidimensional nature of BI reports, you can club many requirements into one report. You also have to decide to what extent the Business Content reports will be helpful. If a BC report meets the requirement, it is as simple as activating and using it. But if the BC reports won't meet the business requirement, then you need to estimate the development effort. Here you need to keep in mind whether the standard extractors will suffice, or whether you need to enhance them or go for generic extraction. Also, HCM reports are quite tricky and a lot of data issues will arise. Considering all this, you need to plan the days of work needed to complete all the reports.
    I hope you understood what I am trying to explain.
    Thanks,
    RRRR

  • Obiee 11g non-conforming dimension filters

    I have a non-conforming design configured in 11g that is functional. However, I have run into an issue: if I attempt to filter on columns from two non-conforming dimensions, I receive the following error.
    [nQSError: 14023] None of the fact sources for logicalTable1.logicalColumn1 are compatible with the detail filter [].
    I have not been able to resolve the issue, but I believe it is related to the design changes in 11g for configuring non-conforming dimensions (I have documented the design changes in another thread). Oracle support has confirmed my design, but I still have the filter issue. If I filter on only one column as opposed to two, the query executes without issues. Has anyone managed to resolve this issue?

    I resolved this issue a while ago, but I hadn't posted back the results. Here is what happened...
    I talked to Oracle support about my design and issues. They created a patch for me that would re-enable the 10g functionality so that my model/queries would work again. From their perspective, 10g's non-conforming code design was flawed and 11g was fixed to correct the flaw. This flaw was what allowed my model and queries to work in 10g.
    NOTE: I have not tested the patch (bug# 11850704)
    The only way to get things working on 11g without the patch is to find a common fact that all dimensions can join. After some consideration and testing, I decided that it would be possible to join all the dimensions to one fact with some backend design changes. I could have proceeded with the patch, but I doubt that the 10g functionality will be maintained in the coming releases. Therefore, I would only be delaying the inevitable redesign.
    I have to admit that the final results have been positive and I have eliminated many of the 10g issues that I had with the design.

  • Bex Query for Waterfall Report

    Hello,
    I have an InfoCube defined as follows:
    Dimensions:
    Data Pckg
    Time
    Unit
    Plant
    Vendor
    Material
    PVM_ID (Each Plant,Vendor,Material is linked to an ID)
    Navigation Attributes:
    Material Group
    Key Figures
    Min Qty
    I need to have a variables screen with two time (From, To) selection criteria.
    For instance, if I select from the variables screen:
    X From (year week- yyyyww): 200901   To: 200908
    Y From (year week- yyyyww): 200901   To: 200904
    Plant:      0001
    Vendor: V000A
    Material: WF001
    Plant | Vendor | Material | KFigure |         | wk01 | wk02 | wk03 | wk04 | wk05 | wk06 | wk07 | wk08
    001   | V00A   | WF001    | Min Qty | Yweek01 | 130  | 150  | 150  | 150  | 140  | 140  | 140  | 130
    001   | V00A   | WF001    | Min Qty | Yweek02 |      | 150  | 150  | 150  | 150  | 140  | 140  | 140
    001   | V00A   | WF001    | Min Qty | Yweek03 |      |      | 150  | 150  | 150  | 150  | 150  | 150
    001   | V00A   | WF001    | Min Qty | Yweek03 |      |      |      | 200  | 200  | 200  | 200  | 200
    001   | V00A   | WF001    | Min Qty | Yweek03 |      |      |      |      | 100  | 100  | 100  | 100
    001   | V00A   | WF001    | Min Qty | Yweek03 |      |      |      |      |      | 50   | 50   | 50
    001   | V00A   | WF001    | Min Qty | Yweek03 |      |      |      |      |      |      | 50   | 50
    001   | V00A   | WF001    | Min Qty | Yweek03 |      |      |      |      |      |      |      | 50
    Can someone please help me?
    With My Regards,
    Giuseppe

    Hi,
    Case 1:
    You can achieve this if your X From is a subset of Y From, e.g.
    X From: 20090101 to 20101231
    Y From: 20090201 to 20100401
    Or it can be vice-versa.
    Create two variables with select-options.
    In the Characteristic Restriction, use the variable with the widest range, i.e. X From in the above case.
    In the Default Values, use the subset variable of the Characteristic Restriction, i.e. Y From in the above case.
    Case 2:
    If X From is not a subset of Y From, then it cannot be achieved this way; you would need to change your backend design and add one more date.
    Regards,
    Ankit.

  • Error while trying to publish 2013 workflow via designer. Probably a workflow manager backend user account issue?

    Hello, I am getting an error while publishing the 2013 workflow via Designer. Also, in SharePoint Designer, if I try to delete any workflow, the page just refreshes and the workflow does not get deleted.
    I checked services.msc and found that the workflow backend service was stopped (this happened because the password of the user under which this service was running had changed).
    So, the network admin changed the service user to LOCAL SYSTEM and started the service.
    Now the workflow backend service is started. We have also run iisreset.
    However, I am still getting the same error:
    System.IO.InvalidDataException: A response was returned that did not come from the Workflow Manager. Status code = 503:
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
    <HTML><HEAD><TITLE>Service Unavailable</TITLE>
    <META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
    <BODY><h2>Service Unavailable</h2>
    <hr><p>HTTP Error 503. The service is unavailable.</p>
    </BODY></HTML>
    Client ActivityId : ee94689c-4e08-b055-fe9c-268d7a94
    Please find the attached snapshot.
    Is the error a result of the change of service user? Can you tell me what privileges the account running the workflow backend service must have?
    UPDATE 1:
    We have set the workflow backend service account user as farm admin and also tried setting it to site admin. The service has been restarted. Now, for a new web application,
    I can delete workflows. However, for an existing site, I am not able to delete the existing workflows. Also, I am not able to publish workflows (under both new and previous sites), and the error is the same as described earlier.

    Hi Nachiket,
    According to your description, my understanding is that you got an error while publishing the 2013 workflow via SharePoint 2013 Designer.
    As you have changed the password of this service, please open IIS and make sure the identity of the WorkflowMgmtPool is correct; you can re-type the identity. Then make sure this pool is started.
    Open Services and find the Service Bus Message Broker service and the Windows Fabric Host Service. Make sure they are running. If not, first start the Windows Fabric Host Service, then start the Service Bus Message Broker service. If the Service Bus Message Broker service is already started, stop and restart it.
    Here are some similar posts for you to take a look at:
    http://developers.de/blogs/damir_dobric/archive/2012/10/23/deploying-sharepoint-workflows.aspx
    http://developers.de/blogs/damir_dobric/archive/2012/09/18/servicebus-message-broker-service-is-starting-and-starting.aspx
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

  • Still using BW 3.5 query designer for BI 7.0 backend

    Has anyone worked on an upgrade where only the backend has been upgraded to 7.0 but still using query designer 3.5?
    Have there been any query performance issues?
    Would you recommend upgrading the query designer to 7.0 soon?

    We upgraded to 7.0 a long time ago and there are still some reports running as 3.x queries. It's no problem but there are so many benefits to working with the 7.0 query designer that I don't see the point in not upgrading.
    Kind regards,
    Alex

  • Entering custom backend service in Design Time for Processes and Forms

    Hi all,
    I am trying to create a form that has 3 fields:
    1. pernr
    2. ename
    3. effective_date
    I have done all the necessary setup of the BAdI, class, interface and form.
    However, I cannot seem to insert my custom backend service in the Design Time for Processes and Forms workbench (t-code: HRASR_DT).
    Currently I am just using SAP_PA as my backend service, which is unable to load the ename into my form. The code in my custom class will not be triggered if I do not use my custom backend service in the Design Time for Processes and Forms.
    Whenever I try using my custom backend service, I encounter an ASSERTION_FAILED dump.
    Anyone have any idea?

    Hi Siong Chao,
    The filter name you have used in the BAdI for your generic service needs to be defined in the Create/Edit Generic Service definition. (You can see this by selecting the Back-end Services node under the form scenario definition in transaction HRASR_DT.)
    Once you do that, you can import the fields defined in the generic service BAdI, using the same procedure used for standard SAP backend services.
    Hope this clarifies. Please let me know if you have any further questions.
    Best Regards
    G Raj

  • VC Report design - Data from 2 backend systems

    Hi,
    I have a report to develop as below:

    Material | Inventory (Backend 1) | Inventory (Backend 2)
    XYZ      | 10                    | 12
    ABC      | 100                   | 103

    and so on.
    I get the following from the 1st backend system: Material and Inventory (Backend 1).
    I get the following from the 2nd backend system: Material and Inventory (Backend 2).
    Master data from both systems is synchronised, which means that the material numbers will be the same.
    I am using VC to extract data from these systems. I have managed to get the data into 2 table views and now need to join them together into 1 view as shown above.
    The combine operator does not work, as it shows the data as below:

    XYZ 10
    XYZ 12
    ABC 100
    ABC 103

    Please suggest options.
    Regards,
    Deepak
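What the combine operator gives is a union of the two result sets; what the report needs is a join on the material number. A minimal sketch of that join in plain Python (illustrative only - VC itself is not code; the numbers are taken from the example above):

```python
# Inventory per material from each backend, keyed by material number.
backend1 = {"XYZ": 10, "ABC": 100}   # Backend 1
backend2 = {"XYZ": 12, "ABC": 103}   # Backend 2

# Full outer join on the material key: one row per material,
# with one inventory column per backend (None if a material is missing).
joined = [
    (material, backend1.get(material), backend2.get(material))
    for material in sorted(set(backend1) | set(backend2))
]

for material, inv1, inv2 in joined:
    print(material, inv1, inv2)
# ABC 100 103
# XYZ 10 12
```

In VC terms this corresponds to joining the two table views on the material field rather than stacking them with the combine operator.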

    Thanks a ton, Srinivas.
    So, to understand the logic clearly:
    Q1 - If I have 2 cubes - cube1 with flat-file data and cube2 with SAP data - and I build a MultiProvider on top of them, the data will be like below?

    Order No | Level | Date   | Quantity | Price | Dist.Channel | Country
    131      | 1     | 1.1.10 | 4        | ##    | 11           | US
    131      | 1     | ##     | ##       | 100   | 11           | ##
    131      | 2     | 2.1.10 | 5        | 0     | 11           | US
    131      | 3     | 2.1.10 | 1        | 0     | 11           | US
    231      | 1     | 1.1.10 | 4        | 0     | 11           | UK
    231      | 2     | ##     | ##       | 600   | 11           | ##
    231      | 2     | 2.1.10 | 5        | ##    | 11           | UK
    231      | 3     | 2.1.10 | 1        | 0     | 11           | UK

    And to get clean data, would I have to MANUALLY set up the CONSTANT SELECTION logic in all queries in the future??
    Q2 - If I have 1 DSO with Order No., Line Item & Distribution Channel as key, and two transfer rules, one from the flat file and one from SAP, the data in the cube will be like below?

    Order No | Level | Date   | Quantity | Price | Dist.Channel | Country
    131      | 1     | 1.1.10 | 4        | 100   | 11           | US
    131      | 2     | 2.1.10 | 5        | 0     | 11           | US
    131      | 3     | 2.1.10 | 1        | 0     | 11           | US
    231      | 1     | 1.1.10 | 4        | 0     | 11           | UK
    231      | 2     | 2.1.10 | 5        | 600   | 11           | UK
    231      | 3     | 2.1.10 | 1        | 0     | 11           | UK
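The Q2 behaviour can be pictured with a small illustrative Python sketch (not BW code; it assumes each set of transfer/update rules maps only its own fields, so an unassigned value from one source never wipes out a value loaded by the other):

```python
# DSO-style overwrite by key: records from both sources land in the same row.
def load_overwrite(dso, records, key_fields=("order", "item")):
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        row = dso.setdefault(key, {})
        for field, value in rec.items():
            if value is not None:  # None stands in for '#' (unassigned)
                row[field] = value
    return dso

sap_data  = [{"order": "131", "item": 1, "date": "1.1.10", "qty": 4, "price": None}]
flat_file = [{"order": "131", "item": 1, "date": None, "qty": None, "price": 100}]

dso = load_overwrite({}, sap_data)
dso = load_overwrite(dso, flat_file)
print(dso[("131", 1)])
# the two partial records are merged into one complete row
```

A MultiProvider, by contrast, performs no such key-based merge: it unions the rows from its providers, which is why the Q1 result keeps the partial rows side by side.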

  • Design Studio Client 1.4 SP00 does not show the backend connections

    hello everyone,
    I am using Design Studio Client 1.4 SP00. In local mode I am trying to connect to a BW system, but in the selection screen I am unable to see the list of BW connections available in the SAP Logon pad; please see the attached screenshot.
    I have uninstalled and installed it again but still have the same issue.
    Has anyone ever faced this problem? Could you please help?
    Regards,
    Ganesh

    Hi Tommy - thank you very much, it works.
    If the saplogon.ini file is stored in a virtualized SAP GUI application, for example, and not on your PC, then Design Studio cannot find the file or display the SAP NetWeaver BW connections in the design tool. So, to display the BW connections, the following command should be copied into SapDesignStudio.ini before the -vmargs command:
    -aadSaplogoniniPath
    <saplogon.ini file path in your local machine>
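    For example (illustrative only - the path below is a placeholder for wherever your local copy of saplogon.ini lives), the top of SapDesignStudio.ini would then read:

```ini
-aadSaplogoniniPath
C:\Users\<your-user>\saplogon.ini
-vmargs
```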
    Regards,
    Ganesh

  • Portal application connecting backend SAP R/3 system

    Hi All,
    I am developing a portal application through NWDS. It's a JSPDynPage component. The application's functionality is basically to connect to a backend SAP R/3 system, fetch some data from a particular table, and display that data in tableview format on one JSP; also, when I click any row of the tableview, the details of that row should get displayed on the next JSP.
    So far I have developed the code that connects to the backend SAP R/3 (basically the coding of the connection part is done). Now I need to test this code to check whether it is fine or not. NWDS doesn't give any error - no compile error and no runtime error either. It simply shows the output as a blank page, where it is supposed to display one line of text in a textview (I coded this in my JSP). But, as I said, it displays a blank page.
    I tried to debug the application, but debugging also did not work. I performed the debugging twice in the right way; the code was not debugged, it ran the same way as it normally runs when debugging is off and showed the blank page. I have also done all the settings and prerequisites for debugging properly. I am stuck at this point now; I have searched many documents, but no relevant help has been found.
    Can anyone help me with this? I am putting below the code for the JSP, the JSPDynPage component, as well as portalapp.xml. Can anyone guide me where I am making a mistake? What should I change?
    JSPDYNPAGE code
    package com.lti.portal;

    import java.util.ArrayList;
    import com.sapportals.htmlb.*;
    import com.sapportals.htmlb.enum.*;
    import com.sapportals.htmlb.event.*;
    import com.sapportals.htmlb.page.*;
    import com.sapportals.portal.htmlb.page.*;
    import com.sapportals.portal.prt.component.*;
    import com.sap.mw.jco.IFunctionTemplate;
    import com.sap.mw.jco.JCO;
    // NOTE: the unused com.sun.corba.se.internal.core.Response import was removed.

    public class Connection_R3 extends PageProcessorComponent {

        public DynPage getPage() {
            return new Connection_R3DynPage();
        }

        public static class Connection_R3DynPage extends JSPDynPage {
            private Conn_R3 myBean = null;
            public JCO.Client mConnection;
            public JCO.Repository mRepository;
            public ArrayList al = new ArrayList();
            public String output;
            public String Ans;
            public static String BEAN_KEY = "myBean";

            public void doInitialization() {
                IPortalComponentRequest request = (IPortalComponentRequest) this.getRequest();
                IPortalComponentContext context = request.getComponentContext();
                IPortalComponentProfile profile = context.getProfile();
                // create & initialize the bean
                Conn_R3 test_bean = new Conn_R3();
                test_bean.setans("3");
                // put the bean into the application context
                context.putValue(BEAN_KEY, test_bean);
                conn();
                //IPortalComponentResponse res = (IPortalComponentResponse) this.getResponse();
                //for (int i = 0; i < al.size(); i++)
                //    res.write(" " + al.get(i).toString());
            }

            public void doProcessAfterInput() throws PageException {
            }

            public void doProcessBeforeOutput() throws PageException {
                this.setJspName("Connection_R3.jsp");
            }

            public ArrayList conn() {
                IPortalComponentRequest request = (IPortalComponentRequest) this.getRequest();
                IPortalComponentContext context = request.getComponentContext();
                IPortalComponentProfile profile = context.getProfile();
                Conn_R3 sample_bean = new Conn_R3();
                sample_bean.setans("5");
                // NOTE: sample_bean is never put into the context under BEAN_KEY,
                // so the JSP only ever sees the bean created in doInitialization().
                //context.putValue(BEAN_KEY, sample_bean);
                try {
                    // Change the logon information to your own system/user
                    mConnection = JCO.createClient("800", // SAP client
                        "********", // userid
                        "******",   // password
                        null,       // language
                        "*******",  // application server host name
                        "**");      // system number
                    mConnection.connect();
                    mRepository = new JCO.Repository("ABC", mConnection);
                } catch (Exception ex) {
                    ex.printStackTrace();
                    System.exit(1);
                }
                JCO.Function function = null;
                try {
                    function = this.createFunction("ZSAMPLE");
                    if (function == null) {
                        System.out.println("ZSAMPLE not found in SAP.");
                        System.exit(1);
                    }
                    String num1 = "7";
                    String num2 = "9";
                    function.getImportParameterList().setValue(num1, "My_import");
                    function.getImportParameterList().setValue(num2, "My_Import");
                    mConnection.execute(function);
                    // NOTE: Ans is never assigned, so getValue(Ans) is getValue(null)
                    // and will fail; pass the name of the export parameter instead.
                    Object name = function.getExportParameterList().getValue(Ans);
                    output = name.toString();
                    sample_bean.setans(output);
                } catch (Exception ex) {
                    ex.printStackTrace();
                    System.exit(1);
                }
                disconnect_r3();
                return al;
            }

            public void connect_to_r3() {
            }

            public JCO.Function createFunction(String name) throws Exception {
                try {
                    IFunctionTemplate ft = mRepository.getFunctionTemplate(name.toUpperCase());
                    if (ft == null)
                        return null;
                    return ft.getFunction();
                } catch (Exception ex) {
                    throw new Exception("Problem retrieving JCO.Function object.");
                }
            }

            public void disconnect_r3() {
                mConnection.disconnect();
            }
        }
    }
    //**********************<b>Code for BEAN</b>****************************
    package com.lti.portal;

    import java.io.Serializable;

    public class Conn_R3 implements Serializable {
        public String answer;

        public void setans(String a) {
            answer = a;
        }

        public String getans() {
            return answer;
        }
    }
    ///////////////////////<b>Code for JSP</b>*****************************
    <%@ taglib uri="tagLib" prefix="hbj" %>
    <jsp:useBean id="myBean" scope="application" class="com.lti.portal.Conn_R3" />
    <hbj:content id="myContext" >
    <hbj:page title="PageTitle">
    <hbj:form id="myFormId" >
    <hbj:textView
    id="Welcome_message"
    text="<%=myBean.getans()%>"
    design="STANDARD" >
    </hbj:textView>
    </hbj:form>
    </hbj:page>
    </hbj:content>
    /////////////////////////////////<b>Portalapp.xml</b>*****************************************
    <application>
      <application-config>
        <property name="PrivateSharingReference" value="com.sap.portal.htmlb"/>
      </application-config>
      <components>
        <!-- NOTE: this component points at com.lti.portal.Address_comp;
             for this page it should point at com.lti.portal.Connection_R3 -->
        <component name="Address_comp">
          <component-config>
            <property name="ClassName" value="com.lti.portal.Address_comp"/>
          </component-config>
          <component-profile>
            <property name="tagLib" value="/SERVICE/htmlb/taglib/htmlb.tld"/>
          </component-profile>
        </component>
      </components>
      <services/>
    </application>

    Hi,
       Do one thing: please refer to this <a href="http://www.i-barile.it/SDN/JCoTutorial.pdf">JCo Tutorial</a> as well as this <a href="http://www.apentia-forum.de/viewtopic.php?t=1962&sid=9ac1506bdb153c14edaf891300bfde25">link</a>.
    Regards,
    Venkatesh. K
    /* Points are Welcome */

  • Need help in bex query designer

    Hi experts,
    Actually we have an ODS where the KPI values for all weeks are present, along with the module.
    In the Query Designer we need to show the targets for the respective KPIs, module-wise.
    The requirement is:
    module - selection
    week no. - selection

         | target | week1 | week2 | week3
    KPI1 | 100    | 90    | 90    | 90
    KPI2 | 95     | 78    | 85    | 90

    Based on the module selection, the target values should change, and there should not be any restriction on weeks.
    Also, exceptions need to be set up for colour coding.
    We actually implemented cell definitions to meet the above requirement, but the problem there is that we have to fix the targets and there is a restriction on the weeks. The requirement should be dynamic, i.e. the targets should be configurable and the weeks should not be restricted.
    In the backend ODS, data for all weeks is present. We just need an idea of how to fix these targets and also the colour coding for the respective KPIs without using cell definitions.
    Kindly throw some pointers on how to achieve this.
    Thanks in advance,
    Madhu

    Hi Madhuri,
    Your requirement can be met by using a customer exit variable, keeping any standard SAP time characteristic value.
    If you want to define any selection dynamically, make a new selection with a text variable, call the customer exit variable into it and assign the corresponding KPI to it; there you can define the offset value as well.
    For writing the customer exit, you need to contact your ABAPer and explain the requirement.
    Hope this helps!!

  • Workflow created using sharepoint designer 2013 is not getting started and the initiator is Anonymous.

    Hi All,
    I have created a site and list workflow using SharePoint Designer 2013; it's a simple list workflow with one stage, initiated on creating an item.
    When I verified the status of the workflow, it showed Initiator = Anonymous and Internal Status = Not Started. Please see the screenshot below.
    Looking forward for your help.
    Thanks in Advance.
    Sunitha

    Below is the solution.
    Solution: check the following on the Workflow Manager server:
    1. Check whether https://[wfms]:12290 or http://[wfms]:12291 is responding.
    2. Check that the "WorkflowMgmtPool" application pool is started.
    3. Check that the following services are running:
       Workflow Manager Backend
       Service Bus Message Broker
       Service Bus Gateway
       Windows Fabric Host Service (FabricHostSvc)
    4. Get the workflow farm information and workflow database: start Workflow Manager PowerShell and run Get-WFFarm.
    5. Check that the Workflow Manager farm is running (the workflow service backend and front end should be running): in Workflow Manager PowerShell, run Get-WFFarmStatus.
    6. Check that OAuth is accessible via http://[wfms]:12291/$SYSTEM/$Metadata/json/1
    7. Restart the WorkflowServiceBackend service:
       net stop WorkflowServiceBackend
       net start WorkflowServiceBackend
    Reference Link
    http://blogs.msdn.com/b/laleh/archive/2014/09/03/sharepoint-2013-workflow-troublehsooting.aspx
    Thanks all.
    Sunitha

  • Misc Basic PL/SQL Application Design/Programming Questions 101 (101.1)

    ---****** background for all these questions is at bottom of this post:
    Question 1:
I read a little on the IN and OUT parameters and that IN is "by reference" while OUT and IN OUT are "by value". To me "by reference" means "pointer" as in C programming. So it seems to me that I could call a function with an IN parameter and NOT put it on the right side of an assignment statement. In other words, I'm calling my function
function get_something(p_test1 IN varchar2) return varchar2;
    from SP1 which has a variable named V_TEST1.
    So.... can I do this? (method A):
    get_something(V_TEST1);
    or do I have to do this (method B):
    V_TEST1 := get_something(V_TEST1);
Also, although this may muddy the thread (we'll see), it seems to me that IN, since it's by reference, will always be more efficient. I will have many concurrent users using this program: should this affect my thinking on the above question?
    -- ******* background *******
So Far: I've read and am reading all over the net, have read and am reading Oracle books from Oracle (I have a full Safari account), am reading Feuerstein's tome, and have read the FAQs here.
Situation Bottom Line: I have an enormous amount to do in very little time, and there is a lot riding on this. Any and all pointers will be appreciated. After we get to some undetermined point I can redo this venture as a PL/SQL FAQ and submit it for posting (y'all's call). Some questions may be hare-brained just because I'm freaking out a little bit.
Situation (Long Version): I am writing a PL/SQL back end to an MS Reporting Services front end. I just started doing PL/SQL about 2 months ago. It took me forever to find out about the ref cursor as the pipe between Oracle and all client applications. I have now created a package. I've been programming for 20 years in many languages, but I am brand new to PL/SQL. However, PL/SQL has freed me from a myriad of limitations in MS RS. My program is starting to get big (for me; I do a lot in a little): the package spec (.pks) is currently 900 lines with 15 functions so far, and the stored procedure (.pls) is back up to 800 lines. I get stuff working in the SP, then turn it into a function and move it to the package.
What does the application do? It is the back end for an MS Reporting Services web front end. It will be a very controlled "ad hoc" (or the illusion of ad hoc) web interface. All SQL queries are built at run time and executed via OPEN ref_cursor FOR <sql statement>, with data returned via an OUT ref cursor. The goal is to have almost 100% of the functionality in a package; the calling SP is minimalist. Reporting Services calls the SP, passes X number of parameters, and gets the ref cursor back.
Oracle Version: 10.2 (moving to 11g in the next 3 months). Environment: a huge DW in a massively shared environment. Everything is locked down and requires a formal request; I had to have my account authorized for a couple of DBMS system packages just to start writing simple PL/SQL programs.
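For reference, the "build the SQL at run time and return it through an OUT ref cursor" pattern described above looks roughly like the following minimal sketch. The package, procedure, column, and parameter names here are invented for illustration; they are not the poster's actual code.

```sql
create or replace package report_pkg as
  procedure get_report(
      p_region  in  varchar2,
      p_results out sys_refcursor);
end report_pkg;
/
create or replace package body report_pkg as
  procedure get_report(
      p_region  in  varchar2,
      p_results out sys_refcursor)
  as
    v_sql varchar2(4000);
  begin
    -- The SQL text is assembled at run time; the client (here, Reporting
    -- Services) receives the result set through the OUT ref cursor.
    v_sql := 'select order_id, amount from orders where region = :r';
    -- Use a bind variable rather than concatenating the parameter value:
    -- this avoids SQL injection and lets Oracle reuse the cursor.
    open p_results for v_sql using p_region;
  end get_report;
end report_pkg;
/
```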

Brad Bueche wrote:
Question 1:
I read a little on the IN and OUT parameters and that IN is "by reference" and OUT and IN OUT are "by value". To me "by reference" means "pointer" as in C programming. So it seems to me that I could call a function with an IN parameter and NOT put it on the right side of an assignment statement.

By default, an IN parameter is indeed passed by reference: the called unit gets a reference to the caller's value, so nothing is copied. But it is also read-only, which is why you still need the assignment: the function cannot write its result back through an IN parameter, so the RETURN value is the only way to get the answer out.
An OUT (or IN OUT) parameter is passed by value: the value is copied to the called unit's stack, and on successful completion it is copied back to the caller, overwriting the caller's variable.
To pass OUT and IN OUT parameters by reference, the NOCOPY hint needs to be used. Note that it is a hint, not an explicit compile instruction: the PL/SQL engine can very well decide to pass by value and not by reference anyway (depending on the data type used).
Note that the ref cursor and LOB data types are already pointers, so these are effectively passed as references in any case.
The NOCOPY hint only makes sense for large VARCHAR2 variables (these can be up to 32KB in PL/SQL) and for collection/array data type variables.
As for optimising PL/SQL code, there are a number of approaches (and not just passing by reference). Deterministic functions can be defined. PL/SQL code can be written (as pipelined table functions) to run in parallel using the default Oracle Parallel Query feature. PL/SQL can be manually parallelised. Context switches to the SQL engine can be minimised using bulk processing (BULK COLLECT and FORALL). Etc.
Much of the performance will however come down to 2 basic issues: how well the data structures being processed are designed, and how well the code itself is modularised and written.
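To make the parameter semantics concrete, here is a minimal sketch using the names from the question (get_something, v_test1); it is illustrative, not the poster's actual code:

```sql
-- An IN parameter is read-only inside the function: assigning to p_test1
-- would not compile, so the result can only come out via RETURN.
create or replace function get_something(p_test1 in varchar2)
  return varchar2
as
begin
  -- p_test1 := 'x';  -- illegal: IN parameters cannot be modified
  return upper(p_test1);
end get_something;
/

declare
  v_test1 varchar2(100) := 'abc';
begin
  -- "Method A" (calling get_something(v_test1); as a standalone statement)
  -- does not compile either: a function call must appear in an expression.
  -- So "method B" is required:
  v_test1 := get_something(v_test1);
  dbms_output.put_line(v_test1);
end;
/
```

Concurrency does not change this: each session gets its own copy of the package state and parameter values, so the by-reference/by-value question is purely about per-call copying cost, not about users interfering with each other.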
