Sorting a query based on CONNECT BY LEVEL

Hi.  I have a query like this:
SELECT DISTINCT
               LOAD_PROF2,
               V_TIME,
               SUBSTATION_CODE,
               CIRCUIT_CODE,
               PROFILE_DAY,
               DECODE (UPPER (PROFILE_DAY),
                       'MONDAY', 1,
                       'TUESDAY', 2,
                       'WEDNESDAY', 3,
                       'THURSDAY', 4,
                       'FRIDAY', 5,
                       'SATURDAY', 6,
                       'SUNDAY', 7,
                       'HOLIDAY', 8,
                       'H_THURSDAY', 9,
                       'H_FRIDAY', 10)
                  ORDERBY
          FROM LOAD_PROFILE_TEST
         WHERE     SUBSTATION_CODE = V_SUBSTATION_CODE
               AND CIRCUIT_CODE = V_CIRCUIT_CODE
               AND UPPER (PROFILE_DAY) IN (    SELECT UPPER (
                                                         TO_CHAR (
                                                            V_DATE + (LEVEL - 1),
                                                            'fmDAY'))
                                                 FROM DUAL
                                           CONNECT BY LEVEL <=
                                                         V_DATE_IN - V_DATE + 1)
      ORDER BY ORDERBY, V_TIME;
If I use V_DATE as 10/10/2013 and V_DATE_IN as 10/13/2013, I get an output sorted by ORDERBY (which I created): Thursday, Friday, Saturday and Sunday.  However, when I use 10/19/2013 as my V_DATE_IN, I get an output sorted Monday, Tuesday...Sunday.  How do I sort it so that the first day to appear is the day of my V_DATE? In this case, if I use 10/10/2013 as my V_DATE and 10/19/2013 as my V_DATE_IN, I should get output sorted Thursday, Friday...Wednesday.
What should I replace my ORDERBY with to get this type of sorting?  Thanks

Can you check whether this query is what you wanted and matches your expectations?
UNTESTED from my side....
  SELECT LOAD_PROF2,
         V_TIME,
         SUBSTATION_CODE,
         CIRCUIT_CODE,
         PROFILE_DAY
    FROM LOAD_PROFILE_TEST a,
         (    SELECT V_DATE + (LEVEL - 1) dt, UPPER (TO_CHAR (V_DATE + (LEVEL - 1), 'fmDAY')) days
                FROM DUAL
          CONNECT BY LEVEL <= V_DATE_IN - V_DATE + 1) b
   WHERE     SUBSTATION_CODE = V_SUBSTATION_CODE
         AND CIRCUIT_CODE = V_CIRCUIT_CODE
         AND UPPER (a.PROFILE_DAY) = b.days
GROUP BY LOAD_PROF2,
         V_TIME,
         SUBSTATION_CODE,
         CIRCUIT_CODE,
         PROFILE_DAY
ORDER BY MIN (b.dt), v_time;   -- MIN() because b.dt is not in the GROUP BY list
Cheers,
Manik.
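
Untested as well, and only a sketch in the same spirit: an alternative that keeps the original single-table query and only replaces the ORDERBY expression. It rotates the week so that the weekday of V_DATE sorts first, using TRUNC (V_DATE, 'IW') to get the day number independently of NLS settings. It assumes PROFILE_DAY holds only the seven weekday names; the HOLIDAY / H_THURSDAY / H_FRIDAY rows would need an extra CASE branch.
  SELECT DISTINCT
         LOAD_PROF2,
         V_TIME,
         SUBSTATION_CODE,
         CIRCUIT_CODE,
         PROFILE_DAY,
         MOD (DECODE (UPPER (PROFILE_DAY),
                      'MONDAY', 1,
                      'TUESDAY', 2,
                      'WEDNESDAY', 3,
                      'THURSDAY', 4,
                      'FRIDAY', 5,
                      'SATURDAY', 6,
                      'SUNDAY', 7)
              - (TRUNC (V_DATE) - TRUNC (V_DATE, 'IW') + 1)   -- day number of V_DATE: 1 = Monday ... 7 = Sunday
              + 7,
              7)
            ORDERBY
    FROM LOAD_PROFILE_TEST
   WHERE     SUBSTATION_CODE = V_SUBSTATION_CODE
         AND CIRCUIT_CODE = V_CIRCUIT_CODE
         AND UPPER (PROFILE_DAY) IN (    SELECT UPPER (TO_CHAR (V_DATE + (LEVEL - 1), 'fmDAY'))
                                           FROM DUAL
                                     CONNECT BY LEVEL <= V_DATE_IN - V_DATE + 1)
ORDER BY ORDERBY, V_TIME;
With V_DATE = 10/10/2013 (a Thursday), the Thursday rows get offset 0, Friday 1, and so on through Wednesday at 6, regardless of how long the date range is.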

Similar Messages

  • Input Ready query based on aggregation level

     I have an input-ready query that reads data from an aggregation level, based on a MultiProvider that contains a cube for planning and a cube with real data.
     There are columns of data: two of them are read-only and another is for input.
     I'm selecting different fiscal year/periods for each column.
     The problem is that, when I select fiscal year/periods and the real data cube doesn't contain any data for this filter, it doesn't show me a line to input data.
     How do I configure the query so that, even though I don't have data for the selected period, I can still perform the planning?
    Thanks,
    Cris.

    Hi Christina,
     Even though there is no data in the real-time cube, the query should still allow you to input values. I guess the query you have designed is not input-enabled yet.
     Kindly check the following points before proceeding:
     1. In the query properties, under the Planning tab, make sure "Start query in change mode" is turned on.
     2. Make sure all of the characteristics in the aggregation level are used in the query and are restricted to single values.
     3. Make sure the columns are input-ready, under the Planning tab in their properties.
     Hope this helps.
    Regards.
    Shafi.

  • Query based on ODS showing connection time out

    Hi,
     I have a query based on an ODS. The selection criterion for the query is Country, which is a navigational attribute of the ODS. When I execute the query using a country with a smaller volume of data, the query runs fine. But when I choose a country that has a larger number of records, the query sometimes shows a 500 connection timeout error and sometimes runs fine for the same country.
     I tried to execute the report using country as an input variable: first I executed the report using a country with a smaller volume of data and it worked fine as usual; then I added another country with a larger volume of data to the filter criteria and the old problem cropped up.
    Could you please suggest a solution?
    Appreciate your response.
    Thanks
    Ashok

    Hi,
     It's not recommended to have a query on a DSO that returns a huge amount of data.
     For small amounts it should not be a problem.
     Shift your queries to a multicube, as recommended by SAP, and build the multicube over the cube which loads from the DSO.
     SAP recommends creating queries only on MultiProviders.
     If that is not possible, you can try to create indexes on the DSO on the characteristic fields that are used in filters and global selections in the query. This can improve query performance.
     But this will create a performance issue while loading data into the DSO.
     So you need to make a trade-off.
    Thanks
    Ajeet

  • BEx query based on virtual cube doesn't display a valid List of Values (LOV)

    Hello
     I have a problem with an invalid LOV. The scenario is the following: there's a BEx query based on a virtual cube. The query has an exit variable on a characteristic that is based on 0CALMONTH.
     In Universe Designer I simply create a connection and a universe based on this query, and export it.
     In Web Intelligence (and also in Live Office), when I try to execute the query, the prompt for my exit variable displays a list of values that doesn't match the values of the characteristic in the cube.
     Actually, the list at the prompt starts with 01.0000 and finishes with 05.0968.
     In Universe Designer, the option to edit the list of values is not available. But I think that editing the LOV is not the correct way anyway.
     I've tried creating a new query based on the DSO that is the source of the virtual cube. In this case, I had a valid list. Unfortunately, I can't use this DSO.
    Did anyone already have this problem?

    Hi James,
     can you explain what you mean with "input length for that field"?
     The field in the table is varchar2(120). I couldn't find options for the list of values.
    Thanks for your response
    Carsten

  • Performance of my query based on cube or ODS?

    hi all,
     How do I identify the performance of my query based on a cube or an ODS? I have a requirement for a flat file extraction; the extraction runs only once and the number of records is small. I need to work out whether my query will be faster based on a cube or on an ODS.
     Can anyone let me know how to measure the performance of my query on a cube versus an ODS, and how to find out which one will be faster? I need to explain the entire process: either loading the data directly into the ODS and reporting from there, or loading it directly into the cube and reporting from the cube.
     Thanks
     Haritha

    Hi,
     An ODS is flat (two-dimensional), so avoid reporting on an ODS.
     A cube is multidimensional; for analysis purposes, report on a cube.
     Records in an ODS are overwritten, whereas in a cube records are aggregated.
     You can also compress a cube, which improves query performance, so data retrieval from a cube is faster.
     Thanks

  • Incorrect data for proportional factor in query based on Planning Book

    hi,
    We have upgraded from APO 3.1 to SCM 5.0.
    Post upgrade, the proportional factor is being displayed incorrectly in the BEX query based on the Planning Book data if we run the query for multiple months.
     For example,
     if, in the planning book, the proportional factors for months 10.2009 and 11.2009 are as follows:
    Brand  >> Month >> Proportional Factor
    B1 >> 10.2009 >> 70%
    B2 >> 10.2009 >> 30%
    B1 >> 11.2009 >> 80%
    B2 >> 11.2009 >> 20%
    When we execute the query for the above brands for months 10.2009 and 11.2009,
    then, at the total level, the % displayed is 100% and the data at brand level is halved.
     We do not have any exits or formulae operating at the key figure level in the query and hence are unable to figure out why this is happening...
    Any clue on this ?
    regards,
    Anirudha

    Resolved.

  • SOURCING10: Passing parameters to a Query Based webservice using JAVA

    Hi Experts,
     I have been working on consuming a query-based webservice published in Sourcing10 from a simple Java class. The query has a filter parameter which is not mandatory. I am able to consume the webservice using the GET method and display its content. But when I try to POST a value to the filter parameter of the query, I get the following error:
    java.io.IOException : Server returned HTTP response code: 415 for URL: http://sapcild9.web.bc:55000/sourcing/ngservices/rest/query/Z_TEST_WS_QUERY/execute/
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
     Following is the code which I have used:
       URL url = new URL("http://sapcild9.web.bc:55000/sourcing/ngservices/rest/query/Z_TEST_WS_QUERY/execute/");
       HttpURLConnection connection = (HttpURLConnection)url.openConnection();
       connection.setRequestProperty("Authorization", "Basic " + authStringEnc);
       connection.setRequestProperty("Content-Type","application/x-www-form-urlencoded; charset=UTF-8");
       connection.setRequestProperty("Content-Length", "" +Integer.toString(urlParameters.getBytes().length));
       connection.setRequestProperty("Content-Language", "en-US");
       connection.setUseCaches (false);
       connection.setDoInput(true);
       connection.setDoOutput(true);
       connection.setAllowUserInteraction(true);
       connection.setInstanceFollowRedirects(false);
       connection.setRequestMethod("POST");
       connection.connect();
       //Send request
       OutputStream out = connection.getOutputStream();
       OutputStreamWriter wr= new OutputStreamWriter(out, "UTF-8");
       wr.write("EXTERNAL_ID");
       wr.write("=");
       wr.write(URLEncoder.encode("temp","UTF-8"));
       wr.close();
       out.close();
       is = connection.getInputStream();
       isr =new InputStreamReader(is);
       BufferedReader bufferReader = new BufferedReader(isr);
       String str; StringBuffer stringBuffer = new StringBuffer();
       while ((str = bufferReader.readLine()) != null) {
           stringBuffer.append(str);
           stringBuffer.append("\n");
       }
       System.out.println(stringBuffer.toString());
       connection.disconnect();
       is.close();
     Please advise how to proceed on this issue.
    Thanks in advance.
    Srikanth Emani

    Hi Gael,
    your URL is made up of :
    [ProcedureName]?[parameter1]=[value1]&[parameter2]=[value2]
     Creating URLs like this can cause problems, especially with spaces and punctuation.
     The answer is a FORM.
     The following will create a hidden form:
    FORM ACTION="[ProcedureName]" METHOD="POST" name="F1"
    INPUT type="HIDDEN" name="[parameter1]" value="[value1]"
    INPUT type="HIDDEN" name="[parameter2]" value="[value2]"
    /FORM
    you can set the values in the form using:
    document.F1.[parameter1].value="abc123%%&&$$!";
    document.F1.submit();
    will submit the form and the PL/SQL procedure should receive the text as it was contained in the form.
    the only characters that can now cause problems are :
    " as it delimits the field.
    ' as it may cause problems in PL/SQL.
    \ as it is a special character.
    Regards Michael

  • Query Execution Filter Val. Selection takes no effect on query based on AG

    Hi,
     By setting the 'Query Execution Filter Val. Selection' property in a query, we can control the value list shown when we execute the query. With the setting 'Only posted values for navigation', only the data posted to the cube is listed when you make a selection on the query execution selection screen, and with the setting 'Values in master data table', all the values are listed.
     1. But for queries based on a cube or multi-cube, when I select 'Values in master data table' in the query definition, still only the posted values are displayed when I make a selection on the query execution selection screen. Does anyone know why?
     2. For queries based on an aggregation level, whatever setting I choose, all the values are displayed when I make a selection on the query execution selection screen. How can I select only the posted values for this kind of query?
    Many Thanks
    Jonathan

    I apologize I meant the other link:
    I will put the useful text from that link here.
     Regarding the query built on an aggregation level, please note the following:
     the aggregation level is always a VirtualProvider built over another InfoProvider, and hence it does not have a dimension table, so the F4 mode D is not supported.
     Therefore, when an aggregation level is used in a query, F4 does not support D-mode ('Only Values in InfoProvider') and all master data values are displayed in the value list.
     Reference from note 984229, "F4 modes for input help as of SAP NetWeaver 2004s BI":
     4. Since other InfoProviders do not have a dimension table, the system displays only posted values if you select "Only Posted Values for Navigation". Otherwise, it displays the values from the master data table.
     Hope this should clarify your doubts. Please let me know if you have questions, else please confirm the message at your earliest convenience.
    Edited by: Abhijit N on Apr 2, 2009 6:06 PM

  • Query based on date partition

    Hi,
     I am trying to output only the successful job during the past 24 hours of each day. If a job has both a success and a failure outcome within the last 24 hours for a given day, I want to output only the successful one. If there is no success for the same job, I will output the last attempted failed job.
    Here are my columns:
    current output:
    JOB_ID     JOBDATE               GROUP     PATH          OUTCOME          FAILED     LEVEL     ASSET
    3400908     7/27/2012 10:01:18 AM     polA     target1          Success          0     incr     clone1
    3400907     7/27/2012 10:01:09 AM     polA     target1          Failed          0     incr     clone1
    3389180     7/23/2012 10:01:14 AM     polA     target1          Failed          1     incr     clone1
    3374713     7/23/2012 10:01:03 AM     polA     target1          Success          0     incr     clone1
    3374712     7/22/2012 11:24:32 AM     polA     target1          Success          0     Full     clone1
    3367074     7/22/2012 11:24:00 AM     polA     target1          Failed          1     Full     clone1
    3167074     7/21/2012 10:01:13 AM     polA     target1          Success          0     incr     clone1
    336074     7/21/2012 10:01:08 AM     polA     target1          Success          0     incr     clone1
    desired output:
    JOB_ID     JOBDATE               GROUP     PATH          OUTCOME          FAILED     LEVEL     ASSET
    3400908     7/27/2012 10:01:18 AM     polA     target1          Success          0     incr     clone1
    3374713     7/23/2012 10:01:03 AM     polA     target1          Success          0     incr     clone1
    3374712     7/22/2012 11:24:32 AM     polA     target1          Success          0     Full     clone1
    3167074     7/21/2012 10:01:13 AM     polA     target1          Success          0     incr     clone1
    Here is a code I am trying to use without success:
    select *
    from
        (select job_id, jobdate, group, path, outcome, Failed, level, asset,
              ROW_NUMBER() OVER(PARTITION BY group, path, asset ORDER BY jobdate desc) as rn
                   from job_table where jobdate between trunc(jobdate) and trunc(jobdate) -1 )
       where rn = 1
        order by jobdate desc;
     Thanks,
    -Abe

    Hi, Abe,
    You're on the right track, using ROW_NUMBER to assign numbers, and picking only #1 in the main query. The main thing you're missing is the PARTITION BY clause.
    You want to assign a #1 for each distinct combination of group_id, path, asset and calendar day , right?
    Then you need to PARTITION BY group_id, path, asset and calendar day . I think you realized that when you named this thread "Query Based *on date partition* ".
    The next thing is the analytic ORDER BY clause. To see which row in each partition gets assigned #1, you need to order the rows by outcome ('Success' first, then 'Failed'), and after that, by jobdate (latest jobdate first, which is DESCending order).
    If so, this is what you want:
     WITH got_r_num AS
     (
          SELECT  j.*     -- or list columns wanted
          ,       ROW_NUMBER () OVER ( PARTITION BY  group_id     -- GROUP is not a good column name
                                       ,             path
                                       ,             asset
                                       ,             TRUNC (jobdate)
                                       ORDER BY      CASE  outcome
                                                         WHEN  'Success'
                                                         THEN  1
                                                         ELSE  2
                                                     END
                                       ,             jobdate  DESC
                                     )  AS r_num
          FROM    job_table  j
          WHERE   outcome  IN ('Success', 'Failed')
     --   AND     ...     -- Any other filtering, if needed
     )
     SELECT  *       -- or list all columns except r_num
     FROM    got_r_num
     WHERE   r_num = 1
     ;
     If you'd care to post CREATE TABLE and INSERT statements for the sample data, then I could test it.
    It looks like you posted multiple copies of this thread.  I'll bet that's not your fault; this site can cause that.  Even though it's not your fault, please mark all the duplicate versions of this thread as "Answered" right away, and continue in this thread if necessary.
    Edited by: Frank Kulash on Jul 28, 2012 11:47 PM
    This site is flakier than I thought! I did see at least 3 copies of this same thread earlier, but I don't see them now.
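     For reference, here is an untested sketch of the CREATE TABLE and INSERT statements requested above, built from the sample rows posted in the question. The columns GROUP and LEVEL are renamed to the assumed names group_id and job_level, since GROUP and LEVEL are reserved words in Oracle, and the data types are guesses.
     -- table matching the posted column list (names/types assumed)
     CREATE TABLE job_table
     ( job_id     NUMBER
     , jobdate    DATE
     , group_id   VARCHAR2(30)
     , path       VARCHAR2(30)
     , outcome    VARCHAR2(10)
     , failed     NUMBER(1)
     , job_level  VARCHAR2(10)
     , asset      VARCHAR2(30)
     );
     -- sample rows taken from the "current output" listing above
     INSERT INTO job_table VALUES (3400908, TO_DATE ('7/27/2012 10:01:18 AM', 'MM/DD/YYYY HH:MI:SS AM'), 'polA', 'target1', 'Success', 0, 'incr', 'clone1');
     INSERT INTO job_table VALUES (3400907, TO_DATE ('7/27/2012 10:01:09 AM', 'MM/DD/YYYY HH:MI:SS AM'), 'polA', 'target1', 'Failed',  0, 'incr', 'clone1');
     INSERT INTO job_table VALUES (3389180, TO_DATE ('7/23/2012 10:01:14 AM', 'MM/DD/YYYY HH:MI:SS AM'), 'polA', 'target1', 'Failed',  1, 'incr', 'clone1');
     INSERT INTO job_table VALUES (3374713, TO_DATE ('7/23/2012 10:01:03 AM', 'MM/DD/YYYY HH:MI:SS AM'), 'polA', 'target1', 'Success', 0, 'incr', 'clone1');
     INSERT INTO job_table VALUES (3374712, TO_DATE ('7/22/2012 11:24:32 AM', 'MM/DD/YYYY HH:MI:SS AM'), 'polA', 'target1', 'Success', 0, 'Full', 'clone1');
     INSERT INTO job_table VALUES (3367074, TO_DATE ('7/22/2012 11:24:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), 'polA', 'target1', 'Failed',  1, 'Full', 'clone1');
     INSERT INTO job_table VALUES (3167074, TO_DATE ('7/21/2012 10:01:13 AM', 'MM/DD/YYYY HH:MI:SS AM'), 'polA', 'target1', 'Success', 0, 'incr', 'clone1');
     INSERT INTO job_table VALUES (336074,  TO_DATE ('7/21/2012 10:01:08 AM', 'MM/DD/YYYY HH:MI:SS AM'), 'polA', 'target1', 'Success', 0, 'incr', 'clone1');
     COMMIT;
     With this data, the ROW_NUMBER query above (using group_id, path, asset and TRUNC(jobdate) as the partition) should return the four rows shown in the desired output.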

  • Select query based on userinput

    Hi Folks...
     I posted this in another forum, but got no reply, so I am posting it here too. I am trying to make a select query based on user input. Earlier I was having problems making a select query and printing out the result in the stack trace; with the advice given, I managed to solve that. I am now modifying that code to make a query based on user input; however, it's not working. Following is the code:
     String userId = request.getRemoteUser();
     String sql = "SELECT hoursused FROM sysuser WHERE iduser = ?";
     try {
         Connection connection = dataSource.getConnection();
         PreparedStatement preparedStatement = connection.prepareStatement(sql);
         preparedStatement.setString(1, userId);
         ResultSet srs = preparedStatement.executeQuery(sql);
         while (srs.next()) {
             String hoursused = srs.getString("hoursused");
             System.out.println("The hours used are " + hoursused);
         }
     }
     catch (SQLException e) {
         e.printStackTrace();
     }
     }
     The stack trace seems to suggest I may have a problem with this statement:
    "SELECT hoursused FROM sysuser WHERE iduser = ?"; 
    I am not sure how to rectify this, I hope someone can advise, thanks.

    Hi jschell..
    Thanks for responding, I appreciate it. The problem has been solved. No 'sysuser' is not a reserved word in Mysql. Based on the advice given in another forum and also from this site, I managed to solve the problem. I only made one change. This:
     ResultSet srs = preparedStatement.executeQuery(sql);
     was changed to this:
     ResultSet srs = preparedStatement.executeQuery();
     Thanks.

  • Query based on permission

    Does anyone know if there's a way to query for group (or any other object) based on the current user's permission?  For instance, if I wanted to retrieve all of the groups to which the current user has read access?
    If not, is there a way to retrieve the current user's access level for each object in the query?  So for instance, in addition to returning the group name and group id, also return the user's level of access.
    Thanks a lot!

    Read the documentation:
    http://livedocs.macromedia.com/coldfusion/7/htmldocs/00001252.htm
    page1.html
    <form id="form1" name="form1" method="post"
    action="page2.cfm">
    <label>enter id <input type="text" name="textfield"
    />
    </label>
    </form>
    page2.cfm
    <cfquery name="foo" datasource="bar">
    SELECT aField, bField, cField
    FROM aTable
    WHERE aField = <cfqueryParam value="#form.textfield#"
    cfsqltype="cf_sql_varchar">
    </cfquery>
    <cfoutput query="foo">
    #aField# #bField# #cField#
    </cfoutput>
    briankind wrote:
    > query based on a test input box
    >
    > hi
    > i have this html input box and i want to output a query
    based on what i put in
    > the input box. what should i do now.
    >
    > thanks
    >
    >
    > <form id="form1" name="form1" method="post"
    action="">
    > <label>enter id <input type="text"
    name="textfield" />
    > </label>
    > </form>
    >

  • Tip on Using "Folder" for performing Query Based Classification

     In setting up a TREX taxonomy for one of our intranet sites, we concluded that the folder structure used by the web developers really helped classify the documents at higher than an 80% level.  It seems that in many cases the intranet, Outlook public folders, or LAN folders are organized such that the title of the folder is very valuable in classifying what is in it.
    After many searches and false starts, I found something that is so simple and elegant, I had to share it here.
    In the Taxonomy Query Builder there is a property "Folder".  Unlike the property "Content" which has a "contains" operator, this one is a "Is" or "Is Not" so it seemed like it might be hard to use.
    What I found is that it automatically put an * at the end of whatever we entered and if we put an * in the beginning, it seemed to find the folder consistently.
    So for a three level taxonomy such as:
    -- Business
      Customer Support
       Product information
    where you have folders /Business/CustSupport/ProdInfo you use a three level taxonomy.
    For the business I put in a Query:
      -- Folder is_not PassTheIsNotTestSoEverythingIsClassified
    That meant everything was included at the root level no matter what.  (i.e. I didn't want 'documents to be classified' to contain anything.)
    The query for the Customer Support level of the taxonomy was:
       -- Folder is *CustSupport
      which pulls in any document that is in the folder that contains CustSupport
    The query for the next level was:
       -- Folder is *ProdInfo
      which pulls in any document that is in the folder that contains ProdInfo
     As the classification engine works, items that pass all three tests are put in the most detailed folder.  You can have as many taxonomy nodes at any level as you need, and if documents don't pass any, they are kept at the root of the last level they did pass.
     You end up with an amazingly simple classification scheme that can handle a big taxonomy.  To the extent your folders are not organized like your taxonomy or are not related logically, you then need to add other queries using the document contents or other fields available for indexing.   This is where the 80-20 rule kicks in: you still need to work hard to get that last 20%.
    Let me know if this approach helps anyone.

     Hi,
     Are you trying to define properties at the document level? That does not work, because the metadata of KM and the metadata of the document are different.
     You can define pre-defined properties at the folder level in KM.
     Why don't you use example-based classification? You train your taxonomies with example documents, and it in turn indexes further documents automatically based on your examples.
     But it still depends upon individual requirements.
     Regards,
     Vijay.

  • To execute the query after getting connected to db from a unix shell script

    How the variable "output" can be used in the sql query after getting connected to database.
    Code:
    #!/bin/bash
    sort shipments | uniq > test1.txt
    sed "s/.*/'TESCO.&',/;$ s/,//" test1.txt | paste -s -d '' > output
    sqlplus glo/glo@tcot
    select * from xx where column_name in ($output)
     In this case, I get connected to the database, but the cursor waits at the SQL prompt without executing the above select query.
    Please help.

    Try something like:
    #!/bin/bash
    sort shipments | uniq > test1.txt
    sed "s/.*/'TESCO.&',/;$ s/,//" test1.txt | paste -s -d '' > output
    sqlplus glo/glo@tcot << EOF
    WHENEVER SQLERROR EXIT 1;
    WHENEVER OSERROR EXIT 1;
    set serveroutput on
    declare
    v_data varchar2(100) := NULL;
    BEGIN
    select col_name into v_Data from xx where column_name in ('$output')
    and rownum < 2;
    dbms_output.put_line(v_Data);
    end;
    exit
    EOF

  • A query based report

     Hi experts,
     I want to generate a query-based report for the following:
    PoNo     Item No     Qty     Vendor Chal No     Challan Date     Assesable Value     BED Value     SED Value     AED Val     ECS value     NCCD VAL     NCCD Rate     ECS Rate %     BED %     AED %     SED %     Sales Tax     Inv amt     VAT amt     Currency     Form-Series No     Form 31 No     Inv.No.     Inv.Date     Veh No     Packages     PackUnit
    these are the columns & fields in diff tables how can i join these tables & get data  pls help me for that how can i achieve this give me query for this.
    warm regards
    ketan
    SAP-B1.
    Edited by: ketan pande on Aug 17, 2009 9:41 AM

     Hi,
     If you need the Purchase Order and A/P Invoice header & row (item) level (purchase register) query report, try this:
    SELECT  T0.DocNum as 'PO. No.',
    T0.DocDate as 'PO. Date',
    M.DocNum as 'A/P Invoice No.',
    M.DocDate as 'Inv. Date',
    M.CardName as 'Vendor Name',
    M.NumAtCard as 'Bill No. & Date',
    ISNULL(L.ItemCode,'Service Item') as 'Item Code',
    L.Dscription,
    L.Quantity,
    (Select Sum(LineTotal) FROM PCH1 L Where L.DocEntry=M.DocEntry) as 'Base Amt.(Rs.)',
    (SELECT Sum(TaxSum) FROM PCH4 where statype=-90 and DocEntry=M.DocEntry) as 'ED (Rs.)',
    (SELECT Sum(TaxSum) FROM PCH4 where statype=-60 and DocEntry=M.DocEntry) as 'EDCS (Rs.)',
    (SELECT Sum(TaxSum) FROM PCH4 where statype=7 and DocEntry=M.DocEntry) as 'HECS (Rs.)',
    (SELECT Sum(TaxSum) FROM PCH4 where statype=1 and DocEntry=M.DocEntry) as ' VAT (Rs.) ',
    (SELECT Sum(TaxSum) FROM PCH4 where statype=4 and DocEntry=M.DocEntry) as ' CST (Rs.) ',
    (SELECT Sum(TaxSum) FROM PCH4 where statype=10 and DocEntry=M.DocEntry) as ' CVD (Rs.) ',
    (SELECT Sum(TaxSum) FROM PCH4 where statype=5 and DocEntry=M.DocEntry) as ' Ser.Tax (Rs.) ',
    (SELECT Sum(TaxSum) FROM PCH4 where statype=6 and DocEntry=M.DocEntry) as 'CS on Ser.Tax (Rs.)',
    (SELECT Sum(TaxSum) FROM PCH4 where statype=8 and DocEntry=M.DocEntry) as 'HECS_ST (Rs.)',
    (Select Sum(LineTotal) From PCH3 Q Where Q.DocEntry=M.DocEntry) AS 'Freight (Rs.)',
    M.WTSum AS 'TDS (Rs.)',
    M.DocTotal as 'Total (Rs.)'
    FROM OPOR T0 INNER JOIN POR1 T1 ON T0.DocEntry = T1.DocEntry
    INNER JOIN OPDN T2 ON T2.DocEntry = T1.TrgetEntry
    INNER JOIN PDN1 T3 on T3.DocEntry = T2.Docentry
    INNER JOIN OPCH M ON M.DocEntry = T3.TrgetEntry
    LEFT OUTER JOIN PCH1 L on L.DocEntry=M.DocEntry
    LEFT OUTER JOIN PCH4 T on T.DocEntry=L.DocEntry and L.LineNum=T.LineNum
    LEFT OUTER JOIN PCH5 J ON M.DocEntry = J.AbsEntry
    LEFT OUTER JOIN PCH3 Q ON M.DocEntry = Q.DocEntry
    WHERE M.DocDate >= '[%0]' AND M.DocDate <= '[%1]'
    GROUP BY
    T0.DocNum,T0.DocDate,M.DocNum,M.DocDate,M.CardName,M.NumAtCard,L.ItemCode,L.Dscription,L.Quantity,
    M.DocEntry,M.[DiscSum],M.WTSum,M.DocTotal
    ORDER BY
    T0.DocNum,T0.DocDate,M.DocNum,M.DocDate,M.CardName,M.NumAtCard,L.ItemCode,L.Dscription,L.Quantity,
    M.DocEntry,M.[DiscSum],M.WTSum,M.DocTotal
    Regards,
    Madhan.

  • Query Based Taxonomies

    Hello,
     I've successfully created a query-based taxonomy. I also defined the queries for the folders within the taxonomy. Now my problem is that a document only belongs to one folder. In some cases that's not nice, e.g. when you want to navigate through the hierarchy.
    Example:
    I've got a structure like Folder F1, under that a Folder F2.
    For folder F1 I defined a query: property P1 contains value V1.
    For folder F2 I defined a query: property P2 contains value V2.
     After updating the taxonomy, it should be possible for documents that are classified to folder F2 to also be classified to folder F1.
    How to do that?
    Help would be appreciated.
    Best regards,
    Denis

    Hello Karin,
     thank you for your answer. Does this mean that navigating through the taxonomy hierarchy is not possible (e.g. with a KM navigation iView)? Taxonomies are stored in KM under /Taxonomies, and I thought that navigation with a KM navigation iView was possible. I think the real benefit of taxonomies is that the deeper the hierarchy folder, the more classified and smaller the results. But if documents only get classified to multiple folders when the folders are on the same hierarchy level, how should navigation be realized?
    Thanks in advance.
    Best regards,
    Denis
