Missing partition by clause causing wrong aggregation

Hello all!
I have a Location Hierarchy. Country Region > Country > State > City
When I create a report using Country, Headcount (Month = July 11), I get correct results with the right aggregation:
United States   2000
Mexico          1500
Ireland         1000
SQL Generated:
WITH
SAWITH0 AS (select T95996.COUNTRY_NAME as c2,
T95996.COUNTRY_CODE as c3,
sum(T158903.HEADCOUNT) as c4,
T100027.PER_NAME_MONTH as c5
from
W_BUSN_LOCATION_D T95996,
W_EMPLOYMENT_D T95816,
W_MONTH_D T100027,
W_WRKFC_EVT_MONTH_F T158903
where ( T95816.ROW_WID = T158903.EMPLOYMENT_WID
and T95996.ROW_WID = T158903.LOCATION_WID
and T100027.ROW_WID = T158903.EVENT_MONTH_WID
and T100027.PER_NAME_MONTH = '2011 / 08' )
group by T95996.COUNTRY_CODE, T95996.COUNTRY_NAME, T100027.PER_NAME_MONTH),
SAWITH1 AS (select distinct SAWITH0.c2 as c1,
LAST_VALUE(SAWITH0.c4 IGNORE NULLS) OVER (PARTITION BY SAWITH0.c3 ORDER BY SAWITH0.c3 NULLS FIRST, SAWITH0.c5 NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as c2,
SAWITH0.c3 as c3
from
SAWITH0)
select SAWITH1.c1 as c1,
SAWITH1.c2 as c2
from
SAWITH1
order by c1
When I create a report using Country Region, Headcount (Month = July 11), I get the wrong aggregation and all rows show the same number:
Region 1- 135000
Region 2- 135000
Region 3- 135000
SQL Generated:
WITH
SAWITH0 AS (select T95996.COUNTRY_REGION as c2,
sum(T158903.HEADCOUNT) as c3,
T100027.PER_NAME_MONTH as c4
from
W_EMPLOYMENT_D T95816,
W_MONTH_D T100027,
W_WRKFC_EVT_MONTH_F T158903,
W_BUSN_LOCATION_D T95996
where ( T95816.ROW_WID = T158903.EMPLOYMENT_WID
and T100027.ROW_WID = T158903.EVENT_MONTH_WID
and T100027.PER_NAME_MONTH = '2011 / 08' )
group by T95996.COUNTRY_REGION, T100027.PER_NAME_MONTH)
select distinct SAWITH0.c2 as c1,
LAST_VALUE(SAWITH0.c3 IGNORE NULLS) OVER ( ORDER BY SAWITH0.c4 NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as c2
from
SAWITH0
order by c1
I see that the second SQL is missing the PARTITION BY clause, and I wonder if this is the reason for the wrong calculation. How can I make the BI Server include this clause?
Any leads will be helpful.
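For what it's worth, the effect of the missing PARTITION BY can be reproduced outside OBIEE. Below is a minimal sketch using SQLite's window functions (the table, column names, and data are invented for illustration): with PARTITION BY, LAST_VALUE is evaluated per group; without it, the whole result set forms a single partition, so every row receives the same value, which is exactly the symptom above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE headcount (region TEXT, month TEXT, hc INTEGER);
INSERT INTO headcount VALUES
  ('Region 1', '2011 / 08', 2000),
  ('Region 2', '2011 / 08', 1500),
  ('Region 3', '2011 / 08', 1000);
""")

# With PARTITION BY region: each region keeps its own aggregated value.
with_part = conn.execute("""
SELECT DISTINCT region,
       LAST_VALUE(hc) OVER (PARTITION BY region ORDER BY month
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
FROM headcount ORDER BY region
""").fetchall()

# Without PARTITION BY: the whole set is one partition, so every row
# gets whichever value is last in the overall ordering.
without_part = conn.execute("""
SELECT DISTINCT region,
       LAST_VALUE(hc) OVER (ORDER BY month
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
FROM headcount ORDER BY region
""").fetchall()

print(with_part)
print(without_part)
```

Running this shows three distinct values in the first result and one repeated value in the second, mirroring the Region 1/2/3 rows all showing 135000.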

Hi Deepak,
Thanks for your reply. I see your point here.
Some more info: This fact table is actually a monthly snapshot. Do you think I should check on anything else?
I tried to simplify the SQL generated by the BI Server by removing some extra conditions. Here is the actual SQL for Country, Headcount:
WITH
SAWITH0 AS (select T95996.COUNTRY_NAME as c2,
T95996.COUNTRY_CODE as c3,
sum(case when T95816.W_EMPLOYMENT_STAT_CODE = 'A' and T95816.W_EMPLOYEE_CAT_CODE = 'EMPLOYEE' then T158903.HEADCOUNT else 0 end ) as c4,
T100027.PER_NAME_MONTH as c5
from
W_BUSN_LOCATION_D T95996 /* Dim_W_BUSN_LOCATION_D_Employee */ ,
W_EMPLOYMENT_D T95816 /* Dim_W_EMPLOYMENT_D */ ,
W_MONTH_D T100027 /* Dim_W_MONTH_D */ ,
W_WRKFC_EVT_MONTH_F T158903 /* Fact_W_WRKFC_EVT_MONTH_F_Snapshot */
where ( T95816.ROW_WID = T158903.EMPLOYMENT_WID
and T95996.ROW_WID = T158903.LOCATION_WID
and T100027.ROW_WID = T158903.EVENT_MONTH_WID
and T100027.PER_NAME_MONTH = '2011 / 07'
and T158903.SNAPSHOT_IND = 1
and T158903.DELETE_FLG <> 'Y'
and T100027.CAL_MONTH_START_DT >= TO_DATE('2004-01-01 00:00:00' , 'YYYY-MM-DD HH24:MI:SS')
and (T158903.SNAPSHOT_MONTH_END_IND in (1) or T158903.EFFECTIVE_END_DATE >= TO_DATE('2011-08-11 00:00:00' , 'YYYY-MM-DD HH24:MI:SS'))
and (T95996.ROW_WID in (0) or T95996.BUSN_LOC_TYPE in ('EMP_LOC'))
and T158903.EFFECTIVE_START_DATE <= TO_DATE('2011-08-11 00:00:00' , 'YYYY-MM-DD HH24:MI:SS') )
group by T95996.COUNTRY_CODE, T95996.COUNTRY_NAME, T100027.PER_NAME_MONTH),
SAWITH1 AS (select distinct SAWITH0.c2 as c1,
LAST_VALUE(SAWITH0.c4 IGNORE NULLS) OVER (PARTITION BY SAWITH0.c3 ORDER BY SAWITH0.c3 NULLS FIRST, SAWITH0.c5 NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as c2,
SAWITH0.c3 as c3
from
SAWITH0)
select SAWITH1.c1 as c1,
SAWITH1.c2 as c2
from
SAWITH1
order by c1
for Country Region, Headcount:
WITH
SAWITH0 AS (select T95996.COUNTRY_REGION as c2,
sum(case when T95816.W_EMPLOYMENT_STAT_CODE = 'A' and T95816.W_EMPLOYEE_CAT_CODE = 'EMPLOYEE' then T158903.HEADCOUNT else 0 end ) as c3,
T100027.PER_NAME_MONTH as c4
from
W_EMPLOYMENT_D T95816 /* Dim_W_EMPLOYMENT_D */ ,
W_MONTH_D T100027 /* Dim_W_MONTH_D */ ,
W_WRKFC_EVT_MONTH_F T158903 /* Fact_W_WRKFC_EVT_MONTH_F_Snapshot */ ,
W_BUSN_LOCATION_D T95996 /* Dim_W_BUSN_LOCATION_D_Employee */
where ( T95816.ROW_WID = T158903.EMPLOYMENT_WID
and T100027.ROW_WID = T158903.EVENT_MONTH_WID
and T100027.PER_NAME_MONTH = '2011 / 07'
and T158903.SNAPSHOT_IND = 1
and T158903.DELETE_FLG <> 'Y'
and T100027.CAL_MONTH_START_DT >= TO_DATE('2004-01-01 00:00:00' , 'YYYY-MM-DD HH24:MI:SS')
and (T158903.SNAPSHOT_MONTH_END_IND in (1) or T158903.EFFECTIVE_END_DATE >= TO_DATE('2011-08-11 00:00:00' , 'YYYY-MM-DD HH24:MI:SS'))
and (T95996.ROW_WID in (0) or T95996.BUSN_LOC_TYPE in ('EMP_LOC'))
and T158903.EFFECTIVE_START_DATE <= TO_DATE('2011-08-11 00:00:00' , 'YYYY-MM-DD HH24:MI:SS') )
group by T95996.COUNTRY_REGION, T100027.PER_NAME_MONTH)
select distinct SAWITH0.c2 as c1,
LAST_VALUE(SAWITH0.c3 IGNORE NULLS) OVER ( ORDER BY SAWITH0.c4 NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as c2
from
SAWITH0
order by c1

Similar Messages

  • How to group the values with this partition over clause?

    Hi,
    I have a nice query:
    select  c.libelle "Activité", sum(b.duree) "Durée"
    from    fiche a, activite_faite b,
            activites c, agent d
    where   a.date_activite
    BETWEEN TO_DATE('20/09/2009', 'DD/MM/YYYY') AND TO_DATE('26/10/2009', 'DD/MM/YYYY')
    AND     a.agent_id = 104
    AND     a.fiche_id = b.fiche_id
    AND     b.activites_id = c.activites_id
    AND     a.agent_id = d.agent_id
    group   by c.libelle
    order   by sum(b.duree)
    It gives me this nice result:
    ACTIVITE  DUREE
    Tonte     27
    I want to get a percentage, so I use ratio_to_report:
    select  a.fiche_id, c.libelle "Activité", ratio_to_report(duree) over (partition by c.activites_id) * 100 "Durée"
    from    fiche a, activite_faite b,
            activites c, agent d
    where   a.date_activite
    BETWEEN TO_DATE('20/09/2009', 'DD/MM/YYYY') AND TO_DATE('26/10/2009', 'DD/MM/YYYY')
    AND     a.agent_id = 104
    AND     a.fiche_id = b.fiche_id
    AND     b.activites_id = c.activites_id
    AND     a.agent_id = d.agent_id
    It gives me this less nice result:
    Tonte 7,40740740740740740740740740740740740741
    Tonte 33,33333333333333333333333333333333333333
    Tonte 33,33333333333333333333333333333333333333
    Tonte 25,92592592592592592592592592592592592593
    I would like to get this result:
    Tonte 100
    I tried "grouping" values in the partition over clause but without success.
    Any help appreciated from the SQL masters:
    Regards,
    Christian

    Christian from France wrote:
    I would like to get this result :
    Tonte 100
    Hi,
    Why not this
    select  c.libelle "Activité", 100 "Durée"
    from    fiche a, activite_faite b,
            activites c, agent d
    where   a.date_activite
    BETWEEN TO_DATE('20/09/2009', 'DD/MM/YYYY') AND TO_DATE('26/10/2009', 'DD/MM/YYYY')
    AND     a.agent_id = 104
    AND     a.fiche_id = b.fiche_id
    AND     b.activites_id = c.activites_id
    AND     a.agent_id = d.agent_id
    group   by c.libelle
    order   by sum(b.duree)
    Because it would always be 100 (if you are taking it as a percentage), whatever the total of duree is.
    Or did I miss something in understanding the requirement?
    Regards
    Anurag
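If the query ever returns more than one activity, the usual fix for the original question is to GROUP BY first and then take the ratio over the aggregated rows. A sketch of that idea, runnable in SQLite (which has no RATIO_TO_REPORT, so a windowed SUM over the grouped totals stands in for it; the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE activite_faite (libelle TEXT, duree REAL);
INSERT INTO activite_faite VALUES
  ('Tonte', 2), ('Tonte', 9), ('Tonte', 9), ('Tonte', 7),
  ('Taille', 9);
""")

# Aggregate per activity first, then compute each activity's share of
# the grand total -- the RATIO_TO_REPORT equivalent over grouped rows.
rows = conn.execute("""
SELECT libelle, total,
       total * 100.0 / SUM(total) OVER () AS pct
FROM (SELECT libelle, SUM(duree) AS total
      FROM activite_faite
      GROUP BY libelle)
ORDER BY libelle
""").fetchall()
print(rows)
```

With a single activity in the result set, its pct is 100, matching the output the poster wanted; with several activities, each gets its true share.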

  • Cross-listing Query (Partition By Clause? Self-Join?)

    Hello,
    I need a query that will cross-list courses a professor is teaching this semester. Essentially, two fields need to be the same (i.e.: Section & CourseTitle), while the third field is different (i.e.: Subject).
    For example, Max Power is a professor teaching 3 courses, one is cross-listed (ENG 123 and JRL 123):
    LastName     FirstName     Subject     Section     CourseTitle
    Power          Max          ENG     123     English Composition
    Power          Max          ENG     452     Robert Frost Poetry     
    Power          Max          JRL     123     English Composition
    Power           Max          ENG      300     Faulkner & Twain
    The desired query output is this:
    LastName     FirstName     Subject     Section     CourseTitle
    Power          Max          ENG     123     English Composition
    Power          Max          JRL     123     English Composition
    Basically, I need only the cross-listed courses in the output. Is this an instance where I should use a PARTITION BY clause, or should I create a self-join?
    Much thanks for any help and comments.

    Unfortunately, I can't create new tables. I don't have permission. I can't alter, add or delete any of the data.
    So I tried Frank's code with my data:
    WITH got_cnt AS (
    SELECT  sivasgn_term_code, spriden_id, spriden_last_name, spriden_first_name,
                    ssbsect_ptrm_code, ssbsect_camp_code,
                    sivasgn_crn, ssbsect_subj_code, ssbsect_crse_numb, scbcrse_title,
           count(*) over (partition by ssbsect_crse_numb, scbcrse_title) cnt
    FROM spriden INNER JOIN sivasgn ON spriden_pidm = sivasgn_pidm JOIN
         ssvsect ON ssbsect_crn = sivasgn_crn JOIN
         sfrstcr ON sfrstcr_crn = sivasgn_crn
    WHERE  ssbsect_term_code= sivasgn_term_code
    AND sfrstcr_term_code = sivasgn_term_code
    AND ssbsect_enrl > '0' and sivasgn_credit_hr_sess > '0'
    AND sivasgn_term_code IN ('200901', '200909')
    AND spriden_change_ind IS NULL
    AND ssbsect_camp_code IN ('1', '2', 'A', 'B')
    )
    SELECT DISTINCT sivasgn_term_code, spriden_id, spriden_last_name, spriden_first_name,
                    substr(ssbsect_ptrm_code,1,1) as ptrm_code, ssbsect_camp_code,
                    sivasgn_crn, ssbsect_subj_code, ssbsect_crse_numb, scbcrse_title
    FROM got_cnt
    WHERE cnt >1
    ORDER BY spriden_last_name, sivasgn_term_code, ssbsect_crse_numb;
    The output pretty much displays all courses with the same subject code, course number and course title.
    Output:
    LastName     FirstName     Subject     Section     CourseTitle
    Power          Max          ENG     123     English Composition
    Power          Max          ENG     123     English Composition
    Power          Max          ENG     452     Robert Frost Poetry
    Power          Max          ENG     452     Robert Frost Poetry
    Power           Max          ENG      300     Faulkner & Twain
    Power           Max          ENG      300     Faulkner & Twain
    Power          Max          JRL     123     English Composition
    Power          Max          JRL     123     English Composition
    What I would like is the same course number and course title BUT a different subject code, pretty much what I described in my first post of this thread.
    Desired Output:
    LastName     FirstName     Subject     Section     CourseTitle
    Power          Max          ENG     123     English Composition
    Power          Max          JRL     123     English Composition
    Maybe I'm explaining this wrong. Any help would be greatly appreciated. Thanks.
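One way to express "same section and title but different subject" with analytics is to compare MIN and MAX of the subject within each section/title partition: they differ only for cross-listed rows. A sketch against an invented table (SQLite, which lacks COUNT(DISTINCT ...) OVER, hence the MIN/MAX trick):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE courses (lastname TEXT, firstname TEXT, subject TEXT,
                      section TEXT, title TEXT);
INSERT INTO courses VALUES
  ('Power', 'Max', 'ENG', '123', 'English Composition'),
  ('Power', 'Max', 'ENG', '452', 'Robert Frost Poetry'),
  ('Power', 'Max', 'JRL', '123', 'English Composition'),
  ('Power', 'Max', 'ENG', '300', 'Faulkner & Twain');
""")

# A section/title pair is cross-listed only if it appears under more
# than one subject; MIN and MAX of subject per partition differ
# exactly in that case.
rows = conn.execute("""
SELECT lastname, firstname, subject, section, title FROM (
  SELECT *,
         MIN(subject) OVER (PARTITION BY section, title) AS smin,
         MAX(subject) OVER (PARTITION BY section, title) AS smax
  FROM courses)
WHERE smin <> smax
ORDER BY subject
""").fetchall()
print(rows)
```

Only the ENG 123 / JRL 123 pair survives the filter, which is the desired output in the post above.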

  • Firefox 5.0 is supposed to have a separate Twitter tab, but I noticed today that that tab is missing. What is causing this problem?

    Firefox version 5.0 has a separate Twitter tab built into this web browser. However, today I have been having problems with launching the Firefox browser. An error message appears, after attempting to launch the browser, which states that the Firefox browser is already running, and that the process needs to be closed or that the computer needs to be restarted. After restarting my computer and launching the Firefox browser, I noticed that the built-in Twitter tab ( near the upper left-hand corner of the browser ) was missing. What is causing these problems?

    I'm pretty sure that Firefox version 5.0 has a built-in Twitter tab. Whatever the case may be, I downloaded Firefox version 5.0.1 this morning. I learned that if you right-click on a tab, that a menu box will appear on the screen. One of the options in that menu box is "Pin as App Tab." If I go to the http://twitter.com web site, and then right-click on the corresponding Twitter tab, and then select "Pin as App Tab," then a small Twitter tab will appear to the right of the orange-and-white "Firefox" tab ( in the upper left-hand corner of the Firefox web browser screen ). Then I can click on that Twitter tab and go directly to the http://twitter.com web site. However, when I exit the Firefox version 5.0.1 web browser, and then immediately re-launch it, the separate Twitter tab is gone. Is there any way, within Firefox version 5.0.1, that I can make the separate Twitter tab permanent after selecting "Pin as App Tab"? Thank you for your assistance regarding this matter.
    ''Edited by a moderator due to inappropriate content.''

  • Please help restore DMG missing partition and access file vault.

    I am quite emotional at the moment. If anyone can help me access my files, pictures of my baby daughter and important documents, I would be extremely grateful and willing to make a donation via PayPal.
    I have a 160 GB hard drive, which was partitioned as 30GB spare and 130GB with my operating system and user files, including my FileVault user directory.
    My computer was downloading movies, which filled up the 130GB partition; most of the 130GB is taken up by my personal FileVault user directory. The computer became very slow and unresponsive, and I tried to free up space by deleting files I no longer needed.
    The machine required a number of forced shutdowns and restarts until it would no longer boot up. I inserted the OS X CD, attempted to boot from it and use Disk Utility, which now only displays my 30GB of storage space, and I was forced to install OS X on the remaining 30GB partition.
    Now running OS X on the 30GB partition, I used Disk Drill to scan my hard disk for the missing 130GB partition. It found it, and I was able to save it as a DMG file to an external hard disk. I can now get this DMG to mount in the OS X install running on the spare 30GB space; however, I am not able to access the majority of the data inside the FileVault-protected user account.
    I would like to know whether it would be possible to restore the missing partition, as it remains on the HD but seems to be hidden.
    Or is there some way I can log in to the user account on the mounted DMG?
    I am not very familiar with using Terminal and would be extremely grateful for any advice or support leading to the recovery of my files.

    plesehelpme wrote:
    Is there any way to recover the larger partition? In Disk Utility I made the mistake of pressing the - button, which has left me with the following:
    [NO NAME] - is currently where i have installed OSX temporarily.
    And the grey space which contains my previous operating system and all my information which i desperately need to retrieve.
    Is there any way to restore and boot the missing partition?
    Probably not.     You've now deleted the partition.  It was probably useless anyway, though.
    When I run Disk Drill or other disk recovery tools I am able to see the partition, including the files and my personal sparsebundle.
    But you can't open the sparse bundle, much less see any of the files inside it, right?
    Since you've already recovered the (probably-damaged) sparse bundle containing your encrypted home folder, you'll have to work with it.
    I'd strongly recommend making another copy of that on another external HD and putting that "on the shelf" before doing anything else to it.   I suspect you have little chance of recovery, but don't take the risk of further damage.  
    You haven't answered my earlier questions:
    Have you tried to run Repair Disk on either the original or copied partition?
    If that doesn't help, this might:  http://www.thexlab.com/faqs/fixfilevault.html
    Do you have Time Machine backups?
    Obviously, now that the partition has been deleted, you can't run repair disk on it, but try it on the recovered disk image.  
    You need more expertise than either Christopher or I have.  Since Apple no longer uses "legacy" File Vault, it will be more difficult for them to help, too.
    Is your Mac covered by AppleCare?  If so, give them a call. 
    If not, your nearest Apple Store Genius Bar might be able to help:  http://www.apple.com/retail/geniusbar/ 
    Or check for an Apple Authorized Service Provider: http://support.apple.com/kb/HT1434?viewlocale=en_US
    If not, one of them may know of an outside service that can help. That will be quite expensive.

  • OBIEE BI Answers: Wrong Aggregation Measures on top level of hierarchy

    Hi to all,
    I have the following problem. I hope my English is clear, because it's a bit complicated to explain.
    I have following fact table:
    Drug Id Ordered Quantity
    1 9
    2 4
    1 3
    2 2
    and following Drug Table:
    Drug Brand Id Brand Description Drug Active Ingredient Id Drug Active Ingredient Description
    1 Aulin 1 Nimesulide
    2 Asprina 2 Acetilsalicilico
    In AWM i've defined a Drug Dimension based on following hierarchy: Drug Active Ingredient (parent) - Drug Brand Description (leaf) mapped as:
    Drug Active Ingredient = Drug Active Ingredient Id of my Drug Table (LONG DESCRIPTION Attribute=Drug Active Ingredient Description)
    Drug Brand Description = Drug Brand Id of my Drug Table (LONG DESCRIPTION Attribute = Drug Brand Description)
    In my cube I've mapped the leaf level Drug Brand Description = Drug Id of my fact table. In AWM the Drug Dimension uses the Sum aggregation operator.
    If I select Drug Active Ingredient (the parent of my hierarchy) and Ordered Quantity in Answers, I see the following result:
    Drug Active Ingredient Description Ordered Quantity
    Acetilsalicilico 24
    Nimesulide 12
    instead of the correct values
    Drug Active Ingredient Description Ordered Quantity
    Acetilsalicilico 12
    Nimesulide 6
    EXACTLY double!!! But if I drill down on Drug Active Ingredient Description Acetilsalicilico I see, correctly:
    Drug Active Ingredient Description Drug Brand Description Ordered Quantity
    Acetilsalicilico
    - Aspirina 12
    Total 12
    The wrong aggregation is only at the top level of the hierarchy; aggregation at lower levels is correct. Maybe Answers also sums the Total row? Why?
    I'm frustrated. I beg for your help, please!
    Giancarlo

    Hi,
    In NQSConfig.ini I can't find the Cache section, so I'm posting the whole file. Tell me what I must change. I know your patience is almost at its limit! But I'm a new user of OBIEE.
    # NQSConfig.INI
    # Copyright (c) 1997-2006 Oracle Corporation, All rights reserved
    # INI file parser rules are:
    # If values are in literals, digits or _, they can be
    # given as such. If values contain characters other than
    # literals, digits or _, values must be given in quotes.
    # Repository Section
    # Repositories are defined as logical repository name - file name
    # pairs. ODBC drivers use logical repository name defined in this
    # section.
    # All repositories must reside in OracleBI\server\Repository
    # directory, where OracleBI is the directory in which the Oracle BI
    # Server software is installed.
    [ REPOSITORY ]
    #Star     =     samplesales.rpd, DEFAULT;
    Star = Step3.rpd, DEFAULT;
    # Query Result Cache Section
    [ CACHE ]
    ENABLE     =     YES;
    // A comma separated list of <directory maxSize> pair(s)
    // e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
    DATA_STORAGE_PATHS     =     "C:\OracleBIData\cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    MAX_SUBEXPR_SEARCH_DEPTH = 7;
    // Cluster-aware cache
    // GLOBAL_CACHE_STORAGE_PATH = "<directory name>" SIZE;
    // MAX_GLOBAL_CACHE_ENTRIES = 1000;
    // CACHE_POLL_SECONDS = 300;
    // CLUSTER_AWARE_CACHE_LOGGING = NO;
    # General Section
    # Contains general server default parameters, including localization
    # and internationalization, temporary space and memory allocation,
    # and other default parameters used to determine how data is returned
    # from the server to a client.
    [ GENERAL ]
    // Localization/Internationalization parameters.
    LOCALE     =     "Italian";
    SORT_ORDER_LOCALE     =     "Italian";
    SORT_TYPE = "binary";
    // Case sensitivity should be set to match the remote
    // target database.
    CASE_SENSITIVE_CHARACTER_COMPARISON = OFF ;
    // SQLServer65 sorts nulls first, whereas Oracle sorts
    // nulls last. This ini file property should conform to
    // that of the remote target database, if there is a
    // single remote database. Otherwise, choose the order
    // that matches the predominant database (i.e. on the
    // basis of data volume, frequency of access, sort
    // performance, network bandwidth).
    NULL_VALUES_SORT_FIRST = OFF;
    DATE_TIME_DISPLAY_FORMAT = "yyyy/mm/dd hh:mi:ss" ;
    DATE_DISPLAY_FORMAT = "yyyy/mm/dd" ;
    TIME_DISPLAY_FORMAT = "hh:mi:ss" ;
    // Temporary space, memory, and resource allocation
    // parameters.
    // You may use KB, MB for memory size.
    WORK_DIRECTORY_PATHS     =     "C:\OracleBIData\tmp";
    SORT_MEMORY_SIZE = 4 MB ;
    SORT_BUFFER_INCREMENT_SIZE = 256 KB ;
    VIRTUAL_TABLE_PAGE_SIZE = 128 KB ;
    // Analytics Server will return all month and day names as three
    // letter abbreviations (e.g., "Jan", "Feb", "Sat", "Sun").
    // To use complete names, set the following values to YES.
    USE_LONG_MONTH_NAMES = NO;
    USE_LONG_DAY_NAMES = NO;
    UPPERCASE_USERNAME_FOR_INITBLOCK = NO ; // default is no
    // Aggregate Persistence defaults
    // The prefix must be between 1 and 8 characters long
    // and should not have any special characters ('_' is allowed).
    AGGREGATE_PREFIX = "SA_" ;
    # Security Section
    # Legal value for DEFAULT_PRIVILEGES are:
    # NONE READ
    [ SECURITY ]
    DEFAULT_PRIVILEGES = READ;
    PROJECT_INACCESSIBLE_COLUMN_AS_NULL     =     NO;
    MINIMUM_PASSWORD_LENGTH     =     0;
    #IGNORE_LDAP_PWD_EXPIRY_WARNING = NO; // default is no.
    #SSL=NO;
    #SSL_CERTIFICATE_FILE="servercert.pem";
    #SSL_PRIVATE_KEY_FILE="serverkey.pem";
    #SSL_PK_PASSPHRASE_FILE="serverpwd.txt";
    #SSL_PK_PASSPHRASE_PROGRAM="sitepwd.exe";
    #SSL_VERIFY_PEER=NO;
    #SSL_CA_CERTIFICATE_DIR="CACertDIR";
    #SSL_CA_CERTIFICATE_FILE="CACertFile";
    #SSL_TRUSTED_PEER_DNS="";
    #SSL_CERT_VERIFICATION_DEPTH=9;
    #SSL_CIPHER_LIST="";
    # There are 3 types of authentication. The default is NQS
    # You can select only one of them
    #----- 1 -----
    #AUTHENTICATION_TYPE = NQS; // optional and default
    #----- 2 -----
    #AUTHENTICATION_TYPE = DATABASE;
    # [ DATABASE ]
    # DATABASE = "some_data_base";
    #----- 3 -----
    #AUTHENTICATION_TYPE = BYPASS_NQS;
    # Server Section
    [ SERVER ]
    SERVER_NAME = Oracle_BI_Server ;
    READ_ONLY_MODE = NO;     // default is "NO". That is, repositories can be edited online.
    MAX_SESSION_LIMIT = 2000 ;
    MAX_REQUEST_PER_SESSION_LIMIT = 500 ;
    SERVER_THREAD_RANGE = 40-100;
    SERVER_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    DB_GATEWAY_THREAD_RANGE = 40-200;
    DB_GATEWAY_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    MAX_EXPANDED_SUBQUERY_PREDICATES = 8192; // default is 8192
    MAX_QUERY_PLAN_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_INFO_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_QUERY_CACHE_ENTRIES = 1024; // default is 1024
    INIT_BLOCK_CACHE_ENTRIES = 20; // default is 20
    CLIENT_MGMT_THREADS_MAX = 5; // default is 5
    # The port number specified with RPC_SERVICE_OR_PORT will NOT be considered if
    # a port number is specified in SERVER_HOSTNAME_OR_IP_ADDRESSES.
    RPC_SERVICE_OR_PORT = 9703; // default is 9703
    # If port is not specified with a host name or IP in the following option, the port
    # number specified at RPC_SERVICE_OR_PORT will be considered.
    # When port number is specified, it will override the one specified with
    # RPC_SERVICE_OR_PORT.
    SERVER_HOSTNAME_OR_IP_ADDRESSES = "ALLNICS"; # Example: "hostname" or "hostname":port
    # or "IP1","IP2":port or
    # "hostname":port,"IP":port2.
    # Note: When this option is active,
    # CLUSTER_PARTICIPANT should be set to NO.
    ENABLE_DB_HINTS = YES; // default is yes
    PREVENT_DIVIDE_BY_ZERO = YES;
    CLUSTER_PARTICIPANT = NO; # If this is set to "YES", comment out
    # SERVER_HOSTNAME_OR_IP_ADDRESSES. No specific NIC support
    # for the cluster participant yet.
    // Following required if CLUSTER_PARTICIPANT = YES
    #REPOSITORY_PUBLISHING_DIRECTORY = "<dirname>";
    #REQUIRE_PUBLISHING_DIRECTORY = YES; // Don't join cluster if directory not accessible
    DISCONNECTED = NO;
    AUTOMATIC_RESTART = YES;
    # Dynamic Library Section
    # The dynamic libraries specified in this section
    # are categorized by the CLI they support.
    [ DB_DYNAMIC_LIBRARY ]
    ODBC200 = nqsdbgatewayodbc;
    ODBC350 = nqsdbgatewayodbc35;
    OCI7 = nqsdbgatewayoci7;
    OCI8 = nqsdbgatewayoci8;
    OCI8i = nqsdbgatewayoci8i;
    OCI10g = nqsdbgatewayoci10g;
    DB2CLI = nqsdbgatewaydb2cli;
    DB2CLI35 = nqsdbgatewaydb2cli35;
    NQSXML = nqsdbgatewayxml;
    XMLA = nqsdbgatewayxmla;
    ESSBASE = nqsdbgatewayessbasecapi;
    # User Log Section
    # The user log NQQuery.log is kept in the server\log directory. It logs
    # activity about queries when enabled for a user. Entries can be
    # viewed using a text editor or the nQLogViewer executable.
    [ USER_LOG ]
    USER_LOG_FILE_SIZE = 10 MB; // default size
    CODE_PAGE = "UTF8"; // ANSI, UTF8, 1252, etc.
    # Usage Tracking Section
    # Collect usage statistics on each logical query submitted to the
    # server.
    [ USAGE_TRACKING ]
    ENABLE = NO;
    //==============================================================================
    // Parameters used for writing data to a flat file (i.e. DIRECT_INSERT = NO).
    STORAGE_DIRECTORY = "<full directory path>";
    CHECKPOINT_INTERVAL_MINUTES = 5;
    FILE_ROLLOVER_INTERVAL_MINUTES = 30;
    CODE_PAGE = "ANSI"; // ANSI, UTF8, 1252, etc.
    //==============================================================================
    DIRECT_INSERT = YES;
    //==============================================================================
    // Parameters used for inserting data into a table (i.e. DIRECT_INSERT = YES).
    PHYSICAL_TABLE_NAME = "<Database>"."<Catalog>"."<Schema>"."<Table>" ; // Or "<Database>"."<Schema>"."<Table>" ;
    CONNECTION_POOL = "<Database>"."<Connection Pool>" ;
    BUFFER_SIZE = 10 MB ;
    BUFFER_TIME_LIMIT_SECONDS = 5 ;
    NUM_INSERT_THREADS = 5 ;
    MAX_INSERTS_PER_TRANSACTION = 1 ;
    //==============================================================================
    # Query Optimization Flags
    [ OPTIMIZATION_FLAGS ]
    STRONG_DATETIME_TYPE_CHECKING = ON ;
    # CubeViews Section
    [ CUBE_VIEWS ]
    DISTINCT_COUNT_SUPPORTED = NO ;
    STATISTICAL_FUNCTIONS_SUPPORTED = NO ;
    USE_SCHEMA_NAME = YES ;
    USE_SCHEMA_NAME_FROM_RPD = YES ;
    DEFAULT_SCHEMA_NAME = "ORACLE";
    CUBE_VIEWS_SCHEMA_NAME = "ORACLE";
    LOG_FAILURES = YES ;
    LOG_SUCCESS = NO ;
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\CubeViews.Log";
    # MDX Member Name Cache Section
    # Cache subsystem for mapping between unique name and caption of
    # members for all SAP/BW cubes in the repository.
    [ MDX_MEMBER_CACHE ]
    // The entry to indicate if the feature is enabled or not, by default it is NO since this only applies to SAP/BW cubes
    ENABLE = NO ;
    // The path to the location where cache will be persisted, only applied to a single location,
    // the number at the end indicates the capacity of the storage. When the feature is enabled,
    // administrator needs to replace the "<full directory path>" with a valid path,
    // e.g. DATA_STORAGE_PATH = "C:\OracleBI\server\Data\Temp\Cache" 500 MB ;
    DATA_STORAGE_PATH     =     "C:\OracleBIData\cache" 500 MB;
    // Maximum disk space allowed for each user;
    MAX_SIZE_PER_USER = 100 MB ;
    // Maximum number of members in a level will be able to be persisted to disk
    MAX_MEMBER_PER_LEVEL = 1000 ;
    // Maximum size for each individual cache entry size
    MAX_CACHE_SIZE = 100 MB ;
    # Oracle Dimension Export Section
    [ ORA_DIM_EXPORT ]
    USE_SCHEMA_NAME_FROM_RPD = YES ; # NO
    DEFAULT_SCHEMA_NAME = "ORACLE";
    ORA_DIM_SCHEMA_NAME = "ORACLE";
    LOGGING = ON ; # OFF, DEBUG
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\OraDimExp.Log";

  • Partition by clause

    Does the PARTITION BY clause give better performance than a normal GROUP BY clause?
    thanks,
    Raj.

    Analytic queries != Aggregate queries
    Therefore it depends on what you're doing as to whether you'd want to use Analytic vs Aggregate queries.
    Performance depends also on the amount of data in your tables.
    Having said that, analytics can mean that a self-join is no longer needed, and this could help if a large table is involved ... on the other hand, it might hinder.
    Think of Analytic queries as another tool in your toolbox; sometimes a hammer is the right tool to use, but sometimes you need a spanner instead.
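To illustrate the self-join point above: attaching a per-group total to every detail row classically requires joining back to a GROUP BY subquery, while an analytic SUM ... OVER (PARTITION BY ...) produces the same rows in a single pass. A sketch with an invented table (SQLite via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (deptno INTEGER, ename TEXT, sal INTEGER);
INSERT INTO emp VALUES (10, 'CLARK', 2450), (10, 'KING', 5000),
                       (20, 'SMITH', 800), (20, 'FORD', 3000);
""")

# Aggregate version: a self-join against a GROUP BY derived table.
joined = conn.execute("""
SELECT e.ename, e.sal, t.dept_total
FROM emp e
JOIN (SELECT deptno, SUM(sal) AS dept_total
      FROM emp GROUP BY deptno) t ON t.deptno = e.deptno
ORDER BY e.ename
""").fetchall()

# Analytic version: same rows in a single scan, no join.
analytic = conn.execute("""
SELECT ename, sal, SUM(sal) OVER (PARTITION BY deptno) AS dept_total
FROM emp ORDER BY ename
""").fetchall()

print(joined == analytic)
```

Whether the analytic form is actually faster depends on data volume and the optimizer, as the reply says; the point is only that it removes the second scan and the join.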

  • Missing partition boot record

    I'm a computer professional and am trying to install Solaris 7 (intel) on a Celeron 400 PC. The installation was okay (a few problems with an Elsa Quickstep 1000 ISA PNP card) except for the error 'Missing partition boot record.'
    The installation messages didn't show any errors, so I'm wondering what this error means. If anyone can help me, I would greatly appreciate it.
    Thank you.
    Jasper

    Just my 2 cents worth from someone who's had similar pain!
    I discovered by trial and error that my Compaq Deskpro seems to want a 2MB partition described by fdisk as 'DIAGNOSTIC' in addition to the ACTIVE 'SOLARIS' partition. This doesn't appear in, or have anything to do with, the slice layout. I believe it may be used for the VTOC and written to by root...
    Just FYI - the Sun docs say that Slice 2 is by default used for 'overlap' i.e. used to describe the whole logical disk.
    Slice 0 and 1 are usually for your / and swap partitions respectively.
    I believe slices 4 and 5 are normally used by /usr and /opt, with slice 6 or 7 for /export/home.
    Colin

  • OMF enabled but still receiving ORA-02199: missing DATAFILE/TEMPFILE clause

    Any Ideas?
    SQL> show parameter DB_CREATE
    NAME TYPE VALUE
    db_create_file_dest string /path/to/datafiles
    db_create_online_log_dest_1 string /path/to/redo
    SQL> create tablespace test;
    create tablespace test
    ERROR at line 1:
    ORA-02199: missing DATAFILE/TEMPFILE clause
    Note: this normally works, but I ran into a problem with a script and pinpointed it to the system not recognizing that it should use OMF.
    - There is plenty of space on the filesystems

    _omf was disabled by accident by another team.

  • Help!!! - partition by clause

    Hi. I have the following table:
    table : item_tracker
    date | item_code | begin_price
    At the end of each day, one record for each item is written to this table. After the records are written, I want to calculate the change in begin prices.
    The formula is change_in_price = (todays_begin / yesterdays_begin) * 100
    I wrote the following query but it doesn't seem to work. Please advise me on what I am doing wrong. Also, is the approach wrong?
    select date, item_code, begin_price, first_value(begin_price) over (partition by date, item_code order by trunc(date) desc rows between 2 preceding and 1 preceding) as yesterdays_begin
    from item_tracker
    here, the partition by column is not showing any value.
    is there any way i can include the date ? (like a 'having clause' for group by, what is the method to partition by)
    i tried range between as well. it reurned the error that i need to mention a number value instead of date to check a range.
    please help me :(
    thanx in advance

    Yes, joins are effective :
    SQL> create table item_tracker(price_date date, item_id number, day_price number);
    Table created.
    SQL> insert into item_tracker values ('11-JAN-11',1,10);
    1 row created.
    SQL> insert into item_tracker values ('11-JAN-11',2,12);
    1 row created.
    SQL> insert into item_tracker values ('11-JAN-11',3,24);
    1 row created.
    SQL> insert into item_tracker values ('12-JAN-11',1,10.5);
    1 row created.
    SQL> insert into item_tracker values ('12-JAN-11',2,16);
    1 row created.
    SQL> insert into item_tracker values ('12-JAN-11',3,21);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>
    SQL> l
      1  select a.item_id, a.price_date, a.day_price, b.day_price PrevPrice, (a.day_price - b.day_price) PriceDiff
      2  from item_tracker a, item_tracker b
      3  where a.item_id=b.item_id
      4  and a.price_date=to_date('12-JAN-11','DD-MON-RR')
      5  and b.price_date=a.price_date-1
      6* order by 1
    SQL> /
       ITEM_ID PRICE_DAT  DAY_PRICE  PREVPRICE  PRICEDIFF
             1 12-JAN-11       10.5         10         .5
             2 12-JAN-11         16         12          4
             3 12-JAN-11         21         24         -3
    SQL>
    Hemant K Chitale
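    For reference, the same previous-day lookup can be done with the LAG analytic instead of a self-join, which also handles gaps where no prior row exists (it simply returns NULL). A minimal sketch, run here against SQLite 3.25+ via Python's sqlite3 (the OVER clause syntax is the same in Oracle); table and column names follow Hemant's example:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE item_tracker (price_date TEXT, item_id INTEGER, day_price REAL);
    INSERT INTO item_tracker VALUES
      ('2011-01-11', 1, 10),   ('2011-01-11', 2, 12), ('2011-01-11', 3, 24),
      ('2011-01-12', 1, 10.5), ('2011-01-12', 2, 16), ('2011-01-12', 3, 21);
    """)

    # LAG(day_price) looks at the previous row within each item's partition,
    # ordered by date, so no self-join is needed; the first day yields NULL.
    rows = conn.execute("""
        SELECT item_id, price_date, day_price,
               LAG(day_price) OVER (PARTITION BY item_id ORDER BY price_date)
                   AS prev_price,
               day_price
                 - LAG(day_price) OVER (PARTITION BY item_id ORDER BY price_date)
                   AS price_diff
        FROM item_tracker
        ORDER BY item_id, price_date
    """).fetchall()

    for row in rows:
        print(row)
    ```

    The percent change the original poster asked for is then just `100.0 * day_price / prev_price` in place of the subtraction.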

  • Missing partitions after running Recovery from previous System Image Backup in Windows 8.1

    After running Recovery from a recent System Image Backup, the LRS_ESP partition is now missing and the used space on the PBR_DRV partition is reported as 95.20MB, while it previously reported 9.52GB in use. The WINRE_DRV, SYSTEM_DRV, Windows8_OS and D:LENOVO partitions appear to be OK.
    I ran the Recovery after replacing the original 500GB HDD with a 1TB one. Any details/links/recommendations on recreating/correcting the LRS_ESP and PBR_DRV partitions? Thanks!

    Here are the before and after partition details obtained through Partition Wizard:
    Link to image 1
    Link to image 2
    Moderator note: large image(s) converted to link(s):  About Posting Pictures In The Forums

  • Missing start boundary exception, caused by an empty Part, how to handle?

    Hello,
    I wrote an application that automatically handles mails from laboratories. The only essential part of the mail is the attachment, where chemical analyses are submitted (from permitted addresses, recognized by whitelist and by the file header of the attachment). Other ways to submit data weren't allowed.
    Recently a mail was received that can't be parsed. It's from a laboratory that uses its provider's webmail, a browser-based mailing portal (the provider is a German internet supplier named Arcor). It always worked fine, because they wrote some greetings. But this time they sent a blank message. The result is the following structure of the mail:
    MIME-Version: 1.0
    Content-Type: multipart/mixed;
    boundary="----=_Part_50112_10709369.1203586767396"
    //Some X-Flags
    ------=_Part_50112_10709369.1203586767396
    Content-Type: multipart/alternative;
         boundary="*----=_Part_50111_24141780.1203586767396*"
    ------=_Part_50111_24141780.1203586767396--
    ------=_Part_50112_10709369.1203586767396
    Content-Type: application/octet-stream
    Content-Transfer-Encoding: base64
    Content-Disposition: attachment; filename=somefile.bin
    ABCDEF.... //Some binary data
    ------=_Part_50112_10709369.1203586767396--
    It seems the webmailer creates an empty mail part and only writes the end boundary (line: ------=_Part_50111_24141780.1203586767396--).
    I know the start boundary really is missing.
    I checked it out by getting a mail account from Arcor, and it always creates this structure when sending a message without text. By the way, the Message-ID header generated by Arcor's server seems to be from JavaMail (.....1234.567890.....JavaMail.ngmail@....).
    I don't know how many mail clients create "empty" parts, but nothing is impossible (e.g. other or future webmail services).
    But how to handle it?
    The error occurs when calling MimeMultipart.getCount(), which causes the mail to be parsed if it hasn't been parsed already. All actions which cause the mail to be parsed will end in this exception (for this mail).
    I looked at the JavaMail source and found out that the line of the empty part is not recognized as a boundary, because of its ending delimiters:
    if (line.equals(boundary))
    break;
    So the boundary is added to the preamble. It goes on with reading lines from the stream, until line == null.
    if (line == null)
    throw new MessagingException("Missing start boundary");
    Because there is no test whether the line matches the end boundary, it's not recognized. Wouldn't it be better in this case to add an empty body part and set a variable (e.g. complete) to false, instead of throwing an exception? Because MimeMultipart.parse() is called by other methods, like getCount, getBodyPart and writeTo, I can hardly do anything with the mail automatically. How should I walk through the body parts and fetch the parts I'm interested in?
    Subclassing seems to be difficult to me:
    Object content = message.getContent();
    // javax.mail.Message won't return a subclassed multipart
    if (content instanceof Multipart) {
        // recursive method!
        handleMultipart((Multipart) content); // collecting parts from the multipart
    }
    Of course, I could ask the laboratory: "please send me a greeting!" ;-)
    Greetings,
    cliff

    Interesting.
    Yes, it's probably a bug that JavaMail allows you to
    create a multipart with no body parts, since the
    MIME specification doesn't allow that. Still, the
    webmail application should be fixed so that it doesn't
    try to do that, at least including an empty plain text
    body part.
    Please contact the webmail provider and tell them of
    this bug in their application.
    I'll also look into making JavaMail cope with these
    broken messages more gracefully. Contact me
    at [email protected] and I'll let you know when
    I have a version ready to test.
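    For comparison with JavaMail's strict behavior, some parsers record the problem and keep going, which is essentially what cliff is asking for. A small sketch (with a hypothetical stand-in message, boundaries simplified from the original) using Python's email package, which attaches a StartBoundaryNotFoundDefect to the empty part and still lets you iterate the remaining parts:

    ```python
    import email
    from email import errors

    # A multipart whose inner part has only an end boundary -- the same shape
    # as the Arcor webmail message described above.
    raw = """\
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="outer"

    --outer
    Content-Type: multipart/alternative; boundary="inner"

    --inner--
    --outer
    Content-Type: application/octet-stream

    ABCDEF
    --outer--
    """
    # Strip the indentation this listing adds; header lines must start in column 1.
    raw = "\n".join(line.lstrip() for line in raw.splitlines()) + "\n"

    msg = email.message_from_string(raw)
    broken, attachment = msg.get_payload()   # parsing succeeds; two parts

    # The defective inner part is degraded to an empty non-multipart body,
    # with the problem recorded on .defects instead of raised as an error.
    print([type(d).__name__ for d in broken.defects])
    print(broken.is_multipart(), repr(broken.get_payload()))
    print(attachment.get_payload().strip())
    ```

    The attachment part stays reachable even though its sibling is malformed, which is the graceful handling the poster wants JavaMail to adopt.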

  • XML Parsing attributes with encoded ampersand causes wrong order

    Hi all,
    I am writing in the forum first because it could be that I am doing something wrong... but I think it is a bug. Nonetheless, I thought I'd write my problem up here first.
    I am using Java 6, and this has been reproduced on both windows and linux.
    java version "1.6.0_03"
    Problem:
    I read an XML file into an org.w3c.dom.Document.
    The XML file has some attributes which contain an ampersand. These are escaped as (I think) is prescribed by the rules of XML. For example:
    <?xml version="1.0" encoding="UTF-8"?>
         <lang>
              <text dna="8233" ro="chisturi de plex coroid (&gt;=1.5 mm)" it="Cisti del plesso corioideo(&gt;=1.5mm)" tr="Koro&#305;d pleksus kisti (&gt;=1.5 mm)" pt_br="Cisto do plexo cor&oacute;ide (&gt;=1,5 mm)" de="Choroidplexus Zyste (&gt;=1,5 mm)" el="&Kappa;&#973;&sigma;&tau;&epsilon;&iota;&sigmaf; &chi;&omicron;&rho;&omicron;&epsilon;&iota;&delta;&omicron;&#973;&sigmaf; &pi;&lambda;&#941;&gamma;&mu;&alpha;&tau;&omicron;&sigmaf; (&gt;= 1.5 mm)" zh_cn="&#33033;&#32476;&#33180;&#22218;&#32959;&#65288;&gt;= 1.5 mm&#65289;" pt="Quisto do plexo coroideu (&gt;=1,5 mm)" bg="&#1050;&#1080;&#1089;&#1090;&#1072; &#1085;&#1072; &#1093;&#1086;&#1088;&#1080;&#1086;&#1080;&#1076;&#1085;&#1080;&#1103; &#1087;&#1083;&#1077;&#1082;&#1089;&#1091;&#1089; (&gt;= 1.5 mm)" fr="Kystes du plexus choroide (&gt;= 1,5 mm)" en="Choroid plexus cysts (&gt;=1.5 mm)" ru="&#1082;&#1080;&#1089;&#1090;&#1099; &#1089;&#1086;&#1089;&#1091;&#1076;&#1080;&#1089;&#1090;&#1099;&#1093; &#1089;&#1087;&#1083;&#1077;&#1090;&#1077;&#1085;&#1080;&#1081; (&gt;=1.5 mm)" es="Quiste del plexo coroideo (&gt;=1.5 mm)" ja="&#33032;&#32097;&#33180;&#22178;&#32990;&#65288;&gt;=1.5mm&#65289;" nl="Plexus choroidus cyste (&gt;= 1,5 mm)" />
    </lang>
    As you might understand, we need to keep the literal text '&gt;' for later processing (not the greater-than symbol '>' itself, but the escaped version of it).
    Therefore, I escape the ampersand (encode?) and leave the rest of the text as is. And so my &gt; becomes &amp;gt;
    All ok?
    Symptom:
    in fetching attributes, for example by the getAttribute("en") type call, the wrong attribute values are fetched.
    Not only that, if i only read to Document instance, and write back to file, the attributes are shown mixed up.
    eg:
    dna: 8233, ro=chisturi de plex coroid (>=1.5 mm), en=&#1082;&#1080;&#1089;&#1090;&#1099; &#1089;&#1086;&#1089;&#1091;&#1076;&#1080;&#1089;&#1090;&#1099;&#1093; &#1089;&#1087;&#1083;&#1077;&#1090;&#1077;&#1085;&#1080;&#1081; (>=1, de=Choroidplexus Zyste (>=1,5 mm)
    Here you can see that 'en' is shown holding what looks like Greek... (what is ru as a country code anyway?) where it should obviously have had the English text that was originally associated with the attribute 'en'.
    This seems very strange and unexpected to me. I would have thought that in escaping (encoding) the ampersand, i have fulfilled all requirements of me, and that should be that.
    There is also no error that seems to occur.... we simply get the wrong order when fetching attributes.
    Am I doing something wrong? Or is this a bug that should be submitted?
    Kind Regards, and thanks to all responders/readers.
    Sean
    p.s. previously I had not been escaping the ampersand. This meant that I lost my ampersand in fetching attributes, AND the attribute order was ALSO WRONG!
    In fact, the wrong order was what led me to read about how to correctly encode ampersand at all. I had been hoping that correctly encoding would fix the order problem, but it didn't.
    Edited by: svaens on Mar 31, 2008 6:21 AM

    Hi kdgregory ,
    Firstly, sorry if there has been a misunderstanding on my part. If I did not reply to the question you raised, I apologise.
    In this 'reply' I hope not to risk further misunderstanding, and have simply given the most basic example which will cause the problem I am talking about, as well as short instructions on what XML to remove to make the problem disappear.
    Secondly, as this page seems to be displayed in ISO 8859-1, this is the reason the xml I have posted looks garbled. The xml is UTF-8. I have provided a link to the example xml file for the sample below
    [example xml file UTF-8|http://sean.freeshell.org/java/less2.xml]
    As for your most recent questions:
    Is it specified as an entity? To my knowledge (so far as I understand what an entity is), yes, I am including entities in my XML. In the example below, the entities are the code for the greater-than symbol. My understanding is that this is allowed in XML??
    Is it an actual literal character (0xA0)? No, I am specifying the 'greater than' entity (code?) in order to include the actual symbol in the end result. I am encoding it in the form 'ampersand', 'g character', 't character', 'semicolon' in order for it to work, according to information I have read on various web pages. A quick Google search will show you where I got such information from; example website: https://studio.tellme.com/general/xmlprimer.html
    Here is my sample program. It is longer than the one you kindly provided only because it prints out all attributes of the element it looks for. To use it, just change the name of the file it loads.
    I have given the XML code separately so it can be easily copied and saved to a file.
    Results you can expect from running this small test example?
    1. a mixed up list of attributes where attribute node name no longer matches its assigned attribute values (not for all attributes, but some).
    2. removing the attribute bg from the 'text' element will reduce most of these symptoms, but not all. Removing another attribute from the element will most likely make the end result look normal again.
    3. No exception is thrown by the presence of non xml characters.
    IMPORTANT!!! I have only just (unfortunately) noticed what this page does to my Unicode characters... all the international characters get turned into funny codes when previewed and viewed on this page.
    Whereas the only codes I am explicitly including in this XML are for the greater-than symbol. The rest were international characters.
    Perhaps that is the problem?
    Perhaps there is an international characters problem?
    I am quite sure that these characters are all UTF-8, because when I open this XML file in Firefox it displays correctly, and on checking the character encoding, Firefox reports UTF-8.
    In order to provide an un-garbled xml file, I will provide it at this link:
    link to xml file: [http://sean.freeshell.org/java/less2.xml]
    Again, sorry for any hassle and/or delay with my reply, or poor reply. I did not mean to waste anyones time.
    It will be appreciated however if an answer can be found for this problem. Chiefly,
    1. Is this a bug?
    2. Is the XML correct? (if not, then all those websites i've been reading are giving false information? )
    Kindest Regards,
    Sean
    import java.io.FileInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NamedNodeMap;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;
    import org.xml.sax.InputSource;

    public class Example {
        public static void main(String[] argv) {
            try {
                FileInputStream fis = new FileInputStream("/home/sean/Desktop/chris/less2.xml");
                Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(fis));
                Element root = doc.getDocumentElement();
                NodeList textnodes = root.getElementsByTagName("text");
                int len = textnodes.getLength();
                int index = 0;
                while (index < len) {
                    Element te = (Element) textnodes.item(index);
                    NamedNodeMap attrs = te.getAttributes();
                    int attrlen = attrs.getLength();
                    int attindex = 0;
                    while (attindex < attrlen) {
                        Node node = attrs.item(attindex);
                        System.out.println("attr: " + node.getNodeName()
                            + " is shown holding value: " + node.getNodeValue());
                        attindex++;
                    }
                    index++;
                    System.out.println("-------------");
                }
                fis.close();
            } catch (Exception e) {
                System.out.println("we've had an exception, type " + e);
            }
        }
    }
    [example xml file|http://sean.freeshell.org/java/less2.xml]
    FOR THE XML, Please see link above, as it is UTF-8, and this page is not. Edited by: svaens on Apr 7, 2008 7:03 AM
    Edited by: svaens on Apr 7, 2008 7:23 AM
    Edited by: svaens on Apr 7, 2008 7:37 AM
    Edited by: svaens on Apr 7, 2008 7:41 AM
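    As a sanity check on the escaping question itself: a conforming DOM parser decodes `&amp;gt;` in an attribute to the literal text `&gt;`, and attribute names stay paired with their own values. A minimal sketch using Python's xml.dom.minidom with a cut-down, ASCII-only stand-in for Sean's element (two attributes invented for illustration), which suggests the mix-up lies in the parser or the file's actual encoding rather than in the escaping:

    ```python
    from xml.dom import minidom

    # Cut-down stand-in for the <text> element: the attribute values contain
    # the escaped entity &amp;gt; so the literal text "&gt;" survives parsing.
    xml_text = """<?xml version="1.0" encoding="UTF-8"?>
    <lang>
      <text dna="8233"
            en="Choroid plexus cysts (&amp;gt;=1.5 mm)"
            de="Choroidplexus Zyste (&amp;gt;=1,5 mm)"/>
    </lang>"""

    doc = minidom.parseString(xml_text.encode("utf-8"))
    text_el = doc.getElementsByTagName("text")[0]

    # Each attribute keeps its own value; names and values are not reordered.
    print(text_el.getAttribute("dna"))
    print(text_el.getAttribute("en"))
    print(text_el.getAttribute("de"))
    ```

    If the same document misbehaves in another parser, comparing against a second implementation like this helps show whether the input or the parser is at fault.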

  • Group by Vs Partition By Clause

    Hello,
    Can you please help me in resolving the below issue,
    I have explained the scenario below with dummy table,
    CREATE TABLE emp (empno NUMBER(12), ename VARCHAR2(10), deptno NUMBER(12));
    INSERT INTO emp (empno, ename, deptno) VALUES (1, 'A', 10);
    INSERT INTO emp (empno, ename, deptno) VALUES (2, 'B', 10);
    INSERT INTO emp (empno, ename, deptno) VALUES (3, 'C', 20);
    INSERT INTO emp (empno, ename, deptno) VALUES (4, 'D', 20);
    INSERT INTO emp (empno, ename, deptno) VALUES (5, 'E', 30);
    COMMIT;
    SELECT DISTINCT deptno, SUM (empno) / SUM (empno) OVER (PARTITION BY deptno)
    FROM emp
    GROUP BY deptno;
    ORA-00979: not a GROUP BY expression
    Earlier I had a query like
    SELECT DISTINCT deptno, SUM (empno) OVER (PARTITION BY deptno, empno) / SUM (empno) OVER (PARTITION BY deptno)
    FROM emp;
    which executed successfully but gave the wrong result.
    Please guide me how to resolve this issue,
    Thanks,
    Santhosh

    Hi,
    santhosh.shivaram wrote:
    Hello all, sorry for providing the limited data. I am now depicting the actual data set, the current select query which is giving the error, and the desired output. Please let me know if you need further information on this.
    /* Formatted on 2012/09/14 08:00 (Formatter Plus v4.8.8) */ ...
    If you're going to the trouble of formatting the data, post it inside {code} tags, so that this site won't remove the formatting. See the forum FAQ {message:id=9360002}
    Current query:
    SELECT rep_date, cnty, loc, component_code,
    SUM (volume) / SUM (volume) OVER (PARTITION BY rep_date, cnty, loc)
    FROM table1
    GROUP BY rep_date, cnty, loc, component_code;
    when execute this query i am getting "ORA-00979: not a GROUP BY expression" error
    This is the same problem you had before, and it was explained in the first answer {message:id=10573091}. Don't you read the replies you get?
    SUM (volume) OVER (PARTITION BY rep_date, cnty, loc)
    can't be used in this GROUP BY query, because it depends on volume, and volume isn't one of the GROUP BY expressions.
    My desired output
    Formatting is especially important for the output. Which do you think is easier to read and understand: what you posted:
    Rep_Date     Cnty     Loc     Component_Code     QTY_VOL
    9/12/2012     2     1     CONTRACT      -0.019000516
    9/12/2012     2     1     CONTRACT      -0.019000516
    9/12/2012     2     1     NON-CONTRACT      -0.893525112
    9/12/2012     2     1     NON-CONTRACT      -0.89322
    9/12/2012     2     1     CONTRACT-INDEX     1.912525629
    9/12/2012     2     1     CONTRACT-INDEX     1.912526
    9/12/2012     2     1     CONTRACT-INDEX     1.912526
    9/12/2012     2     4     CONTRACT     0.015197825
    9/12/2012     2     4     CONTRACT     0.015198
    9/12/2012     2     4     NON-CONTRACT     0.984802175
    9/12/2012     2     4     NON-CONTRACT     0.984802
    or this?
    Rep_Date     Cnty     Loc     Component_Code     QTY_VOL
    9/12/2012     2     1     CONTRACT -0.019000516
    9/12/2012     2     1     CONTRACT -0.019000516
    9/12/2012     2     1     NON-CONTRACT      -0.893525112
    9/12/2012     2     1     NON-CONTRACT      -0.89322
    9/12/2012     2     1     CONTRACT-INDEX     1.912525629
    9/12/2012     2     1     CONTRACT-INDEX     1.912526
    9/12/2012     2     1     CONTRACT-INDEX     1.912526
    9/12/2012     2     4     CONTRACT     0.015197825
    9/12/2012     2     4     CONTRACT     0.015198
    9/12/2012     2     4     NON-CONTRACT     0.984802175
    9/12/2012     2     4     NON-CONTRACT     0.984802
    Which do you think will lead to more answers?  Quicker answers?  Better answers?
    Please let me know if you need any more information.
    Explain the results.
    How do you compute the qty_vol column?  Give a couple of very specific examples, showing step by step how you calculate the values given from the sample data.
    What does each row of the output represent? Your query says
    GROUP BY rep_date, cnty, loc, component_code;
    which means the result set will have 1 row for each distinct combination of rep_date, cnty, loc and component_code, but your desired output has at least 2 rows for every distinct combination of them, and in one case you want 3 rows with the same rep_date, cnty, loc and component_code. How do you decide when you want 2 rows, and when you need 3? Will there be occasions when you need 4 rows, or 5, or 1?
    All the rows with the same rep_date, cnty, loc and component_code have *nearly* the same qty_vol, but usually not quite the same. Sometimes qty_vol is rounded; sometimes it's changed slightly, but not just rounded (-0.893525112 gets converted to -0.89322). How do you decide when it's rounded, when it remains the same, and when it's changed to a completely different number? When it's rounded, how do you decide how many digits to round it to?
    Edited by: Frank Kulash on Sep 14, 2012 12:44 AM
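    The usual fix for mixing GROUP BY with an analytic that references a raw column is to aggregate first in an inline view and then apply the window function over the already-grouped rows. A sketch of the pattern, illustrated with Santhosh's emp table and run against SQLite 3.25+ via Python's sqlite3 (the SELECT itself is portable to Oracle):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE emp (empno INTEGER, ename TEXT, deptno INTEGER);
    INSERT INTO emp VALUES (1,'A',10), (2,'B',10), (3,'C',20), (4,'D',20), (5,'E',30);
    """)

    # Aggregate in the inline view, then window over the grouped rows.  This
    # avoids ORA-00979 because the analytic no longer references a raw column
    # that is missing from the GROUP BY list.
    rows = conn.execute("""
        SELECT deptno,
               dept_sum * 1.0 / SUM(dept_sum) OVER () AS share_of_total
        FROM (SELECT deptno, SUM(empno) AS dept_sum
              FROM emp
              GROUP BY deptno)
        ORDER BY deptno
    """).fetchall()

    for row in rows:
        print(row)
    ```

    In Oracle the nested form SUM(SUM(empno)) OVER () inside the GROUP BY query expresses the same thing in a single query block; the inline-view version above is just the more portable spelling.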

  • CASE causes strange aggregation issue.

    Hi, I have a subject area, Journals, which has a dimension Budget Type (Actual or Budget), a dimension GL Code (Cost Centre) and a fact folder BUDGET FACTS, which has a field Amount that aggregates by default as SUM. (Simplified, but I think this is all that matters.)
    I create a report in Answers thus; -
    Actual Type          Cost Centre       Amount
    Actual                801041            100
    Budget                801041            150
    This gives results as desired / expected.
    However, wishing to see Sum(Amount) in two columns, one for Budget and one for Actual, next to each other, I try this; -
    Cost Centre        Budget Amount       Actual Amount
    where Budget Amount is populated by the following formula; -
    CASE WHEN "Journals"."Budget Type" = 'Budget' THEN amount ELSE 0 END
    and Actual Amount by
    CASE WHEN "Journals"."Budget Type" = 'Actual' THEN amount ELSE 0 END
    (Syntax may not be exact but you get the idea)
    Now all of my results look Cartesian, producing massively bigger numbers...
    I have tried SUM in the formula and experimented with default aggregation, but cannot get it to work.
    Any suggestions? Do I have to resort to the ADVANCED tab and setting the SQL GROUPING? Which I can do, but I am hoping for a simpler solution to pass on to my wider user community; pointing them at writing SQL fills me with cold dread...
    I am on 10.1.3.4
    Thanks for your input,
    Robert.

    Hi,
    these are the log files...
    The only difference between what was generated in Answers in the first one and the second one is that in the first I leave the 'Actual Type' column in my report, which works except that budgets and actuals do not appear on the same row, even when I hide the column...
    So the second is identical, but with the aforementioned 'Actual Type' column deleted.
    Here all of the result data winds up in the 'Actual Amount' column (the first of the two case statements), which makes no sense...
    Is there a way around this, other than creating a calculation in the repository as you suggest???
    thanks,
    Robert (code follows)
    select distinct D1.c2 as c1,
         D1.c3 as c2,
         case  when D1.c4 = 'Actual' then D1.c1 else 0 end  as c3,
         case  when D1.c4 = 'Budget' then D1.c1 else 0 end  as c4,
         D1.c1 as c5,
         D1.c4 as c6,
         D1.c5 as c7
    from
         (select sum(T29613.AMOUNT) as c1,
                   T29642.COST_CENTRE as c2,
                   T29706.PERIOD_NAME as c3,
                   T31281.ACTUAL_TYPE as c4,
                   T29706.PERIOD_NUM as c5
              from
                   GL_ACTUAL_TYPE_MV T31281,
                   GL_CODE_COMBINATIONS_MV T29642 /* Gl Code Combinations for GL Journal Drill */ ,
                   GL_PERIODS T29706 /* Gl Periods for Gl Journal Drill */ ,
                   GL_JOURNAL_DRILL T29613
              where  ( T29613.ACTUAL_KEY = T31281.ACTUAL_FLAG and T29613.CODE_KEY = T29642.CODE_KEY and T29613.PERIOD_KEY = T29706.PERIOD_NAME and T29613.PERIOD_KEY = 'JUL-11' and T29642.COST_CENTRE = '801040' and T29706.PERIOD_NAME = 'JUL-11' )
              group by T29642.COST_CENTRE, T29706.PERIOD_NAME, T29706.PERIOD_NUM, T31281.ACTUAL_TYPE
         ) D1
    order by c1, c7, c6
    select D1.c2 as c1,
         D1.c3 as c2,
         D1.c1 as c3,
         D1.c4 as c4
    from
         (select D1.c1 as c1,
                   D1.c2 as c2,
                   D1.c3 as c3,
                   D1.c4 as c4
              from
                   (select T31281.ACTUAL_TYPE as c1,
                             T29642.COST_CENTRE as c2,
                             T29706.PERIOD_NAME as c3,
                             T29706.PERIOD_NUM as c4,
                             ROW_NUMBER() OVER (PARTITION BY T29642.COST_CENTRE, T29706.PERIOD_NAME, T31281.ACTUAL_TYPE ORDER BY T29642.COST_CENTRE ASC, T29706.PERIOD_NAME ASC, T31281.ACTUAL_TYPE ASC) as c5
                        from
                             GL_ACTUAL_TYPE_MV T31281,
                             GL_CODE_COMBINATIONS_MV T29642 /* Gl Code Combinations for GL Journal Drill */ ,
                             GL_PERIODS T29706 /* Gl Periods for Gl Journal Drill */ ,
                             GL_JOURNAL_DRILL T29613
                        where  ( T29613.ACTUAL_KEY = T31281.ACTUAL_FLAG and T29613.CODE_KEY = T29642.CODE_KEY and T29613.PERIOD_KEY = T29706.PERIOD_NAME and T29613.PERIOD_KEY = 'JUL-11' and T29642.COST_CENTRE = '801040' and T29706.PERIOD_NAME = 'JUL-11' )
                   ) D1
              where  ( D1.c5 = 1 )
         ) D1
    order by c2, c1
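
For what it's worth, the usual workaround when the type column keeps splitting your measure across rows is to move the CASE *inside* the aggregate (conditional aggregation), so both 'Actual' and 'Budget' collapse onto one row per cost centre and period. Here is a minimal sketch of that pattern; the table name, columns, and data are invented for illustration, and I'm demonstrating it with SQLite from Python rather than against the real GL tables:

```python
# Sketch only: conditional aggregation to pivot 'Actual' and 'Budget'
# onto a single row. Table and values are made up for the demo.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gl_journal (cost_centre TEXT, period_name TEXT,
                         actual_type TEXT, amount REAL);
INSERT INTO gl_journal VALUES
  ('801040', 'JUL-11', 'Actual', 1200.0),
  ('801040', 'JUL-11', 'Budget', 1500.0);
""")

# Because actual_type sits inside the SUM rather than in the GROUP BY,
# both measures land on one row instead of two.
row = conn.execute("""
SELECT cost_centre,
       period_name,
       SUM(CASE WHEN actual_type = 'Actual' THEN amount ELSE 0 END) AS actual_amt,
       SUM(CASE WHEN actual_type = 'Budget' THEN amount ELSE 0 END) AS budget_amt
FROM gl_journal
GROUP BY cost_centre, period_name
""").fetchone()

print(row)  # → ('801040', 'JUL-11', 1200.0, 1500.0)
```

In the generated SQL above, the CASE is applied *after* the inner query has already grouped by ACTUAL_TYPE, which is why each type stays on its own row; the pattern only merges rows when the CASE is evaluated inside the aggregate.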
