Special Aggregation

I have the following record set
1 s1 1/13/2006
1 s2 1/13/2006
1 s1 1/14/2006
1 s2 1/14/2006
1 s1 1/15/2006
1 s2 1/15/2006
1 s3 1/16/2006
1 s3 1/17/2006
1 s3 1/18/2006
1 s4 1/19/2006
1 s4 1/20/2006
1 s4 1/21/2006
1 s4 1/23/2006
1 s4 1/24/2006
1 s4 1/27/2006
1 s4 1/28/2006
2 s1 1/13/2006
2 s2 1/13/2006
2 s1 1/14/2006
2 s2 1/14/2006
2 s1 1/15/2006
2 s2 1/15/2006
2 s3 1/16/2006
2 s3 1/17/2006
2 s3 1/18/2006
2 s4 1/19/2006
2 s4 1/20/2006
2 s4 1/21/2006
2 s4 1/23/2006
2 s4 1/24/2006
2 s4 1/27/2006
2 s4 1/28/2006
I need to aggregate those records as follows
1 s1,s2 1/13/2006 1/15/2006
1 s3 1/16/2006 1/18/2006
1 s4 1/19/2006 1/21/2006
1 s4 1/23/2006 1/24/2006
1 s4 1/27/2006 1/28/2006
2 s1,s2 1/13/2006 1/15/2006
2 s3 1/16/2006 1/18/2006
2 s4 1/19/2006 1/21/2006
2 s4 1/23/2006 1/24/2006
2 s4 1/27/2006 1/28/2006
Which means I have to group them first by the first column (1, 2), and then by the second column, but only across consecutive days. From the example above, you may notice that when the second column has several values for the same group (first column) on the same day, the output concatenates them (s1,s2), and this pattern runs from 1/13/2006 through 1/15/2006.
If there is a break in the days, I need a separate group; for instance, 2 with s4 has been split into three groups: from 1/19/2006 through 1/21/2006, from 1/23/2006 through 1/24/2006, and from 1/27/2006 through 1/28/2006, and this is only because of the breaks in the day sequence.
Can anyone help please?
Thanks in advance
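
For reference, a minimal test setup, assuming a table t with columns n (number), v (varchar2) and d (date) as used by the query in the reply below; only the first few sample rows are shown:

create table t (
  n number,       -- first grouping column (1, 2)
  v varchar2(10), -- second column (s1 .. s4)
  d date          -- the day
);

insert into t values (1, 's1', date '2006-01-13');
insert into t values (1, 's2', date '2006-01-13');
insert into t values (1, 's1', date '2006-01-14');
insert into t values (1, 's2', date '2006-01-14');
-- ...and so on for the remaining rows listed above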

select n, scbp, from_dt, thru_dt
from (
  select n, scbp, mg, mlg, min(d) from_dt, max(d) thru_dt
  from (
    -- carry the latest day-break marker forward within each (n, set) group
    select n, d, scbp, mg, max(lg) over (partition by n, mg order by d) mlg
    from (
      -- flag the rows where the day sequence breaks inside a group
      select n, d, scbp, mg
            ,case when trunc(lag(d) over (partition by n, mg order by d)) = trunc(d) - 1
                    then null
                  else row_number() over (partition by n, mg order by d)
             end lg
      from (
        -- carry the latest set-change marker forward within each n
        select n, d, scbp, max(g) over (partition by n order by d) mg
        from (
          -- flag the rows where the concatenated set differs from the previous day
          select n, d, scbp
                ,case when lag(scbp) over (partition by n order by d) = scbp
                        then null
                      else row_number() over (partition by n order by d)
                 end g
          from (
            -- build one row per (n, day) with the values concatenated, e.g. 's1,s2'
            select n, d, max(ltrim(sys_connect_by_path(v, ','), ',')) scbp
            from (
              select n, v, d
                    ,row_number() over (partition by n, d order by v) c
                    ,row_number() over (partition by n, d order by v) - 1 p
              from t
            )
            start with c = 1
            connect by prior c = p and prior n = n and prior d = d
            group by n, d
          )
        )
      )
    )
  )
  group by n, scbp, mg, mlg
)
;
         N SCBP                 FROM_DT    THRU_DT
         1 s1,s2                01/13/2006 01/15/2006
         1 s3                   01/16/2006 01/18/2006
         1 s4                   01/19/2006 01/21/2006
         1 s4                   01/23/2006 01/24/2006
         1 s4                   01/27/2006 01/28/2006
         2 s1,s2                01/13/2006 01/15/2006
         2 s3                   01/16/2006 01/18/2006
         2 s4                   01/19/2006 01/21/2006
         2 s4                   01/23/2006 01/24/2006
         2 s4                   01/27/2006 01/28/2006
10 rows selected.
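
On Oracle 11g and later, a shorter sketch of the same idea is possible with LISTAGG plus the date-minus-row_number trick; this is only a sketch, assuming the same table t(n, v, d) and that d holds whole days:

select n, scbp, min(d) as from_dt, max(d) as thru_dt
from (
  select n, d, scbp,
         -- consecutive days within the same set share the same (date - row_number) anchor
         d - row_number() over (partition by n, scbp order by d) as grp
  from (
    -- one row per (n, day) with the members concatenated, e.g. 's1,s2'
    select n, trunc(d) as d,
           listagg(v, ',') within group (order by v) as scbp
    from t
    group by n, trunc(d)
  )
)
group by n, scbp, grp
order by n, from_dt;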

Similar Messages

  • How to use special aggregation in BI Beans

    Dear Gurus:
    I am using BI Beans in a project. One cube has a balance measure, and I set the last() aggregation in the OLAP options with OEM, but when I query the cube this measure still uses the default sum(). Could you please help me use this special aggregation in BI Beans?

    George,
    One way to get non-additive aggregations is to use an Analytic Workspace, or AW. AWs support all the aggregation operations, and can be exposed through the OLAP Catalog as "fully solved" cubes, in which case
    the OLAP API will merely fetch the correctly computed aggregate values.
    Today, setting up an AW for use by BI Beans and the OLAP API requires lengthy scripts that create the necessary ADTs and Views, plus calls to the CWM2 PL/SQL API. However, OLAP will be releasing an AW Manager tool
    that makes this process easier. Please contact OLAP Product Management for further details.

  • Aggregator question

    Hi!
    I am working on a special aggregator that, in order to do its job, needs to look up some data in the local partition of the cache it is aggregating over (I know the particular data it needs is available in the same partition as the entries it receives, since I use a custom KeyPartitioningStrategy that assigns them there, so no remote calls should be needed).
    When my aggregator executes, it triggers a com.tangosol.util.AssertionException claiming that "poll() is a blocking call and cannot be called on the Service thread". Is my key partitioning strategy not working as expected, or is it simply not allowed to make any potentially blocking calls from an aggregator?
    As a side note, assuming this kind of call is allowed, I would also have loved a way for an aggregator to find the cache it is operating on programmatically, since that would make it easier to reuse the same aggregator class across different caches holding the same types of data.
    Best Regards
    Magnus

    Hi Magnus,
    If you don't specify the "thread-count" element explicitly, all your aggregations are executed on the main service thread and are not allowed to make blocking calls into the same service (an obvious deadlock potential). Some operations are dangerous even on a worker thread and may result in a warning that looks like:
    Application code running on "DistributedCacheWorker:1" service thread(s) should not call "ensureCache" as this may result in deadlock. The most common case is a CacheFactory call from a custom CacheStore implementation.
    I would suggest taking a discussion regarding your specific implementations off line - I will email you directly.
    Regards,
    Gene

  • How to display values year wise in a request

    Hi All,
    My requirement: I have 3 years of data and need to display it year-wise in separate tables within a single request. For example, if I have 2010, 2011, and 2012 data, then 2010 data should go in one table, 2011 data in the next table, and 2012 data in another table, all in the same request. I should get all three years of data at once, but in different tables.
    By "table" I mean the data should be separated year-wise.
    How can I achieve this?
    Thanks in Advance,
    Regards,
    Sindhu

    Hi,
    It is good to know you got your requirement sorted out.
    The pivot table is one of the specialized views used to build reports. Its options are similar to the Microsoft Excel pivot table view.
    Pivot table options:
    Page: Works like a prompt inside the view itself. If you add a column to the Page tab, it shows all of that column's values; for example, adding the Year column lets you pick a particular year and filter the report within the view.
    Section: Splits the report into separate sections by the values of the column placed in the Section tab.
    Measures: The place for numeric columns (e.g. amount, employee count, quantity, revenue, and so on), usually called measures in the fact table. Columns dragged into Measures get an aggregation rule such as sum, min, max, avg, or count.
    Excluded: If you want a column available in the saved request but do not want to display it, drag it to the Excluded tab.
    Rows: Displays the report broken out by the values of the columns placed here.
    Columns: Splits the measure values by the column placed in this tab (for example, putting Year in the Columns tab breaks the 'Revenue' measure out by year).
    Please mark the answer if it is helpful.

  • Standard authorization concept versus analysis authorizations

    Hi
    I am a bit confused about the necessity of maintaining both.
    Example:
    I have designed an analysis authorization for CO (Controlling), named CO_001:
    InfoProvider: 0COOM_C02
    Thereafter I have put the authorization object S_RS_AUTH into a role (standard authorization object) with CO_001 as value in BIAUTH.
    Is there still a need to maintain authorization objects for Business Explorer or Business Planning, like:
    S_RS_COMP (limiting to the InfoProvider mentioned in the analysis authorization)
    S_RS_PLSE (limiting to a special aggregation level of the appropriate InfoProvider mentioned in the analysis authorization)
    What happens when no limitations are maintained in the role for these auth objects, i.e. "*"?
    Which concept dominates the other?
    Thanks
    BEO

    Hi BEOplanet,
    S_RS_COMP will give you access to the InfoProvider in BEx, so this is access-level security.
    The analysis authorization then gives you data-level security within the InfoProvider (i.e. which data you can see inside it).
    Therefore you need to maintain both S_RS_COMP and the analysis authorizations.
    To your question: if you have maintained the cube 0COOM_C02 in the analysis authorization but S_RS_COMP contains only 0PA_01, the query will fail, since you don't have access to the cube 0COOM_C02 in S_RS_COMP.
    Regards,
    Karthik.

  • Can anyone tell me how I can move pictures that I've cloned to a different folder without them staying aggregated? They all come together to the other folder and I don't want that - thanks

    Can anyone tell me how I can move pictures that I've cloned to a different folder without them staying aggregated? They all come together to the other folder and I don't want that… thanks

    There's more to it than that.
    Folders in Aperture do not hold Images.  They hold Projects and Albums.  You cannot put an Image in a Folder without putting it in a Project or an Album inside that Folder.
    The relationship between Projects and Images is special:  every Image must be in a Project, and can be in only one Project.
    Images can be in as many Albums as you want.  Putting an Image in an Album does not move it from the Project that holds it.
    You can make as many Versions from a Master as you want.
    What you want to do may appear simple to you, but it still must adhere to how Aperture works.  I still can't tell exactly what you are trying to do (specifically: Images don't live in Folders; moving an Image from a Folder is nonsensical).
    It can be very confusing (and frustrating) to get going with Aperture -- but it does work, and can be enormously helpful.  If you haven't, take a look at the video tutorials on Apple's Aperture support site.
    I feel as though I haven't helped you much -- but we need to be using the same names for interface items in order to get anything done -- and my sense is that you still haven't learned the names of the parts.

  • Error : Reading from Aggregation Level not permitted

    Hello Gurus,
          Could somebody please give some help or advice regarding this?
    I have a MultiProvider on a regular cube and an aggregation level; for some reason the MultiProvider gives me the following error message when I try to display data using LISTCUBE.
    Reading from Aggregation Level is not permitted
    Message no. RSPLS801
    Also, the query on the MultiProvider does not display data for any of the key figures in the aggregation level, but when I create a query on the aggregation level itself it is fine.
    Any suggestions?
    Thanks.
    Swaroop.

    Hi,
    Transaction LISTCUBE does not support all InfoProviders; aggregation levels, for example, are not supported. LISTCUBE is a 'low level' tool for reading data from the BW persistence layer, e.g. InfoCubes. Since aggregation levels always read transaction data via the so-called planning buffer, and the planning buffer technically is a special OLAP query, LISTCUBE does not support aggregation levels.
    Regards,
    Gregor

  • Exit button not working in exe file made with Aggregator

    Hi,
    I created an .exe file using Aggregator in full screen mode, but the exit button on the skin doesn't work. When I click the exit button, the display hiccups (wobbles a bit) but the file keeps playing and doesn't close. This is especially problematic since it occurs in full screen. The only way to exit is by hitting the Escape key on the keyboard. The exit button works fine in the individual .exe files of the modules; it's just in the aggregated .exe file that the exit button doesn't work. Is there anything special I need to do to make the exit button work with Aggregator?

    Hi, this is more J2EE-related stuff. SAP came up with a solution, but it takes ages to get to the next page after the "Exit" button is pushed:
    Go to Visual Admin > server(n) > Services > Configuration Adapter >
    (right-side pane) webdynpro > sap.com > tcwddispwda > PropertySheet
    default.
    Edit the property sheet and change the custom value for the property
    "sap.locking.maxWaitInterval" to 200.
    Note 1113811 explains why these kinds of errors happen.
    I am still waiting for a better solution from SAP.
    Cheers

  • Count Distinct With CASE Statement - Does not follow aggregation path

    All,
    I have a fact table, a day aggregate and a month aggregate. I have a time hierarchy and the month aggregate is set to the month level, the day aggregate is set to the day level within the time hierarchy.
    When using any measures and a field from my time dimension .. the appropriate aggregate is chosen, ie month & activity count .. month aggregate is used. Day & activity count .. day aggregate is used.
    However - when I use the count distinct aggregate rule .. the request always uses the lowest common denominator. The way I have found to get this to work is to use a logical table source override in the aggregation tab. Once I do this .. it does use the aggregates correctly.
    A few questions
    1. Is this the correct way to use aggregate navigation for the count distinct aggregation rule (using the source override option)? If yes, why is this necessary for count distinct .. what is special about it?
    2. The main problem I have now is that I need to create a simple count measure that has a CASE statement in it. The only way I see to do this is to select the Based on Dimensions checkbox which then allows me to add a CASE statement into my count distinct clause. But now the aggregation issue comes back into play and I can't do the logical table source override when the based on dimensions checkbox is checked .. so I am now stuck .. any help is appreciated.
    K

    Ok - I found a workaround (and maybe the preferred solution for my particular issue), which is: using a CASE statement with a COUNT DISTINCT aggregation and still having AGGREGATE AWARENESS
    To get all three of the requirements above to work I had to do the following:
    - Create the COUNT DISTINCT as normal (counting on a USERID physically mapped column in my case)
    - Now I need to map my fact and aggregates to this column. This is where I got the case statement to work. Instead of trying to put the case statement inside the aggregate definition by using the 'Based on Dimensions' checkbox (which didn't allow for aggregate awareness for some reason), I instead specified the case statement in the Column Mapping section of the Fact and Aggregate tables (a small illustrative sketch follows at the end of this reply).
    - Once all the LTS's (facts and aggregates) are mapped .. you still have to define the Logical Table Source overrides in the aggregate tab of the count distinct definition. Add in all the fact and aggregates.
    Now the measure will use my month aggregate when i specify month, the day aggregate when i specify day, etc..
    If you are just trying to use a Count Distinct (no CASE satement needed) with Aggregate Awareness, you just need to use the Logical Table Source override on the aggregate tab.
    There is still a funky issue when using the COUNT aggregate type. As long as you don't map multiple logical table sources to the COUNT column it works fine and as expected. But if you try to add in multiple sources and aggregate awareness, it randomly starts SUMMING everything .. very weird. The blog in this thread says to check the 'Based on Dimensions' checkbox to fix the problem, but that did not work for me. Still not sure what to do on this one .. but it's not currently causing me a problem so I will ignore it for now ;)
    Thanks for all the help
    K
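    A minimal sketch of the kind of CASE-inside-COUNT-DISTINCT expression described above; the table and column names (activity_fact, activity_type, user_id) are hypothetical, not from the original post:
    -- count distinct users, but only for rows matching the condition
    select count(distinct case when activity_type = 'LOGIN' then user_id end) as login_user_cnt
    from activity_fact;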

  • OBIEE BI Answers: Wrong Aggregation Measures on top level of hierarchy

    Hi to all,
    I have following problem. I hope to be clear in my English because it's a bit complicated to explain.
    I have following fact table:
    Drug Id Ordered Quantity
    1 9
    2 4
    1 3
    2 2
    and following Drug Table:
    Drug Brand Id Brand Description Drug Active Ingredient Id Drug Active Ingredient Description
    1 Aulin 1 Nimesulide
    2 Asprina 2 Acetilsalicilico
    In AWM i've defined a Drug Dimension based on following hierarchy: Drug Active Ingredient (parent) - Drug Brand Description (leaf) mapped as:
    Drug Active Ingredient = Drug Active Ingredient Id of my Drug Table (LONG DESCRIPTION Attribute=Drug Active Ingredient Description)
    Drug Brand Description = Drug Brand Id of my Drug Table (LONG DESCRIPTION Attribute = Drug Brand Description)
    Then in my cube I've mapped the leaf level Drug Brand Description = Drug Id of my fact table. In AWM the Drug Dimension is mapped with the Sum aggregation operator.
    If I select on Answers Drug Active Ingredient (parent of my hierarchy) and Ordered Quantity I see following result
    Drug Active Ingredient Description Ordered Quantity
    Acetilsalicilico 24
    Nimesulide 12
    instead of the correct values
    Drug Active Ingredient Description Ordered Quantity
    Acetilsalicilico 12
    Nimesulide 6
    EXACTLY double! But if I drill down on Drug Active Ingredient Description Acetilsalicilico I see correctly:
    Drug Active Ingredient Description Drug Brand Description Ordered Quantity
    Acetilsalicilico
    - Aspirina 12
    Total 12
    The wrong aggregation is only at the top level of the hierarchy. Aggregation at the lower level of the hierarchy is correct. Maybe Answers also sums the Total row? Why?
    I'm frustrated. I beg for your help, please!
    Giancarlo

    Hi,
    in NQSConfig.ini I can't find the Cache section. I am posting the whole file. Tell me what I must change. I know your patience is nearly at its limit! But I'm a new user of OBIEE.
    # NQSConfig.INI
    # Copyright (c) 1997-2006 Oracle Corporation, All rights reserved
    # INI file parser rules are:
    # If values are in literals, digits or _, they can be
    # given as such. If values contain characters other than
    # literals, digits or _, values must be given in quotes.
    # Repository Section
    # Repositories are defined as logical repository name - file name
    # pairs. ODBC drivers use logical repository name defined in this
    # section.
    # All repositories must reside in OracleBI\server\Repository
    # directory, where OracleBI is the directory in which the Oracle BI
    # Server software is installed.
    [ REPOSITORY ]
    #Star     =     samplesales.rpd, DEFAULT;
    Star = Step3.rpd, DEFAULT;
    # Query Result Cache Section
    [ CACHE ]
    ENABLE     =     YES;
    // A comma separated list of <directory maxSize> pair(s)
    // e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
    DATA_STORAGE_PATHS     =     "C:\OracleBIData\cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    MAX_SUBEXPR_SEARCH_DEPTH = 7;
    // Cluster-aware cache
    // GLOBAL_CACHE_STORAGE_PATH = "<directory name>" SIZE;
    // MAX_GLOBAL_CACHE_ENTRIES = 1000;
    // CACHE_POLL_SECONDS = 300;
    // CLUSTER_AWARE_CACHE_LOGGING = NO;
    # General Section
    # Contains general server default parameters, including localization
    # and internationalization, temporary space and memory allocation,
    # and other default parameters used to determine how data is returned
    # from the server to a client.
    [ GENERAL ]
    // Localization/Internationalization parameters.
    LOCALE     =     "Italian";
    SORT_ORDER_LOCALE     =     "Italian";
    SORT_TYPE = "binary";
    // Case sensitivity should be set to match the remote
    // target database.
    CASE_SENSITIVE_CHARACTER_COMPARISON = OFF ;
    // SQLServer65 sorts nulls first, whereas Oracle sorts
    // nulls last. This ini file property should conform to
    // that of the remote target database, if there is a
    // single remote database. Otherwise, choose the order
    // that matches the predominant database (i.e. on the
    // basis of data volume, frequency of access, sort
    // performance, network bandwidth).
    NULL_VALUES_SORT_FIRST = OFF;
    DATE_TIME_DISPLAY_FORMAT = "yyyy/mm/dd hh:mi:ss" ;
    DATE_DISPLAY_FORMAT = "yyyy/mm/dd" ;
    TIME_DISPLAY_FORMAT = "hh:mi:ss" ;
    // Temporary space, memory, and resource allocation
    // parameters.
    // You may use KB, MB for memory size.
    WORK_DIRECTORY_PATHS     =     "C:\OracleBIData\tmp";
    SORT_MEMORY_SIZE = 4 MB ;
    SORT_BUFFER_INCREMENT_SIZE = 256 KB ;
    VIRTUAL_TABLE_PAGE_SIZE = 128 KB ;
    // Analytics Server will return all month and day names as three
    // letter abbreviations (e.g., "Jan", "Feb", "Sat", "Sun").
    // To use complete names, set the following values to YES.
    USE_LONG_MONTH_NAMES = NO;
    USE_LONG_DAY_NAMES = NO;
    UPPERCASE_USERNAME_FOR_INITBLOCK = NO ; // default is no
    // Aggregate Persistence defaults
    // The prefix must be between 1 and 8 characters long
    // and should not have any special characters ('_' is allowed).
    AGGREGATE_PREFIX = "SA_" ;
    # Security Section
    # Legal value for DEFAULT_PRIVILEGES are:
    # NONE READ
    [ SECURITY ]
    DEFAULT_PRIVILEGES = READ;
    PROJECT_INACCESSIBLE_COLUMN_AS_NULL     =     NO;
    MINIMUM_PASSWORD_LENGTH     =     0;
    #IGNORE_LDAP_PWD_EXPIRY_WARNING = NO; // default is no.
    #SSL=NO;
    #SSL_CERTIFICATE_FILE="servercert.pem";
    #SSL_PRIVATE_KEY_FILE="serverkey.pem";
    #SSL_PK_PASSPHRASE_FILE="serverpwd.txt";
    #SSL_PK_PASSPHRASE_PROGRAM="sitepwd.exe";
    #SSL_VERIFY_PEER=NO;
    #SSL_CA_CERTIFICATE_DIR="CACertDIR";
    #SSL_CA_CERTIFICATE_FILE="CACertFile";
    #SSL_TRUSTED_PEER_DNS="";
    #SSL_CERT_VERIFICATION_DEPTH=9;
    #SSL_CIPHER_LIST="";
    # There are 3 types of authentication. The default is NQS
    # You can select only one of them
    #----- 1 -----
    #AUTHENTICATION_TYPE = NQS; // optional and default
    #----- 2 -----
    #AUTHENTICATION_TYPE = DATABASE;
    # [ DATABASE ]
    # DATABASE = "some_data_base";
    #----- 3 -----
    #AUTHENTICATION_TYPE = BYPASS_NQS;
    # Server Section
    [ SERVER ]
    SERVER_NAME = Oracle_BI_Server ;
    READ_ONLY_MODE = NO;     // default is "NO". That is, repositories can be edited online.
    MAX_SESSION_LIMIT = 2000 ;
    MAX_REQUEST_PER_SESSION_LIMIT = 500 ;
    SERVER_THREAD_RANGE = 40-100;
    SERVER_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    DB_GATEWAY_THREAD_RANGE = 40-200;
    DB_GATEWAY_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    MAX_EXPANDED_SUBQUERY_PREDICATES = 8192; // default is 8192
    MAX_QUERY_PLAN_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_INFO_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_QUERY_CACHE_ENTRIES = 1024; // default is 1024
    INIT_BLOCK_CACHE_ENTRIES = 20; // default is 20
    CLIENT_MGMT_THREADS_MAX = 5; // default is 5
    # The port number specified with RPC_SERVICE_OR_PORT will NOT be considered if
    # a port number is specified in SERVER_HOSTNAME_OR_IP_ADDRESSES.
    RPC_SERVICE_OR_PORT = 9703; // default is 9703
    # If port is not specified with a host name or IP in the following option, the port
    # number specified at RPC_SERVICE_OR_PORT will be considered.
    # When port number is specified, it will override the one specified with
    # RPC_SERVICE_OR_PORT.
    SERVER_HOSTNAME_OR_IP_ADDRESSES = "ALLNICS"; # Example: "hostname" or "hostname":port
    # or "IP1","IP2":port or
    # "hostname":port,"IP":port2.
    # Note: When this option is active,
    # CLUSTER_PARTICIPANT should be set to NO.
    ENABLE_DB_HINTS = YES; // default is yes
    PREVENT_DIVIDE_BY_ZERO = YES;
    CLUSTER_PARTICIPANT = NO; # If this is set to "YES", comment out
    # SERVER_HOSTNAME_OR_IP_ADDRESSES. No specific NIC support
    # for the cluster participant yet.
    // Following required if CLUSTER_PARTICIPANT = YES
    #REPOSITORY_PUBLISHING_DIRECTORY = "<dirname>";
    #REQUIRE_PUBLISHING_DIRECTORY = YES; // Don't join cluster if directory not accessible
    DISCONNECTED = NO;
    AUTOMATIC_RESTART = YES;
    # Dynamic Library Section
    # The dynamic libraries specified in this section
    # are categorized by the CLI they support.
    [ DB_DYNAMIC_LIBRARY ]
    ODBC200 = nqsdbgatewayodbc;
    ODBC350 = nqsdbgatewayodbc35;
    OCI7 = nqsdbgatewayoci7;
    OCI8 = nqsdbgatewayoci8;
    OCI8i = nqsdbgatewayoci8i;
    OCI10g = nqsdbgatewayoci10g;
    DB2CLI = nqsdbgatewaydb2cli;
    DB2CLI35 = nqsdbgatewaydb2cli35;
    NQSXML = nqsdbgatewayxml;
    XMLA = nqsdbgatewayxmla;
    ESSBASE = nqsdbgatewayessbasecapi;
    # User Log Section
    # The user log NQQuery.log is kept in the server\log directory. It logs
    # activity about queries when enabled for a user. Entries can be
    # viewed using a text editor or the nQLogViewer executable.
    [ USER_LOG ]
    USER_LOG_FILE_SIZE = 10 MB; // default size
    CODE_PAGE = "UTF8"; // ANSI, UTF8, 1252, etc.
    # Usage Tracking Section
    # Collect usage statistics on each logical query submitted to the
    # server.
    [ USAGE_TRACKING ]
    ENABLE = NO;
    //==============================================================================
    // Parameters used for writing data to a flat file (i.e. DIRECT_INSERT = NO).
    STORAGE_DIRECTORY = "<full directory path>";
    CHECKPOINT_INTERVAL_MINUTES = 5;
    FILE_ROLLOVER_INTERVAL_MINUTES = 30;
    CODE_PAGE = "ANSI"; // ANSI, UTF8, 1252, etc.
    //==============================================================================
    DIRECT_INSERT = YES;
    //==============================================================================
    // Parameters used for inserting data into a table (i.e. DIRECT_INSERT = YES).
    PHYSICAL_TABLE_NAME = "<Database>"."<Catalog>"."<Schema>"."<Table>" ; // Or "<Database>"."<Schema>"."<Table>" ;
    CONNECTION_POOL = "<Database>"."<Connection Pool>" ;
    BUFFER_SIZE = 10 MB ;
    BUFFER_TIME_LIMIT_SECONDS = 5 ;
    NUM_INSERT_THREADS = 5 ;
    MAX_INSERTS_PER_TRANSACTION = 1 ;
    //==============================================================================
    # Query Optimization Flags
    [ OPTIMIZATION_FLAGS ]
    STRONG_DATETIME_TYPE_CHECKING = ON ;
    # CubeViews Section
    [ CUBE_VIEWS ]
    DISTINCT_COUNT_SUPPORTED = NO ;
    STATISTICAL_FUNCTIONS_SUPPORTED = NO ;
    USE_SCHEMA_NAME = YES ;
    USE_SCHEMA_NAME_FROM_RPD = YES ;
    DEFAULT_SCHEMA_NAME = "ORACLE";
    CUBE_VIEWS_SCHEMA_NAME = "ORACLE";
    LOG_FAILURES = YES ;
    LOG_SUCCESS = NO ;
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\CubeViews.Log";
    # MDX Member Name Cache Section
    # Cache subsystem for mapping between unique name and caption of
    # members for all SAP/BW cubes in the repository.
    [ MDX_MEMBER_CACHE ]
    // The entry to indicate if the feature is enabled or not, by default it is NO since this only applies to SAP/BW cubes
    ENABLE = NO ;
    // The path to the location where cache will be persisted, only applied to a single location,
    // the number at the end indicates the capacity of the storage. When the feature is enabled,
    // administrator needs to replace the "<full directory path>" with a valid path,
    // e.g. DATA_STORAGE_PATH = "C:\OracleBI\server\Data\Temp\Cache" 500 MB ;
    DATA_STORAGE_PATH     =     "C:\OracleBIData\cache" 500 MB;
    // Maximum disk space allowed for each user;
    MAX_SIZE_PER_USER = 100 MB ;
    // Maximum number of members in a level will be able to be persisted to disk
    MAX_MEMBER_PER_LEVEL = 1000 ;
    // Maximum size for each individual cache entry size
    MAX_CACHE_SIZE = 100 MB ;
    # Oracle Dimension Export Section
    [ ORA_DIM_EXPORT ]
    USE_SCHEMA_NAME_FROM_RPD = YES ; # NO
    DEFAULT_SCHEMA_NAME = "ORACLE";
    ORA_DIM_SCHEMA_NAME = "ORACLE";
    LOGGING = ON ; # OFF, DEBUG
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\OraDimExp.Log";

  • Design thoughts: Replacing a L2 aggregation switch

    Hi,
    I have purchased a 4507R switch to replace a 2924M-XL switch that acts as an aggregation switch in our network. Let me explain further what I plan to do.
    I have 20 remote sites connected point to point via 100 Mbps dark fibre to the 2924M-XL. Most of the sites have only a handful of users but 5 of them are bigger (ie. 20-70 users). Some of the larger remote sites (small campuses really) have 2-5 switches in a star topology with the "hub" switch connecting back to the 2924M-XL. Each site has 1 or 2 user VLANs and a management VLAN. The 2924M-XL trunks all VLANs back to a 6513 at the core of our network.
    I will be connecting the 4507R along 2 separate dark fibre runs (for layer 1 redundancy) to 2 6513s in our core. This will give us fault tolerance should our primary 6513 fail.
    My problem is I'm struggling with the decision to go layer 2 or layer 3 between the 4507R and the 6513s. Layer 2 would be a lot easier to implement and support (I'm the sole administrator of this rather large network), but then I'd have RSTP to deal with among the 2 6513s and the 4507. I'm comfortable with RSTP since I run it between 2950G switches dual-connected to the 6513s, but my gut feeling is that I should be putting in layer 3 between the 6513s and the 4507.
    We will be implementing VoIP in the next 2 years and I'm unsure how that affects my decision.
    One last comment. Would layer 2 trunking of VLANs from the 4507 to the 6513s WITHOUT trunking these VLANs between the 6513s be a viable option, and would HSRP between the 2 6513s still work OK for layer 3 redundancy? The remote sites are set up with unique user VLANs, but there is a special-use VLAN that spans 4 of the sites and my management VLAN spans all the sites (I'm planning to change this).
    Thanks everyone for your thought/opinions.
    Ian.

    Hi there Ian,
    I'm a big fan of routing over switching, which I read is becoming Cisco's recommended way of doing things.
    I would route between the 2 x 6513's and the 4507 as it will not only give you fault tolerance, but also load balancing, plus cutting down on broadcast domains and all those other nice things.
    As far as configuration goes, once you've got it up and running, it'll just keep running. It seems like you will only need straightforward routing here and nothing too complex. Setting it up would be a simple affair.
    VoIP, in my experience, is much better implemented over a routed network than a switched one. There are loads more things you can do at layer 3 than you can at layer 2. Think about all the QoS that you'll be able to implement, with shaping and policing, etc. Much more security can be built in at layer 3 too. You'll get the likes of NBAR and all the other features that you'll be able to (over time) tweak your network with.
    As for performance, you'll never spot a difference. The 4507 will be lots faster than the 2924, and using CEF, the 4507 will keep a forwarding table for IPs the same way a 2900 keeps a MAC table.
    You will not regret routing it.
    Hope this helps - if so, please give it a rating.
    LH

  • Aggregated Spatial Network Model

    Hi, everyone
    The paper posted is intended for audiences specializing in GIS and Utility Engineering.
    Please contact me if interested
    http://matchlogics.dyndns.org/MatchLogicsNew/Articles/Aggregated%20Spatial%20Network%20Model.pdf

    After setting it to 'FALSE' I am still getting the error
    (ORA-29532: Java call terminated by uncaught Java exception: java.lang.IllegalArgumentException: the specified map does not exist
    ORA-06512: at "MDSYS.SDO_NETWORK_MANAGER_I", line 315
    ORA-06512: at "MDSYS.SDO_NETWORK_MANAGER_I", line 245
    ORA-06512: at "METRO.GETSHORTESTPATH", line 31
    ORA-00600: internal error code, arguments: [17099], [], [], [], [], [], [], []
    ORA-06512: at line 1)
    What am I doing wrong?

  • Aggregation plan/Skip level aggregation for model with a cumulative measure

    I have planning data in the following format.
    Project     Department Name     Task     Date          Units of work completed
    PRO1     DEPARTMENT1          Task1     01/01/2008     12
    PRO1     DEPARTMENT1          Task1     01/21/2008     3
    PRO1     DEPARTMENT1          Task1     03/01/2008     8
    PRO1     DEPARTMENT1          Task1               4
    PRO1     DEPARTMENT1          Task2     01/21/2008     5
    PRO1     DEPARTMENT1          Task2               9
    PRO1     DEPARTMENT2          Task1     01/01/2008     20
    PRO1     DEPARTMENT2          Task1     02/11/2008     6
    PRO1     DEPARTMENT2          Task3     01/15/2008     15
    Note: The rows having blank dates indicate remaining work for that task
    Based on user requirements, I have created an OLAP model as follows
    Dimensions:
    1. All Projects-->Projects
    2. All Department-->Department
    3. All Tasks     --> Tasks
    4. Year-->Month-->Day
    Measures:
    1. Total units of work (Irrespective of date)
    2. Cumulative units of work completed (Based on the date)
    If someone has worked on similar models before, I would be thankful if they can help me with
    1) An aggregation plan for these measures. (Basically, for my first measure, I would want to get a cumulative total across my time dimension, and for my other measure, I would like to see the total units, whatever date I pick, for example, for Dep1, Task1, this measure should show 27 on 01/01/08 and also on 01/21/08 and also when I roll up and look at year 2008, I still need 27 in this column)
    2) Is it ok to apply Skip level aggregation to this type of calculations, or would that result in some problems?
    Any and All suggestions to implement this are welcome.
    Thanks,
    Bharat

    Hi,
    Can you build time dimension to include as many years as your application needs (2000 to 2025 say)? Then you can simplify the model a lot by defaulting the records with remaining units -- the ones with no date -- with the last date in your time dimension (31-DEC-2025). So in a sense, you're loading them as if they'll be complete on 31-DEC-2025.
    Also you should have a grand total level (ALL_YEARS say) along time dimension which contains a single member which includes all the years.
    Cube with 4 dimensions: Projects, Dept, Task, Time and 1 Fact: Units
    You can use calculated measures to get the results you want
    1. Total units of work (Irrespective of date) ... reference top most member. Will include all -- completed and incomplete units of work.
    Expression: cube1_units(time 'ALL_YEARS_1').. or use an olap dml function to get the last member programmatically if desired... Alternate Expression: cube1_units(time limit(time to time_levelrel eq 'ALL_YEARS')).
    2. Cumulative units of work completed (Based on the date)
    2a: Create formula/measure which is a regular Cumulative summation of Units .... Note: you need a Period-To-Date calculation set to sum up all peers under ancestor at level: ALL_YEARS (instead of year)
    This will include all completed units until the day in question. Since incomplete units are on the last day, they will not count.. You may need to add a special check for the last day.
    Use another formula to reference 2a appropriately across all levels of time...
    Formula for Measure #2: if time_levelrel eq 'DAY' then 2a else 2a(time statlast(limit(time to bottomdescendants using time_parentrel time(time time))))
    For higher levels of time (above day), you should reference the Cumulative units of work for the last day of the relevant period. E.g. To get completed units of work for October 2011, you need to reference the value of 2a. for last day of the Month: 31-Oct-2011.
    HTH
    Shankar

  • Activate more than one Aggregation Level

    Hello experts,
    We use Integrated Planning and we have more than 80 Aggregation Levels. When the MultiProvider is deactivated because of adding a new InfoObject, all our Aggregation Levels are deactivated as well.
    Is there a possibility to activate more than one Aggregation Level at once? Maybe a special report?
    Thank you.

    Hi,
    please check:
    [Re: Activate all the Aggregation level of underlying multi provider]
    Gregor wrote a small program and posted it to the forum. Please note: There is no support for this tool.
    Bye Matthias

  • Project Settlement - Aggregating/Settling on new custom field

    At our company, we created a new custom field called "Location Code" that is populated during journal entries through manual input. When an entry to a WBS element is posted with a Location Code value entered, the Location Code value also gets carried over to CO (Controlling) and SPL, and is available in PS. The problem we have is with settlement, as it does not carry the Location Code over to the final settlement receiver. For example:
    1. Journal entry is posted to WBS element H01.12345.01.01 with Location Code value entered of "435" (this is a field in the FI document posting screen) for $100. Another entry is posted to the same WBS element but with a different Location Code of "080" for $200. Both entries to the same G/L account.
    2. Settlement is run to settle the 2 entries in Step 1 above which totals ($200 + $100) $300. WBS element H01.12345.01.01 will settle to a cost center.
    3. When we view the final accounting documents (especially our special purpose ledger documents), the Location Code value is blank and there is one entry to the receiving cost center for $300. It seems project settlement aggregates entries based only on the WBS level and G/L account.
    QUESTION: Is it possible (and if so, how) to get projects to settle these lines individually, aggregating by Location Code as well as G/L account? We would like to see $100 settle to the receiving cost center with Location Code value "435", and another line settling $200 to the receiving cost center with Location Code "080".
    I hope I made sense. Thanks in advance!

    line item settlement should only be possible if the setting is maintained in the IM Profile and it sounds like your projects don't have an IM profile...  so it shouldn't work.  I haven't used it on a non-capital project but you could give it a shot.
    [-nathan|http://wiki.sdn.sap.com/wiki/display/profile/Nathan+Genez]
