Logical partitioning, pass-through layer, query pruning

Hi,
I am working through performance guidelines for BW and encountered a few interesting topics which I do not fully understand.
1. Maintenance of logical partitioning.
Let's assume logical partitioning is performed on year. Does it mean that every year or so it is necessary to create an additional cube/transformation and modify the MultiProvider? Is there any automatic procedure by SAP that supports the creation of new objects, or is it fully manual?
2. Pass-through layer.
There is very little information about this basic concept. Anyway:
- Is the pass-through DSO a write-optimized one? Does it store only one load, the last one? Is it deleted after the load is successfully finished (or before a new load starts)? And doesn't this deletion break the delta mechanism? Is the DSO functionally replacing the PSA (i.e. the PSA can be deleted after every load as well)?
3. Query pruning
Does this happen automatically on the DB level, or are additional developments with exit variables, steering tables and function modules required?
4. DSOs for master data loads
What is the benefit of using full MD extraction and DSO delta instead of MD delta extraction?
Thanks,
Marcin

1. Maintenance of logical partitioning.
Let's assume logical partitioning is performed on year. Does it mean that every year or so it is necessary to create an additional cube/transformation and modify the MultiProvider? Is there any automatic procedure by SAP that supports the creation of new objects, or is it fully manual?
Logical partitioning is when you have separate ODSs/cubes for separate years, etc.
There is no automated way. However, if you want to, you can physically partition the cubes using time periods and extend them regularly using the repartitioning options provided.
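As a rough, hedged illustration of what that extension looks like at the database level (Oracle syntax, made-up object names; BW generates and maintains the real DDL itself):

-- a fact table range-partitioned by month, with a catch-all MAXVALUE partition
CREATE TABLE sales_fact (calmonth NUMBER(6), quantity NUMBER)
PARTITION BY RANGE (calmonth) (
    PARTITION p2005 VALUES LESS THAN (200601),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- extending the time range later means carving new slices out of the catch-all
ALTER TABLE sales_fact SPLIT PARTITION pmax AT (200701)
    INTO (PARTITION p2006, PARTITION pmax);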
2. Pass-through layer.
There is very little information about this basic concept. Anyway:
- Is the pass-through DSO a write-optimized one? Does it store only one load, the last one? Is it deleted after the load is successfully finished (or before a new load starts)? And doesn't this deletion break the delta mechanism? Is the DSO functionally replacing the PSA (i.e. the PSA can be deleted after every load as well)?
Usually a pass-through layer is used to:
1. Ensure data consistency
2. Possibly use deltas
3. Apply additional transformations
In a write-optimized DSO, the request ID is part of the key, and hence delta is based on the request ID. If you do not have any additional transformations, then a write-optimized DSO is essentially like your PSA.
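A hedged sketch of what that request-based delta amounts to, using a hypothetical active-table name (real write-optimized DSOs get a /BIC/A...00 active table whose technical key is request, data packet and record number):

-- read only the requests that have not been passed downstream yet
SELECT *
  FROM "/BIC/AZPASSTH00"                   -- hypothetical active table
 WHERE "REQUEST" > :last_delivered_request -- bookmark from the previous run
 ORDER BY "REQUEST", "DATAPAKID", "RECORD";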
3. Query pruning
Does this happen automatically on the DB level, or are additional developments with exit variables, steering tables and function modules required?
Query pruning depends on the rule-based and cost-based optimizers within the DB; there is not much control over how well you can influence the execution of a query other than having up-to-date statistics, building aggregates, etc.
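One way to check whether pruning actually happened (a sketch in Oracle syntax, with hypothetical table and column names) is to look at the partition columns of the execution plan:

EXPLAIN PLAN FOR
  SELECT SUM(quantity)
    FROM sales_fact              -- hypothetical range-partitioned fact table
   WHERE calmonth = 200506;      -- restriction on the partitioning column

-- a "PARTITION RANGE SINGLE" step with narrow Pstart/Pstop values
-- shows that the optimizer pruned down to a single partition
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);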
4. DSOs for master data loads
What is the benefit of using full MD extraction and DSO delta instead of MD delta extraction?
It depends more on the data volumes and also the number of transformations required...
If you have multiple levels of transformations, or very high data volumes where you want to identify changed records, then use a DSO.
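In effect, the standard DSO derives the delta for you: it compares the incoming full load against its active table and writes only the differences to its change log. A loose SQL sketch of that comparison, with hypothetical table and column names (MINUS is Oracle syntax):

-- records in the full master-data extraction that are new or changed
-- compared with what the DSO already holds
SELECT matnr, matl_type, matl_group FROM stage_material_full
MINUS
SELECT matnr, matl_type, matl_group FROM active_material;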

Similar Messages

  • Are there any tools from NI that can generate the pass-through layer (IVI class driver) for an IVI class-specific driver?

    The IVI class driver (layer) provides the interchangeability feature for an IVI class-specific driver. It works as a pass-through layer and ultimately makes calls to the IVI class-specific driver. Since there could be a lot of functions to be passed through, are there any tools from NI (LabWindows/CVI or LabVIEW) that can do this?
    Thanks a lot. 
    BTW: the IVI class-specific driver interface is generated with LabWindows/CVI tools.

    Hi Chris,
    Yes, I did. To support interchangeability, from my understanding of the IVI specs, there should be another layer, an IVI-C class driver, on top of this IVI class-specific driver. As in the IVI‑3.1: Driver Architecture Specification:
    "Although IVI‑C class drivers export inherent,
    base, and extension capabilities, they do not actually implement them. Except
    for a few inherent functions and attributes defined exclusively for class
    drivers, class driver functions and attributes provide a pass‑through layer to
    the IVI‑C specific driver. An IVI‑C specific driver is responsible for
    implementing the operations of functions and attributes and for communicating
    with the instrument. The IVI‑C specific instrument driver contains the
    information for controlling the instrument, including the command strings,
    parsing code, and valid ranges of each instrument setting"
    So where is this IVI-C class driver, how is it created, and how does it communicate with my class-specific driver?
    Thanks a lot.
    Cheers,
    IVI‑3.1: Driver Architecture Specification

  • Code to run a query in SQL from Access with a pass-through query

    I have a query in SQL Server 2008: [Auto Null Up Date].sql. I want to run this query from Access 2007 using a pass-through query. What is the command/code to run this query from Access? I have used pass-through queries but never in this capacity, so I am somewhat lost. I have already established the ODBC link and tested it.

    Naomi,
    Here are a few lines of the SQLCMD code in the [Auto Null Update].sql query:
    USE [Archive Master]
    Go
    :r "\\10.200.1.60\c$\Users\bkreft\My Documents\SQL Server Management Studio\Projects\Null BackPress 2 update.sql"
    GO
    :r "\\10.200.1.60\c$\Users\bkreft\My Documents\SQL Server Management Studio\Projects\Null CHWR 3 update.sql"
    GO
    :r "\\10.200.1.60\c$\Users\bkreft\My Documents\SQL Server Management Studio\Projects\Null CHWR 4 update.sql"
    When this code is pasted into a Create Procedure (the USE [Archive Master] is not used), the procedure will run, but once saved, here is what is left of the procedure when I attempt to modify it:
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    -- =============================================
    -- Author:                         
    <Author,,Name>
    -- Create date: <Create Date,,>
    -- Description:              
    <Description,,>
    -- =============================================
    Create PROCEDURE [dbo].[NullTest2]
    AS
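    One possible approach (a hedged sketch: the :r directives are SQLCMD-mode only and are not valid T-SQL inside a stored procedure, so the statements from the referenced script files have to be inlined):

    -- on SQL Server: wrap the work in a procedure, inlining the script contents
    CREATE PROCEDURE dbo.NullTest2
    AS
    BEGIN
        SET NOCOUNT ON;
        -- paste the statements from "Null BackPress 2 update.sql" here
        -- paste the statements from "Null CHWR 3 update.sql" here
        -- paste the statements from "Null CHWR 4 update.sql" here
    END
    GO

    -- the Access pass-through query then needs only this single line as its SQL
    -- (with the ODBC connect string set in the query's properties):
    EXEC dbo.NullTest2;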

  • Changing from "Pass Through" as Default in Blend Mode in a Group Folder Layer Set

    Hello Forum gurus!
    Using a MacBook Pro, 10.5.4. Painting with CS2. My question is two-fold.
    When the Group layer folder in a layer set is selected, it displays the "Pass Through" blend mode by default. Is there a special convenience to this? (What is the best use for the pass-through blend mode?)
    Also, how may I change the default of this mode to "Normal" when creating a Group folder layer set?
    Thanks!!
    Colene

    Matthias, now I understand. We're both right from different perspectives. If you want the adjustment layer (should there be one within the group) to affect all layers below the group (perhaps because that's the way the image looked before you made the group in the first place), then Pass Through is your blend mode. If you want that adjustment layer to affect only those layers within the group (which I often do), then Normal (or some other) is your blend mode.

  • Bex Query: make data pass through user exit calculation at navigation time

    Hi all!
    I have a new requirement and I don't know how to solve it...
    Now, when I execute a web model containing a query, the system "reads" a date and calculates the query based on that date in a user exit defined in CMOD, for example filtering data with an interval between January and the date read.
    Besides, I have in the web model a dropdown item where the user can choose other months. The dropdown item only shows single values, but now if I choose a month, the query only shows data for that month.
    I need the system to filter the query with the new interval, for example between January and the new month the user has just chosen.
    Does anyone know a way to make a query pass through the user-exit calculation after executing the query for the first time? Any other ideas? I need the query to "re-execute" and filter the data (create a new interval) based on the value the user chose.
    (Sorry about any inconvenience: I posted the problem in another specific SDN forum, but as I received no answer I've decided to explain it here...)
    Thank you! Points will be assigned.

    Any ideas please?

  • DMS Document upload: does it pass through SAP DMS?

    Dear All,
    We have a question concerning the transmission of documents from the client to DMS and the Content Server: does the document need to pass through the SAP server?
    Document upload (create document CV01N)
    Does the document go directly from the client to the Content Server, OR does it pass through SAP DMS before being stored in the CS?
    Document download (read document CV03N)
    Does the document go directly from the CS to the client, OR does it pass through the SAP DMS server?
    This could be interesting to know for network performance.
    best regards,

    Hi Gurus,
    Is the cache server default functionality of the content server, or is some configuration required on our part?
    Does the cache server act as the RAM of our system?
    Please explain the partitioning or bifurcation of the Content Server. As you said, the content server is divided into storage categories, and these in turn into content repositories.
    Please clarify the points below:
    1) Any server or PC can be made a Content Server by installing the content server CD, if I am right?
    2) What are the practical and functional benefits of partitioning a content server into content repositories? Is it for authorization and for storing data by naming convention, or can it also help in copying data from a specific content repository if needed? (Are content repositories a logical partition or a physical partition, like the B, C, D, F drives of a PC hard disk?)
    3) Can/should there be multiple content server installations for a particular (production) client?
    4) Can archiving be done by creating a separate content repository inside the same Content Server, or is it mandatory to have a separate archiving server?
    Please give some ideas with examples.
    Thanks and regards,
    Kumar

  • 0IC_C03 related Inventory Process - Logical Partitioning (vs) Physical Partitioning

    Hello Everyone,
    After going through multiple postings throughout the forum and documentation from SAP, it states that the 0IC_C03 InfoCube, when used with non-cumulative key figures, is not recommended to be partitioned logically by fiscal year/calendar year, as the query will read all the data sets due to the stock marker logic.
    In our specific scenario,
    1. After the InfoCube (0IC_C03) was enhanced with additional characteristics such as document number, movement type and so on due to business requirements, I was not able to actually use the non-cumulative key figures, as they were not populated in the report.
    2. So we decided not to use the non-cumulative key figures but rather create two cumulative key figures (Issue Stock Quantity - Receipt Stock Quantity) and (Issue Valuated Stock Value - Receipt Valuated Stock Value); both of these are available in the InfoCube and are calculated during the update process.
    These two key figures are cumulative with exception aggregation LAST based on 0CALDAY.
    The question is,
    Since we are not using the actual non-cumulative key figures (even though we are not using them, we still have the stock marker updated and the data compressed based on it, along with the validity table defined), can we do logical partitioning on the InfoCube based on calendar year?
    Thanks....

    Hello Elango,
    Appreciate your response.
    First off, I do understand the difference between logical and physical partitioning, and the question is not about joining them together.
    I am sorry if others cannot understand the detailed issue posted. My apology was part of a polite gesture; please do respond with a proper, precise answer if you think you actually understood the question...
    The question here is how I can improve query and administrative performance by logically breaking down the data.
    The issues due to which I am trying to look into different aspects of logical partitioning are:
    1. If I do logical partitioning by plant, then due to the stock marker logic I cannot do archiving, as a plant and its related data cannot be archived by a time characteristic when the partitioning is not done by a time characteristic.
    2. The reason I would have to have document number and movement type in the InfoCube is the kind of reporting users perform.
    We have a third party system whose data needs to be reconciled with the data in the plants and storage locations.
    In order to do so, the first step is that users run the report by plant, storage location and SKU. From there, for the storage locations which have a balance, they would like to drill down to the document number and movement type to see what the actual activity is.
    So, to support this requirement I would have to have the above characteristics in the InfoCube.
    The question again is: what is the exact list of issues I would face doing the logical partitioning by a time characteristic?
    Once again, even though the non-cumulative key figures are available in the InfoCube, we are not using them for any reporting purpose... so please keep that in consideration while replying.
    Thanks
    Dharma.

  • Physical Vs Logical Partitioning

    We have 2 million records in the sales InfoCube for 3 years. We are currently discussing the pros and cons of using logical partitioning vs. physical partitioning. Please give your inputs.

    hi
    There are two types of partitioning generally talked about with SAP BW: logical and physical partitioning.
    Logical partitioning - instead of having all your data in a single cube, you might break it into separate cubes, with each cube holding a specific year's data; e.g. you could have 5 sales cubes, one for each year 2001 through 2005.
    You would then create a Multi-Provider that allowed you to query all of them together.
    A query that needs data from all 5 years would then automatically (you can control this) be split into 5 separate queries, one against each cube, running at the same time. The system automatically merges the results from the 5 queries into a single result set.
    So it's easy to see when this could be a benefit. If, however, your queries are primarily run for a single year, then you don't receive the benefit of the parallel processing. In non-Oracle DBs, splitting the data like this may still be a benefit by reducing the number of rows in the fact table that must be read, but it does not provide as much value on an Oracle DB, since InfoCube queries use a star transformation.
    Physical Partitioning - I believe only Oracle and Informix currently support Range partitioning. This is a separately licensed option in Oracle.
    Physical partitioning allows you to split an InfoCube into smaller pieces. The pieces, or partitions, can only be created by 0FISCPER or 0CALMONTH for an InfoCube (ODSs can be partitioned as well, but require a DBA's involvement). The DB can then take advantage of this partitioning by "pruning" partitions during a query, e.g. when a query only needs data from June 2005.
    The DB is smart enough to restrict the indices and data it reads to the June 2005 partition. This assumes your query restricts/filters on the partitioning characteristic. It can apply this pruning to a range of partitions as well, e.g. 0FISCPER 001/2005 through 003/2005 would only look at those 3 partitions.
    It is NOT smart enough, however, to figure out that if you restrict to 0FISCYEAR = 2005, it should only read 000/2005 through 016/2005, since 0FISCYEAR is NOT the partitioning characteristic.
    An InfoCube MUST be empty in order to physically partition it. At this time, there is no way to add additional partitions through the AWB, so you want to make sure that you create partitions out into the future, for at least a couple of years.
    If the base cube is partitioned, any aggregates that contain the partitioning characteristic (0CALMONTH or 0FISCPER) will automatically be partitioned.
    In summary, you need to figure out if you want to use physical or logical partitioning on the cube(s), or both, as they are not mutually exclusive.
    So you would need to know how the data will be queried, and the volume of data. It would make little sense to partition cubes that will not be very large.
    Physical partitioning is done at the database level and logical partitioning at the data target level.
    Cube partitioning with the time characteristics 0CALMONTH or 0FISCPER is physical partitioning.
    Logical partitioning is when you partition your cube by year or month, i.e. you divide the cube into different cubes and create a MultiProvider on top of them.
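    A concrete, hedged illustration of the pruning described above (Oracle syntax, made-up names; BW generates the real fact tables itself):

    -- E fact table range-partitioned by month
    CREATE TABLE sales_e (calmonth NUMBER(6), quantity NUMBER)
    PARTITION BY RANGE (calmonth) (
        PARTITION p200504 VALUES LESS THAN (200505),
        PARTITION p200505 VALUES LESS THAN (200506),
        PARTITION p200506 VALUES LESS THAN (200507)
    );

    -- a filter on the partitioning column lets the DB read only p200506;
    -- a filter on some other (non-partitioning) column would not prune
    SELECT SUM(quantity) FROM sales_e WHERE calmonth = 200506;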

  • Data has changed after passing through FIFO?

    Dear experts,
    I am currently working on a digital triangular shaping using the 7966R FPGA + 5734 AI. I am using LabView 2012 SP1.
    Some days ago I have encountered a problem with my FIFOs that I have not been able to solve since. I'd be glad if somebody could point out a solution/ my error.
    Short description:
    I am writing U16 variables between ~32700-32800 to a U16-configured FIFO. The FIFO output does not coincide with the data I have been writing to the FIFO but is rather bit-shifted, or something is added. This problem does not occur if I execute the VI on the dev PC with simulated input.
    What I have done so far:
    I am reading all 4 channels of the 5734 inside an SCTL. The data is stored in 4 feedback nodes. I am applying a triangular shaping to channels 0 and 1 by using 4 FIFOs that have been prefilled with a predefined number of zeros to serve as buffers. So it's something like (FB = feedback node):
    A I/O 1  --> FB --> FIFO 1 --> FB --> FIFO 2 --> FB --> Do something
    A I/O 2  --> FB --> FIFO 3 --> FB --> FIFO 4 --> FB --> Do something
    This code shows NO weird behaviour and works as expected.
    The Problem:
    To reduce the number of FIFOs needed, I then decided to interleave the data and use only 2 FIFOs instead of 4. You can see the code in the attachment. As you can see, I have not really changed anything about the code structure in general.
    The input to the FIFO is a U16. All FIFOs are configured to store U16 data.
    The data that I am writing to the FIFO can be seen in channel 0 of the output attachment.
    The output after passing through the two FIFOs can be seen in channel 2 of the same picture.
    The output after passing through the first FIFO (times 2) can be seen in channel 3 of the picture.
    It looks like the output is bit-shifted and truncated as it enters Buffer 1. Yet the difference between the input and output is not exactly a factor of 2. I also considered the possibility that the FIFO adds both write operations (CH0 + CH1) but that also does not account for the value of the output.
    The FIFOs are all operating normally, i.e. none throws a timeout. I also tried several different orders of reading/writing to the FIFOs and different ways of ensuring this order (i.e. case structures, flat and stacked sequences). The FIFOs are also large enough to store the amount of data buffered, no matter whether I write or read first.
    Thank you very much,
    Bjorn
    Attachments:
    FPGA-code.png ‏61 KB
    FPGA-output.png ‏45 KB

    During the last couple of days I tried the following:
    1. Running the FPGA code on the development PC with simulated I/O. The behavior was normal, i.e. the code performed as I intended.
    2. I tested the code on the development PC with the square and sine wave generation VI as 'simulated' I/O. The code performed normally.
    3. I replaced the FIFOs with queues and ran my logic on the dev PC. The logic performed totally normally.
    4. Right now the code is compiling with constants as inputs like you suggested...
    I am currently trying to get LabVIEW 2013 on the development machine. It seems like my last real hope is that the issue is a bug in the Xilinx 13.4 compiler tools and that the 14.4 tools will just make it disappear...
    Nevertheless, I am still open to suggestions. Some additional info about the FIFOs of concern:
    Buffer 1 and 2:
    - Type: Target Scoped
    - Elements Requested: 1023
    - Implementation: Block Memory
    - Control Logic: Target Optimal
    - Data Type: U16
    - Arbitrate for Read: Never Arbitrate
    - No. Elements Per Read: 1
    - Arbitrate for Write: Never Arbitrate
    - No. Elements Per Write: 1
    The inputs from the NI 5734 are U16, so I am wiring the right data type to the FIFOs. I also don't have any coercion dots within my FPGA VI. So far the problem has only occurred after the VI has been compiled onto the FPGA. Could some of the FIFOs/block memory be corrupted because we have written to the FPGA too often?

  • Cisco ASA 5505 L2TP Pass through

    I am having trouble with L2TP pass through on an ASA 5505 device.
    L2TP server: OSX 10.6
    I can connect with any OSX system and it works fine straight away.
    When connecting with a Windows computer I get a 789 error: "Error 789: The L2TP connection attempt failed because the security layer encountered a processing error during initial negotiations with the remote computer."
    I did not set up or configure the device to start with, and apart from this issue it is working fine, so I am hesitant to mess around too much trying to find the problem.
    I am using the ASDM 6.4 to manage the device.
    Ports look to be forwarded correctly; 1701, 4500 & 500 UDP.
    I'm just looking for other common issues.
    Rob

    Below are the commands you wanted.
    Where you see IPNOTWHATIWASEXPECTING:
    This is an IP I don't know, possibly an old IP address.
    and
    default-domain value domain-notcorrect.local
    This is an old domain from years ago.
    Result of the command: "show run crypto"
    crypto ipsec transform-set aes-sha esp-aes esp-sha-hmac
    crypto ipsec transform-set aes-192-sha esp-aes-192 esp-sha-hmac
    crypto ipsec transform-set aes-256-sha esp-aes-256 esp-sha-hmac
    crypto ipsec transform-set 3des-sha esp-3des esp-sha-hmac
    crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
    crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
    crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
    crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
    crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
    crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
    crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
    crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
    crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
    crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
    crypto ipsec transform-set transform-amzn esp-aes esp-sha-hmac
    crypto ipsec security-association lifetime seconds 28800
    crypto ipsec security-association lifetime kilobytes 4608000
    crypto dynamic-map map-dynamic 1 set pfs group5
    crypto dynamic-map map-dynamic 1 set transform-set aes-256-sha aes-192-sha aes-sha 3des-sha
    crypto dynamic-map map-dynamic 2 set pfs
    crypto dynamic-map map-dynamic 2 set transform-set aes-256-sha aes-192-sha aes-sha 3des-sha
    crypto dynamic-map map-dynamic 3 set pfs
    crypto dynamic-map map-dynamic 3 set transform-set aes-256-sha aes-192-sha aes-sha 3des-sha
    crypto dynamic-map map-dynamic 4 set transform-set aes-256-sha aes-192-sha aes-sha 3des-sha
    crypto map outside_map 1 match address outside_1_cryptomap
    crypto map outside_map 1 set peer IPNOTWHATIWASEXPECTING3
    crypto map outside_map 1 set transform-set ESP-DES-SHA
    crypto map outside_map 2 match address acl-amzn
    crypto map outside_map 2 set pfs
    crypto map outside_map 2 set peer IPNOTWHATIWASEXPECTING IPNOTWHATIWASEXPECTING
    crypto map outside_map 2 set transform-set transform-amzn
    crypto map outside_map 255 ipsec-isakmp dynamic map-dynamic
    crypto map outside_map interface outside
    crypto isakmp identity address
    crypto isakmp enable outside
    crypto isakmp policy 1
    authentication pre-share
    encryption aes-256
    hash sha
    group 5
    lifetime 86400
    crypto isakmp policy 2
    authentication pre-share
    encryption aes-256
    hash sha
    group 2
    lifetime 86400
    crypto isakmp policy 3
    authentication pre-share
    encryption aes-256
    hash sha
    group 1
    lifetime 86400
    crypto isakmp policy 11
    authentication pre-share
    encryption aes-192
    hash sha
    group 5
    lifetime 86400
    crypto isakmp policy 12
    authentication pre-share
    encryption aes-192
    hash sha
    group 2
    lifetime 86400
    crypto isakmp policy 13
    authentication pre-share
    encryption aes-192
    hash sha
    group 1
    lifetime 86400
    crypto isakmp policy 21
    authentication pre-share
    encryption aes
    hash sha
    group 5
    lifetime 86400
    crypto isakmp policy 22
    authentication pre-share
    encryption aes
    hash sha
    group 2
    lifetime 86400
    crypto isakmp policy 23
    authentication pre-share
    encryption aes
    hash sha
    group 1
    lifetime 86400
    crypto isakmp policy 31
    authentication pre-share
    encryption 3des
    hash sha
    group 5
    lifetime 86400
    crypto isakmp policy 32
    authentication rsa-sig
    encryption des
    hash sha
    group 1
    lifetime 86400
    crypto isakmp policy 33
    authentication pre-share
    encryption 3des
    hash sha
    group 1
    lifetime 86400
    crypto isakmp policy 34
    authentication pre-share
    encryption 3des
    hash sha
    group 2
    lifetime 86400
    Result of the command: "show run group-policy"
    group-policy evertest internal
    group-policy evertest attributes
    dns-server value 10.100.25.252
    vpn-idle-timeout 720
    vpn-tunnel-protocol IPSec l2tp-ipsec
    pfs enable
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value vpnsplittunnel
    default-domain value domain-notcorrect.local
    group-policy petero internal
    group-policy petero attributes
    dns-server value 10.100.25.252
    vpn-idle-timeout 720
    pfs enable
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value vpnsplittunnel
    default-domain value domain-notcorrect.local
    group-policy awsfilter internal
    group-policy awsfilter attributes
    vpn-filter value amzn-filter
    group-policy vpnpptp internal
    group-policy vpnpptp attributes
    dns-server value 10.100.25.252
    vpn-tunnel-protocol l2tp-ipsec
    group-policy vanheelm internal
    group-policy vanheelm attributes
    dns-server value 10.100.25.252
    vpn-idle-timeout 720
    vpn-tunnel-protocol IPSec l2tp-ipsec
    pfs enable
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value vpnsplittunnel
    default-domain value domain-notcorrect.local
    group-policy ciscoVPNuser internal
    group-policy ciscoVPNuser attributes
    dns-server value 10.100.25.10
    vpn-idle-timeout 720
    pfs enable
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value vpnsplittunnel
    default-domain value domain-notcorrect.local
    group-policy chauhanv2 internal
    group-policy chauhanv2 attributes
    dns-server value 10.100.25.252
    vpn-idle-timeout 720
    pfs enable
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value vpnsplittunnel
    default-domain value domain-notcorrect.local
    group-policy oterop internal
    group-policy oterop attributes
    dns-server value 10.100.25.252
    vpn-idle-timeout 720
    vpn-tunnel-protocol IPSec l2tp-ipsec
    pfs enable
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value vpnsplittunnel
    default-domain value domain-notcorrect.local
    group-policy Oterop internal
    group-policy Oterop attributes
    dns-server value 10.100.25.252
    vpn-idle-timeout 30
    group-policy chauhanv internal
    group-policy chauhanv attributes
    dns-server value 10.100.25.252
    vpn-idle-timeout 30
    vpn-tunnel-protocol IPSec l2tp-ipsec
    group-policy bnixon2 internal
    group-policy bnixon2 attributes
    dns-server value 10.100.25.252
    vpn-idle-timeout 720
    vpn-tunnel-protocol IPSec l2tp-ipsec
    pfs enable
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value vpnsplittunnel
    default-domain value domain-notcorrect.local
    Result of the command: "show run tunnel-group"
    tunnel-group ciscoVPNuser type remote-access
    tunnel-group ciscoVPNuser general-attributes
    address-pool vpnippool
    default-group-policy ciscoVPNuser
    tunnel-group ciscoVPNuser ipsec-attributes
    pre-shared-key *****
    tunnel-group petero type remote-access
    tunnel-group petero general-attributes
    address-pool vpnippool
    default-group-policy petero
    tunnel-group petero ipsec-attributes
    pre-shared-key *****
    tunnel-group oterop type remote-access
    tunnel-group oterop general-attributes
    address-pool vpnippool
    default-group-policy oterop
    tunnel-group oterop ipsec-attributes
    pre-shared-key *****
    tunnel-group vanheelm type remote-access
    tunnel-group vanheelm general-attributes
    address-pool vpnippool
    default-group-policy vanheelm
    tunnel-group vanheelm ipsec-attributes
    pre-shared-key *****
    tunnel-group chauhanv type remote-access
    tunnel-group chauhanv general-attributes
    default-group-policy chauhanv
    tunnel-group Oterop type remote-access
    tunnel-group Oterop general-attributes
    default-group-policy Oterop
    tunnel-group chauhanv2 type remote-access
    tunnel-group chauhanv2 general-attributes
    address-pool vpnippool
    default-group-policy chauhanv2
    tunnel-group chauhanv2 ipsec-attributes
    pre-shared-key *****
    tunnel-group bnixon2 type remote-access
    tunnel-group bnixon2 general-attributes
    address-pool vpnippool
    default-group-policy bnixon2
    tunnel-group bnixon2 ipsec-attributes
    pre-shared-key *****
    tunnel-group vpnpptp type remote-access
    tunnel-group vpnpptp general-attributes
    address-pool vpnippool
    default-group-policy vpnpptp
    tunnel-group IPNOTWHATIWASEXPECTING4 type ipsec-l2l
    tunnel-group IPNOTWHATIWASEXPECTING4 ipsec-attributes
    pre-shared-key *****
    tunnel-group evertest type remote-access
    tunnel-group evertest general-attributes
    address-pool vpnippool
    default-group-policy evertest
    tunnel-group evertest ipsec-attributes
    pre-shared-key *****
    tunnel-group evertest ppp-attributes
    authentication ms-chap-v2
    tunnel-group IPNOTWHATIWASEXPECTING3 type ipsec-l2l
    tunnel-group IPNOTWHATIWASEXPECTING3 ipsec-attributes
    pre-shared-key *****
    tunnel-group IPNOTWHATIWASEXPECTING2 type ipsec-l2l
    tunnel-group IPNOTWHATIWASEXPECTING2 general-attributes
    default-group-policy awsfilter
    tunnel-group IPNOTWHATIWASEXPECTING2 ipsec-attributes
    pre-shared-key *****
    isakmp keepalive threshold 10 retry 3
    tunnel-group IPNOTWHATIWASEXPECTING type ipsec-l2l
    tunnel-group IPNOTWHATIWASEXPECTING general-attributes
    default-group-policy awsfilter
    tunnel-group IPNOTWHATIWASEXPECTING ipsec-attributes
    pre-shared-key *****
    isakmp keepalive threshold 10 retry 3
    Result of the command: "show vpn-sessiondb detail remote filter protocol L2TPOverIPsec"
    INFO: There are presently no active sessions of the type specified
    Result of the command: "show vpn-sessiondb detail remote filter protocol L2TPOverIPsecOverNAT"
    INFO: There are presently no active sessions of the type specified

  • NW 7.3 specific - Database partitioning on top of logical partitioning

    Hello folks,
    In NW 7.3, I would like to know if it is possible to add a specific database partitioning rule on top of a logically partitioned cube. For example, if I have an LP cube by fiscal year, I would also like to specifically partition all generated cubes at the DB level. I could not find any option in the GUI. In addition, each generated cube can only be viewed (it cannot be changed in the GUI). Would anybody know if this is possible?
    Thank you
    Ioan

    Fair point! Let me explain in more detail what I am looking for. In 7.0x, a cube can be partitioned at the DB level by fiscal period. Let's suppose my cube has only fiscal year 2011 data. If I partition the cube at the DB level by fiscal period into 12 buckets, I will get 12 distinct partitions (E table only) in the database. If the user runs a query on 06/2011, then the DB will search for the data only in the 06/2011 bucket - this is obviously faster than browsing the entire cube (even with indexes).
    In 7.3, cubes can be logically partitioned (LP). I created an LP by fiscal year - so far so good. Now I would like to partition each individual cube created by the LP at the DB level. Right now I cannot - this means that my fiscal year 2012 cube will have its entire data residing in only 1 large partition, so a 06/2012 query will take longer (in theory).
    So my question is: "Is it possible to partition a cube generated by an LP into fiscal period buckets?" I believe the answer is no right now (Dec 2011).
    By the way, all of the above applies to an RDBMS environment - it is not a concern for BWA/HANA, since there the data is column-based and stored in RAM (not the same technology as an RDBMS).
    I hope this clarifies my question.
    Thank you
    Ioan

  • Data Guard as a pass through?

    Scenario is . . .
    Host A is the Primary
    Host B is a Standby
    Host C is a Standby
    Now I know we can set up A->B and A->C
    Can we set up A->B->C ?
    Essentially using B as a pass through between A and C. Or you can see it as B being in a DMZ.
    Would B have to be Active Data Guard or anything special?
    I guess what I am really asking is can a Standby be used as the source for another Standby.

    See http://download.oracle.com/docs/cd/E11882_01/server.112/e10700/cascade_appx.htm#i638620 for more information on Cascaded Standby Destinations. There are a few restrictions:
    Cascading has the following restrictions:
    * Logical and snapshot standby databases cannot cascade primary database redo.
    * SYNC destinations cannot cascade primary database redo in a Maximum Protection Data Guard configuration.
    * Cascading is not supported in Data Guard configurations that contain an Oracle Real Applications Cluster (RAC) primary database.
    * Cascading is not supported in Data Guard broker configurations.
    Keep an eye on this chapter and Note 409013.1 "Cascaded Standby Databases" when the next patch set for 11.2 comes out :^)
    Larry
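    A minimal sketch of the cascaded setup, assuming hypothetical net service names and DB_UNIQUE_NAMEs (hosta/hostb/hostc). As far as I know, B does not need Active Data Guard just to cascade redo, but the restrictions quoted above still apply:

    -- run on Host B (the cascading physical standby): forward standby redo to C
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG = 'DG_CONFIG=(hosta,hostb,hostc)' SCOPE=BOTH;
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=hostc_srv ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=hostc' SCOPE=BOTH;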

  • Web Policy Pass Through On Standalone AP

    On a Cisco WLC SSID layer 3 configuration you can set up a web policy pass-through to redirect a connected client's web browser to a certain starting page. Is this possible with a standalone Cisco AP not connected to a WLC?
    Thanks.

    No, it's not possible. If you have a standalone AP, you will need a 3rd-party appliance or software to get the splash page option.

  • "Pass Through" blending

    How is it possible to achieve blending like "Pass Through" on a group in Photoshop CS2?
    For example, in Flash you might have a background image and then a Sprite object containing two images, where the first image has a blend mode of NORMAL and the second image has a blend mode of ADD. The second image partially overlaps the first image. If this example were set up in Photoshop, it would be the background layer and a group with two layers, the first with a blend mode of "Normal" and the second with a blend mode of "Linear Dodge" (additive). The group's blend mode is "Pass Through".
    In Photoshop, the second (additive) layer is properly blended with the first layer and the background layer. You can adjust the opacity of the group and everything becomes more transparent as expected.
    In Flash, however, the two options I know of for the Sprite object are NORMAL and LAYER, but they don't exhibit the same behavior as Photoshop's "Pass Through" option. With NORMAL, the contained objects (images) aren't pre-composited, so an alpha of 50% will render the first image on the background, then the second (additive) image on top of that. With LAYER, the images are pre-composited, so the alpha applies to the whole, but the additive image is only properly blended with the first image, not the background. (I'm aware this is because the pre-compositing buffer is initialized to black, and of course 0 + n = n.)
    Does anyone know any way to achieve or emulate this "Pass Through" behavior in Flash? I am developing for Flash Player 10.

    These are things that you can do in After Effects. If you want these features in Premiere Pro, please submit a feature request.

  • How to mark PCI devices for pass-through in a host using PowerCLI?

    PCI devices in a host can be retrieved using the Get-VMHost command. How do I mark a device for pass-through on the host?
    Please help on how this can be done. Thanks in advance.

    Hi,
    I don't think suppressing it through Global Personalization will change the business logic; the business logic itself checks for the mandatory field.
    After the changes, I guess you need to adjust accordingly.
    The link below might be of some help.
    http://wiki.sdn.sap.com/wiki/pages/viewpage.action?spaceKey=profile&title=ESSPersonalInformationUIenhancementwithoutmodification&decorator=printable
    Please correct me if I am wrong.
    Cheers-
    Pramod
