Best Practice for Complex Transformations

Hello everyone,
I have a complex transformation with more than one lookup to DSOs in the transformation.
I also have to create additional records in the transformation, and the target cube has more characteristics than the source.
Does anyone have a "best practice" solution for this requirement? Which approach gives the best performance?
What do you think about the following concept?
Create a start routine:
  - Create a global ITAB (result table, with the structure of the target cube)
  - Create all other ITABs for the lookups
  - Loop at SOURCE_PACKAGE into WA_SOURCE_PACKAGE
      - Fill all fields of the result table (all lookups ...)
Rules:
  - In the first rule: READ TABLE G_IT_RESULTPACKAGE INTO G_WA_RESULTPACKAGE WITH TABLE KEY record = SOURCE_FIELDS-record.
  - In all other rules, only assign the fields from G_WA_RESULTPACKAGE (see the sketch below)
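To make the concept concrete, here is a minimal ABAP sketch. It assumes a hypothetical lookup DSO with active table /BIC/AZMAT0100 and the fields MATERIAL and MATL_GROUP; the global names follow the convention above, and the generated types (_TY_S_SC_1 etc.) will differ in a real transformation.

* Global part of the start routine: result table keyed by source record
* number, with (a simplified version of) the target cube structure.
TYPES: BEGIN OF gty_result,
         record     TYPE i,            " source record number
         material   TYPE c LENGTH 18,  " simplified types for this sketch
         matl_group TYPE c LENGTH 9,
       END OF gty_result.
DATA: g_it_resultpackage TYPE HASHED TABLE OF gty_result
                           WITH UNIQUE KEY record,
      g_wa_resultpackage TYPE gty_result.

* Start routine: one database access per data package, then buffer a
* result row for every source record.
    DATA: lt_lookup TYPE SORTED TABLE OF /bic/azmat0100
                      WITH NON-UNIQUE KEY material,
          ls_lookup TYPE /bic/azmat0100.
    FIELD-SYMBOLS: <ls_src> TYPE _ty_s_sc_1.

    IF source_package IS NOT INITIAL.
      SELECT * FROM /bic/azmat0100 INTO TABLE lt_lookup
        FOR ALL ENTRIES IN source_package
        WHERE material = source_package-material.
    ENDIF.

    CLEAR g_it_resultpackage.
    LOOP AT source_package ASSIGNING <ls_src>.
      CLEAR g_wa_resultpackage.
      g_wa_resultpackage-record   = <ls_src>-record.
      g_wa_resultpackage-material = <ls_src>-material.
      READ TABLE lt_lookup INTO ls_lookup
           WITH KEY material = <ls_src>-material.
      IF sy-subrc = 0.
        g_wa_resultpackage-matl_group = ls_lookup-matl_group.
      ENDIF.
      INSERT g_wa_resultpackage INTO TABLE g_it_resultpackage.
    ENDLOOP.

* First rule routine: read the buffered row once per record; the other
* rules then only assign fields from G_WA_RESULTPACKAGE.
    READ TABLE g_it_resultpackage INTO g_wa_resultpackage
         WITH TABLE KEY record = source_fields-record.
    RESULT = g_wa_resultpackage-matl_group.

Note that this only avoids repeated database reads within one data package; as Jürgen points out below, the buffer has to be rebuilt for every package.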
I'm not sure whether this is a good approach. What about using an expert routine instead?
Thank you very much!
Regards,
andreas

Hi Andreas,
For this I would create an Analysis Process to join all the data.
One of the biggest problems with your solution is that you have to consider the data packages of a request: you have to reload your ITABs for each of those packages because data cannot be exchanged between packages.
An AP can handle this, as long as the joined data does not exceed roughly 50% of your engine's main memory during the calculation. That is a figure I use myself to limit the AP, because I have not found any guidance from SAP yet.
In an AP you can do outer joins and manipulate the data with ABAP code.
Biggest disadvantages of an AP:
- Runs mainly in memory and not in the database, and is therefore limited by the available memory
- Results have to be stored in transactional ODS/DSOs or InfoObjects. To move results to cubes, you need an additional standard DSO to handle the delta.
Biggest advantages:
- Almost unlimited access to the content of InfoCubes, DSOs and InfoObjects. You can even use queries as a data provider, though this has lower performance.
- Very flexible filtering options.
- Good documentation of every transformation step.
- Very flexible ways to implement transformations.
Transaction: RSANWB
Kind regards,
Jürgen Kirsch

Similar Messages

  • Best Practices Web Services - Complex Data Types

    Can someone provide some best practices documentation or info for customers using CR against web services? Specifically, any information on complex data types such as String[] or Address.
    Thanks
    Ian S

    And that's what I did.
    name.cfc is the component that has the complex data types created.

  • Building complex flash game in Flash Builder 4 - Workflow/Best Practices

    I'm investigating switching to Flash Builder 4 for building a complex game that currently lives purely inside Flash CS4.  CS4 is a pretty terrible source code editor and debugger.  It's also quite unstable.  Many crashes caused by bad behavior in the SWF will take out the entire IDE, so they are almost impossible to debug.  And I've heard other horror stories.  To be clear, for this project I'm not interested in the Flex API, just the IDE.
    Surprisingly, it seems Flash Builder 4 isn't really set up for this type of development.  I was hoping for an "Import FLA" option that would import my Document Class, set it as the main entry point, and figure out where other assets live and construct a new project.  What is the best workflow for developing a project like this?
    What I tried:
    -Create a new ActionScript Project in the same directory where my CS4 project lives
    -Set the primary source file to match the original project's source file and location
    -Set my main FLA as "export to SWC", and added "SWC PATH" to my flash builder 4 project.
    -Compile and run... received many errors due to references to stage instances. I changed these to GetChildByName("stagename").  Instead, should I declare them as members of the main class?  (This would mimic what Flash CS4 does.)
    -My project already streams in several external SWFs.  I set these to "Export SWC" to get compile-time access to classes and variables. This works fine in CS4; the loaded SWFs behave as if they were in the native project.  Is the same recommended with FB4?
    -Should I also be setting the primary FLA as "export to swc"?  If not, how do I reference it from flex, and how does flex know which fla it should construct the main stage with?
    Problems:
    -I'm getting a crash inside a class that is compiled in one of the external SWF's (with SWC).  I cannot see source code for the stack inside this class at all.  I CAN see member variables of the class, so symbol information exists.  And I do see the stack with correct function names.  I even see local variables and function parameters in the watch window! But no source.  Is this a known bug, or "by design"? Is there a workaround?  The class is compiled into the main project, but I still cannot see source.  If FLEX doesn't support source level debugging of SWC's, then it's pretty useless to me.   The project cannot live as a single SWF.  It needs to be streaming and modular for performance and also work flow. I can see source just fine when debugging the exact same SWC/SWF through CS4.
    -What is the expected workflow with artists/designers working on the project?  Currently they just have access to all the latest source, and to test changes they run right through Flash.  Will they be required to license Flash Builder as well so they can test changes?  Or should I be distributing the main "engine" as a SWF, and having it reference other SWF files that artists can work on?  Then they compile their SWF in CS4, and to test the game, they can load the SWF I distribute.
    A whitepaper on this would be awesome, since I think a lot of folks are trying to go this direction.  I spent a long time searching the web and there is quite a bit of confusion on this issue, and various hacks/tricks to make things work.  Most of the information is stale from old releases (AS2!).
    If there is a clean workflow, I would happily adopt Flash Builder 4 as the new development tool for all the programmers.  It's a really impressive IDE with solid performance, functional intellisense, a rich and configurable interface, a responsive debugger... I could go on and on.  One request is shipping with a "visual studio keyboard layout" for us C++ nerds.
    Thanks very much for reading this novel!

    Flash Builder debugging is a go!  Boy, I feel a bit stupid, you nailed the problem Jason - I didn't have "Permit Debugging" set.  I didn't catch it because debugging worked fine in CS4; CS4 doesn't obey this flag, even for externally loaded SWF files (I think as long as it has direct access to the SWC). Ugh.
    I can now run my entire, multi SWF, complex project through FB with minimal changes.  One question I do have:
    In order to instantiate stage instances and call the constructor of the document class, I currently load the SWF file with LoaderContext.  I'm not even exporting an SWC for the main FLA (though I may, to get better intellisense).  Is this the correct way of doing it?  Or should I be using , or some other method to pull it into flex?  They seem to do the same thing.
    The one awful part about this workflow is that since almost all of my code is currently tied to symbols, and lives in the SWF, any change I make to code must first be recompiled in CS4, then I have to switch back to FB.  I'm going to restructure the whole code base over time to remove the dependency of having library symbols derive from my own custom classes.  It's just a terrible workflow for both programmers and artists alike.  CS5 will make this better, but still not great.  Having a clean code base and abstracted-away assets that hold no dependencies on the code seems like the way to go with Flash.  Realistically, in a complex project, artists/designers don't know how to correctly set up symbols to drive from classes anyway; it must be done by a programmer.  This will allow for tighter error checking and less guesswork.  Any thoughts on this?
    Would love to beta test CS5 FYI seeing as it solves some of these issues.
    How are you launching the debug session from Flash Builder? Which SWF are you pointing to?
    Here's what I did:
    1) I imported your project (File > Import > General > Existing project...)
    2) Create a launch configuration (Run > Debug Configuration) as a Web Application pointing to the FlexSwcBug project
    3) In the launch config, under "URL or path to launch" I unchecked "use default" and selected the SWF you built (I assume from Flash Pro C:\Users\labuser\Documents\FLAs\FlexSwcBug\FlexSwcBugCopy\src\AdobeBugExample_Main.swf)
    4) Running that SWF, I get a warning "SWF Not Compiled for Debugging"
    5) No problem here. I opened Flash Professional to re-publish the SWF with "Permit debugging" on
    6) Back In Flash Builder, I re-ran my launch configuration and I hit the breakpoint just fine
    It's possible that you launched the wrong SWF here. It looks like you setup DocumentClass as a runnable application. This creates a DocumentClass.swf in the bin-debug folder and by default, that's what Flash Builder will create a run config for. That's not the SWF you want.
    In AdobeBugExample_Main.swc, I don't see where classCrashExternal is defined. I see that classCrashMainExample is the class and symbol name for the blue pentagon. Flash Builder reads the SWC fine for me. I'm able to get code hinting for both classes in the SWC.
    Jason San Jose
    Quality Engineer, Flash Builder

  • Best Practice for CQ Updates in complex installations (clustering, replication)?

    Hi everybody,
    we are planning a production setup of CQ 5.5 with an authoring cluster replicating to 4 publisher instances. We were wondering what the best update process looks like in a scenario like this. Let's say we need to install the latest CQ 5 update (which we actually have to):
    Do we need to do this on every single instance, or can replication be utilized to distribute updates?
    If updating a cluster - same question: one instance at a time? Just one, and the cluster does the rest?
    The question is really: can update packages (official or custom) be automatically distributed to multiple instances? If yes, is there a "best practice" way to do this?
    Thanks for any help on this!
    Henning

    Hi Henning,
    The CQ 5.5 service packs are distributed as CRX packages. You can replicate these packages, and on the publish instances they are unpacked and installed.
    In a cluster the situation is different: you have only 1 repository. So when you have installed the service pack on one node, the new versions of bundles and other content are unpacked into the repository (most likely to /libs). Then the magic (essentially the JcrInstaller) takes care that the bundles are extracted and started.
    I would not recommend activating the service pack to a production environment, because then all publish instances will be updated at the same time. And as a restart is required, you might encounter downtime. Of course you can make it work if you play with the replication agents :-)
    cheers,
    Jörg

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex since I am using more sources in the logical tables to increase performance. Anyway, what I am often struggling with is the Logical Levels (in the Content tab) where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates - to get them to work/appear with different dimensions. (Using the menu "More" > "Get Levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level. I can see the use of the logical levels when using aggregate fact tables (on quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
    It is not necessary to connect to all dimensions; it depends on the report you are creating. But as a best practice we should maintain all of them at the Detail level when you specify join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.Productname level, then you should use the Detail level; otherwise use the Total level (at the Product or Employee level).
    Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the administration tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Disc space consuming, performance degrading when writing to first level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
    Does anybody know what the best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael.
    I like the 3rd option and have used this many, many times.  In answer to your question:
    Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized - Yes, I have read and tested this and it works faster.  An OSS consulting note is out there indicating the speed of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). - Yes, but by using the result package, the manipulation can be done easily (see the sketch below).
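    As a rough illustration of that point, an end routine along the following lines reads the first-level DSO once per package and fills the new field via RESULT_PACKAGE. This is only a sketch: the active table /BIC/AZDSO0100, the fields DOC_NUMBER / NEW_FIELD / SOME_FIELD and the generated type _TY_S_TG_1 are placeholders, not objects from this thread.

    " End routine sketch: derive a new target field from a lookup on the
    " first-level DSO instead of mapping the helper field through.
        DATA: lt_dso1 TYPE SORTED TABLE OF /bic/azdso0100
                        WITH NON-UNIQUE KEY doc_number,
              ls_dso1 TYPE /bic/azdso0100.
        FIELD-SYMBOLS: <ls_result> TYPE _ty_s_tg_1.

        " one database access per data package
        IF result_package IS NOT INITIAL.
          SELECT * FROM /bic/azdso0100 INTO TABLE lt_dso1
            FOR ALL ENTRIES IN result_package
            WHERE doc_number = result_package-doc_number.
        ENDIF.

        LOOP AT result_package ASSIGNING <ls_result>.
          READ TABLE lt_dso1 INTO ls_dso1
               WITH KEY doc_number = <ls_result>-doc_number.
          IF sy-subrc = 0.
            " fill the new field of the second-level DSO
            <ls_result>-new_field = ls_dso1-some_field.
          ENDIF.
        ENDLOOP.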
    Hope it helps.
    Thanks,
    Pom

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into Fact and Dim sources (2 different units) and it works fine. But I can see that there will be trouble when having more fact tables and, e.g., a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution to this is to have an alias of the fact/transaction table and have 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from the dimension and the fact table.
    This is not built on a datawarehouse - so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions.

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit. You just need to make sure that the MVs are refreshed when the source is updated.
    -Domnic

  • LRCC Face recognition - best practices?

    Ok so we are all new to the wonderful world of face recognition in LR.  I'm trying to work out what would be the best practices for using this.
    A little bit of background - I have a catalog of over 200,000 images.  In addition to portrait and wedding clients, a significant part of my work is with models and another significant part of my work is theatre photography.  I have be wanting some sort of face recognition to help with both for some time.
    What are your naming conventions for people? Here's mine:
    Ideally I would label people as "surname, firstname" so that I can keep members of a family together in the "named people" display, but commas are not allowed in names.  Also the professional name of many models doesn't fit that pattern, e.g. "Strawberry Venom" or "Cute as Sin" are two models I have worked with.
    I am trying to come up with a sensible naming convention; at the moment it is "Surname/ Firstname" for clients, theatre folk and friends/family.  Models are still a problem; at present I am thinking of "Surname/ Firstname (model name(s))".  While I may not be able to remember the real names of models, I do usually know the names from model releases.  This naming will still permit me to filter/find them in the Keyword List panel by just entering the model name.
    One final addition I am making to this naming convention is the use of a hashtag suffix to the name: #F for friends and family, #C for clients, #T for theatre/actors and #M for models.  This enables me to filter on just models, or just actors, or just friends and family.  Where people fall into multiple categories I add multiple hashtags.  So photos of me would be keyworded with "Butterfield/ Ian #F #T"
    Unknown / unidentified people.
    What I am not yet certain about is how to handle unknown / unidentified people.  Unidentified people fall into a number of different categories.
    People I don't know and I am never likely to know (Eg random strangers on the street, local tour guides on holiday, random people in the background etc)
    This group is relatively easy to deal with - simply delete the face recognition. End of story.
    People I don't know the names of yet but I am likely to find out (Eg actors in a production for which I don't have a programme)
    For these people I am making up a unique name using the format "date/ Context-Gendernn", Eg an unknown male actor at Stockport Garrick Theatre would be named "20150313/ SGT-M01".  Although this may appear a complex solution, it has a number of advantages.  If/when I do learn the name of the individual (Eg I photograph them in a different production) it is simply a case of renaming the people keyword.  Creating a unique name and not simply assigning all unknowns to a bucket name will help the face recognition algorithms find this person without being confused by having different faces assigned to the same name. I am also using the hashtag #U to make it easier to filter the unknown faces when I need to.
    People I don't know the names of and there is only a slim possibility of meeting/photographing them again (Eg guests at a client wedding)
    It feels as though I ought to just delete the face recognition and be done with it, and this is what I would do except for one thing. Other than manually drawing face regions, I have not yet found a way to get Lightroom to rescan a folder for faces if you have previously deleted the face recognition.  This means that deleting face regions from a large number of people is something that cannot be easily reversed.  I might just leave these people in the "Unnamed People" category... at least until such time as there is a way to rescan a folder or collection.
    Summary
    My practices are still evolving. But I hope these thoughts and ideas will help others think through the issues and come up with solutions that work for their situation.  I am interested in hearing how other people are using the face recognition system.  Especially if anyone is aware of any 'best practices' that Adobe or anyone else has recommended.

    Glad it helped.
    Yes and no.  You can still put the people keywords into hierarchies within the keyword list - you can arrange them just like any other keywords. So you just create a "smith family" keyword and store "john smith" under it.  What you can't do is apply BOTH smith family and john smith to the same face.
    My use of the hashtags came about because I initially had a top-level keyword for models, one for clients, one for theatre people and one for family and friends.  Then I discovered that some of the theatre folk were also clients (headshots), and what to do when a friend is also a client?  So the hashtag system means a person can be a friend, a model and an actor as well as being a client!  (#T #C #M #F)

  • Best Practices for Passwords

    Is there a "Best Practices" guide on setting passwords for HandHelds?
    When we create accounts for users, we can either do it one at a time via
    ConsoleOne or bulkload, and we usually put in the same temporary
    first-time password. This gets them into their PC, portal, etc.
    How would this work for HandHelds? I'd like to do the same thing. Who
    has done this, what does it take to do, and what do you recommend?
    Thanks!

    Jason Doering wrote:
    > JimmyV wrote:
    >> Thanks Jason.
    >>
    >> I had hoped to get more feedback from people.
    >>
    >> Can I implement Password policies on a Handheld just as I can for a PC?
    >> Complexity, length, etc?
    >>
    >
    > Have a look at screen shots in the documentation:
    > BlackBerry Security Policy:
    >
    http://www.novell.com/documentation/...9.html#ajla0a2
    > Palm Security Policy:
    >
    http://www.novell.com/documentation/...9.html#ah36cbk
    > PocketPC Security Policy:
    >
    http://www.novell.com/documentation/...9.html#ah36cc6
    >
    > As stated before, I have my Palm and PocketPC settings to use the
    > eDirectory password. This way, it inherits whatever requirements I have
    > set for that, and so far my users have preferred this to a separate
    > password for their device.
    >
    > The first time a user syncs a device, it prompts them to enter the
    > eDirectory password. If a password already exists on the device, the
    > user gets prompted to enter the existing device password and then the
    > eDirectory password. In either case, the device password then gets set
    > to the eDirectory password. When the eDirectory password gets changed,
    > the user will need to enter the old device password and the new
    > eDirectory password.
    >
    Having done both the IP-only version and the regular desktop sync, I can tell you that the user administration is probably not similar enough to what you have been used to with desktops. Most handhelds (of any kind) are not capable of multiple profiles (multi-user) on the device. So the catch is that a single user, and a single profile of that user, on a handheld is going to have the device access password (NOT THE PROFILE) set to the eDir password. If another user were to need to log in to the device and the eDir controls are in place, then the device access (power-on) password would change to that user. The profile would not change, nor would the device's relationship with anything it syncs with.
    Sorry - long answer and late. This is not a very active forum.

  • What is the best practice for changing view states?

    I have a component with two Pie Charts that display
    percentages at two specific dates (think start and end values).
    But, I have three views: Start Value only, End Value only, or show
    Both. I am using a ToggleButtonBar to control the display. What is
    the best practice for changing this kind of view state? Right now
    (since this code was inherited), the view states are changed in an
    ActionScript function which sets the visible and includeInLayout
    properties on each Pie Chart based on the selectedIndex of the
    ToggleButtonBar, but, this just doesn't seem like the best way to
    do this - not very dynamic. I'd like to be able to change the state
    based on the name of the selectedItem, in case the order of the
    ToggleButtons changes, and since I am storing the name of the
    selectedItem for future reference.
    Would using States be better? If so, what would be the best
    way to implement this?
    Thanks.

    I would stick with non-states, as I have always heard that
    states are more for smaller components that need to change under
    certain conditions, like a login screen that changes if the user
    needs to register.
    That said, if the UI of what you are dealing with is not
    overly complex, and if it will not become overly complex, maybe
    states is the way to go.
    Looking at your code, I don't think you'll save much in terms
    of lines of code.

  • Best Practice setting up NICs for Hyper V 2008 r2

    I am looking for some suggestions on best practice for setting up a Hyper-V 2008 R2 host at a remote location with 5 NICs: one for the management VLAN and the other 4 on the data VLAN.  This server will host 2 virtual machines; one is a DC and the other is a member server acting as the local DHCP server.  The server is set up now with one NIC on the management VLAN and the other NICs set to get their IPs from the local DHCP server on the host.  We have the virtual networks set up in Hyper-V to point to each of the NICs using the "external connection".  The virtual servers (DHCP and AD) have their own IPs set within them.  Issue we are seeing: when the site loses external connectivity for a while, clients cannot get IP addresses from the local DHCP server anymore.
    1. NIC on management Vlan -- IP Static -- Physical host
    2. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V  -- virtual server DHCP
    3. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- Virtual server domain controller
    4. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    5. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    Thanks in advance

    Looks like you may be over-complicating things here.  More and more of the recommendations from Microsoft at this point would be to create a Logical Switch and then layer on Logical Networks for your management layers, but here is what I would do for your simple remote office.
    Management NIC:  Looks good. (Teaming would be better, but only if you had two different switches to protect against link failures at the switch level.  That doesn't seem relevant in this case, however.)
    NIC for Data Network VLAN:  I would use one NIC in your case if you have the ability to trunk multiple VLANs at the switch level to the NIC.  That way you are setting the VLAN on the VM's NIC that you want to access and your virtual switch configuration is very simple.  On this virtual switch, however, I would uncheck IPv4 and IPv6.  There is no need to give this NIC an address as you are just passing traffic through it from the VMs that are marked with VLAN tags.  Again, if you have multiple physical switches in the building, teaming could be an option, but it probably adds more complexity than is necessary for a small office.
    Even if you keep your Virtual Switches linked to separate NICs unchecking IPv4 and IPv6 makes sense. 
    Disable all the other NICs
    Beyond that, check your routing.  Can you ping between all hosts when there is no interruption? What DHCP server are they getting their addresses from normally?  Where are your name resolution servers (DNS, WINS)?
    No silver bullet here, but maybe a step in the right direction.
    Rob McShinsky (VirtuallyAware.com)
    VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)

  • Best Practice for SRST deployment at a remote site

    What is the best practice for a SRST deployment at a remote site? Should a separate router such as a 3800 series be deployed for telephony in addition to another router to be deployed for Data? Is there a need for 2 different devices?

    Hi Brian,
    This is typically done all on one ISR router at the remote site :) There are two flavors of SRST. Here is the feature comparison:
    SRST Fallback
    This feature enables routers to provide call-handling support for Cisco Unified IP phones if they lose connection to remote primary, secondary, or tertiary Cisco Unified Communications Manager installations or if the WAN connection is down. When Cisco Unified SRST functionality is provided by Cisco Unified CME, provisioning of phones is automatic and most Cisco Unified CME features are available to the phones during periods of fallback, including hunt groups, call park and access to Cisco Unity voice messaging services using SCCP protocol. The benefit is that Cisco Unified Communications Manager users will gain access to more features during fallback without any additional licensing costs.
    Comparison of Cisco Unified SRST and Cisco Unified CME in SRST Fallback Mode
    Cisco Unified CME in SRST Fallback Mode
    • First supported with Cisco Unified CME 4.0: Cisco IOS Software 12.4(9)T
    • IP phones re-home to Cisco Unified CME if Cisco Unified Communications Manager fails. CME in SRST allows IP phones to access some advanced Cisco Unified CME telephony features not supported in traditional SRST
    • Support for up to 240 phones
    • No support for Cisco VG248 48-Port Analog Phone Gateway registration during fallback
    • Lack of support for alias command
    • Support for Cisco Unity® unified messaging at remote sites (Distributed Exchange or Domino)
    • Support for features such as Pickup Groups, Hunt Groups, Basic Automatic Call Distributor (BACD), Call Park, softkey templates, and paging
    • Support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0 on same computer
    • No support for secure voice in SRST mode
    • More complex configuration required
    • Support for digital signal processor (DSP)-based hardware conferencing
    • E-911 support with per-phone emergency response location (ERL) assignment for IP phones (Cisco Unified CME 4.1 only)
    Cisco Unified SRST
    • Supported since Cisco Unified SRST 2.0 with Cisco IOS Software 12.2(8)T5
    • IP phones re-home to SRST router if Cisco Unified Communications Manager fails. SRST allows IP phones to have basic telephony features
    • Support for up to 720 phones
    • Support for Cisco VG248 registration during fallback
    • Support for alias command
    • Lack of support for features such as Pickup Groups, Hunt Groups, Call Park, and BACD
    • No support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0
    • Support for secure voice during SRST fallback
    • Simple, one-time configuration for SRST fallback service
    • No per-phone emergency response location (ERL) assignment for SCCP Phones (E911 is a new feature supported in SRST 4.1)
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/prod_qas0900aecd8028d113.html
    These SRST hardware based restrictions are very similar to the number of supported phones with CME. Here is the actual breakdown;
    Cisco 880 SRST Series Integrated Services Router: up to 4 phones
    Cisco 1861 Integrated Services Router: up to 8 phones
    Cisco 2801 Integrated Services Router: up to 25 phones
    Cisco 2811 Integrated Services Router: up to 35 phones
    Cisco 2821 Integrated Services Router: up to 50 phones
    Cisco 2851 Integrated Services Router: up to 100 phones
    Cisco 3825 Integrated Services Router: up to 350 phones
    Cisco Catalyst® 6500 Series Communications Media Module (CMM): up to 480 phones
    Cisco 3845 Integrated Services Router: up to 730 phones
    *The number of phones supported by SRST has been changed to multiples of 5 starting with Cisco IOS Software Release 12.4(15)T3.
    From this excellent doc;
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/data_sheet_c78-485221.html
    Hope this helps!
    Rob

  • Best practice on monitoring Endeca health / defining outage

    (This is a double post from the Endeca Experience Management forum)
    I am looking for best practices on how to define an Endeca service outage and monitor the health of the system. I understand this depends on your user requirements and it may vary from customer to customer. Specifically, what criteria do you use to notify your engineer that there is a problem? We have our load balancers pinging dgraphs on an interval. However, the ping operation is not sufficient in our use case. We are also experimenting with running a "low cost" query against the dgraphs on an interval and using some query latency thresholds to determine an outage. I want to hear from people in the field running large commercial web sites about your best practices for monitoring and notifying on the health of the system.
    Thanks.

    The performance metrics should help you analyse the queries and fine-tune the system.
    Here are a few best practices:
    1. Reduce the number of components per page
    2. Avoid complex LQL queries
    3. Keep the LQL threshold small
    4. Display the minimum number of columns needed

  • SAP PI conceptual best practice for synchronous scenarios

    Hi,
    Apologies for the length of this post but I'm sure this is an area most of you have thought about in your journey with SAP PI.
    We have recently upgraded our SAP PI system from 7.0 to 7.1 and I'd like to document best practice guidelines for our internal development team to follow.
    I'd be grateful for any feedback related to my thoughts below which may help to consolidate my knowledge to date.
    Prior to the upgrade we implemented a number of synchronous and asynchronous scenarios using SAP PI as the hub at runtime, using the Integration Directory configuration.
    No interfaces to date are exposed directly from our backend systems using transaction SOAMANAGER.
    Our asynchronous scenarios operate through the SAP PI hub at runtime, which builds in resilience and harnesses the benefits of the queue-based approach.
    My queries relate to the implementation of synchronous scenarios where there is no mapping or routing requirement.  Perhaps it's best that I outline my experience/thoughts on the 3 options and summarise my queries/concerns that people may be able to advise upon afterwards.
    1) Use SAP PI Integration Directory.  I appreciate going through SAP PI at runtime is not necessary and adds latency to the process, but the monitoring capability in transaction SXMB_MONI provides full access for audit purposes and we have implemented alerting running hourly so all process errors are raised and we handle them accordingly.  In our SAP PI Production system we have a full record of sync messages, while these don't show in the backend system as we don't have propagation turned on.  When we first looked at this, the reduction in speed seemed to be outweighed by the quality of the monitoring/alerting, given none of the processes are particularly intensive or require instant responses.  We have some inbound interfaces called by two sender systems, so we have the overhead of maintaining the Integration Repository/Directory design/configuration twice for these systems, but the nice thing is SXMB_MONI shows which system sent the message.  Extra work, but seemingly for improved visibility of the process.  I'm not suggesting this is the correct long-term approach, but it states where we are currently.
    2) Use the Advanced Adapter Engine.  I've heard mixed reviews about this functionality; there are obvious improvements in speed by avoiding the ABAP stack on the SAP PI server at runtime, but some people have complained about the lack of SXMB_MONI support.  I don't know if this is still the case as we're at SAP PI 7.1 EHP1, but I plan to test and evaluate once Basis have set up the pre-requisite RFC etc.
    3) Use the backend system's SOAP runtime and SOAMANAGER.  Using this option I can still model inbound interfaces in SAP PI but expose these using transaction SOAMANAGER in the backend ABAP system.  [I would have tested out the direct P2P connection option but our backend systems are still at Netweaver 7.0 and this option is not supported until 7.1, so that's out for now.]  The clear benefit of exposing the service directly from the backend system is obviously performance, which in some of our planned processes would be desirable.  My understanding is that the logging/tracing options in SOAMANAGER have to be switched on while you investigate, so there is no automatic recording of interface detail for retrospective review.
    Queries:
    I have the feeling that there is no clear-cut answer to which of the options you select from above, but the decision should be based upon the requirements.
    I'm curious to understand SAP's intention with these options:
    - For synchronous scenarios, is it assumed that the client should always handle errors, so the lack of monitoring should be less of a concern and option 3 is desirable when no mapping/routing is required?
    - Not only does option 3 offer the best performance, but the generated WSDL is ready once built for any further system to implement, thereby offering the maximum benefit of SOA. Should we therefore always use option 3 whenever possible?
    - Is it intended that the AAE runtime should be used when available, but only for asynchronous scenarios or those requiring SAP PI functionality like mapping/routing, and otherwise customers should use option 3?  I accept there are some areas of functionality not yet supported with the AAE, so that would be another factor.
    Thanks for any advice, it is much appreciated.
    Alan

    Hi Aaron,
    I was hoping for a better more concrete answer to my questions.
    I've had discussion with a number of experienced SAP developers and read many articles.
    There is no definitive paper that sets out the best approach here but I have gleaned the following key points:
    - Make interfaces asynchronous whenever possible to reduce system dependencies and improve the user experience (e.g. by eliminating wait times when they are not essential, such as by sending them an email with confirmation details rather than waiting for the server to respond)
    - It is the responsibility of the client to handle errors in synchronous scenarios, hence the monitoring lost through P-P services, compared to the detailed information in transaction SXMB_MONI for PI services, is not such a big issue.  You can always turn on monitoring in SOAMANAGER to trace errors if need be.
    - Choice of integration technique varies considerably by release level (for PI and Netweaver), so the system landscape will be a significant factor.  For example, we have some systems on Netweaver 7.0 and others on 7.1.  As you need 7.1 for direct connection PI services, we'd rather wait until all systems are at the higher level than have mixed usage in our landscape - it is already complex enough.
    - We've not tried the AAE option in a production scenario yet, but this is only really important for high-volume interfaces, something that is not a concern at the moment.  Obviously cumulative performance may be an issue in time, so we plan to start looking at AAE soon.
    Hope these comments may be useful.
    Alan

  • Best Practice for creating Excel report from SSIS.

    I have a requirement to create an Excel report on a daily basis which pulls data from SQL. I have attempted to resolve this by creating a stored procedure to save the results in SQL, a template in Excel to hold the graphs & pivot tables and an SSIS package
    to copy the data to the template.
    Problem 1: When the data turns up in Excel it is saved as text rather than numbers.
    Problem 2: When the data turns up in Excel it appends the data rather than overwriting it.
    I resolved problem 1 by having another sheet which converts the text to numbers (=int(sheet1!A1))
    I resolved problem 2 by adding some VB script to my SSIS package which clears the existing cells before copying the data
    The job runs fine, however when I schedule the job to run overnight it complains "System.UnauthorizedAccessException: Retrieving the COM class factory for component with CLSID". A little googling tells me that running the client side commands in
    my vb script (workSheet1.Range("A2:F9999").Clear(), workBook.Save(), workBook.Close() etc) from a server side task is bad practice.
    So, I am left wondering how people usually get around this problem; copy a SQL table into an existing Excel file and overwrite the data, without having the numbers turn up as text. My requirements are that the report must display pivot charts with selectable
    options and be automatically updated overnight.
    Help appreciated,
    Bish.
    Office 2013 on my PC, Office 2010 on the server, Windows Server 2008R2 Enterprise, SQL Server 2008R2.

    I think that the best practice in a case like this is to link the Excel file to a view or directly to a table, so you don't have to struggle with changing the template, with overnight packages, etc. If the data are too complex and the requirements too demanding, then I tend to create a cube and that's it... dashboard, graphs and everyone is happy. In your case, if the request is not too complex, try not to use SSIS but build a view and point Excel directly at SQL.
    SSIS is really strong for ETL, for running heavy stored procedures, for scheduled cut-off times, etcetera, etcetera, etcetera... I love it. But sometimes we need to find the easier solution...
    I hope this post helped you
