Some clarifications on IP

Hi IP Folks,
Actually I am pretty OK with how BPS works, but I want to clarify a few things about IP.
1) Is the Planning Sequence tab used by the modeller (not the end user) to test developments on a particular aggregation level, much like executing a planning package via "Enter plan data" in BPS0 for intermediate testing?
2) I understand that data can be written to the InfoCube only for the data set covered by the filter defined in the IP Planning Modeler.
So when we build an input-ready query, do these filters carry over into the query results?
If so, and we have filtered on the plan version, how do we compare against the actual version and then enter data into the plan version in the input-ready query?
3) If we create a planning function to distribute reference data, can the reference data lie outside the filter range?
Thanks,
Mihir

Hi,
1) Is the Planning Sequence tab used by the modeller (not the end user) to test developments on a particular aggregation level, much like executing a planning package via "Enter plan data" in BPS0 for intermediate testing?
Yes, a planning sequence is used to execute and test the planning functions created on an aggregation level.
2) I understand that data can be written to the InfoCube only for the data set covered by the filter defined in the IP Planning Modeler.
So when we build an input-ready query, do these filters carry over into the query results?
If so, and we have filtered on the plan version, how do we compare against the actual version and then enter data into the plan version in the input-ready query?
Yes.
3) If we create a planning function to distribute reference data, can the reference data lie outside the filter range?
Yes, it can be kept outside the filter range.
For structure elements that you do not want to be input-ready, you can also specify whether these components are treated as reference data or protected against manual entries.
If they are reference data, the structure elements are not locked for exclusive access by one user, because this data is used as a reference by many users. This is the standard default setting.
Hope this helps you.
With Regards,
PCR

Similar Messages

  • Need some clarification on Replacement Path with Variable

    Hello Experts,
    Need some clarification on Replacement Path with Variable.
    We have 2 options with replacement path for characteristic variables i.e.
    1) Replace with query
    2) Replace with variable.
    Now, when we use  "Replace with variable" we give the variable name. Then we get a list for "Replace with" as follows:
    1) Key
    2) External Characteristic Value Key
    3) Label
    4) Attribute value.
    I need detailed explanation for the above mentioned 4 options with scenarios.
    Thanks in advance.
    Regards
    Lavanya

    Hi Lavanya,
    Please go through the below link.
    http://help.sap.com/saphelp_nw70/helpdata/EN/a4/1be541f321c717e10000000a155106/frameset.htm
    Hope this gives you a complete and detailed explanation.
    Regards,
    Reddy

  • Reviewed documentation on Reporting: I will appreciate some clarification

    Hi,
    Reviewed documentation on Reporting: I will appreciate some clarification:
    1. Can you help me understand what "Staging" means as used here:
    A data warehouse system serves primarily to stage information from various data sources.
    The information is staged in various forms including personalized reports, freely definable queries, and predefined reports.
    2. Is RemoteCube the same as Virtual Cubes? if not, what is the difference?
    3. Can you help me understand what "flexible update" means as used here:
    Characteristic InfoObjects have to be included in the InfoProvider tree in the Data Warehousing Workbench to make them available as data targets for flexible updates and as InfoProviders for reporting.
    4. "DataStore Objects are available for transaction data and for master data."
    When do we decide to store master data in a DSO?
    5. Formula Collision vs. "Exception Aggregation"
    I understand what formula collision is about but is it the same as "Exception Aggregation"?
    If not, do you have a real-life example to help me understand the concept of "Exception Aggregation"?
    The documentation/link is not helping.
    6. i. With the following limitation on Display attributes, why not always make the attribute navigational?
    "You can show Display Attributes in a report in the drilldown. However, navigation steps are not possible. (For example, you cannot choose values from a display attribute as a filter.)"
    ii. Any example where Nav Attrib does not make sense but it must be Display attrib?
    Thanks

    Hi Amanda,
    2. Is RemoteCube the same as Virtual Cubes? if not, what is the difference?
         Yes, they are one and the same.
    3.Can you help me understand what "flexible update" means as used here:
    Characteristic InfoObjects have to be included in the InfoProvider tree in the Data Warehousing Workbench to make them available as data targets for flexible updates and as InfoProviders for reporting.
    In 3.x you have InfoSources rather than transformations. There are basically two types of InfoSources: direct update and flexible update.
    Direct-update InfoSources do not have any update rules. They are used for master data loading, since the format of the data is usually fixed.
    Flexible-update InfoSources have update rules, which means you can use them for loading pretty much anything: you can load master data by changing the update rules, and you can use the same InfoSource for loading transactional data as well. What the statement means is that the InfoObject has been set as an InfoProvider, and therefore it can be used as a source in a query. The loading is done via flexible updates.
    More info on direct and flexible updates can be found here :[3.x InfoSource Types|http://help.sap.com/erp2005_ehp_04/helpdata/EN/87/3fdf9587f211d5b2ff0050da4c74dc/frameset.htm]
    5)Formula Collision vs. "Exception Aggregation"
    I understand what formula collision is about but is it the same as "Exception Aggregation"?
    If not, do you have a real-life example to help me understand the concept of "Exception Aggregation"?
    The documentation/link is not helping.
    I'm glad that you understand what formula collision is, since it's a difficult concept. Exception aggregation is much simpler.
    For every key figure in BW, two types of aggregation can be maintained:
    Standard: e.g. summation.
    Exception aggregation: e.g. average. For this you specify a reference characteristic; by default this is time, and the standard aggregation is usually summation. When you drill down in your query by the reference characteristic, the exception aggregation is applied.
    Eg [http://help.sap.com/saphelp_nw70/helpdata/en/d2/e0173f5ff48443e10000000a114084/frameset.htm]
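    The idea can be illustrated outside of BW with plain Java (the data and names here are invented for illustration): take a headcount key figure recorded by product and month. Standard aggregation simply sums every row, which is misleading for a stock figure like headcount; an AVG exception aggregation with time (month) as the reference characteristic first sums within each month and then averages across the months:

    ```java
    import java.util.*;
    import java.util.stream.*;

    public class ExceptionAggregationDemo {
        record Fact(String product, String month, double headcount) {}

        static final List<Fact> FACTS = List.of(
            new Fact("A", "Jan", 100), new Fact("A", "Feb", 120),
            new Fact("B", "Jan", 50),  new Fact("B", "Feb", 70));

        // Standard aggregation: SUM over every row.
        static double standardSum() {
            return FACTS.stream().mapToDouble(Fact::headcount).sum();
        }

        // Exception aggregation AVG with reference characteristic = month:
        // SUM within each month first, then AVG across the months.
        static double avgOverMonths() {
            Map<String, Double> perMonth = FACTS.stream().collect(
                Collectors.groupingBy(Fact::month,
                    Collectors.summingDouble(Fact::headcount)));
            return perMonth.values().stream()
                .mapToDouble(Double::doubleValue).average().orElse(0);
        }

        public static void main(String[] args) {
            System.out.println(standardSum());   // 340.0
            System.out.println(avgOverMonths()); // 170.0 (avg headcount per month)
        }
    }
    ```

    The drilldown by the reference characteristic is what triggers the second (exception) step in a real query.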
    6. With the following limitation on Display attributes, why not always make the attribute Navigational?
    "You can show Display Attributes in a report in the drilldown. However, navigation steps are not possible. (For example, you cannot choose values from a display attribute as a filter.)"
    ii. Any example where Nav Attrib does not make sense but it must be Display attrib?
    Essentially this is a personal choice. You can have several attributes, but keeping all of them navigational does not help and degrades system performance, since they have to be interlinked. For example:
    You have an employee ID as your characteristic.
    attributes are as follows
    Date of birth
    division
    salary stack
    *** etc
    Now, things like division, salary stack, and *** could be used as filters in a query, but date of birth would in general not be used as a filter, so it would make no sense to make it a navigational attribute (unless you want to find all employees whose birthday falls on a certain date).
    In the end, it just comes down to the requirement: if you need to filter on an attribute of master data, it has to be a navigational attribute; otherwise a display attribute will suffice.
    Hope this helps.
    Regards.

  • Need some clarifications on Quality-of-services

    Hi Everybody.
    I need some clarification on quality of service. Which one is better: Exactly Once or Exactly Once In Order? Why? And what is the difference between them?

    Hi Narayana
    Refer to the URLs below.
    Make the QoS of the file EO or EOIO and then use this blog:
    /people/arpit.seth/blog/2005/06/27/rfc-scenario-using-bpm--starter-kit
    It depends upon the adapter. Can you please be more specific?
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/41/b714fe5ffc11d5b3ea0050da403d6a/content.htm
    Check these blogs written on QoS:
    XI Asynchronous Message Processing: Understanding XI Queues
    How to deal with stuck EOIO messages in the XI 3.0 Adapter Framework
    EO = Exactly Once (asynchronous communication)
    EOIO = Exactly Once In Order (asynchronous communication, with sequential processing guaranteed)
    BE = Best Effort (synchronous communication)
    For the QoS, you can refer to the following library links:
    http://help.sap.com/saphelp_nw04/helpdata/en/41/b714fe5ffc11d5b3ea0050da403d6a/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/ae/d03341771b4c0de10000000a1550b0/frameset.htm
    For receiver channels QoS BE (Best Effort) will result in a
    synchronous call (sRFC) , QoS EO (Exactly Once) will create a
    transactional call (tRFC) to the BC. For sender channels a synchronous
    call (sRFC) will result in a message with QoS BE, a transactional call
    (tRFC) will result in a message with QoS EO.
    QoS EOIO is not supported by the BC-Adapter.
    The SAP XI term "quality of service" describes how the transmission and processing of messages is to be handled. Possible values are:
    BE = BestEffort (synchronous call, no transactional guarantees for transmission and processing)
    EO = ExactlyOnce (asynchronous call, guarantee for local transactional handling, exactly-once transmission and exactly-once processing)
    EOIO = ExactlyOnceInOrder (as for EO but with serialization guarantee on a given queue name).
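    To make the difference between the guarantees concrete, here is a toy model in plain Java (purely illustrative; it has nothing to do with the actual XI adapter framework): EO deduplicates by message id with no ordering guarantee, while EOIO additionally serializes messages on a named queue:

    ```java
    import java.util.*;

    // Toy model of the EO / EOIO delivery guarantees (illustrative only).
    public class QosSketch {
        private final Set<String> delivered = new HashSet<>();
        private final Map<String, List<String>> queues = new HashMap<>();

        // EO: each message id is processed at most once; order not guaranteed.
        public boolean deliverEO(String messageId) {
            return delivered.add(messageId); // false -> duplicate, dropped
        }

        // EOIO: exactly once AND serialized on the named queue, so the
        // per-queue history reflects arrival order with duplicates removed.
        public List<String> deliverEOIO(String queue, String messageId) {
            if (delivered.add(messageId)) {
                queues.computeIfAbsent(queue, q -> new ArrayList<>()).add(messageId);
            }
            return List.copyOf(queues.getOrDefault(queue, List.of()));
        }
    }
    ```

    A retry of an already-seen message id is silently dropped in both modes; only EOIO remembers the per-queue sequence.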

  • Show ip cef switching statistic output - some clarifications required

    Good morning, everyone!
    I need some clarification regarding the output of show ip cef switching statistics on the Catalyst 6k; some entries apply to other platforms as well. Consider this output:
    #show ip cef switching statistics
           Reason                          Drop       Punt  Punt2Host
    RP LES No route                      179269          0      12866
    RP LES Packet destined for us             0  153061543          2
    RP LES No adjacency                12460594          0          0
    RP LES Incomplete adjacency           10400          0          0
    RP LES Unresolved route                1476          0          0
    RP LES Bad checksum                      17          0          0
    RP LES TTL expired                        0          0   24725338
    RP LES IP options set                     0          0          3
    RP LES Fragmentation failed, DF  1757274333          0     481160
    RP LES Features                         975          0       5278
    RP LES IP redirects                       0          0          7
    RP LES Unknown input if                 192          0          0
    RP LES Neighbor resolution req     32540982       7290          0
    RP LES Total                     1802468238  153068833   25224654
    All    Total                     1802468238  153068833   25224654
    Questions are:
    1) What is the difference between these rows:
    RP LES No adjacency
    RP LES Incomplete adjacency
    RP LES Neighbor resolution req
    2) What is the difference between these rows:
    RP LES No route 
    RP LES Unresolved route   
    3) What does the column name Punt2Host mean?
    I will highly appreciate your answers! Thanks in advance.

    So far I have partially found an answer to the first question.
    RP LES Neighbor resolution req counts packets that have no adjacency and are punted so that an ARP request can be sent.
    RP LES No adjacency: after the ARP request is sent, a throttling adjacency is installed for that destination for 2 seconds and all subsequent packets are dropped; those packets are counted here.
    RP LES Incomplete adjacency counts packets that match incomplete adjacencies, i.e. entries that stay in the adjacency table marked incomplete. As I understand it, this can happen after an ARP entry ages out. Those adjacencies should be deleted as well, but in my environment some of them stay around for a while, so I am a bit confused here.

  • Just needed some clarification regarding the Viewer Builder and actually publishing your App...

    If someone could let me know if my understanding is correct, that'd be a huge help. I've designed my publication in InDesign and exported the .zip file from the Folio Producer. I've created all of my certificates/splash screens/icons, and I just recently went through the steps of the Viewer Builder. I'm now at the stage of this process that requires me to purchase the $395 single edition so that I can enter the serial number in the last stage of the Viewer Builder.
    To my knowledge, once I get the serial number, Viewer Builder will then give me access to an .ipa file and a .zip file. The .ipa file is for me to test on my iPad, and the .zip would be used to distribute to the App Store.
    This is where I get confused. Let's say that after I test the .ipa on my iPad, I don't like some part of my publication. I know how to update my own documents, and I understand that I would have to export another .zip file from the Folio Producer, in turn requiring me to edit the exported folio link in the Viewer Builder. If I had to do that, would I need to purchase another single edition serial number since the original app was edited? Or would the same serial number apply, since I'm editing that same app in the Viewer Builder?
    My next question is somewhat similar. Let's say all of the information is up to date and I go ahead and publish the app to the App Store. However, maybe a month later or some time in the future, I need to update a phone number or email address, some little detail like that. Again, I understand that I'd have to update the export link in the Viewer Builder, but would I then need to create a new app, since my app was already published? Would I have to purchase another $395 single edition serial number just so that I can update my information?
    This seems to be the only thing in this whole process that I could use some clarification on, so that I don't run into any surprises in the future. Any help would be great, thanks!

    Hi Joshua,
    When you have purchased the serial, you can rebuild your app with your updated content, as long as you use the same bundleID (applicationID), that is tied to your Apple mobile provisioning profile. The serial number is valid for a one year period.
    After you have submitted your app to Apple and it has been approved, please read: http://forums.adobe.com/message/4172167#4172167
    With kind regards,
    Klaasjan Tukker
    Adobe Systems

  • Some clarifications regarding Aironet settings

    Hi,
    i need some clarifications regarding configuring Aironet stand-alone AP (in this case AIR-LAP1131AG).
    Under Security->SSID Manager:
    what is the purpose of Network ID?
    Under Guest Mode/Infrastructure SSID Settings - what is the purpose of Set Infrastructure SSID?
    and Force Infrastructure Devices to associate only to this SSID?
    Cheers,

    Assign a Service Set Identifier (SSID) to each VLAN configured on the AP. SSIDs enable endpoints to select the wireless VLAN they will use for sending and receiving traffic. These wireless VLANs and SSIDs map to wired VLANs. For voice endpoints, this mapping ensures priority queuing treatment and access to the voice VLAN on the wired network
    For further information click this link,
    http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/srnd/4x/42nstrct.html#wp1098806

  • Migrating from SRM 7.0 Classic to ECS - Seeking some clarifications

    Hello Friends,
    We are now planning to upgrade from SRM 5.0 to SRM 7.0 and then migrate from Classic to ECS. We have the challenging points below on which I need clarification:
    1) How do we handle taxation and pricing if we migrate from Classic to ECS?
    2) Presently we have R/3 contracts in place, linked to the MDM catalogue, and now the client wants the contracts to be in SRM. What should our strategy be in that case?
    3) Asset procurement: currently all WBS elements are converted into assets by running a Z-program between PR and PO in the R/3 system. When we migrate to ECS the PO gets created in SRM, so how do we handle asset procurement in the ECS scenario?
    4) Confirmations and invoices are transferred to R/3 through the IDoc distribution model. Going forward in SRM 7.0, will the same continue, or do we go with SOA? Please confirm.
    Further to this, can you please let me know the major challenges in upgrading the system to SRM 7.0 and migrating from Classic to ECS?
    I would appreciate your quick response.
    Thanks and Regards,
    Ram.

    Ramesh,
    A few inputs for this project:
    1. You need to clearly separate the upgrade (5.0 to 7.0) and the migration (Classic to ECS) in your project planning to de-risk the project, i.e. you should have separate testing cycles/efforts for the upgrade and then for the migration, so that the functional/technical teams working on the project can clearly identify issues from the upgrade and from the migration separately and work on them.
    2. You have to work with the business to make a decision on conversion of old classic POs (incomplete GR/IR still required) and POs in process (SCs created and under approval in SRM). I strongly suggest you stay out of conversion and keep it out of this project.
    3. SRM does have options to take care of your tax configuration. BTW, most companies use external software to calculate the tax amount.
    4. You need to build a workflow for the PO in SRM, since the PO will now be created in SRM.
    5. You need to create a smartform / output determination in SRM for the PO. Please note SAP has provided some consulting notes if you continue to handle document transmission in the backend with ECS.
    6. Yes, a distribution model needs to be set up for confirmations and invoices if you are not doing it now in the classic scenario.
    The asset procurement solution you mentioned looks specific to your organization's business needs; you need to explore how to fit it into the new environment.
    All the best
    Thanks, Sachin

  • Wanted to get some clarification on JavaCompiler and Reflection

    Hello,
    I am working on building a modular, dynamic framework for web apps. The idea is highly reflection-driven controllers and DAOs (no real news here, everyone does this; I was just giving some background).
    One of the pieces I wanted to build was a dynamic search criteria object to pass to the DAO, used to filter collections coming back from the persistence layer. I wanted these to be runtime compiled so they would change with any changes to their corresponding data transfer object(I have written many of these, they are repetitive, and that means that they are a good candidate for a program to write for me). I settled on this because I needed to be able to fill the objects with data, and it seemed like the best way to me(I think an enum could also have worked, but I have not used these much and runtime compilation just sounds so neat).
    So I have the service built to dole out the dynamic search criteria objects, and I have them compiling with all dependencies satisfied, and that all looks good.
    Then I hit the hitch: a runtime class MyDynamicSearchCriteriaObject is used in the following manner, MyDynamicSearchCriteriaObject.class.getMethods(), causing a serious failure (one that is not caught by Exception), and the program just fails.
    From what I have seen, this is because reflection information for the classpath is stored in memory at JVM startup, and a runtime-loaded class cannot be reflected on with the standard API (it looks like javassist is built just for this, but I have not fiddled with it much yet). I just wanted to get any thoughts anyone has on this: is my analysis way off base, is this how it works, is there some way to trigger a runtime refresh of the reflection structure (I doubt this would ever work due to the different classloaders)?
    My other question is this: if you cannot reflect on runtime-classloaded classes, then they will only ever really be of value if they implement a compile-time-known interface. This would allow for dynamic implementations of a given interface via runtime compilation, but not use of the runtime-loaded class by its own type definition (via reflection).
    I am a little new to all of this, this is my first stab at building such a framework, so any thoughts, help, or clarification are greatly appreciated!
    Thanks,
    Scott
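    One inexpensive way to experiment with "a runtime-generated class behind a compile-time interface", without involving JavaCompiler at all, is java.lang.reflect.Proxy (the Greeter interface and all names below are invented for illustration). Note that in this sketch the standard reflection API does work on the class generated at runtime:

    ```java
    import java.lang.reflect.*;

    public class RuntimeClassSketch {
        // Compile-time-known interface; the implementing class is
        // generated at runtime by Proxy.
        public interface Greeter { String greet(String name); }

        public static Greeter makeGreeter() {
            return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxy, method, args) -> "Hello, " + args[0]);
        }

        public static void main(String[] args) {
            Greeter g = makeGreeter();
            // The proxy's class did not exist at compile time, yet
            // getMethods() reflects on it without trouble.
            System.out.println(g.getClass().getMethods().length > 0);
            System.out.println(g.greet("Scott")); // Hello, Scott
        }
    }
    ```

    Whether the same holds for the classes produced in the original setup is worth testing directly, since the failure described above may stem from something else entirely (for instance a dependency that is only resolved when reflection touches the class).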

    Scott_Taylor wrote:
    So I really have one core question here, do you really want to understand, or do you just want to dazzle me with your credentials (which I am not impressed with at all, I find people who flaunt their credentials generally do so because it is all that they have)?
    You were obviously attempting to 'educate' me as to the benefits of code generation. That has nothing to do with your problem nor does it have anything to do with me, since obviously I am quite familar with why one would actually use code generation.
    I will answer though, one last time.
    And I still have no idea what you are talking about.
    I must be crazy or something, since there is probably nothing in the programming world that you could not understand, right? Generally, when you don't understand, you probe with questions so you can understand. Unless you are looking to be dismissive, in which case, why even answer?
    First there are many things that I don't understand. And which I freely admit. For example GUIs. And embedded programming. And business domains such as natural gas exploration.
    However we are not discussing one of those topics. We are discussing code generation and, apparently, dynamic SQL/data expression creation. And those are topics I know a great deal about.
    Any compilation of that is only going to provide a minimal gain when compared to the database hit itself.
    I really don't understand this, and it leads me to believe we are not understanding each other. I am not compiling SQL; consider this:
    Collection<C> list = myNeatDAO.search(Object someDTOSearchCriteria);
    This searchCriteria object is what I am trying to dynamically compile; again, not for performance in any way, but rather to avoid writing code that is 100% based on another class (a class which may change over time). The formation of the query (whether SQL or HQL or whatever) is handled via reflection on the search criteria and a parameterized class in the DAO subclass's definition:
    class MyNeatDAO<SomeDTO> extends AbstractDAOImplementation {}
    So we compare the classes and form some kind of query in the DAO. This is pretty standard stuff. I think where the wires are crossed is that you are 1) not reading attentively (if you had, you would know that I marked this complete many posts ago) and 2) not understanding the goal of the code.
    You got part of it correct in that someone isn't reading carefully and that someone doesn't understand.
    Myself I understand your problem several posts ago.
    What you failed to understand is that you are attempting to create a system that I have already built. Several times. It never required runtime compilation.
    Perhaps I wasn't clear before. Hopefully this will make it clearer.
    1. Given a system of DTOs that exist as compiled units (they could even be created at runtime.)
    2. A solution can be built which allows for full expression creation for queries based on those DTOs.
    3. That solution does NOT require runtime compilation.
    4. That solution does NOT require changes for future DTOs changes (which would seem obvious given that it can support DTOs compiled at runtime.)
    Code generation is dependent on code patterns. That is true whether
    one shot or ongoing. Nothing else.
    Runtime functionality on the other hand is driven by user requirements.
    Code generation is a tool to achieve that and nothing else.
    I have two thoughts here.
    First, if you have a clear, unwavering requirement(or pattern) based purely on some other single class, how does it make sense not to automate that? Why would I want to write what is basically a near copy of the original dto each time? Not only does this increase development time (basically paying a developer to do a job a monkey could do), but it also increases modification time later when the true requirements are either discovered or the original communicated requirements change. If this single class is runtime compiled based on the dto, if you change the dto, the change to the corresponding search criteria class is automatic and perfect. There is no debug time, it just works since you are not writing it and introducing errors. The drawback is that you lose control over that class, which in this case does not create a major problem.
    Again you failed to read "attentively". Why do you think that I myself would have been using code generation since before java existed?
    Did you not understand that I have in fact been doing it? Or did you not understand that "before java" means that I have been doing it for a long time?
    I didn't bother mentioning it before but I also worked on a system which used runtime compilation extensively. Matter of fact one of the expression engines that I created was used in that system. However there was no need for that engine to be compiled at runtime.
    My second thought is that code generation is a tool to meet customer requirements, just like anything else in the software world. There are internal and external customers, and their needs must be balanced. The tool is in the box, and I am going to use it as I see fit. If you think this is wrong or off base, please express this in a useful way, I have laid my cards on the table pretty clearly (I think), if you see something glaring, explain what is wrong, don't just spew dismissive one-liners. That only makes me believe that you can't explain what is wrong(I have seen this a lot with programmers, they don't answer in a clear manner because they can't), and that you just want to be condescending, which I find a little sad. Consequently, this leads me to believe that you probably can't back up anything you are saying, and I may miss a very valid point since you never expressed it in any clear manner.
    You have written customer requirements that state explicitly that the system must have a SQL/data query system in place that is compiled at runtime?
    Unusual requirements. Most requirements would be along the lines of "call center employee must be able to enter customer last name or phone number or both"
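    For what it's worth, the "no runtime compilation" approach being argued for can be sketched with plain reflection over a DTO's getters (the DTO and all names below are invented for illustration). The filter automatically tracks changes to the DTO class, because nothing about it is hard-coded:

    ```java
    import java.lang.reflect.Method;
    import java.util.*;

    public class CriteriaSketch {
        // Illustrative DTO; any bean-style class works the same way.
        public static class CustomerDTO {
            private String lastName = "Smith";
            private String phone;            // null -> not part of the filter
            public String getLastName() { return lastName; }
            public String getPhone()    { return phone; }
        }

        // Build "field = ?" fragments for every non-null getter value via
        // reflection; no generated or runtime-compiled criteria class needed.
        public static String whereClause(Object dto) {
            List<String> parts = new ArrayList<>();
            try {
                for (Method m : dto.getClass().getMethods()) {
                    if (m.getName().startsWith("get")
                            && m.getParameterCount() == 0
                            && !m.getName().equals("getClass")
                            && m.invoke(dto) != null) {
                        parts.add(m.getName().substring(3).toLowerCase() + " = ?");
                    }
                }
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException(e);
            }
            return parts.isEmpty() ? "" : "WHERE " + String.join(" AND ", parts);
        }

        public static void main(String[] args) {
            System.out.println(whereClause(new CustomerDTO()));
            // -> WHERE lastname = ?   (null phone is skipped)
        }
    }
    ```

    A real implementation would also map getter names to column names and bind the actual values; this only sketches the shape of the idea.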

  • Need some clarification in the following programs

    Hi ABAPers
    I am learning ABAP programming. I have some doubts about a few programs from the book Teach Yourself ABAP/4 in 21 Days.
    I hope you can clear up all my doubts.
    1) When I execute this program, it gives me an error message and I am unable to resolve it.
    I don't remember the exact message, but it is something related to data conversion.
    Listing 9.5  Variables Filled with Characters Other than Blanks or Zeros Using the WITH Addition of the CLEAR Statement
    1 report ztx0905.
    2 tables ztxlfa1.
    3 data: f1(2) type c value 'AB',
    4       f2(2) type c,
    5       f3    type i value 12345,
    6       begin of s1,
    7           f1(3) type c value 'XYZ',
    8           f2    type i value 123456,
    9           end of s1.
    10 write: / 'f1=''' no-gap, f1 no-gap, '''',
    11        / 'f2=''' no-gap, f2 no-gap, '''',
    12        / 'f3=''' no-gap, f3 no-gap, '''',
    13        / 's1-f1=''' no-gap, s1-f1 no-gap, '''',
    14        / 's1-f2=''' no-gap, s1-f2 no-gap, '''',
    15        / 'ztxlfa1-lifnr=''' no-gap, ztxlfa1-lifnr no-gap, '''',
    16        / 'ztxlfa1-land1=''' no-gap, ztxlfa1-land1 no-gap, '''',
    17        /.
    18 clear: f1 with 'X',
    19        f2 with f1,
    20        f3 with 3,
    21        s1 with 'X',
    22        ztxlfa1 with 0.
    23 write: / 'f1=''' no-gap, f1 no-gap, '''',
    24        / 'f2=''' no-gap, f2 no-gap, '''',
    25        / 'f3=''' no-gap, f3 no-gap, '''',
    26        / 's1-f1=''' no-gap, s1-f1 no-gap, '''',
    27        / 's1-f2=''' no-gap, s1-f2 no-gap, '''',
    28        / 'ztxlfa1-lifnr=''' no-gap, ztxlfa1-lifnr no-gap, '''',
    29        / 'ztxlfa1-land1=''' no-gap, ztxlfa1-land1 no-gap, ''''.
    according the book the output should be
    The code in Listing 9.5 produces this output:
    f1='AB'
    f2='  '
    f3='    12,345 '
    s1-f1='XYZ'
    s1-f2='   123,456 '
    ztxlfa1-lifnr='          '
    ztxlfa1-land1='   '
    f1='XX'
    f2='XX'
    f3='50,529,027 '
    s1-f1='XXX'
    s1-f2='1482184792 '
    ztxlfa1-lifnr='##########'
    ztxlfa1-land1='###'
    Can you please explain this program to me?
    2) This program gives me the following error message at line 7:
    ALPHA AND IT-F1 ARE TYPE-INCOMPATIBLE.
    The program is as follows.
    Listing 12.9  Deleting Rows from an Internal Table Can also be Done Using the delete Statement
    1  report ztx1209.
    2  data: begin of it occurs 12,
    3            f1,
    4            end of it,
    5            alpha(12) value 'ABCDEFGHIJKL'.
    6
    7  do 12 times varying it-f1 from alpha+0 next alpha+1.
    8      append it.
    9      enddo.
    10
    11 loop at it.
    12     write: / sy-tabix, it-f1.
    13     endloop.
    14
    15 delete it index 5.
    16 skip.
    17 loop at it.
    18     write: / sy-tabix, it-f1.
    19     endloop.
    20
    21 delete it from 6 to 8.
    22 skip.
    23 loop at it.
    24     write: / sy-tabix, it-f1.
    25     endloop.
    26
    27 delete it where f1 between 'B' and 'D'.
    28 skip.
    29 loop at it.
    30     write: / sy-tabix, it-f1.
    31     endloop.
    32
    33 loop at it where f1 between 'E' and 'J'.
    34     delete it.
    35     endloop.
    36
    37 skip.
    38 loop at it.
    39     write: / sy-tabix, it-f1.
    40     endloop.
    41
    42 read table it with key f1 = 'K' binary search.
    43 write: /, / 'sy-subrc=', sy-subrc, 'sy-tabix=', sy-tabix, / ''.
    44 if sy-subrc = 0.
    45     delete it index sy-tabix.
    46     endif.
    47
    48 skip.
    49 loop at it.
    50     write: / sy-tabix, it-f1.
    51     endloop.
    52
    53 free it.
    And the out put  according to the book is as follows
    The code in Listing 12.9 produces this output:
             1  A
             2  B
             3  C
             4  D
             5  E
             6  F
             7  G
             8  H
             9  I
            10  J
            11  K
            12  L
             1  A
             2  B
             3  C
             4  D
             5  F
             6  G
             7  H
             8  I
             9  J
            10  K
            11  L
             1  A
             2  B
             3  C
             4  D
             5  F
             6  J
             7  K
             8  L
             1  A
             2  F
             3  J
             4  K
             5  L
             1  A
             2  K
             3  L
    sy-subrc=     0  sy-tabix=          2
             1  A
             2  L
    How can I rectify the error in this program?
    3)     In this program, I want to ask whether there is any way that I can see the output of the write statement in the initialization event block. If yes, then how?
    (Note: I don't want to remove the parameters (selection screen) statement.)
    Listing 18.13  Explain the Sequence of Events That Occurs in this Program
    report ztx1813 no standard page heading.
    data: flag,
          ctr type i.
    parameters p1.
    initialization.
      flag = 'I'.
      write: / 'in Initialization'.
    start-of-selection.
      flag = 'S'.
      write: / 'in Start-Of-Selection',
             / 'p1 =', p1.
    top-of-page.
       add 1 to ctr.
       write: / 'Top of page, flag =', flag, 'ctr =', ctr.
       uline.
    4) Can anybody please mail me some exercise programs (from basic to complex) at [email protected], or else give me the URL of a website that contains such programs.
    Eagerly waiting for your replies.
    Regards,
    maqsood

    Maqsood,
    I tested all your programs and I am getting the correct output. I didn't get any error message and I didn't make any changes; I just copied your code and executed it.
    To your Q3, I can say: INITIALIZATION is used only to assign values to variables. It is not used for output.
    Only the start-of-selection and top-of-page output will be printed.

  • StringTokenizer vs. split and empty strings -- some clarification please?

    Hi everybody,
    I posted a question that was sort of similar to this once, asking if it was best to convert any StringTokenizers to calls to split when parsing strings, but this one is a little different. I rarely use split, because if there are consecutive delimiters, it gives empty strings in the array it returns, which I don't want. On the other hand, I know StringTokenizer is slower, but it doesn't give empty strings with consecutive delimiters. I would use split much more often if there was a way to use it and not have to check every array element to make sure it isn't the empty string. I think I may have misunderstood the javadoc to some extent--could anyone explain to me why split causes empty strings and StringTokenizer doesn't?
    Thanks,
    Jezzica85

    Because they are different.
    Tokenizers are designed to return tokens, whereas split simply splits the String up into pieces; they have different purposes and uses, to be honest. split returns the (possibly empty) substring between every pair of adjacent delimiter matches, while StringTokenizer skips over whole runs of delimiters, which is why only split gives you empty strings. I believe the results of previous discussions of this have indicated that tokenizers are slightly (very slightly, and not really meaningfully) faster, and tokenizers also have the option of returning the delimiters themselves, which can be useful and is functionality not present in a straight split.
    However, split and regexes in general are newer additions to the Java platform, and they do have some advantages. The most obvious is that you cannot use a tokenizer to split up values where the delimiter is multiple characters, and you can with split.
    So in general the advice given to you was good, because split gives you more flexibility down the road. If you don't want the empty strings, then just read them and throw them away, or use a regex that matches a whole run of delimiters at once.
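    A small self-contained example of the difference (the input string and class name are made up for illustration):

    ```java
    import java.util.Arrays;
    import java.util.StringTokenizer;

    public class SplitDemo {
        public static void main(String[] args) {
            String s = "a,,b,";

            // split keeps an empty string for each pair of consecutive
            // delimiters, but (with the default limit of 0) drops
            // trailing empty strings.
            System.out.println(Arrays.toString(s.split(",")));   // [a, , b]

            // StringTokenizer treats a run of delimiters as a single
            // separator, so it never returns empty tokens.
            StringTokenizer st = new StringTokenizer(s, ",");
            while (st.hasMoreTokens()) {
                System.out.println(st.nextToken());              // a, then b
            }

            // Because split takes a regex, "one or more commas" collapses
            // consecutive delimiters with no post-filtering needed.
            System.out.println(Arrays.toString(s.split(",+")));  // [a, b]
        }
    }
    ```

    The `split(",+")` variant is usually the shortest way to get tokenizer-like behaviour while keeping split's flexibility.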

  • Need some clarification in WSDL based Proxy & Business Services

    Hi,
    Whenever we configure a proxy service or business service based on a WSDL, we select the Port option instead of Binding when choosing the WSDL for the proxy/business service. Now my question is: why do we go for a port instead of a binding? What is the difference between them?
    My question may seem very basic, but I have had this doubt for a long time, so I am posting it.
    Thanks,
    Arun

    Hi Arun,
    In my opinion it is not such a big deal, and in most cases you can choose either of the options...
    I usually prefer the port because it keeps the service name... If you are proxying someone else's service, it is better to keep it in a way that the service consumer doesn't notice the difference...
    I'm guessing there are cases out there where the service consumer client requires that the binding operations be unchanged... but I've never come across one of them...
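    For reference, the relationship between the two in a WSDL looks roughly like this (all names and the address below are illustrative, not taken from any real service): a binding ties protocol and encoding details to a portType, and a port is that binding plus one concrete endpoint address.

    ```xml
    <!-- Illustrative WSDL fragment; names and URL are made up -->
    <wsdl:binding name="StockBinding" type="tns:StockPortType">
      <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
      <!-- operation-level SOAP details omitted -->
    </wsdl:binding>

    <wsdl:service name="StockService">
      <!-- A port = one binding + one concrete address -->
      <wsdl:port name="StockPort" binding="tns:StockBinding">
        <soap:address location="http://example.com/services/stock"/>
      </wsdl:port>
    </wsdl:service>
    ```

    So selecting the port gives you the binding anyway, plus the service/port names and the original address, which is why it tends to preserve more of the caller-visible details.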
    Hope that clarifies things...
    Cheers,
    Vlad

  • Hi friends, need some clarification on OBIEE

    How do the details of reports created in OBIEE get saved -- in the database or in a flat file? I have created some reports in OBIEE but am not able to find the structure of the report, or its data, stored in a table in the database. Kindly help me figure this out. Thanks in advance. Have a nice day.

    Details of the report are stored in the web catalog, and the web catalog resides on the WebLogic server. If you want to store report details like the logical query, creation time, creation date, etc., then you need to enable Usage Tracking, in which case this information will be stored in the database. If you want to export it, you can do that into a flat file.
    Do let me know if you need any support in enabling Usage Tracking.
    Mark if helps,
    Thanks,
    Sasi Nagireddy

  • Loading 0IC_C02 infocube -- some clarifications

    Hi BW experts,
    I have loaded data into the 0IC_C02 (material movements) InfoCube using the DataSource 2lis_03_s195. It is an initialization load, and the data has been loaded successfully. Now my point is: as this cube is related to inventory management, is there any particular procedure to follow in loading, compressing and running queries on this cube? Your clarifications will be highly appreciated.
    thanks & regards.

    Hi Vamshi,
    Check these links:
    Inventory Management
    Re: Inventory Management
    Besides these you would find lots of links on IM in these forums.
    Bye
    Dinesh

  • Decentralised WM some clarifications

    Please clarify some of the below points
    1. Inbound/outbound deliveries are created in ERP and replicated to DWM via BAPI. When we do PGR/PGI, will there be any accounting entries/documents in DWM? From SAP Help we saw the BAPIs to be used - InboundDelivery.ConfirmDecentral and OutboundDelivery.ConfirmDecentral - but both are lacking accounting information. We don't need financial entries in DWM; they should always happen in ERP. Is this possible?
    2. Will DWM support physical inventory for the materials at warehouse level? Will there be any accounting entries when we are writing off stock as part of regular physical inventory counts? Are there any standard reports available to compare ECC and DWM for stocks, open IBDs/OBDs, etc., or is checking unprocessed/erroneous messages the only way to compare them?
    3. How does SAP manage contingency for DWM? When PS1 is down for 4-24 hours, how can we still continue to work in DWM without updated inbound/outbound information in DWM? Can we create deliveries directly in the DWM system?
    4. In case of contingency we need purchase order and sales order information in DWM, as this information cannot be extracted from PS1. Do we need all the purchase/sales-related settings in DWM to create POs and SOs? Which master data needs to be migrated from PS1 to DWM?
    5. Do we still need to configure some SD and costing-related settings in DWM to post PGR/PGI, or can DWM act as an independent stand-alone SAP ERP system? How will the WM-related customization be taken care of after the DWM implementation? Do we need to make the customization in ECC and then import it into DWM, or will DWM remain open for customization? What is the recommended best practice for this?
    6. Please give all the supported transaction codes for DWM. Which standard transactions are NOT supported in DWM?

    Hi,
    To your questions:
    1. You do not want to create accounting docs in the DWM system... You must see the DWM system purely as a stock system. All material docs and accounting entries sit on the ERP system. They are created from the BAPIs you mentioned once these are received and posted in the ERP system.
    2. Yes to all. For the DWM storage location, you execute the physical inventories on the DWM system. Any differences will post with movement types 711-714 back to the ERP system and create the material and accounting docs. You get tcode LX23 to compare stocks at the lowest level between DWM and ERP. You also cannot create a physical inventory on a bin where a current activity (PGR or PGI) is taking place. You can get these details from the delivery monitors or from open transfer orders for ST or material or bin, etc.
    3. We went with the following business rule: once the main ERP system is down, DWM can continue with any document which is already in the DWM system. NO new deliveries will be created from the DWM system (remember document flows, number ranges, etc.!). Any PGI or PGR posted from DWM will sit in the ALE monitor to be executed once the connection is back again. In our environment (with a 24/7 factory scenario) we start the inbound from DWM (by creating a TO) and thus keep the factory going; all putaway tasks wait for the connection to be established again.
    4. Believe me... you want only one system to be the one where POs/SOs are created. The deliveries of these documents exist in the DWM system. You will need the customer/vendor/material masters in DWM (but only at header level for the first two mentioned). For materials you will need different views than in the ERP system; for example, you do not need the accounting/costing views, but you do need the WM views.
    5. You will need to configure the sales structure (sales org / shipping point / distribution channel). No costing configuration is necessary. You will need to define the warehouse number and its assignment in the ERP system, including the decentralized WM part. Then you go to the DWM system and do all the WM config there. If you already have a WM setup in the ECC ERP system, then I would recommend that you choose a new storage location for the DWM environment; this would ease the transition process.
    6. Wow... this is everything under LE. Transfer orders are used, inbound/outbound deliveries are used, LT01 / LT10 / ... and lots more. It's difficult to list it all here. See the SAP menu.
    Good luck.
    Hein

  • IDoc packaging? Need some clarification ;-)

    Team,
    Simple scenario:
      =>   JMS (xml) --> ( … XI …  ) --> (IDOCs) --> SAP ECC.
    This scenario is working fine.
    However, if we have 10 IDocs, we have 10 LUWs on the backend system (not good for performance).
    To avoid these multiple LUWs, we are using the IDoc packaging approach (no ccBPM) with the wizard (IDXPW).
    So …
    1) XML messages come into XI
    2) Since we have a filter (SXMSFILTER), messages are kept aside for a while
    3) Until a job (created using IDXPW) wakes up and starts processing the messages
    Our question:
    When does the "packaging" trick happen? We do not see the messages in IDXP with the package icon, but with the yellow dot.
    Thanks for your help
    MO A+

    Hi XI Team,
    Have a look at Sravya's blog:
    /people/sravya.talanki2/blog/2005/12/09/xiidoc-message-packages
    Usually IDoc packaging is done at the sender side, as explained in the blog. I have never tried IDoc packaging at the receiver end.
    In our case, we changed the IDoc XSD to unbounded and used it. It creates a single message in XI.
    But in R/3 it is not an IDoc package; it will be a collection of IDocs, and each will trigger a new LUW.
    I hope the weblog gives you some idea. If you are able to create an IDoc package on the R/3 side, do update us. I will give this scenario a try tomorrow; it is 12:20 AM my time.
    Good Luck
    Regards,
    Jaishankar
