How to reduce downtime for setup table

Scenario – According to system data, the setup table will normally take 5 days to fill, but the client has agreed to a maximum of only 2 days of downtime. Users can change only documents from the last 3 months, nothing older. Filling 3 months of data into the setup table takes 1 day, so I have to plan the options accordingly.
DataSource – 2LIS_13_VDITM -> DSO – ZBIllIG -> InfoCube
I have to reduce downtime for the setup table, so I am planning the following options –
1.     First run the InfoPackage for 'Initialization without data transfer'. Then start filling the setup table without blocking the users. If users change any documents while the setup table is being filled, these changes will move to the delta queue. Once the setup table is filled, execute a full repair request and then the delta InfoPackage.
2.     Early delta initialization – I have no idea how to perform the steps.
Please share your views with detailed steps.
OLI*BW doesn't have any date range in its selection criteria, so I will manually find the documents for particular dates and use that document number range.
I have checked a lot of posts on SDN but am still looking for a definitive answer before going ahead in production.

Hi,
Your requirement is a re-setup of the billing ODS and cube in the R/3 system and a re-initialization in the BW system.
Before starting, find the previous data load volume and size.
1. Go to LBWG with application value = 13 to delete the setup table (always schedule the job in background mode).
2. Verify using transaction SE16 that there are NO records in table 'MC13VD0ITMSETUP' after the above delete job is complete.
3. Suspend the process chain job in BW. This is to avoid it getting kicked off while the reload process is still in progress.
4. Check LBWQ in the R/3 system for MCEX13, the unprocessed outbound queue (records). This should be empty, as the last delta should have processed everything.
5. Delete the init flag in BW.
6. Check RSA7 in the R/3 system to verify that there is NO record for 2LIS_13_VDITM (to be done right before the setup job).
7. Create a new InfoPackage for InfoSource '2LIS_13_VDITM' with the 'Initialize without Data Transfer' option and execute it. This re-establishes the delta processing flags in R/3 and BW for the billing transaction data load.
8. Save the record count of table 'VBRP' using SE16 right before the setup job.
9. Schedule the billing data setup job 'OLI9BW' in the R/3 system.
10. After the billing setup job is complete in the R/3 system, get the record count of table 'VBRP' again using SE16 and compare.
Expected time in R/3: 5 to 7 hrs (setup jobs)
Expected time for init and full load: 6 hrs
ODS activation: 3 hrs
Cube fill with all aggregates: 8 hrs
Thanks,
naidu.

Similar Messages

  • How downtime can be reduced for setup table update.

    Hi;
    Can anyone tell me various ways to reduce system downtime for setup table updates.
    thanks
    Warm Regards
    Sharebw

    Hi,
    You will need to fill the setup tables in a 'no postings' period, in other words when no transactions are posted for that area in R/3; otherwise those records will not come over to BW. Discuss this with the end users and decide. Weekends are a common choice for this activity.
    Try early delta initialization.
    With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the application during the initialization request in the source system. This means that you are able to execute the initialization of the delta process (the init request), without having to stop the posting of data in the source system. The option of executing an early delta initialization is only available if the DataSource extractor called in the source system with this data request supports this.
    Extractors that support early delta initialization are delivered with Plug-Ins as of Plug-In (-A) 2002.1.
    You cannot run an initialization simulation together with an early delta initialization.
    hope this link may make you clear about early delta init
    http://help.sap.com/saphelp_nw04s/helpdata/en/80/1a65dce07211d2acb80000e829fbfe/frameset.htm
    thanks,
    JituK

  • Ways to reduce downtime for filling up setup table

    Hi Experts,
    Can anyone tell me the step-by-step process so that I can reduce downtime for filling the setup tables?
    I know that setup tables can be filled by restricting on sales document numbers, but the further steps are not clear to me, especially the data load to the PSA and then on to the ODS/cube.
    So please throw some light on this.
    Regards,
    Vaishnavi.

    Hi,
    You will need to fill the setup tables in a 'no postings' period, in other words when no transactions are posted for that area in R/3; otherwise those records will not come over to BW. Discuss this with the end users and decide. Weekends are a common choice for this activity.
    You can run the fill after business hours or at night, when there are no transactions, or do it on weekends, so that there is no need to take downtime.
    Fill the setup tables with already-closed values first, and then fill them again with open values. This will reduce the downtime.
    Initialize closed periods first, in which users won't enter data (for example 2007 or 2006); these initializations can be done while users are working. Then initialize the last period at night, on weekends, on holidays, etc.
    If you know documents that are in closed periods, and you are sure that these documents can no longer be changed, you can fill the setup tables only for these documents or only for these periods, while continuing to post in open periods. You then initialize only for these intervals, delete the setup table, and only then fill the setup table with the rest of the documents. This procedure can drastically reduce the downtime of your system.
    However, there is a risk that user exits (and in LIS, formulas and conditions) can be used to retrieve documents that are in periods that are already "closed".
    One more thing to bear in mind: check whether there are any scheduled jobs updating the transaction tables, which would definitely cause data reconciliation issues.
    Try Early Delta Initialization
    With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the application during the initialization request in the source system. This means that you are able to execute the initialization of the delta process (the init request), without having to stop the posting of data in the source system. The option of executing an early delta initialization is only available if the DataSource extractor called in the source system with this data request supports this.
    Extractors that support early delta initialization are delivered with Plug-Ins as of Plug-In (-A) 2002.1.
    You cannot run an initialization simulation together with an early delta initialization.
    Hope this link may make you clear about Early Delta Initialization
    http://help.sap.com/saphelp_nw04s/helpdata/en/80/1a65dce07211d2acb80000e829fbfe/frameset.htm
    http://www.allinterview.com/showanswers/2907.html
    http://sap.ittoolbox.com/groups/technical-functional/sap-bw/early-delta-initialization-459379
    http://books.google.co.in/books?id=qYtz7kEHegEC&pg=PA293&lpg=PA293&dq=early+delta&source=web&ots=AM1PtX6wcZ&sig=xKOF85Gb8UtszY44zt06K6R0n3M&hl=en#PPA290,M1
    http://www.blackwellpublishing.com/journal.asp?ref=1069-6563&site=1
    EARLY DELTA
    Early delta Initialization
    How To… Minimize Downtime For Delta Initialization
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d51aa90-0201-0010-749e-d6b993c7a0d6
    How To Minimize Effects of Planned Downtime (NW7.0)
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/901c5703-f197-2910-e290-a2851d1bf3bb
    Note 753654 - How can downtime be reduced for setup table update
    602260 - Procedure for reconstructing data for BW 
    437672 - LBWE: Performance for setup of extract structures 
    436393 - Performance improvement for filling the setup tables 
    Note 739863
    /thread/756626 [original link is broken]
    Re: How to Setup and INIT from 2LIS_13_VDITM with millions of records
    How downtime can be reduced for setup table update.
    Fill setup tables without locking users
    Initialization Setup Tables.
    Hope this helps.
    Thanks,
    JituK

  • How to reduce cost of full table scan or remove full table scan while executing the query

    Dear Experts
    need your help.
    I executed a query and generated an explain plan; in the plan I found that the cost of one table access is very high (2777) and it is a full table scan.
    Please guide me on how to reduce the cost of the full table scan, or eliminate it, when executing the query.
    Thanks

    I need your help to tune this query:
    SELECT DISTINCT ool.org_id, ool.header_id, ooh.order_number, ool.line_id,
                    ool.line_number, ool.shipment_number,
                    NVL (ool.option_number, -99) option_number, xcl.GROUP_ID,
                    xcl.attribute3, xcl.attribute4
      FROM oe_order_headers ooh,
           xxcn_comp_header xch,
           xxcn_comp_lines xcl,
           fnd_lookup_values_vl fvl,
           oe_order_lines ool
     WHERE 1 = 1
       AND ooh.org_id = 1524
       AND xch.src_ref_no = TO_CHAR (ooh.order_number)
       AND xch.src_ref_id = ooh.header_id
       AND xch.org_id = 1524
       AND xcl.header_id = xch.header_id
       AND ool.line_id = xcl.oe_line_id
       AND ool.flow_status_code IN
              ('WWD_SHIPPED', 'FULFILLED', 'SHIPPED', 'CLOSED', 'RETURNED')
       AND ool.org_id = 1524
       AND ool.header_id = ooh.header_id
       AND fvl.lookup_type = 'EMR OIC SOURCE FOR OU'
       AND fvl.tag = '1524'
       AND fvl.description = xch.SOURCE
       AND EXISTS (
              SELECT 1
                FROM oe_order_lines oe
               WHERE oe.header_id = ool.header_id
                 AND oe.org_id = 1524
                 AND oe.line_number = ool.line_number
                 AND oe.ordered_item = ool.ordered_item
                 AND oe.shipment_number > ool.shipment_number
                 AND NVL (oe.option_number, -99) =
                        NVL (ool.option_number, -99)
                 AND NOT EXISTS (
                        SELECT 1
                          FROM xxcn_comp_lines xcl2
                         WHERE xcl.GROUP_ID = xcl2.GROUP_ID
                           AND oe.line_id = xcl2.oe_line_id))
    call     count       cpu    elapsed       disk      query    current       rows
    -------  -----  --------  ---------  ---------  ---------  ---------  ---------
    Parse        1      0.07       0.12         12         25          0          0
    Execute      1      0.00       0.00          0          0          0          0
    Fetch        2    103.03     852.42     176206    4997766          0         12
    -------  -----  --------  ---------  ---------  ---------  ---------  ---------
    total        4    103.10     852.55     176218    4997791          0         12
    The logical I/O here is very high... can you please help in resolving this performance issue?
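    No answer was posted in this thread, but as an illustration of a first step: with almost 5 million buffer gets to return 12 rows, it is worth checking whether the correlated subqueries have supporting indexes. A hypothetical sketch only; the index names and column choices below are assumptions, not something from the original thread:

    -- hypothetical index to support the NOT EXISTS probe on xxcn_comp_lines
    CREATE INDEX xxcn_comp_lines_n1
        ON xxcn_comp_lines (group_id, oe_line_id);

    -- hypothetical index to support the correlated self-join on oe_order_lines
    CREATE INDEX oe_order_lines_n99
        ON oe_order_lines (header_id, line_number, ordered_item);

    Whether these actually help depends on the data volumes and the indexes already present; the plan should be re-checked after creating them.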

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are best practices to reduce downtime for database releases on 10.2.0.3? What DB changes can be rolling and what can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers at one or the other database. When you want to upgrade, you point all the middle tier servers at database A, other than one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you configure all the app servers to point at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to the normal state of balancing between the databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin

  • What is an authorization object and how to create one for a table

    Hi All,
    What is an authorization object, and how do I create one for a table?
    Thanks

    Hi
    Authorization
    For authorization checks, there are many ways of linking authorization objects with user actions in an SAP system. The following discusses three possibilities in the context of ABAP programming.
    Authorization Check for Transactions
    You can directly link authorization objects with transaction codes. You can enter values for the fields of an authorization object in the transaction maintenance. Before the transaction is executed, the system compares these values with the values in the user master record and only starts the transaction if the appropriate authorization exists.
    Authorization Check for ABAP Programs
    For ABAP programs, the two objects S_DEVELOP (program development and program execution) and S_PROGRAM (program maintenance) exist. They contain a field P_GROUP that is connected with the program attribute authorization group. Thus, you can assign users program-specific authorizations for individual ABAP programs.
    Authorization Check in ABAP Programs
    A more sophisticated, user-programmed authorization check is possible using the AUTHORITY-CHECK statement. It allows you to check the entries in the user master record for specific authorization objects against any other values. This statement must therefore be used if a transaction or program is not sufficiently protected, or if not every user who is authorized to use the program may also execute all of its actions.
    AUTHORITY-CHECK OBJECT object
                            ID name1 FIELD f1
                            ID name2 FIELD f2
                            ID namen FIELD fn.
    object is the name of an authorization object. With name1, name2 ..., and so on, you must list all fields of the authorization object object. With f1, f2 ..., and so on, you must specify the values that the system is to check against the entries in the relevant authorization of the user master record. The AUTHORITY-CHECK statement searches for the specified object in the user profile and checks the user's authorizations for all values of f1, f2 .... You can avoid checking a field name1, name2 ... by replacing FIELD f1, FIELD f2 with DUMMY.
    After the FIELD addition, you can only specify an elementary field, not a selection table. However, there are function modules available that execute the AUTHORITY-CHECK statement for all values of selection tables. The AUTHORITY-CHECK statement is supported by a statement pattern.
    Only if the user has all authorizations, is the return value sy-subrc of the AUTHORITY-CHECK statement set to 0. The most important return values are:
    ·        0: The user has an authorization for all specified values.
    ·        4: The user does not have the authorization.
    ·        8: The number of specified fields is incorrect.
    ·        12: The specified authorization object does not exist.
    A list of all possible return values is available in the ABAP keyword documentation. The content of sy-subrc has to be closely examined to ascertain the result of the authorization check and react accordingly.
    REPORT demo_authorithy_check.

    PARAMETERS pa_carr LIKE sflight-carrid.   " airline code entered by the user

    DATA wa_flights LIKE demo_focc.           " work area for the output rows

    AT SELECTION-SCREEN.
    * Check whether the user may display (activity '03') the chosen airline
      AUTHORITY-CHECK OBJECT 'S_CARRID'
                      ID 'CARRID' FIELD pa_carr
                      ID 'ACTVT' FIELD '03'.
      IF sy-subrc = 4.
    *   The user has no authorization for this airline
        MESSAGE e045(sabapdocu) WITH pa_carr.
      ELSEIF sy-subrc <> 0.
    *   Any other problem (wrong field list, object does not exist, ...)
        MESSAGE e184(sabapdocu) WITH text-010.
      ENDIF.

    START-OF-SELECTION.
    * Reached only if the authority check above succeeded
      SELECT  carrid connid fldate seatsmax seatsocc
        FROM  sflight
        INTO  CORRESPONDING FIELDS OF wa_flights
        WHERE carrid = pa_carr.
        WRITE: / wa_flights-carrid,
                 wa_flights-connid,
                 wa_flights-fldate,
                 wa_flights-seatsmax,
                 wa_flights-seatsocc.
      ENDSELECT.
    Regards
    Hitesh

  • How to create a view for an XMLType table in Oracle

    Hi,
    Can someone help me with how to create a view over an XMLType table in Oracle?
    An XMLType table does not have regular columns.
    Sem

    Thank you!!
    I read it, but it is proving very hard to implement what I want to do.
    Can you give me an example, please?
    My main goal in creating a view over the XMLType table is to be able to run XQuery against the XML data.
    Do you have any other suggestions?
    Please help.
    Ali_2
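    No worked example appears in this thread, so here is a minimal sketch using XMLTABLE, one standard way to expose the content of an XMLType table as relational columns. The PurchaseOrder document structure and all of the names are hypothetical:

    -- hypothetical XMLType table
    CREATE TABLE po_xml OF XMLTYPE;

    -- relational view over the XML content; OBJECT_VALUE is the pseudocolumn
    -- holding each row's XML document
    CREATE OR REPLACE VIEW po_view AS
    SELECT x.po_number, x.reference
      FROM po_xml p,
           XMLTABLE('/PurchaseOrder'
                    PASSING p.OBJECT_VALUE
                    COLUMNS po_number NUMBER       PATH '@ID',
                            reference VARCHAR2(30) PATH 'Reference') x;

    Ordinary SQL, or XQuery via XMLQUERY/XMLTABLE, can then be run against po_view.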

  • How to change tablespace for a table in 10g?

    Does anyone know how to change tablespace for a table (like changing tablespace for an index [alter index ... rebuild tablespace ... ])? Many thanks in advance.

    alter table tablename move tablespace newtsname;
    You need to rebuild the indexes after the move.
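    A minimal sketch of the whole sequence (table, index, and tablespace names are placeholders):

    ALTER TABLE mytable MOVE TABLESPACE new_data_ts;

    -- the move changes the rowids, so every index on the table is marked
    -- UNUSABLE; find them ...
    SELECT index_name FROM user_indexes
     WHERE table_name = 'MYTABLE' AND status = 'UNUSABLE';

    -- ... and rebuild each one, optionally relocating it as well
    ALTER INDEX mytable_pk REBUILD TABLESPACE new_index_ts;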

  • Underlying database tables for Setup Tables

    Experts,
    Can anyone tell me what the underlying tables for the Purchasing setup tables are, which I can check via SE11 in R/3?
    Thanks,
    Jain

    Hi Rajesh,
    You can check the records in transaction NPRT.
    Hope this helps you.
    Regards,
    Saravanan.

  • 8 hrs downtime, how to do the init setup tables

    Hello BW Experts,
    I have 8 hrs of downtime per day, and I have to fill the setup tables with 30 million records for the init. I could break down the setup table fill by day, based on company code + sales org:
    day 1 fill cc1
    day 2 fill cc2
    Now, what happens if someone changes any records in cc1 on day 2? How is that captured? What is the solution?
    Please explain.
    Regards,
    BWer

    After filling the setup tables for a company, you must run an init of the delta queue for that company only. The delta load is cumulative with respect to the init selections:
    if you have three delta inits, one for each company, the delta load brings the data for all three companies at the same time. So you can initialize the companies one by one and load deltas in between.
    This prevents losing changes in the companies you have already set up.
    I hope this helps you
    Regards
    Message was edited by:
            Oscar Díaz

  • How to reduce time for gathering statistics for a table

    I have a table of size 520 GB.
    One of its partitions is 38 GB,
    and the total size of the table's indexes is 412 GB.
    Server/instance details:
    ==========
    56 CPUs -> hyper-threading enabled
    280 GB RAM
    35 GB SGA
    27 GB buffer cache
    4.5 GB shared pool
    25 GB PGA
    undo size 90 GB
    temp size 150 GB
    Details :
    exec dbms_stats.gather_table_stats('OWNER','TAB_NAME',PARTNAME=>'PART_NAME',CASCADE=>FALSE,ESTIMATE_PERCENT=>10,DEGREE=>30,NO_INVALIDATE=>TRUE);
    Even when I run this at an idle time, when there is no load, it still takes 28 minutes to complete.
    Can anybody please tell me how we can reduce the stats gathering time?
    Thanks in advance,
    Tapas Karmakar
    Oracle DBA.

    Enable tracing to see where the time is going.
    Parallel 30 seems optimistic - unless you have a large number of discs to support the I/O?
    You haven't limited histogram collection, and most of the time spent on histograms may be wasted time - which histograms do you really need, and how many does Oracle analyse and then discard?
    Using a block sample may help slightly.
    You haven't limited the granularity of the stats collection to the partition - the default is partition plus table, so I think you're also doing a massive sample on the table after completing the partition. Is this what you want to do, or do you have an alternative strategy for generating the table-level stats?
    Regards
    Jonathan Lewis
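    Putting those suggestions together, a sketch of a call that restricts the work to the one partition, skips histograms, and uses block sampling. The parameter values are illustrative, not a recommendation:

    begin
      dbms_stats.gather_table_stats(
        ownname          => 'OWNER',
        tabname          => 'TAB_NAME',
        partname         => 'PART_NAME',
        granularity      => 'PARTITION',              -- partition only, no table-level pass
        method_opt       => 'FOR ALL COLUMNS SIZE 1', -- no histograms
        estimate_percent => 10,
        block_sample     => TRUE,
        cascade          => FALSE,
        degree           => 30,
        no_invalidate    => TRUE);
    end;
    /

    Table-level statistics then have to come from somewhere else, for example a separate, less frequent gather with granularity => 'GLOBAL'.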

  • How to reduce time for replicating large tables?

    Hi
    Any suggestions on how to reduce the amount of time it takes to replicate a large table when it is first created?
    I have a table with 150 million rows in it, and it takes forever to start the replication process even if I run it in parallel, and I can’t afford the downtime.

    What downtime are you referring to? The primary doesn't need to be down when you're setting up replication and you're presumably still in the process of doing the initial configuration on the replicated database, so it's not really down, it's just not up yet.
    Justin
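    As a side note on the initial copy itself: if the 150 million rows are instantiated with a plain INSERT ... SELECT, a parallel direct-path insert is one common way to shorten that step. A sketch under the assumption that the copy runs over a database link (all names are placeholders):

    ALTER SESSION ENABLE PARALLEL DML;

    -- direct-path (APPEND) parallel copy of the source table
    INSERT /*+ APPEND PARALLEL(t, 8) */ INTO big_table t
    SELECT /*+ PARALLEL(s, 8) */ *
      FROM big_table@source_db s;

    COMMIT;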

  • Period for Setup table Fill

    Hi Experts,
    Is there anyway of giving time ranges for filling up sales order & billing document setup tables?
    Like, I want to fill the setup table for sales orders for the months of June to August 2007... can I do this? How?
    Any help is appreciated.
    Rgrds,
    Vaishnavi
    Edited by: Vaishnavi S on Apr 17, 2008 5:34 AM

    Hi,
    I think you can. I believe you are trying to fill setup tables using some selections (probably to minimize init down time).
    This should help.
    Fill the setup tables with already-closed values first, and then fill them again with open values. This will reduce the downtime.
    Initialize closed periods first, in which users won't enter data (for example 2007 or 2006); these initializations can be done while users are working. Then initialize the last period at night, on weekends, on holidays, etc.
    If you know documents that are in closed periods, and you are sure that these documents can no longer be changed, you can fill the setup tables only for these documents or only for these periods, while continuing to post in open periods. You then initialize only for these intervals, delete the setup table, and only then fill the setup table with the rest of the documents. This procedure can drastically reduce the downtime of your system.
    However, there is a risk that user exits (and in LIS, formulas and conditions) can be used to retrieve documents that are in periods that are already "closed".
    One more thing to bear in mind: check whether there are any scheduled jobs updating the transaction tables, which would definitely cause data reconciliation issues.
    Multiple init  
    Thanks,
    JituK

  • Best practice to reduce downtime for full load in Production system

    Hi Guys ,
    we have options like "Initialize without data transfer" and "Initialization with data transfer".
    To reduce the downtime of the production system for the setup table load: first I will trigger the InfoPackage for initialization without data transfer, so that the delta pointer is set on the table and from that point onwards any record added is captured as a delta record. I will trigger the delta InfoPackage to get the delta records into BW, and once the delta is successful, I will trigger the InfoPackage for the repair full request to get all the historical data from the setup tables, so that the downtime of the production system is reduced.
    Please let me know your thoughts and correct me if I am wrong.
    Please also let me know about the "Early delta initialization" option.
    Kind regards.
    hari

    Hi,
    You have some incorrect information there.
    An InfoPackage just loads data from the setup tables to the PSA.
    The setup tables need to be filled manually using the related transaction codes.
    I am assuming you are using an LO DataSource.
    In this case a source system lock is mandatory; otherwise you need to go with the early delta init option.
    Early delta init is useful for loading data into BW without downtime at the source:
    it sets the delta pointer and at the same time loads the data according to your settings (init with or without data transfer).
    If the source system cannot be locked as per the client's needs, then it is better to go with the early delta init option.
    Thanks

  • How to Reduce Clustering Factor on Table?

    I am seeing a very high clustering factor on an SDO geometry table in our 10g RAC DB on our Linux boxes. This slow performance is repeatable on other Linux as well as Solaris DBs for the same table. Inserts go in at a rate of 44 milliseconds per insert, and we only have about 27000 rows in the table. After viewing a VERY slow insert of about 600 records into this same table, I saw the clustering factor in OEM. The clustering factor is nearly identical to the number of rows in the table, indicating that the usability of the index is fairly low now. I have referenced Metalink Tech Note 223117.1 and, while it affirms what I've seen, I am still trying to determine how to reduce the clustering factor. The excerpt on how to do this is below:
    "The only method to affect the clustering factor is to sort and then store the rows in the table in the same order as in they appear in the index. Exporting rows and putting them back in the same order that they appeared originally will have no affect. Remember that ordering the rows to suit one index may have detrimental effects on the choice of other indexes."
    Sounds great, but how does one actually go about storing the rows in the table in the same order as they appear in the index?
    We have tried placing our commits after the last insert as well as after every insert, and the results are fairly negligible. We also have a column of type SDE.ST_GEOMETRY in the table and are wondering if this might also be an issue. Thanks in advance for any help.
    Matt Sauter

    Joel is right that the clustering factor is going to have absolutely no effect on the speed of inserts. The clustering factor is merely one, purely statistical, factor the optimiser makes use of to determine how to perform a SELECT statement (i.e., do I bother to use this index or not for row retrieval). It's got nothing to do with the efficiency of inserts.
    If I were you, I'd be looking at factors such as excessive disk I/O taking place for other reasons, inadequate buffer cache and/or enqueue and locking issues instead.
    If you're committing after every insert, for example, then redo will have to be flushed (a commit is about the only foreground wait event -i.e., one that you get to experience in real time- that Oracle has, so a commit after every insert's really not a smart idea). If your redo logs are stored on, say, the worst-performing disk you could buy that's also doing duty as a fileserver's main hard disk, then LGWR will be twiddling its thumbs a lot! You say you've tested this, and that's fine... I'm just saying, it's one theoretical possibility in these sorts of situations. You still want to make sure you're not suffering any log writer-related waits, all the same.
    Similarly, if you're performing huge reads on a (perhaps completely separate) table that is causing the buffer cache to be wiped every second or so, then getting access to your table so your inserts can take place could be problematic. Check if you've got any database writer waits, for example: they are usually a good sign of general I/O bottlenecks.
    Finally, you're on a RAC... so if the blocks of the table you're writing to are in memory over on another instance, and they have to be shipped to your instance, you could have high enqueue waits whilst that shipment is taking place. Maybe your interconnect is not up to the job? Maybe it's faulty, even, with significant packet loss along the way? Even worse if someone's decided to switch off cache fusion transfer for the datafiles involved (for then block shipment happens by writing them to disk in one instance and reading from disk in the other). RAC adds a whole new level of complexity to things, so good luck tracking that lot down!!
    Also, maybe you're using Freelists and Freelist groups rather than ASSM, so perhaps you're fighting for access to the freelist with whatever else is happening on your database at the time...
    You get the idea: this could be a result of activity taking place on the server for reasons completely unconnected with your insert. It could be a feature of Spatial (with which not many people will be familiar, so good luck if so!) It could be a result of the way your RAC is configured. It could be any number of things... but I'd be willing to bet quite a bit that it's got sod-all to do with the clustering factor!
    You'll need to monitor the insert using a tool like Insider or Toad so you can see if waits and so on happen, more or less in real time -or start using the built-in tools like Statspack or AWR to analyze your workload after it's completed- to work out what your best fix is likely to be.
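    For reference, the clustering factor can be read straight from the dictionary and compared with the table's statistics: a value close to BLOCKS means the rows are stored in roughly index order, while a value close to NUM_ROWS (as described above) means they are scattered. A minimal sketch, with the table name as a placeholder:

    SELECT i.index_name, i.clustering_factor, t.num_rows, t.blocks
      FROM user_indexes i
      JOIN user_tables  t ON t.table_name = i.table_name
     WHERE i.table_name = 'MY_SDO_TABLE';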
