Problem when loading from ODS to the CUBE

Hi Experts,
I am facing an unusual problem when loading data from an ODS to a cube.
I have a first-level ODS where the delta postings are updated. I have checked the active table of the ODS and the data is accurate.
I deleted the entire contents of the cube and am trying to do a full load from the ODS to the cube.
I am sure that when I run a full load, the data comes from the active table of the ODS.
After the full load, the key figure values are 0. I tested by loading a couple of sales documents, and the key figure values are still 0.
When I run a full load, I expect the data to be picked up exactly as it is in the active table of the ODS.
I also don't have any fancy routines in the update rules.
Please help me in this regard.
Regards
Raghu

Hi,
Check whether you followed the procedure exactly. Just follow the layman's steps here... and let me know your issue:
o     First prepare a flat file in Microsoft Excel for the master data and the transaction data, save it in .CSV format, and close it.
o     In the InfoObjects view, create an InfoArea, then an InfoObject catalog, then the characteristics and key figures: under characteristics create 'id' with name as an attribute and activate it; under key figures create 'no' and activate it.
o     Under InfoSources, create an application component, then create an InfoSource with direct update for the master data and flexible update for the transaction data. For the master data flat file, create an InfoPackage and execute it.
o     For the transaction data, go to InfoProvider, right-click, and choose Create ODS Object. Give the ODS a name; in the screen that opens, drag the characteristic into the key fields.
o     Activate the ODS, then create the update rules. If they give an error, go to the communication structure, add 0RECORDMODE and activate it, then activate the update rules.
o     Go to the InfoSource and create an InfoPackage; set the external data, processing, and data target options, schedule it, click Start, and open the monitor.
o     If you cannot see the records, go back to the InfoProvider, right-click the ODS, and choose Activate ODS Data.
o     Select the QM request (shown in yellow), double-click it, set the status to green, click Continue, return to the previous screen, select the request, and save.
o     Go back to the InfoSource and the InfoPackage, right-click the package, choose Schedule, and go straight to the Monitor tab. Refresh until the request turns green.
o     Once it is green, click the data target and check that the request arrived.
o     Now go to the InfoProvider, right-click, and choose Create InfoCube. Give it a name and drag the key figures, time characteristics, and characteristics to their respective sections.
o     Create the dimensions, assign the characteristics, and activate the cube. Then right-click the cube, choose Create Update Rules, select the ODS option, enter the ODS name, and activate.
o     Go back to the main screen and refresh. When you see the generated InfoSource starting with 8 (the export DataSource), you know the data mart interface is being used.
o     In the main screen, right-click the ODS and choose Update ODS Data in Data Target. In the dialog that opens you can choose between full update and initial update; select initial update.
o     An InfoPackage screen opens with external data, data targets, and scheduler tabs. Under scheduling choose Start Later in Background, then Immediate, save (the dialog closes automatically), and click Start.
o     Finally open the monitor, check the contents, and verify that the data target was loaded.
Regards
Ashwin

Similar Messages

  • The load from ODS - 0FIAP_O03 to cube 0FIAP_C03  is duplicating records

    Hello Everyone
    I need some help/advice for the following issue
    SAP BW version 3.5
    The Delta load for ODS - 0FIAP_O03 works correctly
    The load from ODS - 0FIAP_O03 to cube 0FIAP_C03  is duplicating records
    NB: I noticed one other forum user who raised the same issue, but the question was not answered.
    My questions are:
    1. Is this a known problem?
    2. Is there a fix available from SAP?
    3. If anyone has had this issue and fixed it, could you please share how you achieved this?
    I have possible solutions but need to know whether there is a standard solution.
    Thank you
    Pushpa

    Hello Pushpa,
    I assume that you are using a delta load to the initial ODS and then to the cube as well.
    If delta is used in both places, there should not be any issue when sending the data to the cube.
    If you are using a full load, the data will normally get aggregated in the cube, because the key figures are set to addition mode in the cube; loading the same full request twice, for example, doubles every key figure value.
    Can you post the exact error that you are facing here, as this could also be a design issue?
    Murali

  • Error when loading from ODS to Cube

    Hello Friends,
    I am having trouble loading data from ODS to InfoCube in my 2004s system. When loading data I get these messages:
    07/03/2007     13:10:25     Data target 'ODSXYZ ' removed from list of loadable targets; not loadable.
    07/03/2007     13:28:42     Data target 'ODSXYZ ' is not active or is incorrect; no loading allowed     
    I checked for ODSXYZ among my data targets but there is nothing by that name. Even the InfoPackage doesn't have it. What needs to be done? Please help.
    Thanks.

    It's expected behavior. When you migrate your DataSource, the InfoPackages associated with it grey out all the data targets they were feeding before; that applies to any InfoPackage you create even after the migration. You won't be able to delete it.
    Having said this, it shouldn't impact your loads from ODS to cube, as those are handled by your DTPs rather than your InfoPackages.
    A few questions:
    How are you loading your cube?
    Did the data get through fine to the PSA with the InfoPackage in question?
    How did you load your DSO (assuming the load was successful)?
    Message was edited by:
            voodi

  • I'm receiving the following problem when load firefox 5.0 The NTVDM CPU has encountered an illegal instruction

    I'm receiving the following message when loading Firefox 5.0:
    The NTVDM CPU has encountered an illegal instruction

    Make sure that you do not run Firefox in compatibility mode.
    You can open the Properties via the right-click context menu of the Firefox desktop shortcut and check that in the "Compatibility" tab.
    Make sure that all items are deselected in the "Compatibility" tab of the Properties window.

  • Data loads from ODS TO CUBE

    Hi,
       I have delta loads coming into the ODS. I do a full update from the ODS to the cube by date range for material movement numbers. The last time I loaded the data, only a few records were loaded for the date range; the rest did not load and are sitting in the ODS. This is a full load, and I have tried loading again. Any suggestions?
    sp

    Hi Srinivas,
            Check whether your update rules between the ODS and the cube are mapped properly (and check your date range for the cube load).
            Do an init load and then do the delta load.
    hope this will help.

  • Start routine from ODS to Info cube in bw 3.5

    Hi Folks,
    We have code in a start routine in the update rules from an ODS to an InfoCube.
    We are doing a full load to the ODS (0CRM_QUT1), and from the ODS to the InfoCube we are also doing a full load.
    I am planning to change to a delta load from the ODS to the InfoCube to improve performance.
    Is there any situation where, if we write a start routine from the ODS to the InfoCube, we cannot do a delta load?
    Please clarify.

    Improving performance is a good thing, and the delta mechanism is also good for performance, since loads are faster and less time-consuming. Nevertheless, analyse carefully how the solution was built, to understand why a full load was required when the solution was designed. Maybe the full load was used for reasons that are still valid, or the change would require more than just switching from full to delta. In conclusion, delta is generally the better choice, but in your case look carefully at the existing code: it is possible that a lookup is coded to insert records or do other processing that assumes the complete data set.
    Hope this helps you out.
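    To make that risk concrete, here is a minimal sketch of a BW 3.x-style start routine pattern that is harmless under full loads but wrong under delta. All table and field names here (/BIC/AZLOOKUP00, DOC_NUMBER, /BIC/ZSTATUS) are made-up placeholders, not taken from this thread:
    * Sketch only: a lookup that stamps the CURRENT value onto every record.
    * Full load: fine, since the whole data set is rebuilt each time.
    * Delta from the ODS change log: the before image (0RECORDMODE 'X')
    * also gets the current value, so the reversal posted to the cube no
    * longer matches the original record and the delta is corrupted.
      data: lv_status like DATA_PACKAGE-/BIC/ZSTATUS.
      loop at DATA_PACKAGE.
        select single /BIC/ZSTATUS into lv_status
               from /BIC/AZLOOKUP00
               where DOC_NUMBER = DATA_PACKAGE-DOC_NUMBER.
        if sy-subrc = 0.
          DATA_PACKAGE-/BIC/ZSTATUS = lv_status.
          modify DATA_PACKAGE.
        endif.
      endloop.
    If your start routine contains this kind of lookup, it needs to be made delta-aware (for example, by leaving before images untouched) before you switch the load to delta.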

  • ARFCSTATE = SYSFAIL ??????? loading from ODS to cube

    Hi,
    I am loading a delta initialization from an ODS to a cube. The load runs fine but never ends. I checked in transaction SM37 and the job has finished, but with these messages at the end:
    tRFC: Data Package = 6, TID = 0A11010C066C45438FC70221, Duration = 01:01:03, ARFCSTATE = SYSFAIL
    tRFC: Start = 28.10.2006 12:10:34, End = 28.10.2006 13:11:37
    tRFC: Data Package = 7, TID = 0A11010C066C45438FE00222, Duration = 01:01:38, ARFCSTATE = SYSFAIL
    tRFC: Start = 28.10.2006 12:10:59, End = 28.10.2006 13:12:37
    tRFC: Data Package = 8, TID = 0A11010C066C45438FE90223, Duration = 01:01:32, ARFCSTATE = SYSFAIL
    tRFC: Start = 28.10.2006 12:11:07, End = 28.10.2006 13:12:39
    tRFC: Data Package = 10, TID = 0A11010C04BC454390020000, Duration = 01:01:12, ARFCSTATE = SYSFAIL
    tRFC: Start = 28.10.2006 12:11:32, End = 28.10.2006 13:12:44
    Synchronized transmission of info IDoc 5 (0 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:03, ARFCSTATE =
    tRFC: Start = 28.10.2006 13:12:45, End = 28.10.2006 13:12:48
    Job finished
    What does ARFCSTATE = SYSFAIL mean, and how do I correct it?
    Thanks for your help!!!

    Hi Miguel Sanchez,
    I hope your InfoPackage loads via the PSA; if not, run it with the PSA option. Then go to the Details tab of the monitor, expand the Extraction node, and check for the message 'Data selection ended'.
    If you find this message, you can update the data targets from the PSA.
    Otherwise, repeat the delta.
    Also refer to note 516251.
    The error you are reporting can have several causes, so please check the following:
    1) Check the RFC destinations in SM59 for both your BW and your source system.
    2) In SM59, for the BW source system, choose menu path Test -> Connection and check for errors.
    3) In SM59, for your BW source system, choose menu path Test -> Authorization and check that the user and password are OK.
    4) Check the port definitions in WE20 and WE21.
    5) Finally, check that you have sufficient DIA processes defined in your BW and R/3 source systems (you should have at least one more DIA process than all other work processes combined; this is described in more detail in notes 561880 and 74141).
    Please check whether this solves the problem for you.
    Hope it helps.
    Regards,
    Srikanth.

  • Problem with update from ODS to Cube

    Hi All,
    I had an issue: when I was loading from an ODS to a cube, everything was fine except that one data package failed because of a locked object (user: myself, though I wasn't running anything else besides this load). But I checked SM12, and no locks existed at that particular time.
    Apparently the whole load failed because of this single package. It's a load from ODS to cube, so there is no chance of a manual update from the PSA.
    Any suggestions? Do I have to repeat the load all over again (painful, as it is 40 million records)?
    Regards,
    Robyn.

    Hi Hoggard,
    Check the job log for your job in SM37 and look for a failure for that data package such as "ARFCSTATE = SYSFAIL". If it is there, it means a deadlock; please check the ST22 dump.
    You can use the PSA:
    1) Go to the 8<ODS name> InfoSource and copy the delta InfoPackage.
    2) In Processing, check the option 'PSA and then subsequently into data targets'.
    3) Schedule the InfoPackage manually.
    Hope it helps.
    regards
    AK

  • Loading from ODS to Cube in process chain

    Hi Experts,
    How can I do a full load from an ODS to a cube when using further processing in a process chain? Your help is much appreciated.
    Thanks,
    Bill

    Hi,
    You can use a DTP for this:
    Create a transformation between the DSO and the cube.
    Create a DTP and run it.
    These earlier threads cover similar cube-to-cube load steps:
    Loading data from one cube to another cube
    Cube to cube data loading
    How to upload data from cube to cube
    Can we pull data from one cube to another cube
    Data Load Steps:
    Reading data from another cube
    Hope this helps.
    Thanks,
    JituK

  • Can I do Parallel Full loads from ODS to Cube.

    Hi,
       Usually I do a single full update load from the ODS to the cube. To speed up the process, can I do parallel full update loads from the ODS to the cube?
    Please advise.
    Thanks, Vijay,

    Assuming that the only connection we are talking about is between a single ODS and a single Cube.
    I think the only time you could speed anything up is in a full drop-and-reload scenario. You could create multiple InfoPackages based on selections and execute them simultaneously.
    If the update is a delta there is really no way to do it.
    How many records are we talking about? Is there logic in the update rule?

  • Summing up key figure in Cube - data load from ODS

    Hello Gurus,
    I am doing a data load from ODS to cube.
    The records in the ODS are at line-item level, and they all need to be summed up at header level. There is only one key figure. The ODS has header and line-item fields.
    I am loading only the header field (and not the item field) from the ODS to the cube.
    I am expecting only one record in the cube (covering all the item-level records) with the key figure summed up, but that is not what I see.
    Can anyone please explain how to achieve it.
    I promise to reward points.
    =====
    Example to elaborate my point.
    In ODS:
    Header-field   Item-field   Quantity
    123            301          10
    123            302          20
    123            303          30
    Expected record in Cube:
    Header-field   Quantity
    123            60
    ====================
    Regards,
    Pramod.

    Hello Oscar and Paolo.
    Thanks for the reply. I am using BW 7.0
    Paolo suggested:
    >> If you don't add the item number to the cube and set the quantity to addition in the update rules from ODS to cube, it works.
    I did that, but I still get 3 records in the cube.
    Oscar suggested:
    >> What kind of aggregation do you have for your key figure in the update rules (update or no change)?
    It is "summation", and it cannot be changed (or at least I do not know how to change it).
    There are other dimensions in the cube which correspond to fields in the ODS, but I mentioned just these two (header and item level) for simplicity.
    Can you please help?
    Thank you.
    Pramod.
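    If you need the records physically collapsed to header level before they reach the cube, rather than relying on query-time aggregation, one option is to aggregate the data package in a start routine. This is only a sketch against the BW 3.x update-rule structures shown elsewhere on this page; ITEM_FLD stands in for your real item field and is an assumption, not a name from this thread:
    * Sketch: collapse the data package to header level before the update rules run.
      data: lt_sum like DATA_PACKAGE occurs 0 with header line.
      loop at DATA_PACKAGE.
    *   drop the line-item key and the numeric admin field so that COLLECT
    *   groups purely by the remaining (header-level) character fields
        clear: DATA_PACKAGE-ITEM_FLD,
               DATA_PACKAGE-RECNO.
    *   COLLECT sums the numeric fields of rows whose character-type
    *   fields are identical, i.e. the quantity is added up per header
        collect DATA_PACKAGE into lt_sum.
      endloop.
      DATA_PACKAGE[] = lt_sum[].
    Note that even without the item field, the cube's F fact table can physically hold one row per loaded record until the requests are compressed; queries aggregate them either way, so also check whether the three rows you see actually differ in some other characteristic.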

  • Index for loads from ODS To Cube

    The load from ODS to cube is taking a long time. In the start routine, another ODS is looked up; the lookup keys are, say, X and Y.
    There is already an index on keys X, Y and Z.
    Will this index be used for the select on that ODS, or do I need to create a new index with only the X and Y keys?
    Thnx

    When you are running the start routine, run an SQL trace (ST05); that will tell you whether the index is being used.
    Arun
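    As background: most databases can use an index on (X, Y, Z) for a selection that restricts only X and Y, because those are the leading columns of the index, and the ST05 explain plan will confirm whether that happens. Independently of the index question, the usual way to cut the cost of this kind of start-routine lookup is to read the lookup ODS once per data package into an internal table instead of selecting per record. A rough sketch; /BIC/AZLOOKUP00 and the fields X, Y, RESULT are placeholders for illustration:
    * Sketch: buffer the lookup ODS once per data package.
      data: begin of lt_lkp occurs 0,
              x(10)      type c,
              y(4)       type c,
              result(18) type c,
            end of lt_lkp.
      if not DATA_PACKAGE[] is initial.
    *   one database round trip instead of one SELECT per record
        select x y result from /BIC/AZLOOKUP00
               into table lt_lkp
               for all entries in DATA_PACKAGE
               where x = DATA_PACKAGE-x
                 and y = DATA_PACKAGE-y.
      endif.
      sort lt_lkp by x y.
      loop at DATA_PACKAGE.
        read table lt_lkp with key x = DATA_PACKAGE-x
                                   y = DATA_PACKAGE-y
                          binary search.
        if sy-subrc = 0.
          DATA_PACKAGE-result = lt_lkp-result.
          modify DATA_PACKAGE.
        endif.
      endloop.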

  • I have a problem when buying from within the game

    I have a problem when buying from within the game.
    I used Visa.

    All games? Settings > General > Restrictions has an option to prevent In-App Purchases. Is that on?

  • Load from ODS into InfoCube gives TIME-OUT runtime error after 10 minutes ?

    Hi all,
       We have a full load from an ODS into an InfoCube, and it was working fine until last week with up to 50,000 records. Now we have around 70,000+ records, and it started failing with a TIME_OUT runtime error.
       The following is from the short dump (ST22):
       The system profile parameter "rdisp/max_wprun_time" contains the maximum runtime of a program. The current setting is 600 seconds. Once this time limit has been exceeded, the system tries to terminate any SQL statements that are currently being executed and tells the ABAP processor to terminate the current program.
      The following are from the ROIDOCPRMS table:
       MAXSIZE (in KB) : 20,000
       Frequency       : 10
       Max Processes   : 3
      When I check the data packages under the 'Details' tab in the monitor, there are four data packages and the first three have 24,450 records each. I right-click each data package and select 'Manual Update' to load from the PSA. When this manual update takes more than 10 minutes, it fails with TIME_OUT again.
      How can I fix this problem, please?
    Thanks,
    Venkat.

    Hello A.H.P,
    The following is the Start Routine:
    PROGRAM UPDATE_ROUTINE.
    *$*$ begin of global - insert your declaration only below this line  *-*
    TABLES: /BIC/AZCPR_O0400, /BIC/AZCPR_O0100, /BIC/AZCPR_O0200.
    DATA: material(18), plant(4).
    DATA: role_assignment like /BIC/AZCPR_O0100-CPR_ROLE, resource like
    /BIC/AZCPR_O0200-CPR_BPARTN.
    *$*$ end of global - insert your declaration only before this line   *-*
    * The following definition is new in BW 3.x:
    TYPES:
      BEGIN OF DATA_PACKAGE_STRUCTURE.
         INCLUDE STRUCTURE /BIC/CS8ZCPR_O03.
    TYPES:
         RECNO   LIKE sy-tabix,
      END OF DATA_PACKAGE_STRUCTURE.
    DATA:
      DATA_PACKAGE TYPE STANDARD TABLE OF DATA_PACKAGE_STRUCTURE
           WITH HEADER LINE
           WITH NON-UNIQUE DEFAULT KEY INITIAL SIZE 0.
    FORM startup
      TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
               MONITOR_RECNO STRUCTURE RSMONITORS " monitoring with record n
               DATA_PACKAGE STRUCTURE DATA_PACKAGE
      USING    RECORD_ALL LIKE SY-TABIX
               SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
      CHANGING ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
    *$*$ begin of routine - insert your code only below this line        *-*
    * fill the internal tables "MONITOR" and/or "MONITOR_RECNO",
    * to make monitor entries
    * NOTE: the SELECT SINGLEs below run once per record, and the COMMIT
    * WORK calls are unnecessary (only the in-memory data package changes).
       clear DATA_PACKAGE.
       loop at DATA_PACKAGE.
    *     look up material/plant for kit materials
          select single /BIC/ZMATERIAL PLANT
             into (material, plant)
             from /BIC/AZCPR_O0400
             where CPR_EXT_ID = DATA_PACKAGE-CPR_EXT_ID
             and ( MATL_TYPE = 'ZKIT' OR MATL_TYPE = 'ZSVK' ).
           if sy-subrc = 0.
              DATA_PACKAGE-/BIC/ZMATERIAL = material.
              DATA_PACKAGE-plant = plant.
              modify DATA_PACKAGE.
              commit work.
           endif.
    *     look up the role assignment, then the resource for that role
           select single CPR_ROLE into (role_assignment)
                         from /BIC/AZCPR_O0100
                         where CPR_GUID = DATA_PACKAGE-CPR_GUID.
            if sy-subrc = 0.
              select single CPR_BPARTN into (resource)
                         from /BIC/AZCPR_O0200
                         where CPR_ROLE = role_assignment
                         and CPR_EXT_ID = DATA_PACKAGE-CPR_EXT_ID.
                   if sy-subrc = 0.
                      DATA_PACKAGE-CPR_ROLE = role_assignment.
                      DATA_PACKAGE-/BIC/ZRESOURCE = resource.
                      modify DATA_PACKAGE.
                      commit work.
                   endif.
              endif.
           clear DATA_PACKAGE.
           endloop.
    * if ABORT is not equal to zero, the update process will be canceled
      ABORT = 0.
    *$*$ end of routine - insert your code only before this line         *-*
    Thanks,
    Venkat.
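    For reference: the routine above issues up to three SELECT SINGLEs per record plus COMMIT WORK calls inside the loop, so its runtime grows with the package size, which fits the jump from 50,000 to 70,000+ records. A common rewrite is to prefetch each lookup ODS into an internal table once per data package and read from memory inside the loop. The sketch below reuses the table and field names from the routine above, but it is untested and only illustrates the pattern:
    * Sketch: same lookups as above, with one database round trip per table.
      data: lt_o04 like /BIC/AZCPR_O0400 occurs 0 with header line,
            lt_o01 like /BIC/AZCPR_O0100 occurs 0 with header line,
            lt_o02 like /BIC/AZCPR_O0200 occurs 0 with header line.
      if not DATA_PACKAGE[] is initial.
        select * from /BIC/AZCPR_O0400 into table lt_o04
               for all entries in DATA_PACKAGE
               where CPR_EXT_ID = DATA_PACKAGE-CPR_EXT_ID
                 and ( MATL_TYPE = 'ZKIT' or MATL_TYPE = 'ZSVK' ).
        select * from /BIC/AZCPR_O0100 into table lt_o01
               for all entries in DATA_PACKAGE
               where CPR_GUID = DATA_PACKAGE-CPR_GUID.
        select * from /BIC/AZCPR_O0200 into table lt_o02
               for all entries in DATA_PACKAGE
               where CPR_EXT_ID = DATA_PACKAGE-CPR_EXT_ID.
      endif.
      sort: lt_o04 by CPR_EXT_ID,
            lt_o01 by CPR_GUID,
            lt_o02 by CPR_ROLE CPR_EXT_ID.
      loop at DATA_PACKAGE.
        read table lt_o04 with key CPR_EXT_ID = DATA_PACKAGE-CPR_EXT_ID
             binary search.
        if sy-subrc = 0.
          DATA_PACKAGE-/BIC/ZMATERIAL = lt_o04-/BIC/ZMATERIAL.
          DATA_PACKAGE-PLANT          = lt_o04-PLANT.
        endif.
        read table lt_o01 with key CPR_GUID = DATA_PACKAGE-CPR_GUID
             binary search.
        if sy-subrc = 0.
          read table lt_o02 with key CPR_ROLE   = lt_o01-CPR_ROLE
                                     CPR_EXT_ID = DATA_PACKAGE-CPR_EXT_ID
               binary search.
          if sy-subrc = 0.
            DATA_PACKAGE-CPR_ROLE       = lt_o02-CPR_ROLE.
            DATA_PACKAGE-/BIC/ZRESOURCE = lt_o02-CPR_BPARTN.
          endif.
        endif.
    *   no COMMIT WORK: the routine only changes the data package in memory
        modify DATA_PACKAGE.
      endloop.
    If the load still approaches the 600-second limit, reducing MAXSIZE in ROIDOCPRMS (smaller data packages) or having Basis raise rdisp/max_wprun_time are the other usual levers.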

  • Data loading from flat file to cube using bw3.5

    Hi Experts,
                       Kindly give me the detailed steps, with screens, for data loading from a flat file to a cube using BW 3.5.
           Please.

    Hi,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                                c.      Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
    Say you chose the ; character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
    If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45".
    In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
