Announcing 3 new Data Loader resources

There are three new Data Loader resources available to customers and partners.
•     Command Line Basics for Oracle Data Loader On Demand (for Windows) - This two-page guide (PDF) shows command line functions specific to Data Loader.
•     Writing a Properties File to Import Accounts - This 6-minute Webinar shows you how to write a properties file to import accounts using the Data Loader client. You'll also learn how to use the properties file to store parameters and how to reference it from the command line, creating a reusable library of files for importing or overwriting numerous record types (see the sketch after this list).
•     Writing a Batch File to Schedule a Contact Import - This 7-minute Webinar shows you how to write a batch file to schedule a contact import using the Data Loader client. You'll also learn how to reference the properties file.
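For a rough idea of the pattern the two Webinars cover, here is a minimal sketch. The property names, paths, and executable name below are hypothetical placeholders rather than the documented keys; check the guides above for the exact names your Data Loader version expects.

    # accountinsert.properties -- hypothetical property names, for illustration only
    operation=insert
    recordtype=account
    datafilepath=C:\loads\accounts.csv
    mapfilepath=C:\maps\account.map

A batch file can then pass the properties file to the client on the command line (for example, something like "dataloader.bat accountinsert.properties"), so one library of properties files can be reused across scheduled imports and overwrites.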
You can find these on the Data Import Resources page in the Training and Support Center.
•     Click the Learn More tab > Popular Resources > What's New > Data Import Resources
or
•     Simply search for "data import resources".
You can also find the Data Import Resources page on My Oracle Support (ID 1085694.1).

Unfortunately, I don't believe that approach will work.
We use a similar mechanism for some loads (using the bulk loader instead of web services) for the objects that have a large quantity of daily records.
There is a technique (though messy) that works fine. Since Oracle does not allow the "queueing up" of objects of the same type (you have to wait for "account" to finish before you load the next "account" file), you can monitor the .LOG file for the SBL 0363 error, which means you can't submit another file yet (typically because one is already being processed).
By monitoring for this error code in the log, you can sleep your process, then try again after a preset amount of time.
We use this to allow an UPDATE, followed by an INSERT, on the account... and then a similar technique so "dependent" objects have to wait for the prime object to finish processing.
PS... Normal Windows .BAT scripts aren't sophisticated enough to handle this. I would recommend either Windows PowerShell or C/Korn/Bourne shell scripts on Unix.
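As a rough illustration of that submit/check/sleep loop (shown here in Python rather than PowerShell; the log path, submit command, and exact log format are hypothetical placeholders):

    import subprocess
    import time

    LOG_FILE = "dataloader.log"            # hypothetical path to the submission log
    SUBMIT = ["submit_account_load.bat"]   # hypothetical wrapper that submits one file
    RETRY_SECONDS = 300                    # how long to sleep before retrying

    def queue_busy():
        # SBL 0363 in the log means a load of this record type is already
        # in flight, so another file cannot be submitted yet.
        try:
            with open(LOG_FILE, encoding="utf-8", errors="replace") as f:
                return "SBL 0363" in f.read()
        except FileNotFoundError:
            return False

    while True:
        subprocess.run(SUBMIT, check=False)   # attempt the submission
        if not queue_busy():
            break                             # accepted; move on to the next file
        time.sleep(RETRY_SECONDS)             # queue busy: sleep, then try again

The same loop extends to dependent objects: hold their files until the prime object's load clears the queue.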
I hope that helps some.

Similar Messages

  • TS1474 My PC data corrupted, new data loaded; when I try to sync my iPhone, a message comes: you cannot authorize more than five computers

    After reformatting my PC and reloading all data, I tried to sync my iPhone to iTunes. A message comes up: you can't authorize more than five computers.

    In iTunes, select iTunes Store under STORE (left panel), select your Apple ID (top right).
    Sign in, and in Account Information, click the De-authorise All button.
    After that sign in your new computer.
    (Note: you can only do this ONCE a year)

  • Dense Restructure 1070020 Out of disk space. Can't create new data file

    During a Dense Restructure we receive: Error(1070020) Out of disk space. Cannot create a new [Data] file.
    Essbase 6.5.3 32-bit
    Windows 2003 32bit w/16GB RAM
    Database is on E: drive with 660GB space total, database is ~220GB.
    All cubes are unlimited
    Tried restoring from backup; same problem.
    Over the years the database was never recalculated, never exported and re-imported, never verified. Only new data was loaded and dense restructures run.
    Towards the end of the dense restructure (about 89 .pan temp files created against about 101 2GB .pag files), we get the error: Error(1070020) Out of disk space. Cannot create a new [Data] file.
    There are still several hundred GB of free space available, and we can write to this free space outside of the Essbase application within Windows.
    The server's file system is consistent and defragmented, and we can demonstrate use of the additional space. The hard drive controller and system do not report any hardware issues.
    Essbase.cfg file
    ; The following entry specifies the full path to JVM.DLL.
    JvmModuleLocation C:\Hyperion\Essbase\java\jre13\bin\hotspot\jvm.dll
    ;This statement loads the essldap.dll as a valid authentication module
    ;AuthenticationModule LDAP essldap.dll x
    DATAERRORLIMIT 30000
    ;These settings are here to deal with error 1040004
    NETRETRYCOUNT 2000
    NETDELAY 1600
    App log
    [Sat Oct 17 13:59:32 2009]Local/removedfrompost/removedfrompost/admin/Info(1007044)
    Restructuring Database [removedfrompost]
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost/removedfrompost/admin/Error(1070020)
    Out of disk space. Cannot create a new [Data] file. [adIndNewFile] aborted
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost///Info(1008108)
    Essbase Internal Logic Error [7333]
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost///Info(1008106)
    Exception error log [C:\HYPERION\ESSBASE\app\removedfrompost\log00002.xcp] is being created...
    log00002.xcp
    Assertion Failure - id=7333 condition='((!( dbp )->bFatalError))'
    - line 11260 in file datbuffm.c
    - arguments [0] [0] [0] [0]
    Additional log info from database start to restructure failure
    Starting Essbase Server - Application [removedfrompost]
    Loaded and initialized JVM module
    Reading Application Definition For [removedfrompost]
    Reading Database Definition For [removedfrompost]
    Reading Database Definition For [TempOO]
    Reading Database Definition For [WTD]
    Reading Database Mapping For [removedfrompost]
    Writing Application Definition For [removedfrompost]
    Writing Database Definition For [removedfrompost]
    Writing Database Definition For [TempOO]
    Writing Database Definition For [WTD]
    Writing Database Mapping For [removedfrompost]
    Waiting for Login Requests
    Received Command [Load Database]
    Writing Parameters For Database [removedfrompost]
    Reading Parameters For Database [removedfrompost]
    Reading Outline For Database [removedfrompost]
    Declared Dimension Sizes = [289 125 2 11649 168329 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 6 ]
    Actual Dimension Sizes = [289 119 1 1293 134423 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 5 ]
    The number of Dynamic Calc Non-Store Members = [80 37 0 257 67 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [34391]
    Maximum Declared Blocks is [1960864521] with data block size of [72250]
    Maximum Actual Possible Blocks is [173808939] with data block size of [17138]
    Formula for member [4 WK Avg Total Sls U] will be executed in [CELL] mode
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [removedfrompost] can hold a maximum of [76] blocks.
    The Dyn.Calc.Cache for database [removedfrompost], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [removedfrompost]
    Reading Parameters For Database [removedfrompost]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [removedfrompost]...
    Data cache size ==> [3145728] bytes, [22] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\removedfrompost\removedfrompost.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Load Database]
    Writing Parameters For Database [TempOO]
    Reading Parameters For Database [TempOO]
    Reading Outline For Database [TempOO]
    Declared Dimension Sizes = [277 16 2 1023 139047 ]
    Actual Dimension Sizes = [277 16 1 1022 138887 ]
    The number of Dynamic Calc Non-Store Members = [68 3 0 0 0 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [4432]
    Maximum Declared Blocks is [142245081] with data block size of [8864]
    Maximum Actual Possible Blocks is [141942514] with data block size of [2717]
    Essbase needs to retrieve [1] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [TempOO] can hold a maximum of [591] blocks.
    The Dyn.Calc.Cache for database [TempOO], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [TempOO]
    Reading Parameters For Database [TempOO]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [TempOO]...
    Data cache size ==> [3145728] bytes, [144] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\TempOO\TempOO.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Load Database]
    Writing Parameters For Database [WTD]
    Reading Parameters For Database [WTD]
    Reading Outline For Database [WTD]
    Declared Dimension Sizes = [2 105 2 11649 158778 1279 609 971 531 208 78 2017 11 9 9 1 1 1 1 6 1 2 1 1 2 1 1 1 2 77 1 1 1 1 1 1 1 1 1 1 1 1 1 260 3 2954 52 6 39 4 1581 6 ]
    Actual Dimension Sizes = [1 99 1 1293 127722 1279 609 971 531 208 78 2017 11 9 9 1 1 1 1 6 1 2 1 1 2 1 1 1 2 77 1 1 1 1 1 1 1 1 1 1 1 1 1 260 3 2954 52 6 39 4 1581 5 ]
    The number of Dynamic Calc Non-Store Members = [0 29 0 257 57 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [99]
    Maximum Declared Blocks is [1849604922] with data block size of [420]
    Maximum Actual Possible Blocks is [165144546] with data block size of [70]
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [WTD] can hold a maximum of [26479] blocks.
    The Dyn.Calc.Cache for database [WTD], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [WTD]
    Reading Parameters For Database [WTD]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [WTD]...
    Data cache size ==> [3145728] bytes, [5617] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\WTD\WTD.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Set Database State]
    Writing Parameters For Database [removedfrompost]
    Writing Parameters For Database [removedfrompost]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [Set Database State]
    Writing Parameters For Database [TempOO]
    Writing Parameters For Database [TempOO]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [Set Database State]
    Writing Parameters For Database [WTD]
    Writing Parameters For Database [WTD]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [SetApplicationState]
    Writing Application Definition For [removedfrompost]
    Writing Database Definition For [removedfrompost]
    Writing Database Definition For [TempOO]
    Writing Database Definition For [WTD]
    Writing Database Mapping For [removedfrompost]
    User [admin] set active on database [removedfrompost]
    Clear Active on User [admin] Instance [1]
    User [admin] set active on database [removedfrompost]
    Received Command [Restructure] from user [admin]
    Reading Parameters For Database [Drxxxxxx]
    Reading Outline For Database [Drxxxxxx]
    Reading Outline Transaction For Database [Drxxxxxx]
    Declared Dimension Sizes = [289 126 2 11649 168329 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 6 ]
    Actual Dimension Sizes = [289 120 1 1293 134423 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 5 ]
    The number of Dynamic Calc Non-Store Members = [80 37 0 257 67 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [34680]
    Maximum Declared Blocks is [1960864521] with data block size of [72828]
    Maximum Actual Possible Blocks is [173808939] with data block size of [17347]
    Formula for member [4 WK Avg Total Sls U] will be executed in [CELL] mode
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [Drxxxxxx] can hold a maximum of [75] blocks.
    The Dyn.Calc.Cache for database [Drxxxxxx], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Reading Parameters For Database [Drxxxxxx]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Data cache size ==> [3145728] bytes, [22] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Performing transaction recovery for database [Drxxxxxx] following an abnormal termination of the server.
    Restructuring Database [removedfrompost]
    Out of disk space. Cannot create a new [Data] file. [adIndNewFile] aborted
    Essbase Internal Logic Error [7333]
    Exception error log [C:\HYPERION\ESSBASE\app\removedfrompost\log00002.xcp] is being created...
    Exception error log completed -- please contact technical support and provide them with this file
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING

    To avoid all these problems, as a best practice we don't allow dense restructures on cubes larger than 30 GB.
    As an alternative, we export the level-0 data, clear the database, and load the new data. After that, we aggregate the cube to store the data at all consolidation levels.
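    As a rough sketch of that export/clear/reload/aggregate cycle, here is how it might be driven through the MaxL shell (essmsh) from Python. The host, credentials, and the Sample.Basic names are placeholders, and the MaxL statements should be verified against your Essbase version:

        import subprocess
        import tempfile

        # Placeholder credentials and database names -- replace with your own.
        MAXL = """
        login 'admin' 'password' on 'localhost';
        export database 'Sample'.'Basic' level0 data to data_file 'level0.txt';
        alter database 'Sample'.'Basic' reset data;
        import database 'Sample'.'Basic' data from server data_file 'level0.txt'
            on error write to 'load.err';
        execute calculation default on 'Sample'.'Basic';
        logout;
        """

        # Write the MaxL to a temporary script and run it through the MaxL shell.
        with tempfile.NamedTemporaryFile("w", suffix=".msh", delete=False) as f:
            f.write(MAXL)
            script_path = f.name

        subprocess.run(["essmsh", script_path], check=True)

    Rebuilding this way also compacts the fragmentation that years of loads without an export/reload will have accumulated.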

  • Increase the number of background work processes for data load performance

    Hi all,
    There are 10 available background work processes in the BW system. We're doing a mass load to multiple ODS objects, but the system uses only 3 background processes. How can I increase the number of background work processes used for new data loads?
    I tried to change the number of processes with RSODSO_SETTINGS, but with no success. Are there any other settings that need to change?
    thanks,
    Yigit

    Hi Sankar,
    I entered the max proc. number into ROIDOCPRMS, but it doesn't make a difference. The system still uses only 3 background processes. RSCUSTA2 is replaced by RSODSO_SETTINGS in BI 7.0, and that transaction can only change the processes for data activation, SID generation, and rollback. I need to change the process numbers for data extraction.

  • Adding new date field to already loaded data target.

    Hi,
    We have a cube containing a date field, 0CALMONTH, and data is being loaded to the cube. Now a new date field (0FISCYEAR) has been added. There is no data coming from the source system for this field. Can anyone tell me how to include this field and load data into it?
    with regards,
    sreekanth.

    Sreekanth,
       If the record creation date is the right field for deriving the fiscal year, why not derive the year from the date by using automatic time conversion in the update rules?
      For existing data, you can do a loop-back load to populate the field. See the doc below for more info:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f421c86c-0801-0010-e88c-a07ccdc69601
    Hope it Helps
    Srini
    Message was edited by: Srini

  • HT1386 I have an older iPhone (3GS) and need to upgrade to a newer phone (4S). I need to get my NOTES, CALENDAR, CONTACTS, PICTURES, etc. backed up in iTunes so I can get that data loaded onto the new phone, but I'm not sure how to do that.

    I have an older iPhone (3GS) and need to upgrade to a newer phone (4S). I need to get my NOTES, CALENDAR, CONTACTS, PICTURES, etc. backed up in iTunes so I can get that data loaded onto the new phone, but I'm not sure how to do that. When I open iTunes it has a button that says "Back Up iPhone", but I'm not sure what that does. When I go into the sync options it says I have another user account and asks me if I want to merge or replace. I'm assuming it's trying to tell me I have an older iTunes Library, but I don't know that. Geez, maybe people over 60 shouldn't have iPhones; iTunes just baffles me.

    http://manuals.info.apple.com/en_US/iphone_user_guide.pdf

  • How to trigger process chain when datasource is loaded with new data? PUSH

    Hi all,
    Until now we have used the pull method to load data into BW, which is done manually, but we would like to work with the PUSH method, where an event is triggered whenever new data is loaded into the DataSource, which in turn triggers the process chain.
    How is this possible? Can we use a timestamp on the DataSource to trigger the event?
    rgds,
    wills

    Hi Geo,
    Thanks for your response. I appreciate it.
    The case is slightly different. I am working on Bank Analyzer data residing in a source system defined to load the results from the Results Database, a part of Bank Analyzer.
    If it were R/3 we would have the standard calling procedures, but the data is not in R/3; it's in Bank Analyzer.
    I am keen to find a procedure to push the data into BW automatically whenever an end-user execution is done at the BA level.
    Your help would be highly appreciated.
    thks,
    rgds.
    wills

  • Using spreadsheets to load new data to system

    Ok, I am GREEN as green can be here. I am a manager of a group of users that are having a mainframe system re-built in APEX, and we have a requirement to load data into the new system from both an Excel sheet and a Janus unit barcode reader. Well, the developers are saying this can't be done and that the REXX and CSP of our old system were more powerful than this new APEX.
    Can we do what our requirement asks, or are the developers right and APEX can't support external data loads from these sources?
    TIA,
    CPhilip


  • Data load component - add new column name alias

    Apex 4.2.2.
    Using the data load wizard, a list of columns and column name aliases has been created.
    When looking at the component (shared components / Data load tables / column name aliases) it is possible to edit and delete a column alias there. Unfortunately it does not seem possible to add a new alias. Am I overlooking something, or is there a workaround for this?

    Try this:
    REPORT ztest LINE-SIZE 80 MESSAGE-ID 00.
    * Buffer for user-name lookups, so each user ID is resolved only once.
    DATA: name_int TYPE TABLE OF v_usr_name WITH HEADER LINE.
    * Accounting document headers plus a derived user-name column.
    DATA: BEGIN OF it_bkpf OCCURS 0.
            INCLUDE STRUCTURE bkpf.
    DATA:   username LIKE v_usr_name-name_text,
          END   OF it_bkpf.
    * CORRESPONDING FIELDS is needed because it_bkpf carries an extra
    * trailing column that BKPF does not have.
    SELECT * FROM bkpf
      INTO CORRESPONDING FIELDS OF TABLE it_bkpf
      UP TO 1000 ROWS.
    * Collect the distinct user IDs that occur in the documents.
    LOOP AT it_bkpf.
      name_int-bname = it_bkpf-usnam.
      APPEND name_int.
    ENDLOOP.
    SORT name_int BY bname.
    DELETE ADJACENT DUPLICATES FROM name_int COMPARING bname.
    * Resolve each ID once via SELECT SINGLE, then serve repeats from the buffer.
    LOOP AT it_bkpf.
      READ TABLE name_int WITH KEY
        bname = it_bkpf-usnam
        BINARY SEARCH.
      IF sy-subrc = 0 AND name_int-name_text IS INITIAL.
        SELECT SINGLE name_text
          FROM v_usr_name
          INTO name_int-name_text
          WHERE bname = it_bkpf-usnam.
        MODIFY name_int INDEX sy-tabix.
      ENDIF.
      it_bkpf-username = name_int-name_text.
      MODIFY it_bkpf.
    ENDLOOP.
    Rob

  • Explorer/Polestar: Need to re-index every time new data was loaded?

    I have a question concerning the indexing functionality of BO Explorer/Polestar. It's clear that I need to re-index my InfoSpace every time the structure of the InfoSpace has changed (e.g. I added a new object from my universe). What I'm not sure about is whether I also need to re-index as soon as new data is loaded into the part of the warehouse my InfoSpace is based on. Example: I create an InfoSpace consisting of countries (UK, USA, Germany and Japan) and revenue, and index it. The next day a new country (France) is loaded into the DWH. Do I need to re-index the InfoSpace so that users can see "France" in BO Explorer?
    Thanks for your help!
    Agnes

    Hi Agnes,
    according to the Explorer documentation, new data is available only AFTER re-indexing:
    "Indexing refreshes the data and metadata in Information Spaces. After indexing, any new data on the corporate data providers upon which those Information Spaces are based becomes available for search and exploration."
    Regards,
    Stratos

  • Master Data Load for New Attribute

    Hi Users,
    We had to implement a separate load flow for a new field coming from R/3. This field was to be added to an existing master data object.
    I added a new display attribute to the existing 0GL_ACCOUNT master data object.
    This new attribute, along with some other existing fields, gets its data from another master data object, with an InfoSource in between because two transformations cannot be created for the same source and target.
    When I load the data I don't see data being populated for this new field. I did the attribute change run (ACR), checked the keys, etc.
    The source object has data, but after executing the DTP no data comes to this attribute. No routines or anything.
    Please suggest
    Regards
    Zabi

    Hi,
    The situation is:
    Field x from the source maps to
    1. field y (the existing field), and also to
    2. field z (the new attribute).
    Field y has to get updated for company code 10, for example, and field z for company code 30.
    Now, if I use the same flow and map field x to both y and z, overwriting happens: if code 10 has no value for x but code 30 does, y gets clobbered, which is not good.
    So if I use a separate flow with an InfoSource, I map only x to z. Then, after the loads, if no value went to y for code 10 in the first DTP, and code 30 has a value for x, only z is updated and y remains empty. (A small sketch of the overwrite problem follows below.)
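    To make the overwrite concrete, here is a toy sketch (hypothetical records and field names, not BW transformation code) of why one flow mapping x to both targets clobbers y, while guarded, separate flows do not:

        # Records meant for different targets: code 10 feeds y, code 30 feeds z.
        records = [
            {"comp_code": "10", "x": "A"},
            {"comp_code": "30", "x": "B"},
        ]

        # Single flow: x is mapped to both y and z unconditionally.
        master = {"y": None, "z": None}
        for rec in records:
            master["y"] = rec["x"]
            master["z"] = rec["x"]
        print(master)   # {'y': 'B', 'z': 'B'} -- the code-10 value in y is clobbered

        # Separate flows: each target is written only by the records meant for it.
        master = {"y": None, "z": None}
        for rec in records:
            if rec["comp_code"] == "10":
                master["y"] = rec["x"]
            if rec["comp_code"] == "30":
                master["z"] = rec["x"]
        print(master)   # {'y': 'A', 'z': 'B'}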

  • Does the Full load really remove previous requests and extract new data?

    We wonder whether a Full load removes the previous requests/data before bringing over the new data.
    For example, we extract data from an R/3 table using a Full load. On day 1, we load data to the R/3 table and extract it to BW, so BW contains only day-1 data. On day 2, we delete the R/3 table data and load day-2 data only, then extract to BW, still using a Full load. What should BW then contain? Both day-1 and day-2 data, or day-2 data only?
    Any answer?
    Thanks

    Hi Kevin,
      What data the cube should have is decided by the business requirement.
      The current data load will delete the existing data only if that option has been selected in the InfoPackage; otherwise the new request is appended.
    You may assign points if helpful ****
    Thanks,
    Raj

  • ANNOUNCEMENT - Lightroom Training courses in the UK - new dates

    Adobe® Photoshop® Lightroom Seminars
    Venue: Tackley, Oxfordshire, UK.
    NEW DATES AVAILABLE
    More information can be found by clicking here.
    Bookings can be made by clicking here.
    Half day Seminars
    Introduction to Adobe® Photoshop® Lightroom 
    Morning session Saturday 26th May 2007
    Afternoon session Saturday 26th May 2007
    Full day Seminars
    Adobe® Photoshop® Lightroom Workflow
    Saturday 23rd June 2007
    Sunday 1st July 2007
    Sid
    The LightroomExtra home page is right here.

    Angie Taylor is in Brighton. She is an excellent trainer.
    Home - angietaylor

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What query performance issues do we need to take care of? Please explain and let me know the T-codes. Urgent!
    What data-loading performance issues do we need to take care of? Please explain and let me know the T-codes. Urgent!
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (see the sketch after this list). When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
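    As a minimal illustration of the buffering idea in tip 11 (plain Python rather than ABAP; the lookup table and field names are invented for the example):

        # Toy stand-in for an expensive per-row database lookup.
        NAMES_DB = {"ALICE": "Alice Smith", "BOB": "Bob Jones"}
        db_hits = 0

        def fetch_name(user_id):
            global db_hits
            db_hits += 1
            return NAMES_DB.get(user_id, "")

        records = [{"usnam": u} for u in ["ALICE", "BOB", "ALICE", "BOB", "ALICE"]]

        # Buffered pattern: hit the "database" only on first sight of a key,
        # then serve repeated keys from the in-memory buffer.
        buffer = {}
        for rec in records:
            key = rec["usnam"]
            if key not in buffer:
                buffer[key] = fetch_name(key)
            rec["username"] = buffer[key]

        print(db_hits)   # 2 lookups for 5 records, instead of 5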
    Hope it Helps
    Chetan

  • Criticism of new data "optimization" techniques

    On February 3, Verizon announced two new network practices in an attempt to reduce bandwidth usage:
    Throttling data speeds for the top 5% of new users, and
    Employing "optimization" techniques on certain file types for all users, in certain parts of the 3G network.
    These were two separate changes, and this post only talks about (2), the "optimization" techniques.
    I would like to criticize the optimization techniques as being harmful to Internet users and contrary to long-standing principles of how the Internet operates. This optimization can lead to web sites appearing to contain incorrect data, web sites appearing to be out-of-date, and depending on how optimization is implemented, privacy and security issues. I'll explain below.
    I hope Verizon will consider reversing this decision, or if not, making some changes to reduce the scope and breadth of the optimization.
    First, I'd like to thank Verizon for posting an in-depth technical description of how optimization works, available here:
    http://support.vzw.com/terms/network_optimization.html
    This transparency helps increase confidence that Verizon is trying to make the best decisions for their users. However, I believe they have erred in those decisions.
    Optimization Contrary to Internet Operating Principles
    The Internet has long been built around the idea that two distant servers exchange data with each other by transmitting "packets" using the IP protocol. The headers of these packets contain the information required such that all the Internet routers located between these servers can deliver the packets. One of the Internet's operating principles is that when two servers set up an IP connection, the routers connecting them do not modify the data. They may route the data differently, modify the headers in some cases (like network address translation), or possibly, in some cases, even block the data--but not modify it.
    What these new optimization techniques do is intercept a device's connection to a distant server, inspect the data, determine that the device is downloading a file, and in some cases, to attempt to reduce bandwidth used, modify the packets so that when the file is received by the device, it is a file containing different (smaller) contents than what the web server sent.
    I believe that modifying the contents of the file in this matter should be off-limits to any Internet service provider, regardless of whether they are trying to save bandwidth or achieve other goals. An Internet service provider should be a common carrier, billing for service and bandwidth used but not interfering in any way with the content served by a web server, the size or content of the files transferred, or the choices of how much data their customers are willing to use and pay for by way of the sites they choose to visit.
    Old or Incorrect Data
    Verizon's description of the optimization techniques explains that many common file types, including web pages, text files, images, and video files will be cached. This means that when a device visits a web page, it may be loading the cached copy from Verizon. This means that the user may be viewing a copy of the web site that is older than what the web site is currently serving. Additionally, if some files in the cache for a single web site were added at different times, such as CSS files or images relative to some of the web pages containing them, this may even cause web pages to render incorrectly.
    It is true that many users already experience caching because many devices and nearly all computer browsers have a personal cache. However, the user is in control of the browser cache. The user can click "reload" in the browser to bypass it, clear the cache at any time, or change the caching options. There is no indication with Verizon's optimization that the user will have any control over caching, or even knowledge as to whether a particular web page is cached.
    Potential Security and Privacy Violations
    The nature of the security or privacy violations that might occur depends on how carefully Verizon has implemented optimization. But as an example of the risk, look at what happened with Google Web Accelerator. Google Web Accelerator was a now-discontinued product that users installed as add-ons to their browsers which used centralized caches stored on Google's servers to speed up web requests. However, some users found that on web sites where they logged on, they were served personalized pages that actually belonged to different users, containing their private data. This is because Google's caching technology was initially unable to distinguish between public and private pages, and different people received pages that were cached by other users. This can be fixed or prevented with very careful engineering, but caching adds a big level of risk that these type of privacy problems will occur.
    However, Verizon's explanation of how video caching works suggests that these problems with mixed-up files will indeed occur. Verizon says that their caching technology works by examining "the first few frames (8 KB) of the video". This means that if multiple videos are identical at the start, that the cache will treat them the same, even if they differ later on in the file.
    Although it may not happen very frequently, this could mean that if two videos are encoded in the same manner except for the fact that they have edits later in the file, that some users may be viewing a completely different version of the video than what the web server transmitted. This could be true even if the differing videos are stored at completely separate servers, as Verizon's explanation states that the cataloguing process caches videos the same based on the 8KB analysis even if they are from different URLs.
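    To see concretely why keying a cache on only the first 8 KB conflates different files, here is a small self-contained sketch (the prefix length and the use of a hash are my own illustration, not Verizon's actual implementation):

        import hashlib

        # Two "videos" that share their first 8 KB but differ afterwards --
        # e.g. the same footage with different edits later in the file.
        PREFIX = b"\x00" * 8192
        video_a = PREFIX + b"original ending"
        video_b = PREFIX + b"edited ending"

        def cache_key(data, prefix_len=8192):
            # A cache keyed only on the first prefix_len bytes, as the
            # description of the video cataloguing suggests.
            return hashlib.sha256(data[:prefix_len]).hexdigest()

        assert video_a != video_b                        # different files...
        assert cache_key(video_a) == cache_key(video_b)  # ...same cache entry
        print("a prefix-keyed cache cannot tell these videos apart")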
    Questions about Tethering and Different Devices
    Verizon's explanation says near the beginning that "The form and extent of optimization [...] does not depend on [...] the user's device". However, elsewhere in the document, the explanation states that transcoding may be done differently depending on the capabilities of the user's device. Perhaps a clarification in this document is needed.
    The reason this is an important issue is that many people may wish to know if optimization happens when tethering on a laptop. I think some people would view optimization very differently depending on whether it is done on a phone, or on a laptop. For example, many people, for, say, business reasons, may have a strong requirement that a file they downloaded from a server is really the exact file they think they downloaded, and not one that has been optimized by Verizon.
    What I would Like Verizon To Do
    With respect to Verizon's need to limit bandwidth usage or provide incentives for users to limit their bandwidth usage, I hope Verizon reverses the decision to deploy optimization and chooses alternate, less intrusive means to achieve their bandwidth goals.
    However, if Verizon still decides to proceed with optimization, I hope they will consider:
    Allowing individual customers to disable optimization completely. (Some users may choose to keep it enabled, for faster Internet browsing on their devices, so this is a compromise that will achieve some bandwidth savings.)
    Only optimizing or caching video files, instead of more frequent file types such as web pages, text files, and image files.
    Disabling optimization when tethering or using a Wi-Fi personal hotspot.
    Finally, I hope Verizon publishes more information about any changes they may make to optimization to address these and other concerns, and commits to customers and potential customers about their future plans, because many customers are in 1- or 2-year contracts, or considering entering such contracts, and do not wish to be impacted by sudden changes that negatively impact them.
    Verizon, if you are reading, thank you for considering these concerns.

    A very well written and thought-out article. And you're absolutely right: this "optimization" is exactly the reason Verizon is fighting the new net neutrality rules. Of course, Verizon itself (and its most ardent supporters on the forums) will fail to see the irony of requiring users to obtain an "unlimited" data plan, then complaining about data usage and trying to limit it artificially. It's like a hotel renting you a room for a week, then complaining you stayed 7 days.
    Of course, it was all part of the plan to begin with: people weren't buying the data plans (because they were such a poor value), so the decision was made to start requiring them. To make it more palatable, they called the plans "unlimited" (even though at one point unlimited meant limited to 5GB, but this was later dropped). Then, once the idea of mandatory data settles in, implement data caps with overages, which is what they were shooting for all along. AT&T has already leapt; Verizon has said they will, too.
