Inventory cube load: can I use full load?

hi,
I have an inventory cube with non-cumulative key figures. Can I use the full load option, or do I have to load with delta for the non-cumulative KFs?
Regards,
Andrzej

Hi Aksik,
The load type will not matter for the aggregation: you can load by full or delta.
The reports, however, have to be created with this in mind so that the figures come out correctly. Specify the required aggregation in the properties of the key figure in the query. If required, use an ODS in the data flow.
regards
Happy Tony

Similar Messages

  • Can I use full load for daily loads in LO Cockpit extraction?

    I am using LO Cockpit extraction for 2LIS_11_VAITM with a delta load for the daily loads, and I have the following doubt:
    Can I use a full load for the daily loads instead of delta in LO Cockpit extraction?
    My understanding is that a full load always takes data from the setup tables in ECC, and the setup tables will not have delta data.
    Please reply. Thanks.

    You are right about the understanding that a full load (at least for the extractor at hand) brings data from the setup tables. You are also right that delta records are not loaded into the setup tables.
    So if you plan to do full loads every day, you have to fill the setup tables every day. It is at this juncture that I have to remind you that filling the setup tables requires a downtime of the system, to ensure that no changes/creates happen during the process.
    Hence performing a full load is not a very good approach to go by, especially when SAP has made the job easy by doing all the dirty work of identifying changes and writing them to the extraction queue.
    Hope this helps!

  • Why can't my iPad 4 (Wi-Fi only) use the full functions of Apple Maps?

    Hi, I have an iPad 4, but I cannot use the full functions of Apple Maps. When I use GPS navigation it does not speak; it only bolds the road you set. That's it. So please, someone help me out. Or does GPS only work on the Wi-Fi + Cellular model?

    Only the cellular iPads have a built-in GPS chip, the wifi-only models don't. There are some external GPS units that apparently work with it e.g. Bad Elf GPS.

  • How to load date column using sql loader

    Hi,
    I am trying to load a file using SQL*Loader. The date value in the file is '2/24/2009 8:23:05 pm'.
    In the control file, for this column I specified:
    rec_date DATE "mm/dd/yyyy hh:mi:ss pm"
    But I am getting the following error:
    not a valid month.
    Thanks
    sudheer

    Hi,
    Use this example as reference:
    CTL file:
    LOAD DATA
    INFILE 'test.txt'
    BADFILE 'test.bad'
    TRUNCATE INTO TABLE T3
    FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '|' TRAILING NULLCOLS
    (dt_date DATE "mm/dd/yyyy hh:mi:ss pm")
    DAT file:
    2/24/2009 8:23:05 pm
    C:\ext_files>sqlldr hr/hr control=test.ctl
    SQL*Loader: Release 10.2.0.1.0 - Production on Wed Jul 1 20:35:35 2009
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Commit point reached - logical record count 1
    Connected to Oracle Database 10g Express Edition Release 10.2.0.1.0
    Connected as hr
    SQL> desc t3;
    Name    Type Nullable Default Comments
    DT_DATE DATE Y                        
    SQL> select to_char(dt_date, 'mm/dd/yyyy hh24:mi:ss') from t3;
    TO_CHAR(DT_DATE,'MM/DD/YYYYHH2
    02/24/2009 20:23:05
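    As a side note (not part of the original reply): if you want to sanity-check the mask outside the database, GNU date on most Linux systems can parse the same literal value, confirming that '8:23:05 pm' really means 20:23:05:

```shell
# Quick check, assuming GNU date is available: parse the sample value from the
# question and print it in 24-hour form, mirroring what the Oracle mask
# "mm/dd/yyyy hh:mi:ss pm" should produce.
date -d '2/24/2009 8:23:05 pm' '+%Y-%m-%d %H:%M:%S'
```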
    SQL>
    Regards,

  • Problem when loading xml file using sql loader

    I am trying to load data into table test_xml (xmldata XMLType).
    I have an XML file and I want the whole file to be loaded into a single column.
    I use the following control file, executed from the command prompt as follows:
    sqlldr $1@$TWO_TASK control=$XXTOP/bin/LOAD_XML.ctl direct=true
    LOAD DATA
    INFILE *
    TRUNCATE INTO TABLE test_xml
    xmltype(xmldata)
    FIELDS
    (ext_fname FILLER CHAR(100),
    xmldata LOBFILE (ext_fname) TERMINATED BY EOF)
    BEGIN DATA
    /u01/APPL/apps/apps_st/appl/xxtop/12.0.0/bin/file.xml
    The file is being loaded into the table perfectly.
    Unfortunately, I can't hardcode the file name, as it will change dynamically.
    so i removed the block
    BEGIN DATA
    /u01/APPL/apps/apps_st/appl/xxtop/12.0.0/bin/file.xml
    from the control file and tried to execute it, giving the file path on the command line as follows:
    sqlldr $1@$TWO_TASK control=$XXTOP/bin/LOAD_XML.ctl data=/u01/APPL/apps/apps_st/appl/xxtop/12.0.0/bin/file.xml direct=true;
    but strangely it's trying to load each line of the XML file into the table instead of the whole file.
    Please find the log of the program with error
    Loading of XML through SQL*Loader Starts
    SQL*Loader-502: unable to open data file '<?xml version="1.0"?>' for field XMLDATA table TEST_XML
    SQL*Loader-553: file not found
    SQL*Loader-509: System error: No such file or directory
    SQL*Loader-502: unable to open data file '<Root>' for field XMLDATA table TEST_XML
    SQL*Loader-553: file not found
    SQL*Loader-509: System error: No such file or directory
    [... the same three errors repeat for every remaining line of the XML file ...]
    Please help me: how can I load the full XML into a single column from the command line, without hardcoding the file name in the control file?

    "but strangely it's trying to load each line of xml file into table instead of whole file"
    Nothing strange: the data parameter specifies the file containing the data to load.
    If you use the XML file name here, SQL*Loader will try to interpret each line of the XML as a separate file path.
    The traditional approach to this is to have the filename stored in another file, say filelist.txt and use, for example :
    echo "/u01/APPL/apps/apps_st/appl/xxtop/12.0.0/bin/file.xml" > filelist.txt
    sqlldr $1@$TWO_TASK control=$XXTOP/bin/LOAD_XML.ctl data=filelist.txt direct=true;
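    The same idea can be wrapped in a small shell sketch that builds the file list on the fly; the path and the commented-out sqlldr call are placeholders for your own control file and connect string:

```shell
#!/bin/sh
# Build a one-line "list of files" data file, then point SQL*Loader's data=
# parameter at the list rather than at the XML itself. XML_FILE is whatever
# path your job computes at run time.
XML_FILE="/u01/APPL/apps/apps_st/appl/xxtop/12.0.0/bin/file.xml"
echo "$XML_FILE" > filelist.txt

# sqlldr $1@$TWO_TASK control=$XXTOP/bin/LOAD_XML.ctl data=filelist.txt direct=true
cat filelist.txt
```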

  • Delta load takes longer than Full load

    Currently we are doing a full load every time; it takes 15 minutes to complete the cube load. When I do a delta, it takes 40 minutes.
    I was advised to "Reorganize /BI0/F0FIGL_B10 to apply the values to all the table/index blocks".
    If I do this in Quality and Production, will it have a huge impact on both boxes?
    Please let me know whether this is a correct solution. I am expecting BW expert feedback on this.
    Thanks

    Hi,
    first of all you need to identify at which stage the time is consumed. Is it at extraction time or the posting time to PSA, ODS, Cube or maybe in some routines. You can get this info in the monitor of the request by checking the timestamps of the entries.
    In your case I think it will be at extraction time, and the best way to find out what's going on is to schedule an upload and do an SQL trace of the extraction via transaction ST05. Check the trace list for select statements with a long response time and check the indexes on the tables involved. You might need to create some indexes.
    regards
    Siggi
    PS: I guess the reason is that, in the case of a delta, some change documents have to be read by the extractor.

  • Delete and full load into ODS versus Full load into ODS

    Can someone please advise: in the context of a DSO being populated with the overwrite function, would the FULL LOAD in the following two options (1 and 2) take the same amount of time to execute, or would one take less time than the other? Please also explain the rationale behind your answer.
    1. Delete of the data target contents of ODS A, followed by a FULL LOAD into ODS A, followed by activation of the ODS
    2. Plain FULL LOAD into ODS A (i.e. without the preceding delete-data-target-contents step), followed by activation of the ODS
    Repeating the question: will the FULL LOAD in 1 take the same time as the FULL LOAD in 2, and if one takes less time than the other, why so?
    Context: a normal 3.5 ODS with separate change log, new and active record tables.

    Hi,
    I think the main difference is that with the process "delete and full load" you will only get the full-loaded data inside the DSO. With a plain full load, the system uses the modify logic: overwrite for all datasets already in the DSO, and insert for all new datasets which are not yet inside the DSO.
    Regards
    Juergen

  • Problem loading XML-file using SQL*Loader

    Hello,
    I'm using 9.2 and trying to load an XML file using SQL*Loader.
    Loader control-file:
    LOAD DATA
    INFILE *
    INTO TABLE BATCH_TABLE TRUNCATE
    FIELDS TERMINATED BY ','
    (FILENAME CHAR(255),
    XML_DATA LOBFILE (FILENAME) TERMINATED BY EOF)
    BEGINDATA
    data.xml
    The BATCH_TABLE is created as:
    CREATE TABLE BATCH_TABLE (
    FILENAME VARCHAR2 (50),
    XML_DATA SYS.XMLTYPE ) ;
    And the data.xml contains the following lines:
    <?xml version="2.0" encoding="UTF-8"?>
    <!DOCTYPE databatch SYSTEM "databatch.dtd">
    <batch>
    <record>
    <data>
    <type>10</type>
    </data>
    </record>
    <record>
    <data>
    <type>20</type>
    </data>
    </record>
    </batch>
    However, the sqlldr gives me an error:
    Record 1: Rejected - Error on table BATCH_TABLE, column XML_DATA.
    ORA-21700: object does not exist or is marked for delete
    ORA-06512: at "SYS.XMLTYPE", line 0
    ORA-06512: at line 1
    If I remove the first two lines
    "<?xml version="2.0" encoding="UTF-8"?>"
    and
    "<!DOCTYPE databatch SYSTEM "databatch.dtd">"
    from data.xml, everything works and the contents of data.xml are loaded into the table.
    Any idea what I'm missing here? Likely the problem is with special characters.
    Thanks in advance,

    I'm able to load your file just by removing the second line <!DOCTYPE databatch SYSTEM "databatch.dtd">. I don't have your DTD file, so I skipped that line. Can you check whether the problem is with your DTD?

  • Loading XML file using sql*loader (10g)

    Hi Everyone....
    I have a major problem. I have a 10g database and I need to use SQL*Loader to load a large XML file into it. This file contains records for many, many customers. This will be done several times and the number of records will vary. I want to use SQL*Loader to load into a staging table, BUT SPLIT THE RECORDS OUT and add a sequence. I am having two problems.
    In the 10g documentation, when you want to split the records you use the BEGINDATA clause and indicate something (like a 0) for each instance of a record. Well, my first file has 3,722 records in it. How do I know the number of records in each XML file?
    The second problem is that, because this is XML, I thought that I could use ENCLOSED BY. But the start tag has an attribute (a counter) in it, so SQL*Loader doesn't recognize the starting tag. (i.e.: start tag is: <CustomerRec number=1>
    end tag is: </CustomerRec> )
    So I used TERMINATED BY '</CustomerRec>'. This will split out the records, but it will NOT include the ending tag </CustomerRec>, and when I use extract, I receive an XML parsing error.
    I have tried to get the ending tag using CONTINUEIF and SKIP, but those options are for "records" and not lines.
    Does anyone have any ideas for the two problems above? I am at my wits' end and I need to finish this ASAP. Any help would be appreciated.
    Thank you!

    Sorry... here is an example of what my control file looks like. At the end, I have 92 "0"s, but remember, I have 3722 records in this first file.
    LOAD DATA (SKIP 1)
    INFILE *
    INTO TABLE RETURN_DATA_STG
    TRUNCATE
    XMLType(xmldata)
    FIELDS
    (fill FILLER CHAR(1),
    loadseq SEQUENCE(MAX,1),
    xmldata LOBFILE (CONSTANT F001AB.XML)
    TERMINATED BY '</ExtractObject>')
    ------ ":xmldata||'</ExtractObject>'"
    BEGINDATA
    0
    0
    0
    0
    0
    0

  • Need suggestions on loading 5000+ files using sql loader

    Hi Guys,
    I'm writing a shell script to load more than 5000 files using sql loader.
    My intention is to load the files in parallel. When I checked the maximum number of sessions in v$parameter, it is around 700.
    Before starting the data load, programmatically I am getting the number of current sessions and maximum number of sessions and keeping free 200 sessions without using them (max. no. of sessions minus 200 ) and utilizing the remaining ~300 sessions to load the files in parallel.
    Also I am using a "wait" option to make the shell wait until the 300 concurrent SQL*Loader processes complete before moving further.
    Is there any way to make it more efficient? Also is it possible to reduce the wait time without hard coding the seconds (For Example: If any of those 300 sessions becomes free, assign the next file to the job queue and so on..)
    Please share your thoughts on this.
    Thanks.

    Manohar wrote:
    I'm writing a shell script to load more than 5000 files using sql loader.
    My intention is to load the files in parallel. When I checked the maximum number of sessions in v$parameter, it is around 700.
    Concurrent load, you mean? Parallel processing implies taking a workload, breaking it up into smaller workloads, and doing those in parallel. This is what the Parallel Query feature does in Oracle.
    SQL*Loader does not do that for you. It uses a single session to load a single file. Making it run in parallel requires manually starting multiple loader sessions and performing concurrent loads.
    Have a look at Parallel Data Loading Models in the Oracle® Database Utilities guide. It goes into detail on how to perform concurrent loads. But you need to parallelise that workload yourself (as explained in the manual).
    Before starting the data load, programmatically I am getting the number of current sessions and maximum number of sessions and keeping free 200 sessions without using them (max. no. of sessions minus 200 ) and utilizing the remaining ~300 sessions to load the files in parallel.
    Also I am using a "wait" option to make the shell to wait until the 300 concurrent sql loader process to complete and moving further.
    Is there any way to make it more efficient? Also, is it possible to reduce the wait time without hardcoding the seconds? (For example: if any of those 300 sessions becomes free, assign the next file to the job queue, and so on.)
    Consider doing it the way that Parallel Query does (as I've mentioned above). Take the workload (all files). Break the workload up into smaller sub-workloads (e.g. 50 files to be loaded by a process). Start 100 processes in parallel and provide each one with a sub-workload to do (100 processes, each loading 50-odd files).
    This is a lot easier to manage than starting, for example, 5000 load processes and then trying some kind of delay method to ensure that they don't all hit the database at the same time.
    I'm loading about 100+ files (3+ million rows) every 60 seconds 24x7 using SQL*Loader. Oracle is quite scalable and SQL*Loader quite capable.
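    A minimal shell sketch of that sub-workload scheme, with a tiny demo file list, a reduced chunk size for illustration, and echo standing in for the real sqlldr invocation:

```shell
#!/bin/sh
# Split the full workload (a list of files) into fixed-size chunks and run
# one background worker per chunk. Each worker processes its chunk serially;
# the chunks themselves run concurrently.
printf 'file%03d.dat\n' 1 2 3 4 5 6 > all_files.txt   # demo list of 6 files
split -l 2 all_files.txt chunk.                       # 2 files per sub-workload

for chunk in chunk.*; do
  (
    while read -r f; do
      # sqlldr user/pass control=load.ctl data="$f" silent=header,feedback
      echo "loaded $f"
    done < "$chunk"
  ) > "$chunk.log" &
done
wait                          # block until every sub-workload has finished
cat chunk.*.log | wc -l       # one line per input file processed
```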

  • Can I use property loader in a main sequence to load properties in subsequence?

    Hi, I have been trying to use the property loader to load test limits and local variables into subsequences from the main sequence.
    I can export all the properties for my main sequence and all the subsequences contained within it by selecting <all sequences> in the export function.
    When I try to load the exported file back in using the property loader, I get different errors depending on the format I exported/imported it with.
    For text or CSV files I get error -17100:
    "The file format is incorrect near the section 'StationGlobals'.  Make sure that you are using start and end markers correctly."
    For an Excel format I get error -18:
    "Property loader step failed to import or export properties.
    310 property value(s) were found.
    43 property value(s) were imported from 920 row(s) of data"
    There is nowhere near 920 rows of data or 320 properties in the exported file.
    If I use the property loader to load properties in just the main sequence it works fine. Is there extra formatting I need to do to the file before importing it, or is it not possible to load properties into a subsequence from a property loader in the main sequence?

    Hi,
    I have tried several sequences and built the property loader file using the export tool.
    Moving the End_Mainsequence marker to the bottom did not help.
    I can load values into a single sequence with no problem; it is only when I try to load properties into a subsequence from the main sequence that I have issues.
    Attached is a simpler example of what I am trying to achieve.
    Kind regards,
    Hugo
    Attachments:
    Sequence File 2.seq (9 KB)
    Test.csv (2 KB)

  • SQL*Loader: Can one use '||' as delimiter...?

    Hi,
    Our data contains '|' and all other special characters available on keyboard.
    So we decided to use '||' as the delimiter. When we do so, SQL*Loader puts an extra '|' at the end of the line.
    We could correct it by putting the delimiter ('||') at the end of each line... Well, we can change the extract program, but is there a way (some option somewhere) to avoid that '|' without changing our extract program?
    As of now, in the .ctl file, we use the TRAILING NULLCOLS option and do not have a delimiter at the end of the line.
    Thanks,
    Sachin

    Thanks. On that basis I think I'll just reconstruct the record during the procedure, because for several million rows daily that is a lot of wastage, in my opinion.
    Thanks for your time anyway.

  • How can I use sql loader to load a text file into a table

    Hi, I need to load a text file that has tab-delimited records into a table. How would I be able to use SQL*Loader to do this? I am using the Korn shell. I am very new at this, so any helpful examples or documentation would be much appreciated. I would love to see some examples to help me understand, if possible. I need help! Thanks a lot!

    You should check out the documentation on SQL*Loader in the online Oracle document titled Utilities. Here's a link to the 9iR2 version of it: http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96652/part2.htm#436160
    Hope this helps.
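    As a starting point, a minimal control file for a tab-delimited file can be generated straight from the Korn shell. The table EMP_STG and its columns here are made-up names for illustration, so substitute your own:

```shell
#!/bin/sh
# Write a minimal SQL*Loader control file for a tab-delimited text file.
# X'09' is the tab character; TRAILING NULLCOLS tolerates short lines.
cat > load_tab.ctl <<'EOF'
LOAD DATA
INFILE 'data.txt'
APPEND INTO TABLE EMP_STG
FIELDS TERMINATED BY X'09' TRAILING NULLCOLS
(empno, ename, sal)
EOF

# Then run the load (connect string is a placeholder):
# sqlldr scott/tiger control=load_tab.ctl log=load_tab.log
cat load_tab.ctl
```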

  • What tool can I use to load CD's made from a PC to my Mac for use with iMovie?

    I have digitized my old VCR home movies onto a CD, but did that on a PC before I got my iMac. 
    I am keen to transfer the material into iMovie for editing and ultimately producing a DVD, but my Mac
    doesn't recognize the CD when I insert it. 
    Is there an online tool, or perhaps some software, that would convert these videos?
    Any help would be greatly appreciated.

    You need to convert the VOB files in the TS-Folder of the DVD back to DV which iMovie is designed to handle. For that you need mpegStreamclip:
    http://www.squared5.com/svideo/mpeg-streamclip-mac.html
    which is free, but you must also have the  Apple mpeg2 plugin :
    http://store.apple.com/us/product/D2187Z/A/quicktime-mpeg-2-playback-component-for-mac-os-x
    (unless you are running Lion, in which case see below)
    which is a mere $20.
    Another possibility is to use DVDxDV:
    http://www.dvdxdv.com/NewFolderLookSite/Products/DVDxDV.overview.htm
    which costs $25.
    For the benefit of others who may read this thread:
    Obviously the foregoing only applies to DVDs you have made yourself, or other home-made DVDs that have been given to you. It will NOT work on copy-protected commercial DVDs, which in any case would be illegal.
    And from the TOU of these forums:
    Keep within the Law
    No material may be submitted that is intended to promote or commit an illegal act.
    Do not submit software or descriptions of processes that break or otherwise ‘work around’ digital rights management software or hardware. This includes conversations about ‘ripping’ DVDs or working around FairPlay software used on the iTunes Store.
    If you are running Lion or later:
    From the MPEG Streamclip homepage
    The installer of the MPEG-2 Playback Component may refuse to install the component in Lion. Apple states the component is unnecessary in Lion onwards, however MPEG Streamclip still needs it. See this:
    http://support.apple.com/kb/HT3381
    To install the component in Lion, please download MPEG Streamclip 1.9.3b7 beta above; inside the disk image you will find the Utility MPEG2 Component Lion: use it to install the MPEG-2 Playback Component in Lion. The original installer's disk image (QuickTimeMPEG2.dmg) is required.
    The current versions of MPEG Streamclip cannot take advantage of the built-in MPEG-2 functionality of Lion. For MPEG-2 files you still need to install the QuickTime MPEG-2 Playback Component, which is not preinstalled in Lion. (The same applies to Mountain Lion and Mavericks even though they have it preinstalled.) You don't have to install QuickTime 7.

  • Load Every Day 2LIS_03_BX (Full Load landscape) BI 7.0

    Hi,
    We want a simple landscape of material stocks from an SAP R/3 system to SAP BW. We would like to transfer the complete current material stock for a calendar day ("snapshots"), every day. We think that the correct DataSource for that is 2LIS_03_BX; in the "How To..." documents it appears that in an InfoPackage for this DataSource you can set an indicator:
    - Full update
    - Create Opening Balance
    But in BI 7.0 this option doesn't appear... we can only set the "Generate Initial Status" indicator in the InfoPackage...
    We tried another way, creating two different DTPs, one in "Full" mode and the other in "Initial Non-Cumulative for Non-Cumulative Values", but it doesn't work... The first time, we load all data from the stock initialization (R/3 transaction MCNB) and it is correct, but the next day the same data appears... Any idea? Is it necessary to execute the initialization process in R/3 every day, and after that the data load to BW?
    We know that we can create a DataSource on R/3 table MARD, but we want to check this landscape...
    Thanks in advance.
    Best Regards,
    Carlos.

    Hello,
    thanks for your answer, it was very useful for us.
    But we don't understand your second paragraph, where you say "use the snapshot capability"...
    We thought that with the solution you described in the first paragraph we could implement the "snapshot capability" without
    initializing the delta or using the LO cockpit and delta jobs. By deleting the setup table and then executing the setup every day, don't we obtain a snapshot scenario? Because we think that with delta queues it is not possible to execute the setup process; is that true?
    Another thing: I read the document you recommended, and it is an old version, because if you look at the InfoPackage to load BX, the full option appears (page eleven) and the DataSource is 2LIS_40_S278... for that reason we thought it is not useful for us... We read about the snapshot capability in "How To... Handle Inventory Management Scenarios in BW" (Version 1.3 - January 2005), but that is an old version too. We think that for BI 7.0 there is no documentation.
    Best regards,
    Carlos.
