Data loading into BW based on selection

I have data for five evaluation criteria in total, such as Price, Quality, and Service, each with a score, and each criterion has its own subcriteria.
Now I have to pick only the evaluation criteria Price and Service, their subcriteria, and their scores; the criterion Quality and its subcriteria must not be loaded into BW.
How can this be done?
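One common way to enforce such a restriction (not spelled out in this thread) is to drop the unwanted criterion in a start routine before the data reaches the target. A minimal sketch, assuming a BI 7.0 transformation whose source structure contains a field EVAL_CRIT holding the criterion key; the field name and the literal values are illustrative only:

    " Keep only Price and Service; drop everything else, including Quality.
    " EVAL_CRIT and the values 'PRICE'/'SERVICE' are assumptions.
    DELETE source_package WHERE eval_crit <> 'PRICE'
                            AND eval_crit <> 'SERVICE'.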

Hi Roberto/Olivier,
What I have done is as follows.
Suppose there are 20 requests (not activated) in an ODS.
An ABAP routine for the data packet selection copies the data from the new table into an internal table; the internal table is then sorted by request number and data packet number, and duplicate entries are deleted.
Now we have 20 entries in the internal table. We then split these entries by data packet number so that each pair of entries shares the same request number but covers half of the data packets. For example, if request 'A' has 20 packets, the internal table holds two entries with request number 'A': the first covers data packets 1 to 10 and the second covers data packets 11 to 20.
I then included this routine in the selection of the InfoPackages; an ABAP routine for the request number can be written in the same way.
The DataSource of the cube is the new table of the ODS.
This code requires a lot of memory when the number of records is large, so it is better avoided; instead of copying all the data into an internal table, we can use the table RSODSACTREQ.
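A minimal sketch of the sorting and deduplication step described above. The internal table is assumed to have been filled with request number and data packet number, either from the ODS new table or from RSODSACTREQ; the type and field names are illustrative, not standard BW definitions:

    " lt_keys is assumed to be filled elsewhere with one line per
    " request/data packet combination (the fill step is omitted here).
    TYPES: BEGIN OF ty_key,
             request   TYPE c LENGTH 30,
             datapakid TYPE c LENGTH 6,
           END OF ty_key.

    DATA lt_keys TYPE STANDARD TABLE OF ty_key.

    SORT lt_keys BY request datapakid.
    DELETE ADJACENT DUPLICATES FROM lt_keys COMPARING request datapakid.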

Similar Messages

  • Data Load into the Cube based on Fiscal Year

    Hi All,
    I have been asked to load data into the cube from 3 different DataSources, but based on fiscal year: 2010, 2009, and so on.
    Any suggestions please...

    Hi Dear,
    Write the following code in the start routine of the update rules.
    In BW 3.x:
        DELETE DATA_PACKAGE WHERE calday LT '20090101' OR calday GT '20091231'.
    to load data only for 2009 (note that the two conditions must be combined with OR, not AND, since no record can lie both before 1 January and after 31 December of the same year). Do the same for 2010 with the corresponding dates.
    In BI 7.0, in the start routine of the transformation:
        DELETE SOURCE_PACKAGE WHERE calday LT '20090101' OR calday GT '20091231'.
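    For orientation, a sketch of where this line sits in a BI 7.0 start routine. The METHOD frame, its parameters, and the source package type are generated by BW when the routine is created, and the field name CALDAY assumes that 0CALDAY is mapped into the source structure; adjust both to your system:
        METHOD start_routine.
          " Only the filter logic is shown; the generated signature,
          " types and monitor handling are omitted. Dates are examples.
          DELETE source_package WHERE calday LT '20090101'
                                   OR calday GT '20091231'.
        ENDMETHOD.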
    Regards
    Obaid

  • Do we need to re-create data load rules if we upgrade from EBS 11 to 12?

    If so, please explain the reason. Thanks.

    If you upgrade from EBS 11 to EBS 12 you would need to create a new source system registration in ERPi.
    Once the new source system is created, you would then need to initialize the source system in ERPi.
    From there you would need to associate it with an import format and locations in ERPi, and your data load rules are then based on the location.

  • Automate EIS dim builds and data loads

    I want to automate the dimension builds and data loads from my ETL tool (DTS). I have not been able to find anything about scripting EIS automation in the documentation. Is there any?

    What you can do is go into the EIS metadata outline and create a member load and a data load script. Do this by selecting the Outline menu item, then select Member Load. Click Next; on that screen, select only "Save load script". Click the "Save scripts" button to give it a name, then click Finish. Repeat for the data load script. (If you are using ASO cubes, you must use separate scripts; you can't do both in one script.) Then create a batch file to run the member load and data loads. In DTS, use an Execute Process task to run the batch file.

  • Data load Tuning

    Hello All,
    What data load tuning measures can we apply when loading from R/3 to BW? Please help.
    Thanks,
    Suman

    Hi,
    To improve the data load performance
    1. If they are full loads, then check whether you can turn them into delta loads.
    2. Check whether complex routines/transformations are being performed in any layer. If so, see whether you can optimize that code with the help of an ABAP developer.
    3. Ensure that you are following the standard procedures in the chain, such as deleting indexes/secondary indexes before loading (a short sketch of doing this programmatically follows after this list).
    For example:
    1) Create Index
    2) Delete Index
    3) Aggregate Creation on Info Cube
    4) Compressing Info Cube data
    5) Rollup Data to Aggregates
    6) Partitioning infoCube
    7) Load Master data before loading Transactional Data
    8) Adjusting Datapackage size
    https://forums.sdn.sap.com/click.jspa?searchID=10049032&messageID=4373697
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    4. Check whether the system processes are free when this load is running
    5. Try making the load as parallel as possible if the load is happening serially. Remove PSA if not needed.
    6. Go to Manage ODS -> Activate -> Activate in parallel and increase the number of processes from there. For direct access, try transaction RSODSO_SETTINGS.
    7. Remove the BEx Reporting checkbox in the ODS if it is not required.
    8. When the load is not getting processed due to a huge volume of data, or too many records per data packet, try the following:
    1) Reduce the IDoc size to 8000 and the number of data packets per IDoc to 10. This can be done in the InfoPackage settings.
    2) Run the load only to the PSA.
    3) Once the load is successful, push the data to the targets.
    In this way you can overcome the issue.
    Pay attention to the data packet sizing and also to number range buffering, the PSA partition size, and the upload sequence, i.e. always load master data first, perform the change run, and then load the transaction data.
    Use InfoPackages with disjoint selection criteria to parallelize the data export.
    Complex database selections can be split into several less complex requests.
    Number Range Buffering Performance
    /thread/754694
    Review OSS Notes 857998 and 130253; the first note tells you how to find the dimensions and InfoObjects that need number range buffering.
    Check this doc on BW data load performance optimization
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    BI Performance Tuning
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    SAP Business Intelligence Accelerator: A High-Performance Analytic Engine for SAP NetWeaver Business Intelligence
    http://www.sap.com/platform/netweaver/pdf/BWP_AR_IDC_BI_Accelerator.pdf
    BI Performance Audit
    http://www.xtivia.com/downloads/Xtivia_BIT_Performance%20Audit.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10564d5c-cf00-2a10-7b87-c94e38267742
    https://websmp206.sap-ag.de/~form/sapnet?_SHORTKEY=01100035870000689436&
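    A hedged sketch of dropping and rebuilding InfoCube indexes around a load, as mentioned in point 3 above. The function module names RSDU_INFOCUBE_INDEXES_DROP / RSDU_INFOCUBE_INDEXES_CREATE and the I_INFOCUBE parameter reflect common practice and should be verified in SE37; in most projects this is done via process chain steps or the Manage InfoCube screen rather than custom code:
        DATA lv_cube TYPE rsinfocube VALUE 'ZSALES01'.   " hypothetical cube name

        " Drop the secondary indexes before the load ...
        CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
          EXPORTING
            i_infocube = lv_cube.

        " ... load the data here ...

        " ... and rebuild the indexes afterwards.
        CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_CREATE'
          EXPORTING
            i_infocube = lv_cube.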
    Thanks,
    JituK

  • Selection in data load from infoprovider

    Hi Guys,
    In BPC NW 7.5 we have to load data from an InfoProvider while allowing users to select data (for example on the TIME dimension) via a BPC prompt. We found two solutions that solve our problem only partially; in both, users have to modify the selections manually.
    SOLUTION 1:
    To load data we want to use the process chain /CPMB/INFOPROVIDER, but we know that it is not possible to insert selections in the /CPMB/INFOPROVIDER prompt (as described in this post: Re: Package LOAD INFOPROVIDER, Select input ENTITY).
    To select data we can use an intermediate InfoCube as a BW workaround (as described in this post: Re: BPC 7.5: Delta Load when loading from BI InfoProvider) so that we have a source with only the selected data. This could be done by a selection in the DTP between the source InfoCube and the intermediate InfoCube. This solution is not dynamic either; in this case users have to modify the DTP selection manually.
    How can we allow users to insert this selection in the DTP by a BPC prompt?
    SOLUTION 2:
    To select data we can use a transformation file inserting a selection like
    SELECTION = <Dimension1_techname>,<Dimension1_value>.
    It is not dynamic; in this case too, users have to modify the file selection manually.
    Do you know how to allow these selections via a BPC prompt, to avoid these manual changes?
    Do you know other solutions?
    Thank you for your support.

    Hi D-Mark,
    This is definitely a place where it would be nice to see some additional functionality added to BPC. Variable replacement in the transformation file based on the data manager prompt would probably be the best thing to have in the software.
    In any case, getting back to your question, manually modifying the transformation file selection is the most common practice on BPC projects. The blog linked by Naresh is a fairly elegant way to do this, though it doesn't completely get around the fact that it's easy to forget to do and easy to get confused about what is going on in the transformation file.
    A third option that no one has mentioned is to do a SELECTION statement in the transformation file based on navigational attributes in the source InfoProvider. This approach can make the selection statement dynamic based on the contents of BW InfoObjects. Still not very user-friendly, but if you can put an automatic process in place to update the BW navigational attributes this might meet your need without having to set up an extra BW staging InfoProvider.
    The SELECTION syntax is documented here, though it doesn't mention that you can select on navigational attributes: [http://help.sap.com/saphelp_bpc75_nw/helpdata/en/5d/9a3fba600e4de29e2d165644d67bd1/frameset.htm]
    With navigational attributes (the profit center attribute of cost center, for example) it would be something like:
    SELECTION=0COST_CENTER___0PROFIT_CENTER,PC01
    Ethan

  • How to create a report in BEx based on the last data loaded in the cube?

    I have to create a query with a predefined filter based on the "latest SAP date", i.e. users only want to see the very latest situation from the last load. The report should show only the latest inventory stock situation from the last load. As I'm new to BEx, I'm not able to find a way to achieve this. Is there any time characteristic which holds the last update date of a cube? Please help and suggest how to achieve this.
    Thanks in advance.

    Hi Rajesh.
    Thanks for your suggestion.
    My requirement is a little different. I built the query on a MultiProvider, and I want to see the latest record in the report based on the latest date (not the system date) on which data was last loaded to the cube. This date (when the cube was last loaded with data) is not populated from any DataSource. I guess I have to add the cube 0TCT_VC11 to my MultiProvider to fetch the date when my cube was last loaded with data. Please correct me if I'm wrong.
    Thanks in advance.

  • Selective data load and transformations

    Hi,
    Can you please clarify this for me?
    1. Selective data load and transformations can be done in:
        A.     Data package
        B.     Source system
        C.     Routine
        D.     Transformation Library-formulas
        E.     BI7 rule details
        F.     Anywhere else?
    If the above is correct, what is the order, performance-wise?
    2. Can anyone tell me why not all the fields appear in the data package data selection tab, even though many are included in the DataSource and data target?
    Thanks in advance
    Suneth

    Hi Wijey,
    1. If you are talking about selective data load, you need to write an ABAP routine in the InfoPackage for the field on which you want to select (a sketch follows after these two points). The other way is to write a start routine in the transformations and delete all the records that you do not want. In the second method you get all the data but delete the unwanted records, so that you process only the required data. Performance-wise you need to observe: if the selection logic is complicated and takes a lot of time, the second option is better. Try both and decide for yourself which is better.
    2. Only the fields that are marked as available for selection in the DataSource are available as selections in the data package. That is how the system works.
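    A minimal sketch of option 1, an InfoPackage selection routine. The FORM frame and the l_t_range table are generated by BW when you create a routine on the InfoPackage selection tab; the structure name RSSDLRANGE, the field name CALDAY, and the date values are assumptions to be checked in your system:
        " Fill the generated selection table with a date range (illustrative).
        DATA l_s_range TYPE rssdlrange.

        l_s_range-fieldname = 'CALDAY'.
        l_s_range-sign      = 'I'.
        l_s_range-option    = 'BT'.
        l_s_range-low       = '20090101'.
        l_s_range-high      = '20091231'.
        APPEND l_s_range TO l_t_range.

        p_subrc = 0.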
    Thanks and Regards
    Subray Hegde

  • Selective data load using DTP

    Hi,
    We have created a data flow from one cube to another cube using transformations. Now we would like to do a selective data load from the source cube to the target cube. The problem is that in the DTP we are not able to enter the selected weeks in the filter area, because filter conditions can only be entered in change mode, and in the production system we can't switch the DTP to change mode. So we are stuck there. Can any one of you tell me how to do a selective data load in this scenario?
    Thanks in advance

    Hi,
    As a first try, create a new DTP and try to get into change mode; it might work that way.
    Otherwise, you can go the way Manisha explained in the previous post:
    do the load and then do a selective deletion. You can do the selective deletion using a program.
    Cheers,
    Srinath.

  • Selective Deletion Before Data Load

    Hi Experts - I need to load data into an Oracle data warehouse. Before loading the data, I need to do some selective deletion from the target table.
    In the source dataset I have a date column with a Min and a Max date. I need to delete the data in the target lying between this Min and Max date.
    Any Idea how to do this selective deletion.
    Thanks
    R

    Create a workflow, and declare two local variables, $DateMin and $DateMax, of either date or datetime datatypes, as appropriate.  Create a script:
    $DateMin = sql('DS','select min([datetime field]) from [incoming table]');
    $DateMax = sql('DS','select max([datetime field]) from [incoming table]');
    Add a dataflow to your workflow, and connect it downstream of the script.  Add two parameters to the dataflow -- let's say you call them $P_DateMin and $P_DateMax. Back in your workflow, in the "Calls" tab of the Variables & Parameters window, set the mapping of the two dataflow input parameters to your two local workflow variables.
    In your dataflow: perform a selection of the primary key (the column(s) which constitute the pk) of your target table, filtering on your two input parameter values ($P_DateMin and $P_DateMax). If you want to be on the safe side in terms of preventing blocking issues, send these records into a Data Transfer transform (file preferred, but up to you). Then, downstream from the Data Transfer transform, send the records into a Map Operation transform, mapping 'Normal' to 'Delete'. Then, simply send them into your target table.
    You could, of course, just write a SQL script to delete the records, but those are to be avoided as breaking lineage & impact chains.
    If all your date or datetime stamp fields on your target table are "whole" dates, with no time portion, and you have a smallish number of dates between your min. and max. dates, and you have a large number of records to delete between those dates, and your target table has an index on the date stamp column, then another approach would be to generate records, one per day, using a Date Generation transform, still making use of your two dataflow parameters. You'd declare the date field so generated to be the (false) primary key, map the records to deletes w/ the Map Operation transform, and then send them into your target, with the "Use input keys" option selected.

  • Selective data load to InfoCube

    Dear All,
    I am facing the following problem :
    I have created staging DSOs for billing items (D1) and order items (D2). I have also created one InfoCube (C1) which requires combined order and billing data, so we have a direct transformation from the billing DSO (D1 --> C1), and in the transformation routines we look up the order item DSO (D2).
    All the deltas are running fine. But in today's delta a particular order, say 123, was not retrieved, while the corresponding billing document, say 456, was retrieved through the delta.
    So when the DTP ran for cube C1, it did not load that particular billing document (456) and the corresponding order details (123).
    I thought of loading this particular data by creating a new full DTP to cube C1. Is this approach OK?
    Please help on the same.
    Regards,
    SS

    Hi,
    Yes, you can do a full load. Just make sure the selection condition in your DTP is exactly the same as the selective deletion on C1.
    I'd suggest putting a consolidation DSO D3 in the position of C1; you can then always delta-update C1 from D3. In my company there are similar cases and we love the consolidation DSO.
    Regards,
    Frank

  • Data Error in the Query/Report after selective data deletion for infocube

    Hi Experts,
    Please advise what I was missing and what went wrong.
    I have a query (Forecast) on a MultiCube, which is based on 2 InfoCubes with aggregates.
    As I identified some data discrepancy, yesterday I performed a selective data deletion on one of the InfoCubes
    and executed the report yesterday, and the results in the query were correct.
    When I executed the same report today, I got different results.
    When I compared the results of the report with the data in the cube, they do not match.
    The report is not displaying the data in the cube; for some rows it displays the data in the cube, but for other rows it just repeats the value of the row above.
    No data has been loaded into the InfoCube after the selective deletion.
    Do I need to perform request compression and fill the aggregates after the selective deletion?
    Please advise what went wrong.

    Hi Venkat,
    No, I haven't done anything to the aggregates before or after the selective deletion.
    As there is no data load after the selective deletion, according to the SAP manual we don't need to do anything with the aggregates; a selective data deletion on the cube deletes the data from the aggregates as well.
    Please advise how to identify the error.

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes on the selection fields of the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11) Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (a short sketch of this follows after this list). When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out at run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
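    A hedged sketch of point 11 above: replace per-record SELECT SINGLE lookups inside a routine with one array fetch and in-memory reads. All table, field, and type names here are illustrative, and the source_package table is assumed to be the one provided by the generated routine:
        TYPES: BEGIN OF ty_attr,
                 matnr      TYPE c LENGTH 18,
                 matl_group TYPE c LENGTH 9,
               END OF ty_attr.

        DATA: lt_attr TYPE STANDARD TABLE OF ty_attr,
              ls_attr TYPE ty_attr.

        FIELD-SYMBOLS <ls_source> LIKE LINE OF source_package.

        IF source_package IS NOT INITIAL.
          " One database access for the whole data package ...
          SELECT matnr matl_group
            FROM zmaterial_attr
            INTO TABLE lt_attr
            FOR ALL ENTRIES IN source_package
            WHERE matnr = source_package-matnr.
          SORT lt_attr BY matnr.
        ENDIF.

        " ... then cheap in-memory lookups inside the record loop.
        LOOP AT source_package ASSIGNING <ls_source>.
          READ TABLE lt_attr INTO ls_attr
               WITH KEY matnr = <ls_source>-matnr BINARY SEARCH.
          IF sy-subrc = 0.
            <ls_source>-matl_group = ls_attr-matl_group.
          ENDIF.
        ENDLOOP.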
    Hope it Helps
    Chetan

  • 4.2.3/.4 Data load wizard - slow when loading large files

    Hi,
    I am using the data load wizard to load CSV files into an existing table. It works fine with small files of up to a few thousand rows. When loading 20k rows or more, the loading process becomes very slow. The table has a single numeric column as the primary key.
    The primary key is declared at "shared components" -> logic -> "data load tables" and is recognized as "pk(number)" with "case sensitve" set to "No".
    While loading data, this configuration leads to the execution of the following query for each row:
    select 1 from "KLAUS"."PD_IF_CSV_ROW" where upper("PK") = upper(:uk_1)
    which can be found in the v$sql view while loading.
    It makes the loading process slow: because of the UPPER function, no index can be used.
    It seems that the setting of "case sensitive" is not evaluated.
    Dropping the numeric index for the primary key and using a function based index does not help.
    Explain plan shows an implicit "to_char" conversion:
    UPPER(TO_CHAR("PK")) = UPPER(:UK_1)
    This is missing in the query text, but maybe it is necessary for the function-based index to work.
    Please provide a solution or workaround for the data load wizard to work with large files in an acceptable amount of time.
    Best regards
    Klaus

    Nevertheless, a bulk loading process is what I would really like to have as part of the wizard.
    If all of the CSV files are identical:
    use the Excel2Collection plugin (Process Type Plugin - EXCEL2COLLECTIONS)
    create a VIEW on the collection (makes it easier elsewhere)
    create a procedure (in a Package) to bulk process it.
    The most important thing is to have, somewhere in the package (i.e. in your code that is not part of APEX), information that clearly states which columns in the collection map to which columns in the table, the view, and the variables (APEX_APPLICATION.g_fxx()) used for tabular forms.
    MK

  • Comparison of Data Loading techniques - Sql Loader & External Tables

    Below are two techniques for loading data from flat files into Oracle tables.
    1)     SQL Loader:
    a.     Place the flat file (.txt or .csv) in the desired location.
    b.     Create a control file, for example:
    LOAD DATA
    INFILE 'Mytextfile.txt'            -- file containing the table data; specify the path correctly, it could be a .csv as well
    APPEND                             -- or TRUNCATE, based on the requirement
    INTO TABLE oracle_tablename
    FIELDS TERMINATED BY ','           -- or whatever delimiter the input file uses
    OPTIONALLY ENCLOSED BY '"'
    (field1, field2, field3)
    c.     Now run the sqlldr utility of Oracle from the command prompt:
    sqlldr username/password control=mycontrol.ctl
    d.     The data can be verified by selecting the data from the table.
    Select * from oracle_table;
    2)     External Table:
    a.     Place the flat file (.txt or .csv) on the desired location.
    abc.csv
    1,one,first
    2,two,second
    3,three,third
    4,four,fourth
    b.     Create a directory
    create or replace directory ext_dir as '/home/rene/ext_dir'; -- path where the source file is kept
    c.     After granting appropriate permissions to the user, we can create external table like below.
    create table ext_table_csv (
      i number,
      n varchar2(20),
      m varchar2(20)
    )
    organization external (
      type oracle_loader
      default directory ext_dir
      access parameters (
        records delimited by newline
        fields terminated by ','
        missing field values are null
      )
      location ('abc.csv')
    )
    reject limit unlimited;
    d.     Verify data by selecting it from the external table now
    select * from ext_table_csv;
    The external tables feature is a complement to the existing SQL*Loader functionality.
    It allows you to –
    •     Access data in external sources as if it were in a table in the database.
    •     Merge a flat file with an existing table in one statement.
    •     Sort a flat file on its way into a table that you want compressed nicely.
    •     Do a parallel direct path load without splitting up the input file.
    Shortcomings:
    •     External tables are read-only.
    •     No data manipulation language (DML) operations or index creation is allowed on an external table.
    Using Sql Loader You can –
    •     Load the data from a stored procedure or trigger (insert is not sqlldr)
    •     Do multi-table inserts
    •     Flow the data through a pipelined plsql function for cleansing/transformation
    Comparison for data loading
    To make the loading operation faster, the degree of parallelism can be set to any number, e.g. 4.
    So, when you created the external table, the database will divide the file to be read by four processes running in parallel. This parallelism happens automatically, with no additional effort on your part, and is really quite convenient. To parallelize this load using SQL*Loader, you would have had to manually divide your input file into multiple smaller files.
    Conclusion:
    SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table. However, we can always copy the data from external tables to Oracle Tables using DB links.

    Please let me know your views on this.
