Sequencing records by Date

Hey guys, I have a DAX question I was hoping you could help me with. I've loaded a bunch of phone calls into Power Pivot and I want to do some different analysis on them based on the number of times and the order in which they were called. I also want to look at things like at which sequence number I get the best results, where in the sequence things really start to fall off, etc.
How can I go about using DAX to assign a sequence number to the phone numbers based on the date/time that they were called?
In T-SQL I would use OVER with PARTITION BY on the phone number, ordering by the date. I am trying to do this in DAX on the fly as a calculated column, but can't seem to figure it out. The end result should be a list of phone numbers with a sequence 1, 2, 3 for each time the phone was called, in the order that it was called.
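For illustration, the T-SQL version would look roughly like this (just a sketch, assuming a call table like FactCalls with [PhoneNumber] and [DateTime] columns):
SELECT
    PhoneNumber,
    [DateTime],
    ROW_NUMBER() OVER (
        PARTITION BY PhoneNumber  -- restart numbering for each phone number
        ORDER BY [DateTime]       -- number the calls in the order they happened
    ) AS CallSequence
FROM FactCalls;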

I tend to worry about correctness first. We can certainly optimize on the functions I have provided.
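As a baseline, the straightforward calculated column looks something like this (a minimal sketch, assuming the table FactCalls with [PhoneNumber] and [DateTime] columns):
= COUNTROWS(
    FILTER(
        FactCalls
        , FactCalls[PhoneNumber] = EARLIER( FactCalls[PhoneNumber] )
            && FactCalls[DateTime] < EARLIER( FactCalls[DateTime] )
    )
) + 0
For each row it counts the earlier calls to the same number, which is exactly the 0-indexed sequence position.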
FILTER() is a single-threaded iterator. In the baseline above, it iterates row by row over the entire FactCalls table, applying the test to every row. The first optimization we'll make is as follows:
= COUNTROWS(
    CALCULATETABLE(
        FactCalls
        , ALLEXCEPT( FactCalls, FactCalls[PhoneNumber] )
        , FILTER(
            VALUES( FactCalls[DateTime] )
            , FactCalls[DateTime] < EARLIER( FactCalls[DateTime] )
        )
    )
) + 0
This allows us to iterate over just the unique values of [DateTime], rather than all values in the table.
ALLEXCEPT() lets us hit the storage engine, rather than the formula engine, to maintain the filter context on [PhoneNumber].
This probably won't save us much time, though, since the cardinality of [DateTime] is likely very near that of the whole table.
Thus, we need to limit the size of the table we iterate over in FILTER() even further:
= COUNTROWS(
    CALCULATETABLE(
        FactCalls
        , ALLEXCEPT( FactCalls, FactCalls[PhoneNumber] )
        , FILTER(
            CALCULATETABLE(
                VALUES( FactCalls[DateTime] )
                , ALLEXCEPT( FactCalls, FactCalls[PhoneNumber] )
            )
            , FactCalls[DateTime] < EARLIER( FactCalls[DateTime] )
        )
    )
) + 0
In this case, we essentially duplicate our CALCULATETABLE() call to determine the table we pass to FILTER(). The number of rows we must iterate over is then just the number of distinct [DateTime]s that belong to the [PhoneNumber] on the current row of the table, and we no longer have to compare every [DateTime] in existence against the [DateTime] on the current row.
If we want to skip this operation for invalid phone numbers, we need to put that test outside of our calculation:
=
IF(
    LEN( FactCalls[PhoneNumber] ) = 10
    , COUNTROWS(
        CALCULATETABLE(
            FactCalls
            , ALLEXCEPT( FactCalls, FactCalls[PhoneNumber] )
            , FILTER(
                CALCULATETABLE(
                    VALUES( FactCalls[DateTime] )
                    , ALLEXCEPT( FactCalls, FactCalls[PhoneNumber] )
                )
                , FactCalls[DateTime] < EARLIER( FactCalls[DateTime] )
            )
        )
    ) + 0
    , BLANK()
)
This will only return a sequence for phone numbers that are 10 characters/digits long; everything else just gets a blank.
A few notes:
I added 0 to the results above. I neglected to do this before, but it forces the result to be an integer rather than a BLANK() for the first call of any sequence. I prefer 0-indexing, so I've added 0; if you prefer 1-indexing, adding 1 achieves the same effect.
I mentioned the storage and formula engines. This is a rich topic, but for the purposes of this discussion we can leave it at this: the storage engine is faster and runs on many threads (depending on data size), but can only perform simple operations; the formula engine is slower and runs on a single thread (no matter the data size), but can perform complex operations and logic. FILTER() is strictly a formula-engine function. You may wish to research this topic further yourself.
You are likely not I/O-bound but CPU-bound when performing this operation, so the fact that it's an in-memory technology doesn't matter much. If you were performing the same operations in SQL Server, the working set would be loaded into memory anyway and you would similarly be CPU-bound. The important considerations are parallelism and which combination of data abstraction (columnar vs. row) and data manipulation language yields the fastest processing. It may well be that this processing is faster in SQL than in DAX, and that is not so much a reflection on either engine as a reflection of the specific problem.
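Once the sequence column exists, the analyses you describe come down to pivoting a measure by sequence number. A hedged sketch, assuming a hypothetical [Result] column that marks successful calls (no such column appears in the thread):
Success Rate :=
DIVIDE(
    CALCULATE( COUNTROWS( FactCalls ), FactCalls[Result] = "Success" )
    , COUNTROWS( FactCalls )
)
Putting the sequence column on rows and this measure on values shows at which attempt results peak and where in the sequence they fall off.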

Similar Messages

  • Sqlldr error 510: Physical record in data file is longer than the max 1048576

    SQL*Loader: Release 10.2.0.2.0 - Production on Fri Sep 21 10:15:31 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Control File: /apps/towin_p/bin/BestNetwork.CTL
    Data File: /work/towin_p/MyData.dat
    Bad File: /apps/towin_p/bin/BestNetwork.BAD
    Discard File: none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Continuation: none specified
    Path used: Direct
    Load is UNRECOVERABLE; invalidation redo is produced.
    Table "BN_ADM"."DWI_USAGE_DETAIL", loaded from every logical record.
    Insert option in effect for this table: APPEND
    TRAILING NULLCOLS option in effect
    Column Name Position Len Term Encl Datatype
    USAGE_DETAIL_DT FIRST * , DATE MM/DD/YYYY HH24:MI:SS
    UNIQUE_KEY SEQUENCE (MAX, 1)
    LOAD_DT SYSDATE
    USAGE_DETAIL_KEY NEXT * , CHARACTER
    RATE_AREA_KEY NEXT * , CHARACTER
    UNIT_OF_MEASURE_KEY NEXT * , CHARACTER
    CALL_TERMINATION_REASON_KEY NEXT * , CHARACTER
    RATE_PLAN_KEY NEXT * , CHARACTER
    CHANNEL_KEY NEXT * , CHARACTER
    SERIALIZED_ITEM_KEY NEXT * , CHARACTER
    HOME_CARRIER_KEY NEXT * , CHARACTER
    SERVING_CARRIER_KEY NEXT * , CHARACTER
    ORIGINATING_CELL_SITE_KEY NEXT * , CHARACTER
    TERMINATING_CELL_SITE_KEY NEXT * , CHARACTER
    CALL_DIRECTION_KEY NEXT * , CHARACTER
    SUBSCRIBER_LOCATION_KEY NEXT * , CHARACTER
    OTHER_PARTY_LOCATION_KEY NEXT * , CHARACTER
    USAGE_PEAK_TYPE_KEY NEXT * , CHARACTER
    DAY_OF_WEEK_KEY NEXT * , CHARACTER
    FEATURE_KEY NEXT * , CHARACTER
    WIS_PROVIDER_KEY NEXT * , CHARACTER
    SUBSCRIBER_KEY NEXT * , CHARACTER
    SUBSCRIBER_ID NEXT * , CHARACTER
    SPECIAL_NUMBER_KEY NEXT * , CHARACTER
    TOLL_TYPE_KEY NEXT * , CHARACTER
    BILL_DT NEXT * , DATE MM/DD/YYYY HH24:MI:SS
    BILLING_CYCLE_KEY NEXT * , CHARACTER
    MESSAGE_SWITCH_ID NEXT * , CHARACTER
    MESSAGE_TYPE NEXT * , CHARACTER
    ORIGINATING_CELL_SITE_CD NEXT * , CHARACTER
    TERMINATING_CELL_SITE_CD NEXT * , CHARACTER
    CALL_ACTION_CODE NEXT * , CHARACTER
    USAGE_SECONDS NEXT * , CHARACTER
    SUBSCRIBER_PHONE_NO NEXT * , CHARACTER
    OTHER_PARTY_PHONE_NO NEXT * , CHARACTER
    BILLED_IND NEXT * , CHARACTER
    NO_USERS_IN_CALL NEXT * , CHARACTER
    DAP_NO_OF_DSAS_USED NEXT * , CHARACTER
    USAGE_SOURCE NEXT * , CHARACTER
    SOURCE_LOAD_DT NEXT * , DATE MM/DD/YYYY HH24:MI:SS
    SOURCE_UPDATE_DT NEXT * , DATE MM/DD/YYYY HH24:MI:SS
    RATE_PLAN_ID NEXT * , CHARACTER
    NETWORK_ELEMENT_KEY NEXT * , CHARACTER
    SQL string for column : "-2"
    SQL*Loader-510: Physical record in data file (/work/towin_p/MyData.dat) is longer than the maximum(1048576)
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    Table "BN_ADM"."DWI_USAGE_DETAIL":
    0 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Date conversion cache disabled due to overflow (default size: 1000)
    Bind array size not used in direct path.
    Column array rows : 5000
    Stream buffer bytes: 256000
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records read: 7000382
    Total logical records rejected: 0
    Total logical records discarded: 0
    Total stream buffers loaded by SQL*Loader main thread: 1666
    Total stream buffers loaded by SQL*Loader load thread: 4996
    Run began on Fri Sep 21 10:15:31 2007
    Run ended on Fri Sep 21 10:27:14 2007
    Elapsed time was: 00:11:43.56
    CPU time was: 00:05:36.81

    What options are you using in the CTL file? What does your data file look like (e.g. one line per record, or one line only)?

  • How to record continuous data in Excel?

    Hi,
    I'm a beginner in LabVIEW.
    I want to view and record the data from a strain gauge and a thermocouple at the same time.
    It works, but it only records the beginning of the measurement because there are too many points.
    I tried to lower the rate and the number of samples, but when they're too low it doesn't work.
    How can I reduce the speed and the number of samples so that I collect all my data? Or could I use something to take an average?
    Thank you in advance for your answer.
    Julie

    Julie,
    you have at least two options to solve your issue:
    a) Change "Write To Measurement File" either to append to the file or to create a sequence of files.
    b) Remove all Express VIs and implement the application with DAQmx and file-I/O functions. You can find an example in the Example Finder under DAQmx -> Analog In -> Voltage -> "Cont Acq&Graph Voltage - Write Data to File (TDMS).vi".
    TDMS may be a file format you don't want to use, but you can simply replace those VIs with the appropriate "standard" file-I/O functions.
    hope this helps,
    Norbert

  • How do I skip footer records in a data file through the SQL*Loader control file

    hi,
    I am using SQL*Loader to load data from a data file, and I have written a control file for it. How do I skip the last 5 records of the data file, i.e. how do I get the footer records skipped on read? For the first 5 records we can use "skip"; how do I achieve the same for the last 5?
    2) Can I mention two data files in one control file? If so, what is the syntax? (We give INFILE where we mention the path of the data file; can I mention two data files in the same control file?)
    3) If I have a data file with variable-length records (i.e. the 1st record with 200 characters, the 2nd with 150 characters, and the 3rd with 180 characters), how do I load the data into a table? What will the syntax for it be in the control file?
    4) If I want to insert SYSDATE into the table through the control file, how do I do it?
    5) If I have variable-length records in the data file, with a first name, then whitespace, then a last name, how do I insert the combined first and last name into a single column of the table? (I.e. how do you handle the whitespace between first name and last name in the data file?)
    Thanks in advance
    ram

    You should read the documentation about SQL*Loader.
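    For questions 2) and 4): a control file can simply list several INFILE clauses, and SYSDATE is a column keyword (it also appears in the log further up). A hedged sketch with hypothetical file, table, and column names:
    -- Hypothetical control file fragment: two data files, one SYSDATE column
    LOAD DATA
    INFILE '/path/file1.dat'
    INFILE '/path/file2.dat'
    APPEND
    INTO TABLE my_table
    FIELDS TERMINATED BY ','
    TRAILING NULLCOLS
    (
      first_name CHAR,
      last_name  CHAR,
      load_dt    SYSDATE
    )
    As far as I know there is no built-in option to skip trailing records; footers are usually filtered out with a WHEN clause or stripped from the file before the load.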

  • I can record midi data from my Mason & Hamlin Piano Disc Pro Record through my MOTU Traveller into Logic but Logic won't send midi data out to the MOTU Traveller to the Piano Disc player

    Hello All,
    I can record MIDI data from my Mason & Hamlin Piano Disc Pro Record through my MOTU Traveller into Logic Pro 9.1.8, but Logic won't send the MIDI data back out through the MOTU Traveller to the Piano Disc player. I got it to play back one time, but I have no idea how; when it did, it was looping or something, because the velocity was way too high coming back in and the damper pedal was slamming down. When I play a key on the piano, the MIDI In light on the Traveller lights up. When I play the track back on my computer, no lights blink on the Traveller. In the Apple MIDI Studio test in Utilities, when I play a key I get the confirmation sound and the Traveller blinks; but when I click the down arrow of the Traveller in the MIDI Studio test, the MIDI Out light on the Traveller never lights, and the signal light on the piano does not blink either. No outbound signal at all...
    I have messed with every possible parameter I can find, and I have had help from one of Piano Disc's premier editors, but no luck. The piano was prepped for me on Logic so it would work with my studio. I'm positive it's my fault and I'm overlooking something really simple and stupid, but what?!
    Somebody please help.  Thank you all in advance for ANY ideas you might have!

    Blues Piano,
    I'm not sure if this will be a help or not. I'm so wet behind the ears with Logic Pro that I make newbies look experienced. However, I'm not expecting that many on the Apple support forums have a PianoDisc system, much less one with the new optical record strip. While I don't have a record strip on my PianoDisc, I do have a PianoDisc iQ that's only a month old. I've been playing converted paper scrolls from hundred-year-old player pianos through it via the MIDI In port of the PianoDisc CPU. I've found I have to open the MIDI file in Logic Pro (10.0.4), then go to <Track><New External MIDI Track>, then copy the existing track to that new external track. Only then can I see the "Port" parameter in the Track inspector (by default on the left side of the screen, with the icon for the instrument), where I can select my external MIDI device.
    I've also encountered problems with the PianoDisc using too little or too much force on the notes. To get around this, until I understand Logic better, I've been setting minimum and maximum velocities. To do that, I right-click on the track and choose "Select All", then right-click again and choose "MIDI", then "MIDI Transform", then "Velocity Limiter". In the resulting pop-up window there is a drop-down in the center, and you can adjust the velocity from "MIN" to "MAX", along with "ADD", "SUBTRACT", etc.
    I hope this helps.  I envy you your Mason & Hamlin.  If you need more help on this just email me at pfleischmann at mac dot com.

  • Function to return a value of an object based on the last record by date

    Post Author: Tned
    CA Forum: Desktop Intelligence Reporting
    Can anyone assist with a method/formula to return the value of an object based on the last record by date? BO 5 or XI. See the example below. Query structure is not an issue; I only need help with the last-record function or aggregate.
    Data Table
    ID / Serial #   Date       Status   Condition
    Abc1            01/01/08   1        A
    Abc1            01/02/08   1        Z
    Abc1            01/02/08   3        Z
    Abc1            01/04/08   2        D
    Abc1            01/05/08   5        E
    Abc2            01/01/08   1        F
    Abc2            01/02/08   2        Z
    Desired query results (only the values of the latest record per ID returned):
    ID / Serial #   Status   Condition
    Abc1            5        E
    Abc2            2        Z
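    Since query structure is open, one standard way to express "latest record per ID" at the query level is a correlated subquery on the maximum date. A sketch with hypothetical table and column names:
    -- Returns each ID's row(s) carrying its latest date;
    -- ties on the latest date would return more than one row per ID.
    SELECT t.id_serial, t.status, t.condition
    FROM   data_table t
    WHERE  t.record_date = ( SELECT MAX(t2.record_date)
                             FROM   data_table t2
                             WHERE  t2.id_serial = t.id_serial );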

    Post Author: Tned
    CA Forum: Desktop Intelligence Reporting
    Thanks Prashant
    However, when I add either the status or condition variables to the report, all lines related to the ID are returned, not just the last entry by date.
    Thanks again. Terry

  • Problem when recording the data using BDC for Tcode CJ02.

    Dear Experts,
    When I am trying to record the data for transaction CJ02, I need to enter the project definition and the WBS element, which takes me to the next screen; there I select the WBS element and attach a file for it. The option to attach the file is available in the application area (Services for the Object).
    The problem is that when I do the recording in SHDB, this "create attachment" option is not visible in the recording. Kindly suggest what I can do to attach the file for the particular project definition and WBS element.
    Either suggest a function module or some other procedure.
    Regards,
    Sana.

    Hi,
    in BDC each and every action is recorded. If you press Enter on the same screen, that is recorded once again; this may be why repeated field values appear in your case. We can solve the problem for repeated fields as below.
    Suppose your Excel file has fields X1, X2, X3, where X3 just repeats the value of X2; then delete the X3 field. Now your itab has the X1 and X2 fields, and inside the loop over the itab you pass the X2 value to the repeated field:
    LOOP AT itab INTO wa.
      CLEAR bdcdata_wa.
      bdcdata_wa-fnam = 'BDC_CURSOR'.
      bdcdata_wa-fval = 'RM08M-EBELN'.
      APPEND bdcdata_wa TO bdcdata_tab.
      CLEAR bdcdata_wa.
      bdcdata_wa-fnam = 'INVFO-BLDAT'.
      bdcdata_wa-fval = wa-x2. " first time: pass the X2 field
      APPEND bdcdata_wa TO bdcdata_tab.
      CLEAR bdcdata_wa.
      bdcdata_wa-fnam = 'INVFO-BLDAT'.
      bdcdata_wa-fval = wa-x2. " pass the same value to the repeated field
      APPEND bdcdata_wa TO bdcdata_tab.
    ENDLOOP.
    Hope you can understand.
    Regards,
    Dhina..

  • Find record insert date and time in a table

    Hi All,
    I want to get the insert date and time of the records in a table. There is no datetime column in my table. Is there any possibility of getting the date and time for each record?
    Thank You

    That's not easy. If your transaction info still resides in the active portion of the log, you can use fn_dblog to read out the time at which the transaction occurred. This is useful only if you try it shortly after the transaction.
    The code would look like this:
    SELECT *
    FROM fn_dblog(NULL, NULL)
    WHERE [Transaction Name] LIKE 'INSERT%'
    OR [Transaction Name] LIKE 'UPDATE%'
    Also see
    http://www.mssqltips.com/sqlservertip/3076/how-to-read-the-sql-server-database-transaction-log/
    http://solutioncenter.apexsql.com/read-a-sql-server-transaction-log/
    Visakh
    http://visakhm.blogspot.com/

  • SQL*Loader-510: Physical record in data file (clob_table.ldr) is long

    If I generate a loader / insert script from Raptor, it does not work for CLOB columns.
    I am getting the error:
    SQL*Loader-510: Physical record in data file (clob_table.ldr) is longer than the maximum (1048576)
    What's the solution?
    Regards,

    Hi,
    Has the file somehow been changed by copying it between Windows and Unix, or by a file transfer done as binary rather than ASCII? That is the most common cause of your problem: the end-of-line carriage-return characters get changed so they are no longer \r\n. Could this have happened? Can you open the file in a good editor, or run an od command in Unix, to see what is actually present?
    Regards,
    Harry
    http://dbaharrison.blogspot.co.uk/

  • What is the Sequence of Master data and Transaction data R/3 to APO

    Hi guys,
    What is the sequence of master data and transaction data from R/3 to APO through CIF?

    Hi,
    Master data:
    Material-independent data can be sent without any dependency.
    Material-dependent data must follow this sequence:
    Plant,
    MRP areas,
    Supply area,
    Material,
    MRP area material,
    Planning material,
    ATP check,
    Purchase info records,
    Scheduling agreements,
    PPM or PDS (but the work center should be sent to APO before the PPM or PDS; if a PDS is being sent, the BOM should be sent before the PDS).
    Transaction data:
    Whatever is planned in APO, as per the business requirement.
    Regards,
    Kishore Reddy.

  • Present a record by date through an ABAP routine

    Hi Gurus,
    I have the following records in the cube, for example:
    calday       Material   Quantity
    01.01.2011   A          10
    15.01.2011   A          20
    If I present these IOs and the KF in the query, the query will show both records. I want to show only the last record by date:
    calday       Material   Quantity
    15.01.2011   A          20
    I tried a calculated KF in the query with exception aggregation (MAX with reference to calday), but I need to show 0CALDAY, and for that reason this doesn't work. So I was thinking of an ABAP routine that reads the whole active table of the ODS, compares record by record to find the last date per material, and marks that record with a value in another InfoObject before the load to the cube; then I could filter in the query on this marker IO. How would the code look? Can you help me build this code, or offer some advice or an idea? I'd appreciate it.

    Thanks for helping me.
    Currently, my application provides users a feature to sort all the ThreadBean objects retrieved from the database according to two criteria:
    1. sort by: thread_last_post_date, thread_creation_date, message_sender, thread_reply_count, thread_view_count
    2. order by: DESC or ASC
    Therefore, my existing query string looks like:
    String query = "SELECT ...... FROM message_thread WHERE message_receiver = ? ORDER BY " + sort + " " + order;
    According to your advice, I simply add another criterion, say articleTitle, to my query string.
    A. Will this added criterion articleTitle have any impact on the sort and order feature the application provides to users? (Now all messages will be presented to users in categories, i.e. by articleTitle.)
    B. Where should I add articleTitle to the query string?
    B.1: before sort and order?
    String query = "SELECT ...... FROM message_thread WHERE message_receiver = ? ORDER BY " + articleTitle + ", " + sort + " " + order;
    or
    B.2: after sort and order?
    String query = "SELECT ...... FROM message_thread WHERE message_receiver = ? ORDER BY " + sort + " " + order + ", " + articleTitle;

  • How to Save Multiple Records In Data Block

    Hi All,
    I have two blocks: a control block and a database block.
    Any idea how to save multiple records in the data block when the user changes data in the control block?
    Thanks For Your Help
    Sa

    > Now I have to use each record of the control block (ctl_blk) as a where condition in the database block (dat_blk) and display 10 records from the database table.
    Do you want this coordination to be automatic/synchronized, or to happen only when the user clicks a button or something else to signal the coordination? Your answer here will dictate which trigger to put your code in.
    As to the coordination part: as the user selects a record in the Control Block (CB), you will need to take the key information and modify the Data Block's (DB) DEFAULT_WHERE block property. The logical place to put this code is the CB When-New-Record-Instance (WNRI) trigger. Your code will look something like the following:
    /* Sample WNRI trigger */
    /* This sample assumes you do not have a default value in the block WHERE property */
    DECLARE
       v_tmp_dw    VARCHAR2(250);
    BEGIN
       v_tmp_dw := ' DB_Key_Column1 = '||:CONTROL.Key_Column1||' AND DB_Key_Column2 = '||:CONTROL.Key_Column_2;
       Set_Block_Property('DATA_BLOCK', DEFAULT_WHERE, v_tmp_dw);
       /* If you want auto coordination to occur, do the following */
       Go_Block('DATA_BLOCK');
       Execute_Query;
       /* Now, return to the Control Block */
       Go_Block('CONTROL_BLOCK');
    END;
    The control block items are assigned their values at form level (Key-Exeqry). If your CB is populated from a single table, it would be better to create a master-detail relationship (as Abdetu describes).
    Hope this helps,
    Craig B-)
    If someone's response is helpful or correct, please mark it accordingly.

  • How to find Record creation date

    Hi Friends,
    Is there any way to find the actual record creation date for each record in database tables?
    Thanks

    Hi,
    I would like to suggest something. In order to get the date of a change to a record:
    1. Go to the table you have created.
    2. Go to the data element for the field you have defined.
    3. In the data element window, go to Further Characteristics and
    4. check the Change Document radio button.
    Then, for every change, the date will be recorded.
    But this is valid only for that table.
    Regards
    Harsh

  • Is it possible to trigger the acquisition and recording of data in this condition?

    Hi
    I am a LabVIEW newbie,
    Is it possible to trigger the data acquisition and recording in the following case?
    I have two input signals: 1. a pressure transducer and 2. a pulse from a magnetic pickup.
    I have to plot the pressure data against the pulses. Can I make the magnetic pickup signal the master signal that triggers the data acquisition and recording, while acquiring the pressure data at the same time?
    How can I do that?
    Thanks.

    Hi Rich
    Thanks...
    I have a PCI-6033E high-resolution multifunction I/O board and an NI BNC-2090 adapter chassis.
    What I am getting is the pressure from the transducer, plus the magnetic pulses.
    Two consecutive pulses encompass one complete cycle of the combustion process inside the engine.
    I need to record the data so I can present the information offline.
    Since combustion is a highly unstable process, I need to get the optimum representative graph at a particular engine speed.
    So let's say I want the magnetic pulse to trigger the data acquisition for 50 such pulses, i.e. 50 complete cycles of the diesel engine, so that these 50 acquired cycles may be averaged for pressure readings and a single representative curve (graph) may be produced.
    Thanks again

  • ODS Activation - Number of Records Per Data Package

    Hello,
    We are using the default global ODS setting called "Min. Num. of Data Records", which defaults to 10,000.
    However, when I activate the ODS and look at the job log, I see that most of the data packages contain fewer than 10,000 records. Below is part of the job log. If the setting says the minimum must be 10,000, why am I seeing data packages with less?
    Thanks!
    Data pkgs 000027; Added records 2,536; Changed records 0; Deleted records 0
    Data pkgs 000028; Added records 2,436; Changed records 0; Deleted records 0
    Data pkgs 000029; Added records 1,171; Changed records 0; Deleted records 0
    Data pkgs 000030; Added records 1,743; Changed records 0; Deleted records 0
    Data pkgs 000031; Added records 1,552; Changed records 0; Deleted records 0
    Data pkgs 000032; Added records 8,048; Changed records 0; Deleted records 0
    Data pkgs 000033; Added records 10,001; Changed records 0; Deleted records 0
    Data pkgs 000034; Added records 10,001; Changed records 0; Deleted records 0
    Data pkgs 000035; Added records 10,001; Changed records 0; Deleted records 0
    Data pkgs 000036; Added records 10,001; Changed records 0; Deleted records 0

    Hi Fong,
    The 'number of data records' setting specifies how many records should be transferred to BW in each data package for processing.
    The 'added records' count, however, shows how many records of that data package were actually loaded into your data target.
    The difference may be because of:
    1. Key fields: your key-field settings may lead to the elimination of duplicate records.
    2. Update rules: your update rules may filter out some records.
    Hope this makes it clear.
    Regards,
    Prema
