Data Flow Query

Dear Friends,
This is regarding the flow of information from client to backend in an SAP NW 7.1 environment.
I am curious to know the steps involved when information is sent (due to either a create or an update of an existing data object) from the mobile client device to the backend.
Is the sequence: Hand Held Terminal (HHT) >> device-specific queue >> call the data-object-specific BAPI wrapper for backend validation >> persist the data in the CDS only if backend validation is successful?
In Simple Words,
-- where does the data object instance created on the client device get persisted first - middleware or backend?
For the sake of simplicity, I have eliminated the Flow Blueprint Services :).
Thanks in Advance,
Suresh BJ

Hi Suresh,
In summarized form, you can understand the process as below:
When a receiver updates a data object instance in its local database and does a synchronization,
then this change is not only communicated to the backend, but also validated
and confirmed by the backend.
=> In the first step, the DOE receives the modification request and performs conflict detection to ensure that multiple receivers do not cause an inconsistency through parallel updates of the same instance.
=> It then forwards the modification request to the backend, which performs the validation and update of the data.
=> Once confirmation of the validation and update is received by the DOE, it persists the data in the CDS and sends a confirmation back to the receiver that initiated the update.
=> The receiver then commits the update permanently into its local database.
=> If a rejection is received due to a conflict or validation failure, the receiver has to roll back the update.
=> If the modification request leads to an update of backend data, the DOE checks whether the subscription state is modified and, if needed, recalculates subscriptions so that other receivers subscribed to the data object also receive the updated data object instance.
I hope this would help to resolve your query.
Thanks & Regards
Shweta

Similar Messages

  • SPLIT An R3 data flow

    Here is an example of how my flow is set up.
    There is a reason for the split in the R/3 data flow. I wanted to know if it was possible.
    Example:
    --> Data flow
                  ---> R/3 data flow
    > Two tables from the SAP R/3 system are joined by a query and connected to the data transport
                    R/3 data flow --> Query --> Target table
    This is just an example, but the actual implementation is more complex (the above is one job).
    Is it possible this way:
    > Data flow
                   --> Data flow
                  ---> R/3 data flow
    > Two tables from the SAP R/3 system are joined by a query and connected to the data transport
    2nd job:
    The rest of the work
    I guess it's not possible because the R/3 data flow has to be connected to a target,
    but I just want to know if there is a possibility.

    Are you able to call the customer's middleware externally and on demand? If so, you could use the "Custom Transfer" data transfer method and specify the middleware application details as part of the extraction process. This will cover the mandatory middleware transport requirement while still maintaining the integrity of the Data Services extraction.

  • Why does a DB2 Data Flow Task query not accept a date coming from a string or datetime variable in SSIS?

    I am trying to compare a DB2 date format to a date variable from SSIS. I have tried to set the variable as datetime and string. I have also cast the SQL date as date in the data flow task. I have tried a number of combinations of date formats, but no luck yet. Does anyone have any insights on how to set a date (without the time) variable and be able to use it in the data flow task SQL? There has to be an easy way to accomplish that. I get the following error:
    "An invalid datetime format was detected; that is, an invalid string representation or value was specified. SQLSTATE=22007".
    Thanks!

    Hi Marcel,
    Based on my research, in DB2, we use the following function to convert a string value to a date value:
    Date(To_Date('String', 'DD/MM/YYYY'))
    So, you can set the variable type to String in the package, and try the following query:
    ACCOUNT_DATE BETWEEN '11/30/2013' AND Date(To_Date(?, 'DD/MM/YYYY'))
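    For illustration only, here is a minimal sketch of how the full statement might look in the OLE DB Source, assuming ACCOUNT_DATE is a DB2 DATE column and the ? placeholder is mapped to the String package variable (the table and other column names are hypothetical):
    -- Hypothetical DB2 query: both bounds converted explicitly so nothing relies on implicit casting
    SELECT ACCOUNT_ID, ACCOUNT_DATE, AMOUNT
    FROM   FINANCE.ACCOUNT_LEDGER                                  -- hypothetical table
    WHERE  ACCOUNT_DATE BETWEEN DATE(TO_DATE('30/11/2013', 'DD/MM/YYYY'))
                            AND DATE(TO_DATE(?, 'DD/MM/YYYY'))     -- ? mapped to the SSIS string variable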
    References:
    http://stackoverflow.com/questions/4852139/converting-a-string-to-a-date-in-db2
    http://www.dbforums.com/db2/1678158-how-convert-string-time.html
    Regards,
    Mike Yin
    TechNet Community Support

  • SQL Query using a Variable in Data Flow Task

    I have a Data Flow task that I created. The source query is in the file "LPSreason.sql", which is stored on a shared drive such as
    \\servername\scripts\LPSreason.sql
    How can I use this .sql file as a SOURCE in my Data Flow task? I guess I can use SQL Command as the access mode, but I'm not sure how to do that.

    Hi Desigal59,
    You can use a Flat File Source adapter to get the query statement from the .sql file. When creating the Flat File Connection Manager, set the Row delimiter to a character that won't be in the SQL statement, such as "Vertical Bar {|}". In this way, the Flat File Source outputs only one row with one column. If necessary, you can set the data type of the column from DT_STR to DT_TEXT so that the Flat File Source can handle SQL statements longer than 8000 characters.
    After that, connect the Flat File Source to a Recordset Destination, so that the column is stored in an SSIS Object variable (supposing the variable name is varQuery).
    In the Control Flow, we can use one of the following two methods to pass the value of the Object type variable varQuery to a String type variable QueryStr, which can be used in an OLE DB Source directly.
    Method 1: via Script Task
    1. Add a Script Task under the Data Flow Task and connect them.
    2. Add User::varQuery as ReadOnlyVariables and User::QueryStr as ReadWriteVariables.
    3. Edit the script as follows:
    public void Main()
    {
        // Fill a DataTable from the ADO recordset held in the Object variable
        System.Data.OleDb.OleDbDataAdapter da = new System.Data.OleDb.OleDbDataAdapter();
        System.Data.DataTable dt = new System.Data.DataTable();
        da.Fill(dt, Dts.Variables["User::varQuery"].Value);
        // Copy the single cell (the SQL text) into the String variable
        Dts.Variables["User::QueryStr"].Value = dt.Rows[0].ItemArray[0].ToString();
        Dts.TaskResult = (int)ScriptResults.Success;
    }
    4. Add another Data Flow Task under the Script Task, and join them. In the Data Flow Task, add an OLE DB Source, set its Data access mode to "SQL command from variable", and select the variable User::QueryStr.
    Method 2: via Foreach Loop Container
    Add a Foreach Loop Container under the Data Flow Task, and join them.
    Set the enumerator of the Foreach Loop Container to Foreach ADO Enumerator, and select the ADO object source variable as User::varQuery.
    In the Variable Mappings tab, map the variable User::QueryStr to Index 0, so it receives the collection value.
    Inside the Foreach Loop Container, add a Data Flow Task like step 4 in method 1.
    Regards,
    Mike Yin
    TechNet Community Support

  • Update Schema Changes within Query Tool in Data Flow.

    Post Author: j_perdzock
    CA Forum: Data Integration
    I just added new fields to a table within my datastore and re-imported the schema.  I need to map data to these new fields via a Query transform already defined within my data flow.  The new fields appear within my Targeted datastore; however, these new fields do not appear within the Query Transform.  How can I get these new fields to appear within my Query Transform?  We are running Designer version 11.5.3.4.

    Post Author: wdaehn
    CA Forum: Data Integration
    Columns have to be created manually in the Query, using "New Output Column", drag and drop, or copy/paste. Only if the query has no columns at all and is connected to a target does Designer do the copy for you automatically.

  • R/3 data flow is timing out in Data Services

    I have created an R/3 data flow to pull some AP data in from SAP into Data Services.  This data flow outputs to a query object to select columns and then outputs to a table in the repository.  However the connection to SAP is not working correctly.  When I try to process the data flow it just idles for an hour until the SAP timeout throws an error.  Here is the error:
    R/3 CallReceive error <Function Z_AW_RFC_ABAP_INSTALL_AND_RUN: connection closed without message (CM_NO_DATA_RECEIVED)
    I have tested authorizations by adding SAP_ALL to the service account I'm using and the problem persists.
    Also, the transports have all been loaded correctly.
    My thought is that it is related to the setting that controls the method of generating and executing the ABAP code for the data flow, but I can't find any good documentation that describes this, and my trial and error method so far has not produced results.
    Any help is greatly appreciated.
    Thanks,
    Matt

    You can't find any good documentation??? I am working my butt off just.......just kiddin'
    I'd suggest we divide the question into two parts:
    My dataflow takes a very long time, how can I prevent the timeout after an hour? Answer:
    Edit the datastore; there is a flag called "execute in background" to be enabled. With that, the ABAP is submitted as a background spool job and hence does not have the dialog-mode timeout. Another advantage is that you can watch it running by browsing the spool jobs from the SAP GUI.
    The other question seems to be: why does it take that long at all? Answer:
    Either the ABAP takes that long because of the data volume,
    or the ABAP is not performing well, e.g. a join via ABAP loops with the wrong table as the inner one.
    Another typical reason is using direct_download as the transfer method. This is fine for testing, but it takes a very long time to download data via the GUI_DOWNLOAD ABAP function, and the download time would be part of the ABAP execution.
    So my first set of questions would be
    a) How complex is the dataflow, is it just source - query - data_transfer or are there joins, lookups etc?
    b) What is the volume of the table(s)?
    c) What is your transfer method?
    d) Have you had a look at the generated abap? (in the R/3 dataflow open the menu Validation -> Generate ABAP)
    btw, some docs: https://wiki.sdn.sap.com:443/wiki/display/BOBJ/ConnectingtoSAP

  • Data flows are getting started but not completing successfully while extracting/loading of the data

    Hello People,
    We are facing abnormal behavior with the data flows in the Data Services job.
    Scenario:
    We are extracting the data from the CRM side in parallel. Please refer to the build:
    a. We have 5 main workflows flows i.e :
       => Main WF1 has 6 more sub Wf's in it, in which each sub Wf has 1/2 DF's associated in parallel.
       => Main WF2 has 21 DF's and 1 WFa->with a DF & a WFb. WFb has 1 DF in parallel.
       => Main WF3 has 1 DF in parallel.
       => Main WF4 has 3 DF in parallel.
       => Main WF5 has 1 WF & a DF in sequence.
    b. Usually the job works perfectly fine, but sometimes it gets stuck at the DFs without any error logs.
    c. The job doesn't get stuck at a specific data flow or on a specific day; many times it gets stuck at different DFs.
    d. Observations in the Monitor Log:
    Dataflow          State       RowCnt     LT        AT
    /DF1/ZABAPDF      PROCEED     234000     8.113     394.164
    /DF1/Query        PROCEED     234000     8.159     394.242
    /DF1/Query_2      PROCEED     234000     8.159     394.242
    Where LT: Lapse Time and AT: Absolute time
    If you check the monitor log, the state of the data flow DF1 remains PROCEED till the end; ideally it should complete.
    In successful jobs, the status for DF1 is STOP. This DF takes approx. 2 min to execute.
    The row count for the DF1 extraction is 234204, but it got stuck at 234000.
    We then terminate the job after some time, but surprisingly it executes successfully the next day.
    e. As per analysis of all the failed jobs, the same things were observed for the different data flows that got stuck during execution. The logic in the data flows is perfectly fine.
    Observations in the Trace log:
    DATAFLOW: Process to execute data flow <DF1> is started.
    DATAFLOW: Data flow <DF1> is started.
    ABAP: ABAP flow <ZABAPDF> is started.
    ABAP: ABAP flow <ZABAPDF> is completed.
    Cache statistics determined that data flow <DF1>
    uses <0>caches with a total size of <0> bytes. This is less than(or equal to) the virtual memory <1609564160> bytes available for caches.
    Statistics is switching the cache type to IN MEMORY.
    DATAFLOW: Data flow <DF1> using IN MEMORY Cache.
    DATAFLOW: <DF1> is completed successfully.
    The highlighted text in the trace log does not appear for the unsuccessful job, but it does appear for the successful one.
    Note: The cache type is pageable cache, DS ver is 3.2.
    Please suggest.
    Regards,
    Santosh

    Hi Santosh,
    Just a wild guess.
    Would you be able to replicate all the DFs/WFs, delete the original DFs/WFs, rename the replicated objects to the original DF/WF names (for your convenience), and execute it?
    Sometimes the reference does not work.
    Hope this works.
    Regards,
    Shiva Sahu

  • Read from sql task and send to data flow task - [OLE DB Source [1]] Error: A rowset based on the SQL command was not returned by the OLE DB provider.

    I have created an Execute SQL Task.
    In it, I have created an 'empidvar' variable of String type and set the SQL statement to 'select distinct empid from emp'.
    Result Set: result name = 0 and variable name = empidvar.
    I have added a Data Flow Task with an OLE DB source and put this SQL statement under SQL command: exec emp_sp @empidvar=?
    I am getting an error.
    [OLE DB Source [1]] Error: A rowset based on the SQL command was not returned by the OLE DB provider.
    [SSIS.Pipeline] Error: component "OLE DB Source" (1) failed the pre-execute phase and returned error code 0xC02092B4.

    Shouldn't the setting be Result Set = Full result set, as your query returns a result set? Also, I think the variable to be mapped should be of Object type.
    Then the Data Flow Task needs to be put inside a Foreach Loop based on the ADO.NET recordset, with your earlier variable mapped inside it, so as to iterate over every value the SQL task returns.
    Also, if using a stored procedure in an OLE DB Source, make sure you read this:
    http://consultingblogs.emc.com/jamiethomson/archive/2006/12/20/SSIS_3A00_-Using-stored-procedures-inside-an-OLE-DB-Source-component.aspx
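    As an illustration only (the procedure body below is hypothetical, not from the original post), an OLE DB Source usually copes best with a stored procedure that starts with SET NOCOUNT ON and returns one predictable result set; the call then uses the OLE DB parameter placeholder:
    -- Hypothetical T-SQL sketch of a procedure shaped for use in an OLE DB Source
    CREATE PROCEDURE dbo.emp_sp
        @empidvar INT
    AS
    BEGIN
        SET NOCOUNT ON;                      -- suppress row-count messages that can confuse the pipeline
        SELECT empid, empname, deptid        -- hypothetical columns
        FROM   dbo.emp
        WHERE  empid = @empidvar;
    END
    GO
    -- SQL command text in the OLE DB Source; the ? is mapped to the package variable
    EXEC dbo.emp_sp @empidvar = ?;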
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Automatic creation of BW data flow documentation

    Dear Gurus,
    I need to write documentation of the data flow of a huge project which I haven't implemented myself.
    The documentation should contain a mapping of the objects in the dataprovider to objects in the source system(s),
    possibly with the info about which dataproviders the objects are included in, e.g. between the multiprovider and the source system.
    Details of transformations can be ignored; possibly mentioning that a routine is involved, but that's the maximum.
    With the metadata repository, I can get the content of cubes in a graphical overview, but it doesn't really provide me with useful information.
    You can imagine I prefer an automatic way to create this documentation.
    Anybody who knows a solution, even if it only provides part of the purpose?
    Any solution via query, standard SAP or customized program, ...
    Recommendations would be very highly appreciated!
    Thx & Rgds, sam

    Worldwide, documentation is written for SAP BW projects, but there has been no reply about automatic documentation.
    A lot of time must be lost manually creating documentation on mapping objects to source system fields.
    ==> SAP, please, work out a solution.
    I didn't find a satisfying solution, but I've done it the following way:
    List all objects for a multiprovider via the metadata repository, and paste them into an Excel document.
    Then list all objects for the underlying dataproviders, and paste them into separate sheets of this Excel file.
    Compare the objects of the MP with the objects on the other sheets using Excel functions, and mark when a dataprovider contains a certain object.
    For the datasources, I checked whether an object is present and, if so, gave the original source field.
    In summary, this is not an optimal or complete solution, but it prevents mistakes.
    Rgds. sam

  • SSIS Data Flow task using SharePoint List Adapter Setting SiteUrl won't work with an expression

    Hi,
    I'm trying to populate the SiteUrl from a variable that has been set based on a query to a SQL table that has a URL field.  Here are the steps I've taken and the result.
    Created a table with a url in it to reference a SharePoint Task List.
    Created an Execute SQL Task to grab the URL, putting the result set in a variable called SharePointUrl.
    Created a For Each container and within the collection I use the SharePointUrl as the ADO object source variable and select rows in the first table.
    Still in the For Each container within the Variable mappings I have another Package Variable called PassSiteUrl2 and I set that to Index 0 or the value of the result set.
    I created a Script Task to then display the PassSiteUrl2 variable, and it works great; I see my URL.
    This is where it starts to suck eggs!!!!
    I insert a Data Flow Task into my foreach loop.
    I Insert a SharePoint List Adapter into my Data Flow
    Within my SharePoint List Adapter I set my list to be "Tasks", My list view to be "All Tasks" and then I set the url to be another SharePoint site that has a task list just so there is some default value to start with.
    Now within my Data Flow I create an expression and set the [SharePoint List Source].[SiteUrl] equal to my variable @[User::PassSiteUrl2].
    I save everything and run my SSIS package, and it overlays the default [SharePoint List Source].[SiteUrl] with blanks in the SharePoint List Adapter, then throws the error that it is missing a URL.
    So here is my question: why, if my package variable displays fine in my Control Flow, is it now not seen (or seen as blanks) in the Data Flow expression? Anyone have any ideas?
    Thanks
    Donald R. Landry

    Thanks Arthur,
    The scope of the variable is at a package level and when I check to see if it can be moved Package level is the highest level.  The evaluateasexpression property is set to True.  Any other ideas?
    I also tried the following: take the variable that has the URL in it and just assign it to the description of the Data Flow Task to see if it would show up there (the idea being that the value of my @[User::PassSiteUrl] should just show in the description field when the package is run). That also shows up blank.
    So I'm thinking it's my expression. All I do in the expression is set [SharePoint List Source].[SiteUrl] equal to @[User::PassSiteUrl] by dragging and dropping the variable into the expression box. Maybe the expression should be something else, or is there a way to say @[User::PassSiteUrl] = Dts.Variables("User::PassSiteUrl2").Value.ToString()?
    In my script task I use Dts.Variables("User::PassSiteUrl2").Value.ToString() to display
    the value in the message box and that works fine.
    Donald R. Landry

  • AR DATA FLOW (TABLE LEVEL) - ON ACCOUNT CREDITS & CASH RECEIPTS

    Product: FIN_AR
    Date written: 2003-09-16
    AR DATA FLOW (TABLE LEVEL) - ON ACCOUNT CREDITS & CASH RECEIPTS
    ================================================================
    PURPOSE
    This document explains how data is inserted into the AR tables.
    The explanation is based on On Account credits and Cash Receipts.
    Explanation
    1. On Account Credits
    Credit information entered for a specific customer but not yet applied to a specific invoice is called an "On Account" credit.
    Because it contains credit information, it is stored in the tables in a way similar to a Credit Memo.
    The differences from a Credit Memo are as follows:
    o When first entered, the payment schedule will be fully remaining.
    o When first entered, no records are inserted into AR_RECEIVABLE_APPLICATIONS_ALL.
    o The line records will have NULL values in PREVIOUS_CUSTOMER_TRX_ID and PREVIOUS_CUSTOMER_TRX_LINE_ID.
    o The distribution lines have to be typed in as there is no invoice to copy them from.
    An "On Account" credit differs from a Credit Memo, which can be applied only to a specific invoice, in that it can be applied to any invoice for the same supplier.
    2. Cash Receipts
    When receipt information is entered, rows are inserted into the following tables:
    o AR_CASH_RECEIPTS_ALL
    o AR_CASH_RECEIPT_HISTORY_ALL
    o AR_PAYMENT_SCHEDULES_ALL
    o AR_RECEIVABLE_APPLICATIONS_ALL
    When the entered receipt information is matched with a specific invoice, an additional record is inserted into AR_RECEIVABLE_APPLICATIONS_ALL and AR_PAYMENT_SCHEDULES_ALL is updated.
    (Diagram: RECEIPT is linked to PAYMENT SCHEDULE; RECEIPT HISTORY and APPLICATIONS both reference RECEIPT.)
    2.1 AR_CASH_RECEIPTS_ALL
    Basic information about the receipt is inserted here; one row is inserted per receipt.
    Key = CASH_RECEIPT_ID (from sequence AR_CASH_RECEIPTS_S)
    Important Fields
    AMOUNT - Value of receipt in entered currency
    RECEIPT_NUMBER - Payment Number entered by user.
    STATUS - (APP)lied, (UNAPP)lied, (REV)ersed Payment,
    (STOP) payment, (NSF) insufficient funds.
    It will only change to APP once the whole
    amount of the receipt is applied.
    REVERSAL_DATE - NULL unless receipt reversed
    PAY_FROM_CUSTOMER - Contains CUSTOMER_ID for RA_CUSTOMERS
    2.2 AR_CASH_RECEIPT_HISTORY_ALL
    One row of data is inserted per receipt, and it holds the information posted to GL.
    When a receipt is reversed, a new row is inserted.
    Key = CASH_RECEIPT_HISTORY_ID (from sequence)
    Important Fields
    CASH_RECEIPT_ID - Foreign key to AR_CASH_RECEIPTS record.
    STATUS - CLEARED for manually input receipts.
    GL_DATE - Accounting date
    ACCOUNT_CODE_COMBINATION_ID - Key to GL_CODE_COMBINATIONS
    POSTING_CONTROL_ID - -3 if unposted
    REVERSAL_POSTING_CONTROL_ID - NULL unless payment reversed
    CURRENT_RECORD_FLAG - Y if this is latest record
    PRV_STAT_CASH_RECEIPT_HIST_ID - Key to previous receipt history record.
    2.3 AR_PAYMENT_SCHEDULES_ALL
    The total amounts applied to invoices are stored here.
    Key = PAYMENT_SCHEDULE_ID (from sequence)
    Important Fields
    CUSTOMER_TRX_ID - NULL
    CASH_RECEIPT_ID - Foreign key to AR_CASH_RECEIPTS record.
    AMOUNT_DUE_ORIGINAL - Total amount of receipt (usually negative)
    AMOUNT_DUE_REMAINING - Unapplied amount of receipt.
    AMOUNT_APPLIED - How much of this receipt is applied .
    STATUS - (OP)en or (CL)osed. Will only be closed if
    AMOUNT_DUE_REMAINING is zero.
    All of the fields holding LINE, TAX and FREIGHT amounts are NULL.
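    For illustration (a hypothetical query, not part of the original note), the receipt's payment schedule can be inspected to see how much of the receipt is still unapplied:
    -- Hypothetical query: remaining (unapplied) amount of one receipt
    SELECT PAYMENT_SCHEDULE_ID, CASH_RECEIPT_ID,
           AMOUNT_DUE_ORIGINAL, AMOUNT_DUE_REMAINING, AMOUNT_APPLIED, STATUS
    FROM   AR_PAYMENT_SCHEDULES_ALL
    WHERE  CASH_RECEIPT_ID = :cash_receipt_id    -- bind variable for the receipt
    AND    CUSTOMER_TRX_ID IS NULL;              -- receipt rows carry no transaction id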
    2.4 AR_RECEIVABLE_APPLICATIONS
    When a receipt is first created, one row of data is inserted into this table;
    each time an Apply or Unapply occurs against an invoice, two new rows are created.
    For example:
      Record 1      UNAPP        700
    { Record 2      UNAPP       -200
    { Record 3      APP          200      cross referenced to the Invoice
    { Record 4      UNAPP       -500
    { Record 5      APP          500      cross referenced to 2nd Invoice
    The sum of the amounts on records that have a particular status should add up
    to the running totals on the payment schedules, but with the opposite sign.
    i.e. In the example above
    AR_PAYMENT_SCHEDULES.AMOUNT_DUE_ORIGINAL = -700
    AR_PAYMENT_SCHEDULES.AMOUNT_DUE_REMAINING = 0
    AR_PAYMENT_SCHEDULES.AMOUNT_APPLIED = -700
    UNAPP = 700 -200 -500 = 0
    APP = 200 + 500 = 700
    Statuses of these records can be:-
    UNAPP - Unapplied
    APP - Applied
    ACC - On Account
    UNID - Unidentified (Customer Not known)
    These record details can be viewed in the Transaction History form for the invoice/credit/receipt.
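    As an illustration (again a hypothetical query, not part of the original note), the opposite-sign relationship described above can be verified by grouping the application rows of a receipt by status:
    -- Hypothetical query: totals per application status for one receipt
    SELECT STATUS, SUM(AMOUNT_APPLIED) AS TOTAL_AMOUNT
    FROM   AR_RECEIVABLE_APPLICATIONS_ALL
    WHERE  CASH_RECEIPT_ID = :cash_receipt_id
    GROUP BY STATUS;
    -- For the example above this returns UNAPP = 0 and APP = 700,
    -- the opposite sign of AMOUNT_APPLIED (-700) on the receipt's payment schedule.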
    2.5 AR_PAYMENT_SCHEDULE (Invoice)
    When a receipt is applied to a specific invoice, the payment schedule record for that invoice is updated.
    The remaining amount field values are reduced by the payment amount.
    If the remaining amount becomes "0", the invoice payment schedule is closed.
    For example, if a receipt amount of "200" is applied to an invoice amount of "1175" (which includes a tax amount of 175), the invoice's payment schedule is adjusted as follows:
                                    Before      After
    AMOUNT_DUE_REMAINING           1175.00     975.00
    AMOUNT_LINE_ITEMS_REMAINING    1000.00     800.00
    TAX_REMAINING                   175.00     175.00
    FREIGHT_REMAINING                 0.00       0.00
    Note that receipts are applied in a fixed sequence:-
    1. Line Amounts
    2. Tax Amounts
    3. Freight Amounts
    i.e. the TAX_REMAINING figure will only start to decrease when the
    AMOUNT_LINE_ITEMS_REMAINING is zero.
    Reference Documents
    Note : 29277.1 & 29278.1

    Hi,
    This query works fine for me:
    SELECT CR.CASH_RECEIPT_ID,
           CR.RECEIPT_NUMBER,
           CR.RECEIPT_DATE,
           CR.CURRENCY_CODE,
           DECODE (CR.TYPE, 'MISC', NULL,
                   NVL (SUM (DECODE (RA.STATUS, 'ACC', NVL (RA.AMOUNT_APPLIED, 0), 0)), 0)) ON_ACCOUNT_AMOUNT
    FROM   AR_RECEIVABLE_APPLICATIONS_ALL RA,
           AR_CASH_RECEIPTS_ALL CR,
           AR_RECEIPT_METHODS RM
    WHERE  RA.CASH_RECEIPT_ID = CR.CASH_RECEIPT_ID
           AND CR.RECEIPT_METHOD_ID = RM.RECEIPT_METHOD_ID
           AND CR.ORG_ID = <org_id>
    GROUP BY CR.CASH_RECEIPT_ID,
           CR.RECEIPT_DATE,
           CR.RECEIPT_NUMBER,
           RM.NAME,
           CR.CURRENCY_CODE,
           CR.TYPE
    ORDER BY RECEIPT_DATE DESC
    Let me know if it worked.
    Octavio

  • Execute SQL Task - UPDATE or Data Flow Data Conversion

    Good Morning folks,
    I want to know which is faster for converting a data type.
    I want to convert nvarchar(255) to datetime2,
    either using T-SQL (Execute SQL Task):
    UPDATE TBL
    SET FIELD1 = CAST(FIELD1 AS DATETIME2)
    GO
    or a Data Conversion transformation (Data Flow)?
    Thanks.

    Itz Shailesh, my T-SQL has only one UPDATE, not many UPDATEs, so it's one batch, not 2, 3, 4... So it's only one update, e.g. update table set field1 = cast(field1 as datetime2), field2 = cast(field2 as datetime2), not: update table set field1 = cast(field1 as datetime2) go update table set field2 = cast(field2 as datetime2) go. Understand?
    Yeah, I understand that you have to update just one field. What I am saying is, if you have to update millions of rows then you should update rows in batches (let's say batches of 50k). This way you will only touch 50k rows at a time and not all rows in the table.
    I see that your row count is smaller; however, I would still prefer the Data Conversion transformation over the UPDATE statement.
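    For illustration, a minimal T-SQL sketch of that batching idea (the table, the source column, and the extra DATETIME2 column FIELD1_DT2 are hypothetical; TRY_CAST needs SQL Server 2012 or later):
    -- Hypothetical sketch: convert in 50k-row batches instead of one huge UPDATE
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        UPDATE TOP (50000) TBL
        SET    FIELD1_DT2 = TRY_CAST(FIELD1 AS DATETIME2)          -- write into a separate DATETIME2 column
        WHERE  FIELD1_DT2 IS NULL
          AND  TRY_CAST(FIELD1 AS DATETIME2) IS NOT NULL;          -- skip rows that cannot be converted
        SET @rows = @@ROWCOUNT;                                    -- loop until no convertible rows remain
    END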
    If this post answers your query, please click "Mark As Answer" or "Vote as Helpful".

  • Tracing down data flow within an DB

    Hello,
    Please, many times I was wondering if there is a practical way of performing the task in the subject. Namely,
    1) there is an app without source code on a production server, and
    2) a corresponding DB that is open, i.e. it is possible to go through all the objects, but there is a plethora of them.
    Question: upon a CLICK event on a control located on the app interface, is there a practical way to follow the flow of the data entered in the interface controls, and to trace down which SP(s) have been fired, etc.?
    The DB itself is very, very complex.
    How do you cope with such tasks?
    thanks
    bye

    On a production server, don't use SQL Profiler due to the performance impact.
    Use server-side tracing with caution as well:
    http://www.sqlusa.com/bestpractices/createtrace/
    BOL: http://msdn.microsoft.com/en-us/library/ms175047.aspx
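    As an alternative sketch (not from the original reply, and assuming your server is SQL Server 2008 or later), a lightweight Extended Events session can capture which procedures and batches fire when you click in the application; the session name and file path below are hypothetical:
    -- Hypothetical Extended Events session capturing procedure calls and ad hoc batches
    CREATE EVENT SESSION [TraceAppClicks] ON SERVER
    ADD EVENT sqlserver.rpc_completed (
        ACTION (sqlserver.sql_text, sqlserver.client_app_name, sqlserver.username)),
    ADD EVENT sqlserver.sql_batch_completed (
        ACTION (sqlserver.sql_text, sqlserver.client_app_name, sqlserver.username))
    ADD TARGET package0.event_file (SET filename = N'C:\Traces\app_clicks.xel');
    GO
    ALTER EVENT SESSION [TraceAppClicks] ON SERVER STATE = START;  -- use STATE = STOP when finished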
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • BODI: Is it possible to pass parameter/variable value out of a data flow?

    Hi All,
    Is it possible to pass a parameter value out of a data flow?
    I've created a custom function in my query transform to get a row count; this value would be used outside the data flow to perform other logic. It looks like I'm unable to modify the output schema of the function in the query transform to explicitly map it to a particular global/local variable.
    Any ideas?
    Thanks.

    Any ideas?

  • How to find the data flow in pipes

    Hi All,
    I am working on an existing job which has a query mapped to an input file; from there it has a couple of Query transforms. After the second Query transform there are multiple pipes with different Query transforms, in which all of the queries target an error table and one pipe targets a clean table.
    I want to know how the data flows into the clean table and the error table, and how to find the conditions that decide which record goes into the clean table and which record goes into the error table.
    Thanks

    No such public document is available from Oracle, unless it comes from Oracle Education.
    For some specific operations you can check the Process tab inside Oracle Applications, which shows the data flow as part of the workflow process.
    Regards
    Prashant Pathak
