Query with BOND process

Hi Xperts,
We have a Plant and a Leased Warehouse. The Leased Warehouse has been created as a Storage Location; however, it is far from the Factory.
We procure duty-free input materials via CT3/PC and also transfer them from the Plant to the Leased Warehouse. This transfer is done with an ARE3, as per policy.
We are now setting up the transfer of the Finished Products (FG) from the Plant to the Leased Warehouse as well. As you know, both the Plants have their own Bonds; we just need to know how we can adapt the same process for the FG.
Requirement
1. We will use 313/315 movements to transfer the FG to the Leased Warehouse
2. An ARE3 will be created against the material document generated via the 313 movement
3. The sending Plant's Bond should have a credit entry
4. The receiving Plant's Bond should have a debit entry
How do we calculate the Bond amount on a quantity basis?
Regards

To achieve your 3rd and 4th points, you should have created the leased warehouse as a Plant or a Depot.
G. Lakshmipathi

Similar Messages

  • Query with dismantling process while Sales Return

    Hi Xperts,
    We are running our sales process in SAP as follows:
    Sales Order -> (BOM is copied for the finished product) -> Production Order created automatically on saving the Sales Order -> CO15 to confirm the Production Order -> Outbound Delivery -> PGI -> Billing.
    Now we want a Sales Return process in which the FG stock does not come into the warehouse; instead, the component stock should be posted and updated in the warehouse directly. How can we achieve this?
    To state it more clearly, at the time of confirming the Production Order:
    FG : 101
    Component : 261
    During the Sales Return we want exactly the reversal of the above, i.e.:
    FG : 102
    Component : 262
    Can we implement this for movement type 653?
    Regards
    Soumick


  • SQL query with Bind variable with slower execution plan

    I have a 'normal' SQL select-insert statement (not using bind variables) and it yields the following execution plan:
    Execution Plan
    0 INSERT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=148)
    1 0 HASH JOIN (Cost=7 Card=1 Bytes=148)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=4 Card=1 Bytes=100)
    3 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=3 Card=1)
    4 1 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE)
    (Cost=2 Card=135 Bytes=6480)
    Statistics
    0 recursive calls
    18 db block gets
    15558 consistent gets
    47 physical reads
    9896 redo size
    423 bytes sent via SQL*Net to client
    1095 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    55 rows processed
    I have the same query, but when it is run using bind variables (I tested it with both Oracle Forms and SQL*Plus), it takes considerably longer with a different execution plan:
    Execution Plan
    0 INSERT STATEMENT Optimizer=CHOOSE (Cost=407 Card=1 Bytes=148)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=3 Card=1 Bytes=100)
    2 1 NESTED LOOPS (Cost=407 Card=1 Bytes=148)
    3 2 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE) (Cost=2 Card=135 Bytes=6480)
    4 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=2 Card=1)
    Statistics
    0 recursive calls
    12 db block gets
    3003199 consistent gets
    54 physical reads
    9448 redo size
    423 bytes sent via SQL*Net to client
    1258 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    55 rows processed
    TABLEA has around 3 million records while TABLEB has 300 records. Is there any way I can improve the speed of the SQL query with bind variables? I have DBA access to the database.
    Regards
    Ivan
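
    The original statement isn't shown in the thread, but a common workaround on 9i, when bind variables make the optimizer abandon a good plan, is to pin the fast plan with hints. This is a sketch only; the target table, column list, join condition, and bind name below are assumed for illustration:
    INSERT INTO target_tab            -- hypothetical target
    SELECT /*+ USE_HASH(a b) INDEX(a tablea_idx_2) */
           a.col1, b.col2             -- assumed column list
    FROM tablea a, tableb b
    WHERE a.join_col = b.join_col     -- assumed join condition
    AND a.filter_col = :bind_val;     -- the bind that triggers the slow plan
    This forces the HASH JOIN and the TABLEA_IDX_2 range scan from the fast plan regardless of what the optimizer estimates for the bind value.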

    Many thanks for your reply.
    I have already gathered statistics for both TABLEA and TABLEB, as well as all the indexes associated with both tables (using dbms_stats; I am on a 9i database), but not for the indexed columns.
    For the table I use:
    begin
    dbms_stats.gather_table_stats(ownname=> 'IVAN', tabname=> 'TABLEA', partname=> NULL);
    end;
    For the index I use:
    begin
    dbms_stats.gather_index_stats(ownname=> 'IVAN', indname=> 'TABLEB_IDX_003', partname=> NULL);
    end;
    Is it possible to show me a sample of how to collect statistics for INDEX columns?
    regards
    Ivan
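
    Column-level statistics (including those on indexed columns) are gathered through the method_opt parameter of dbms_stats.gather_table_stats; there is no separate column-level procedure. A minimal sketch on 9i, assuming the same owner and table as above:
    begin
      dbms_stats.gather_table_stats(
        ownname    => 'IVAN',
        tabname    => 'TABLEA',
        cascade    => TRUE,                                -- also gathers stats on TABLEA's indexes
        method_opt => 'FOR ALL INDEXED COLUMNS SIZE 254'); -- histograms on all indexed columns
    end;
    /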

  • Running a SQL Stored Procedure from Power Query with Dynamic Parameters

    Hi,
    I want to execute a stored procedure from Power Query with dynamic parameters.
    In the normal process, the query looks like the following in Power Query. Here the value 'Dileep' is passed as a parameter value to the SP:
        Source = Sql.Database("ABC-PC", "SAMPLEDB", [Query="EXEC DBO.spGetData 'Dileep'"])
    Now I want to pass the value dynamically, taking it from an Excel sheet. I can get the required Excel cell value into a variable but am unable to pass it to the query:
        Name_Parameter = Excel.CurrentWorkbook(){[Name="Table3"]}[Content],
        Name_Value = Name_Parameter{0}[Value],
    I have tried the following, but it is not working:
        Source = Sql.Database("ABC-PC", "SAMPLEDB", [Query="EXEC DBO.spGetData Name_Value"])
    Can anyone please help me with this issue?
    Thanks
    Dileep

    Hi,
    I got it. Below is the correct syntax:
    Source = Sql.Database("ABC-PC", "SAMPLEDB", [Query="EXEC DBO.spGetData '" & Name_Value & "'"])
    Thanks
    Dileep
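
    Putting the thread's own snippets together, the complete M step sequence would look roughly like this (Text.From is added here as a precaution, in case the cell value is not already typed as text):
        let
            Name_Parameter = Excel.CurrentWorkbook(){[Name="Table3"]}[Content],
            Name_Value = Name_Parameter{0}[Value],
            Source = Sql.Database("ABC-PC", "SAMPLEDB",
                [Query="EXEC DBO.spGetData '" & Text.From(Name_Value) & "'"])
        in
            Source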

  • Internal Error while executing a query with variants in BEx

    Hi All,
    I am executing a query which has 4 variants. When I use the first 3 variants, the query works fine, but when I try to execute the same query with the 4th variant, the system gives an error message called "Internal Error".
    When I executed the same query with the same variant in transaction code RSRT, I was able to see the results.
    The error messages that I am getting are as follows:
    <Internal Error> Receiving from the BW server failed. BW server raised exception SYSTEM_FAILURE. Do you want to see more information?
    When I selected YES, I got the following message:
    Error in an ABAP statement when processing an internal table.
    Error Group
    RFC_ERROR_SYSTEM_FAILURE
    When I clicked the OK button, the following message was displayed:
    <Internal Error> Cannot determine text elements for the current query.
    Can anyone explain to me what went wrong? Is there anything that I need to do in the BEx settings? Are there any patches that need to be installed, or any OSS notes available for this sort of error?
    Thanks in advance
    Dilse...
    Hash

    Hi Harish,
    Execute the query in RSRT and check whether you have any short dump in ST22. This would give a clearer idea of what might have gone wrong.
    Another thing is to check whether the code for the variable is fine in the user exit.
    Hope this helps.
    Bye
    Dinesh

  • Reg: SQL select Query in BPEL process flow

    Hi,
    I am supposed to execute the SQL select query below (in a BPEL process flow) in JDeveloper using the Database adapter:
    SELECT LENGTH, WIDTH, HEIGHT, WEIGHT,
    LENGTH*WIDTH*HEIGHT AS ITEM_CUBE
    FROM CUBE
    WHERE ITEM = <xyz>
    AND OBJECT = (SELECT CASE_NAME FROM CUBE_SUPPLIER WHERE ITEM = <xyz> AND SUPP_IND = 'Y')
    Now my questions are:
    1. What does the "*" refer to in the query, and how can I retrieve the value of LENGTH*WIDTH*HEIGHT, where LENGTH, WIDTH, and HEIGHT are individual fields in the table?
    2. What does "AS" refer to? If "ITEM_CUBE" is the alias for the table name "ITEM", then to retrieve the value the query should be evaluated as:
    SELECT LENGTH, WIDTH, HEIGHT, WEIGHT,
    LENGTH*WIDTH*HEIGHT AS ITEM_CUBE
    FROM CUBE
    WHERE ITEM_CUBE.ITEM = <xyz>
    AND ITEM_CUBE.OBJECT = (SELECT CASE_NAME FROM CUBE_SUPPLIER WHERE ITEM = <xyz> AND SUPP_IND = 'Y')
    Is my assumption correct?
    Please suggest.
    Thanks...
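
    For reference: the "*" here is plain multiplication, and "AS ITEM_CUBE" defines a column alias (a name for the computed column in the result set), not a table alias, so the WHERE clause keeps referring to the CUBE table's columns directly. Rewriting the thread's own query with a literal assumed in place of the placeholder:
    SELECT LENGTH,
           WIDTH,
           HEIGHT,
           WEIGHT,
           LENGTH * WIDTH * HEIGHT AS ITEM_CUBE   -- computed column, aliased in the output
    FROM CUBE
    WHERE ITEM = 'xyz'                            -- 'xyz' assumed in place of <xyz>
    AND OBJECT = (SELECT CASE_NAME
                  FROM CUBE_SUPPLIER
                  WHERE ITEM = 'xyz'
                  AND SUPP_IND = 'Y');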

    Hi,
    Thanks for your reply!
    I have a nested select query which works on two different tables, as shown below:
    SELECT LENGTH, WIDTH, HEIGHT, WEIGHT,
    LENGTH*WIDTH*HEIGHT AS ITEM_CUBE
    FROM CUBE
    WHERE ITEM = <abc>
    AND OBJECT = (SELECT NAME FROM SUPPLIER WHERE ITEM = <Item> AND SUPP_IND = 'Y')
    I am using the DB adapter of Oracle JDeveloper in a BPEL process flow, where I can select only one master table in the DB adapter (say SUPPLIER) and its attributes at a time. But as per my requirement I need to select both tables (CUBE and SUPPLIER) in a single adapter to execute my query.
    It is achievable by using two DB adapters: one to execute the nested query and another to execute the main query, taking the value of the nested query as a parameter. But I want to achieve it using a single one.
    Am I correct in my approach?
    Please suggest how to do it.
    Edited by: user10259700 on Oct 23, 2008 12:17 AM

  • Need help in optimizing the query with joins and group by clause

    I am having problems executing the query below; it is taking a lot of time. To simplify, I have added the two tables: FILE_STATUS, which stores the file load details, and COMM, the actual business commission table showing which records were successfully processed and which were transmitted to the other system. Records with status 'T' have been transmitted to the other system; transactions with 'P' are pending.
    CREATE TABLE FILE_STATUS
    (FILE_ID VARCHAR2(14),
    FILE_NAME VARCHAR2(20),
    CARR_CD VARCHAR2(5),
    TOT_REC NUMBER,
    TOT_SUCC NUMBER);
    CREATE TABLE COMM
    (SRC_FILE_ID VARCHAR2(14),
    REC_ID NUMBER,
    STATUS CHAR(1));
    INSERT INTO FILE_STATUS VALUES ('12345678', 'CM_LIBM.TXT', 'LIBM', 5, 4);
    INSERT INTO FILE_STATUS VALUES ('12345679', 'CM_HIPNT.TXT', 'HIPNT', 4, 0);
    INSERT INTO COMM VALUES ('12345678', 1, 'T');
    INSERT INTO COMM VALUES ('12345678', 3, 'T');
    INSERT INTO COMM VALUES ('12345678', 4, 'P');
    INSERT INTO COMM VALUES ('12345678', 5, 'P');
    COMMIT;
    Here is the query that I wrote to give me the details of the file that has been loaded into the system. It reads the FILE_STATUS and COMM tables to show the file name, total records loaded, total records successfully loaded into the commission table, and the number of records finally transmitted (status = 'T') to other systems.
    SELECT
        FS.CARR_CD
        ,FS.FILE_NAME
        ,FS.FILE_ID
        ,FS.TOT_REC
        ,FS.TOT_SUCC
        ,NVL(C.TOT_TRANS, 0) TOT_TRANS
    FROM FILE_STATUS FS
    LEFT JOIN (
        SELECT SRC_FILE_ID, COUNT(*) TOT_TRANS
        FROM COMM
        WHERE STATUS = 'T'
        GROUP BY SRC_FILE_ID
    ) C ON C.SRC_FILE_ID = FS.FILE_ID
    WHERE FILE_ID = '12345678';
    In production this query has more joins and takes a lot of time to process. The main culprit for me is the join on the COMM table to get the count of transmitted transactions. Can you please give me tips to optimize this query to get results faster? Do I need to remove the GROUP BY and use PARTITION or something else? Please help!
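
    One way to sidestep the join/GROUP BY for this particular output, sketched only against the two tables above, is a scalar subquery per file:
    SELECT fs.carr_cd,
           fs.file_name,
           fs.file_id,
           fs.tot_rec,
           fs.tot_succ,
           (SELECT COUNT(*)
            FROM comm c
            WHERE c.src_file_id = fs.file_id
            AND c.status = 'T') tot_trans   -- transmitted records per file
    FROM file_status fs
    WHERE fs.file_id = '12345678';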

    I get 2 rows if I use my query with your new criteria. Did you commit the record if you are using a second connection to query? Did you remove the criteria for file_id?
    select carr_cd, file_name, file_id, tot_rec, tot_succ, tot_trans
      from (select fs.carr_cd,
                   fs.file_name,
                   fs.file_id,
                   fs.tot_rec,
                   fs.tot_succ,
                   count(case
                            when c.status = 'T' then
                             1
                            else
                             null
                          end) over(partition by c.src_file_id) tot_trans,
                   row_number() over(partition by c.src_file_id order by null) rn
              from file_status fs
              left join comm c
                on c.src_file_id = fs.file_id
             where carr_cd = 'LIBM')
    where rn = 1;
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS
    LIBM    CM_LIBM.TXT          12345678                5          4          2
    LIBM    CM_LIBM.TXT          12345677               10          0          0
    Using RANK can potentially produce multiple rows, though your data may prevent this. ROW_NUMBER will always prevent duplicates. The ordering of the analytic function is irrelevant in your query if you use ROW_NUMBER. You can remove the outermost query and inspect the data returned by the inner query:
    select fs.carr_cd,
           fs.file_name,
           fs.file_id,
           fs.tot_rec,
           fs.tot_succ,
           count(case
                    when c.status = 'T' then
                     1
                    else
                     null
                  end) over(partition by c.src_file_id) tot_trans,
           row_number() over(partition by c.src_file_id order by null) rn
    from file_status fs
    left join comm c
    on c.src_file_id = fs.file_id
    where carr_cd = 'LIBM';
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS         RN
    LIBM    CM_LIBM.TXT          12345678                5          4          2          1
    LIBM    CM_LIBM.TXT          12345678                5          4          2          2
    LIBM    CM_LIBM.TXT          12345678                5          4          2          3
    LIBM    CM_LIBM.TXT          12345678                5          4          2          4
    LIBM    CM_LIBM.TXT          12345677               10          0          0          1

  • BPEL with Reliable Processing

    Hi,
    I have read and worked through the "BPEL with Reliable Processing" cookbook.
    http://www.oracle.com/technology/pub/articles/bpel_cookbook/qualcomm-bpel.html
    It's great!
    I am trying to enhance the process by adding a notion of priority between different records.
    A priority-based process will favour those records with higher priority values.
    To achieve that, I made the following modifications:
    1) Add the "PRIORITY" column into the "DB_POLL_SOURCE" table.
    CREATE TABLE "DB_POLL_SOURCE"
    "ID" NUMBER (17,0) NOT NULL,
    "VALUE1" VARCHAR2 (32),
    "VALUE2" VARCHAR2 (32),
    "VALUE3" VARCHAR2 (32),
    "PROCESS_NOT_BEFORE" DATE,
    "RETRY_COUNT" NUMBER (7,0) DEFAULT 0 NOT NULL,
    "BPEL_STATE" VARCHAR2 (16) DEFAULT 'P_NEW',
    "CREATED_DTS" DATE DEFAULT SYSDATE,
    "MODIFIED_DTS" DATE DEFAULT SYSDATE,
    "PRIORITY" NUMBER (7,0) DEFAULT 5 NOT NULL
    2) Add an ORDER BY clause to sort on the "PRIORITY" column in descending order.
    3) Add a WHERE clause (rownum < N) to get the first N records from the query result set.
    CREATE OR REPLACE VIEW DB_POLL_SOURCE_VW
    (ID, BPEL_STATE)
    AS
    select *
    from (
    select
    dbps.ID,
    dbps.BPEL_STATE
    from
    DB_POLL_SOURCE dbps
    where
    (dbps.PROCESS_NOT_BEFORE is NULL or dbps.PROCESS_NOT_BEFORE < SYSDATE)
    and dbps.BPEL_STATE like 'P_%'
    order by dbps.PRIORITY desc
    )
    where rownum < 5;
    The problem is that the view must be updatable, and ROWNUM cannot be used inside an updatable view.
    The error message I got is the following:
    ORA-01732: data manipulation operation not legal on this view.
    So my question is:
    How can I define an updatable view that sorts on the "PRIORITY" column and limits the number of rows returned?
    Thanks a lot
    Olivier

    If you want to update through a view like this, you must use a database trigger (an INSTEAD OF trigger) to perform the DML. You can use the statement:
    CREATE OR REPLACE TRIGGER <triggername>
    INSTEAD OF INSERT (or UPDATE or DELETE, or a combination)
    ON <view>
    FOR EACH ROW
    BEGIN
      -- PL/SQL code here
    END <triggername>;
    /
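
    Applied to the view in this thread, a minimal INSTEAD OF UPDATE trigger might look like this (the trigger name is illustrative, and it assumes BPEL only changes BPEL_STATE through the view):
    CREATE OR REPLACE TRIGGER db_poll_source_vw_iou
    INSTEAD OF UPDATE ON db_poll_source_vw
    FOR EACH ROW
    BEGIN
      UPDATE db_poll_source
         SET bpel_state   = :NEW.bpel_state,
             modified_dts = SYSDATE        -- keep the audit column current
       WHERE id = :OLD.id;
    END db_poll_source_vw_iou;
    /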

  • Tuning SQL query with SDO and Contains?

    I'm trying to optimize a query
    with an sdo_filter and an interMedia CONTAINS
    on a 3,000,000-record table.
    The query looks like this:
    SELECT COUNT(*) FROM professionnel WHERE mdsys.sdo_filter(professionnel.coor_cart,mdsys.sdo_geometry(2003, null, null,mdsys.sdo_elem_info_array(1,1003,4),mdsys.sdo_ordinate_array(809990,2087279,778784,2087279,794387,2102882)),'querytype=window') = 'TRUE' AND professionnel.code_rubr ='12 3 30' AND CONTAINS(professionnel.Ctx,'PLOMBERIE within Nom and ( RUE within Adresse1 )',1)>0
    It takes 15 s on a dual 750 MHz Pentium III with
    1.5 GB of memory running Oracle 8.1.6 on Linux.
    What can I do to improve this query time?

    Hi Vincent,
    We have patches for Oracle 8.1.6 Spatial
    on NT and Solaris.
    These patches include bug fixes and
    performance enhancements.
    We are in the process of making these patches
    available in a permanent place, but until then, I will temporarily put the patches on:
    ftp://oracle-ftp.oracle.com/
    Log in as anonymous and use your email for
    password.
    The patches are in /tmp/outgoing in:
    NT816-000706.zip - NT patch
    libordsdo.tar - Solaris patch
    I recommend doing some analysis on
    individual pieces of the query.
    i.e. time the following:
    1)
    SELECT COUNT(*)
    FROM professionnel
    WHERE mdsys.sdo_filter(
    professionnel.coor_cart,
    mdsys.sdo_geometry(
    2003, null, null,
    mdsys.sdo_elem_info_array(1,1003,4),
    mdsys.sdo_ordinate_array(
    809990,2087279,
    778784,2087279,
    794387,2102882)),
    'querytype=window') = 'TRUE';
    2)
    SELECT COUNT(*)
    FROM professionnel
    WHERE CONTAINS(professionnel.Ctx,
    'PLOMBERIE within Nom and ( RUE within Adresse1)',1) >0;
    You might want to try reorganizing the entire
    query as follows (no promises).
    If you contact me directly, I can try to
    help to further tune the SQL.
    Hope this helps. Thanks.
    Dan
    select count(*)
    FROM
    (SELECT /*+ no_merge */ rowid
    FROM professionnel
    WHERE mdsys.sdo_filter(
    professionnel.coor_cart,
    mdsys.sdo_geometry(
    2003, null, null,
    mdsys.sdo_elem_info_array(1,1003,4),
    mdsys.sdo_ordinate_array(809990,2087279,
    778784,2087279,
    794387,2102882)),
    'querytype=window') = 'TRUE'
    AND professionnel.code_rubr = '12 3 30'
    ) a,
    (SELECT /*+ no_merge */ rowid
    FROM professionnel
    WHERE CONTAINS(professionnel.Ctx,
    'PLOMBERIE within Nom and
    ( RUE within Adresse1)',1) >0
    ) b
    where a.rowid = b.rowid;
    **NOTE** Try this with no index on code_rubr

  • Is it possible to run an IdocScript Query with Service GET_SEARCH_RESULTS?

    Hi All,
    I need to run the query below with the GET_SEARCH_RESULTS service:
    dDocType <matches> `ArchivePackage` <AND> xSubmissionDate < `10/08/13 11:03 AM` <AND> xArchiveDocName <EQUALS> dDocName
    Part of this query works fine, but when the last condition (xArchiveDocName <EQUALS> dDocName) is included I get an error, the reason being that there is nothing like <EQUALS> when using QueryBuilder.
    Instead, is it possible to run the Idoc Script below with any service?
    <$ (dDocType like 'ArchivePackage') & xSubmissionDate < dateCurrent(-365) & (XArchiveDocname == dDocname) $>
    I tried it with GET_SEARCH_RESULTS, but got wrong results and no error.
    Is this even possible, or should I just change my approach?
    Regards

    I tried to search for more info on this one. It is true that in all documented cases the operator is followed by a constant - see e.g. https://blogs.oracle.com/interactions/entry/ucm_get_search_results_with_full_text_search
    Honestly, I see no reason why a reference to another metadata field could not be made, but the parser might just fail on it. If you can, try to get an official answer from support via Metalink. Alternatively, you can decompile the code processing the service and check why parsing fails when non-constants are used.

  • Query in mapping process....

    Can I run a query in the mapping process for dimensions or cubes? Using which palette? And how do I use it with my own queries?
    Thanks guys...

    Perhaps if you rewrite your question in something that resembles English you may get an answer.

  • Inventory- India Localization - Bonded Process

    Hi Gurus,
    I've been searching the net to find documentation on the bonded process in Inventory - India Localization.
    I have been unable to find it documented exactly; I have the user guide, which covers a few facets of the bonded features, but that is not sufficient for us to meet our client's expectations.
    I would like to know if somebody can share a ppt/pdf/doc with a pictorial presentation showing the bonded features and their usage in Inventory.
    I believe that along with these features there may also be some statutory reporting requirements.
    I would appreciate it if somebody could share any document related to this.
    Thanks in advance.
    Rgds,
    Nagendra

    Also, can anyone provide the document/link to understand and implement it in Oracle Applications 11.5.10.2 with the IN60107 localization version?

    Have a look at Note: 471249.1 - Documentation repository for Oracle Financials for India
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=471249.1

  • Issues with parallel processing in Logical Database PCH and PNP

    Has anyone encountered issues when executing programs in parallel that utilize the logical database PCH or PNP?
    Our scenario is the following:
    We have 55 concurrent jobs that execute a program using the logical database PCH at a given time. We load the PCHINDEX table with the code below.
          wa_pchindex-plvar = '01'.        " plan variant
          wa_pchindex-otype = 'S'.         " object type S = position
          wa_pchindex-objid_low = index_objid.
          APPEND wa_pchindex TO pchindex.
    We have seen instances where, when the program is executed in parallel with each process having its own range of position IDs, some positions are dropped or some are added that are outside the range of the given process.
    For example:
    Process 1 has a range of position IDs 1-10
    Process 2 has a range of position IDs 11-20
    Process 3 has a range of position IDs 21-30
    Process 3 drops position 25 and adds position 46.
    Has anyone faced a similar issue?
    Thanks for your help.
    Best Regards,
    Duke

    Hi,
    First of all, you should read [Using Parallel Execution|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/usingpe.htm#DWHSG024] in the documentation for your version - almost all of these topics are covered there.
    1. "According to my server specification, how much DOP can I specify?" It depends not only on the number of CPUs. More important factors are the settings of PARALLEL_MAX_SERVERS and PARALLEL_ADAPTIVE_MULTI_USER.
    2. "Which option for setting parallelism is good - using 'alter table A parallel 4' or passing parallel hints in the SQL statements?" It depends on your application. When PARALLEL is set on a table, all SQL dealing with that table is considered for parallel execution. So if it is normal for your app to use parallel access to that table, it's OK. If you want to use PX on a limited set of SQL, then hints or session settings are more appropriate. (See the sketch after this list.)
    3. "We have batch processing jobs which load data into the tables from flat files (24*7) using SQL*Loader. Is it possible to parallelize this operation, and are there any negative effects if parallel is enabled?" Yes, refer to the documentation.
    4. "Query or DML - which one performs best with the parallel option?" Both may take advantage of PX (with some restrictions on parallel DML), and both may run slower than their non-PX versions.
    5. "What are the negative issues if the parallel option is enabled?" 1) An object checkpoint happens before starting a parallel FTS (true for >= 10gR2; before that version a tablespace checkpoint was used). 2) More CPU and memory resources are used with PX - this may be both a benefit and an issue, especially with concurrent PX.
    6. "What are the things to take care of while enabling the parallel option?" Read the documentation - it contains almost all you need to know. Since you are using RAC, you should not forget about the method of PX slave load balancing between nodes. If you are on 10g, refer to the INSTANCE_GROUPS/PARALLEL_INSTANCE_GROUPS parameters; if you are on 11g, properly configure services.
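
    As a sketch of point 2, the two ways of enabling PX look like this (table name A is taken from the question; a DOP of 4 is just an example):
    -- Table-level default: every statement touching A is considered for parallel execution
    ALTER TABLE a PARALLEL 4;
    -- Statement-level: only this query is parallelized
    SELECT /*+ PARALLEL(a 4) */ COUNT(*)
    FROM a;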

  • Can you run a Query in a Process Chain?

    As part of a data validation process chain, I need to run a query and send the results by email. I've created the query (with the exception) and set it up in Information Broadcasting to be sent by email. I thought that I would be able to just drop the "Exception Reporting" process into the process chain and select the query to run. Needless to say, it doesn't work that way.
    If anyone has run a query in a process chain, please let me know how you did it.
    Also if someone knows how the "Exception Reporting" process in RSPC works, please share?
    Thanks

    Patel, we may be able to rethink our approach and use the event data change that was mentioned in the document you sent.
    I was hoping to be able to do everything from within a process chain.  Does anyone know how to use the "Exception Reporting" process that is available in RSPC?  Is it a leftover from the 3.X days that can't really be used in 7.0?

  • Microsoft sql client deadlocked on lock resources with another process

    Hi
    I wrote a forecasting report for a customer which creates an Excel spreadsheet with the information.
    Depending on how they run the report, it can take between 5 and 15 minutes.
    We have just upgraded the customer to SAP 8.8 PL10 and Microsoft SQL Server 2008, and they seem to be getting an error:
    Microsoft sql client deadlocked on lock resources with another process .......
    The error seems intermittent; they may get the error once, and the next time they run it, it is fine.
    I have never seen this error before, and they never had it on SAP 2005.
    Can anyone suggest anything please?
    Thanks
    Regards Andy

    Hi Andy,
    I was having the same problem; I'll tell you what I did.
    My query usually takes 10 to 15 minutes to show results. That long run time was also blocking transactions in the database.
    I was reading about some techniques to improve query performance. Some of the tips are:
    1. Review the indexes on the tables you are querying
    2. Use views
    3. Avoid cursors
    4. Archive old data
    5. Use the correct transaction isolation level
    The last one was the tip that helped me avoid the blocking in the database.
    By default the isolation level in SQL Server is Read Committed, which explains why the database blocks some transactions. For example, if you have a query that reads data from the JDT1 table and takes several minutes to show the results, other transactions that try to write to the same table may be blocked if they arrive while the first query is running.
    To solve this, you can run your query in a transaction with the Snapshot isolation level. This means your select query takes a snapshot of the data without blocking any other transaction.
    Here is an example of how you can do it. The difference is that you may use an ADO.NET connection in place of the DI API connection:
    oConnection = OK1.Generic.Helpers.setConnection(server, password, userID, db); // You still have to set up your SqlConnection
    if (oConnection.State == ConnectionState.Open)
    {
        oCommand = new SqlCommand();
        oCommand.Connection = oConnection;
        oCommand.CommandTimeout = System.Convert.ToInt32(timeOut);
        // Allow snapshot transactions in this database
        oCommand.CommandText = "ALTER DATABASE " + db + " SET ALLOW_SNAPSHOT_ISOLATION ON";
        oCommand.ExecuteNonQuery();
        // Run the long report query inside a snapshot transaction
        sqlTran1 = oConnection.BeginTransaction(IsolationLevel.Snapshot);
        oCommand.CommandText = query;
        oCommand.Transaction = sqlTran1;
        oCommand.ExecuteNonQuery();
        sqlTran1.Commit();
        oCommand.CommandText = "ALTER DATABASE " + db + " SET ALLOW_SNAPSHOT_ISOLATION OFF";
        oCommand.ExecuteNonQuery();
    }
    if (oConnection.State == ConnectionState.Open)
        oConnection.Close();
    In this example I write the data to show in the report into another table, and the report then reads the data from that table.
    I hope it will be helpful for you.
    Regards,
    Juan Camilo
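
    For reference, the same isolation setup can be expressed in plain T-SQL; the database name below is a placeholder, and the JDT1 column list is assumed:
    -- One-time setting per database
    ALTER DATABASE YourCompanyDB SET ALLOW_SNAPSHOT_ISOLATION ON;
    -- Per session/report: read a consistent snapshot without blocking writers
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
    SELECT TransId, Account, Debit, Credit FROM JDT1;  -- assumed column list
    COMMIT TRANSACTION;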
