Problems Generating Data Move Scripts in v1.5.1

Hi, I'm having problems generating data move scripts in SQL Developer 1.5.1 to carry out an offline data load. I'm migrating from Sybase to Oracle, and the database I'm working on has over 400 tables in it. I have successfully captured and migrated all the tables into the respective models, and have generated and created the DDL for the converted model. However, when I request the data move scripts, I only get .ctl files for the first 49 tables. Also, no oracle_ctl.sh script is created, and no post_load.sql script is produced, only a pre_load.sql script.
I've got 3 databases to migrate; on the 2nd database I only get data move scripts for the first 86 tables, and there are over 250 tables in it.
It appears to have worked better for the 3rd database, which is much smaller, having only 59 tables; this time all the files were produced as expected. However, it's really the first two larger databases that are my priority to get migrated.
I've tried changing the preferences within Migration/Generation Options from 'One Single File' to 'A File per Object', but it makes no difference. I would prefer everything in one file but can work around that.
Ideally, I'd like to generate all the .ctl files for a database in one go so that I can group-edit them, and I'd prefer the tool to create the oracle_ctl.sh script that calls all the .ctl files rather than having to hand-build it. I'm puzzled as to why the tool only creates .ctl files for some of the tables in a converted model; it looks like it is not completing the job in these cases, as it also fails to create all the scripts it is supposed to. It gives no error messages, and at completion the screen looks no different from a successful run on the very small database.
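A stopgap I have in mind is to hand-build the wrapper by spooling one sqlldr call per table from the converted schema. This is only a sketch: it assumes one <table>.ctl control file per table in the current directory, and the password still has to be added to the userid argument.

SET PAGESIZE 0 HEADING OFF FEEDBACK OFF
SPOOL oracle_ctl.sh
SELECT 'sqlldr userid=' || USER
       || ' control=' || LOWER(table_name) || '.ctl'
       || ' log=' || LOWER(table_name) || '.log'
  FROM user_tables
 ORDER BY table_name;
SPOOL OFF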
Has anybody had this problem, or can anyone suggest how to fix it?
Thanks all.

Send me your phone number at [email protected]
We'll help sort this out.
Barry

Similar Messages

  • Error in one of the Data Mover scripts during Campus Solutions installation

    Hi everybody,
    This is my first attempt to install one of the PeopleSoft products.
    I am installing HCM 9.0 on Windows 2008 64-bit with Oracle 11g.
    I am now at the task called 7A-16-9: Updating PeopleTools System Data.
    I ran pt849tls.dms successfully,
    but pt850tls.dms failed with this error:
    File: Data Mover. SQL error. Stmt #: 0 Error Position: 25 Return: 904 - ORA-00904: "PT_RETENTIONDAYS": invalid identifier
    Failed SQL stmt: UPDATE PS_PRCSSYSTEM SET PT_RETENTIONDAYS=RETENTIONDAYS
    Error: SQL execute error for UPDATE PS_PRCSSYSTEM SET PT_RETENTIONDAYS=RETENTIONDAYS
    Is there a script that I have missed?
    I followed the instructions step by step.
    Thanks for your help.

    I highly appreciate your efforts,
    I ran rel849un.sql and rel850un.sql successfully (I checked the corresponding log files and found no errors).
    The table description is:
    CREATE TABLE "SYSADM"."PS_PRCSSYSTEM" (
        "OPSYS"              VARCHAR2(1 CHAR),
        "RETENTIONDAYS"      NUMBER(*,0),
        "PRCSPURGENEXTDTTM"  TIMESTAMP(6),
        "RECURDTTM"          TIMESTAMP(6),
        "PRCSPURGERECURNAME" VARCHAR2(30 CHAR),
        "PURGEPRCSFILES"     NUMBER(*,0),
        "ARCH_PROCESSED"     VARCHAR2(1 CHAR),
        "PRCSSYSLOADOPT"     VARCHAR2(2 CHAR),
        "VERSION_UPDATED"    VARCHAR2(1 CHAR))
    Best Regards,

  • Can you set a default value using Data Mover Scripts?

    Hi,
    We're going through an upgrade of PS Fin 8.9 -> 9.1. We've found some existing tables with new columns marked NOT NULL, so we're having trouble migrating the data using DMS.
    Is there a way to export the data from 8.9 and, when importing into 9.1, set a default value for the new column using DMS?
    A specific example would be PS_SOURCE_TBL, where EXCHANGE_RATE_OPTN is new and NOT NULL.
    Thanks for any assistance.

    What you could do is first rename the record to be imported:
    IMPORT SOURCE_TBL AS PS_SOURCE_TBL_ORG;
    Then write an insert/select script from PS_SOURCE_TBL_ORG to PS_SOURCE_TBL, defaulting the EXCHANGE_RATE_OPTN column:
    INSERT INTO PS_SOURCE_TBL (COLUMN1, COLUMN2, COLUMN3, COLUMN4, EXCHANGE_RATE_OPTN)
    SELECT COLUMN1, COLUMN2, COLUMN3, COLUMN4, 'defaultvalue' FROM PS_SOURCE_TBL_ORG;
    And then drop table PS_SOURCE_TBL_ORG to clean up your database.

  • Data migration scripts problems

    Hello,
    I've been using SQL Developer for a couple of months now; I'm working on a migration from a SQL Server database to Oracle 10g, and the migration features of the product have really saved me a lot of time so far.
    However, there are a couple of small problems with the generated data migration scripts:
    - on the SQL Server side, the script that calls bcp specifies no encoding; as a result, all Unicode information is lost. I think that by default you should at least call bcp with the option to keep the original encoding.
    - on the Oracle side, empty strings (= CHR(00)) are converted into a whitespace character (= ' '), when they shouldn't be. This is all the more annoying to correct manually when you have a lot of tables, since there's one control file per table.
    Apart from that, everything's fine :)
    Regards,
    Isabelle.

    "- on the Oracle side, empty strings (= CHR(00)) are converted into a whitespace character (= ' '), when they shouldn't be. This is all the more annoying to correct manually when you have a lot of tables, since there's one control file per table."
    You should be aware that Oracle treats empty strings as NULL, while SQL Server doesn't. It may be that this is an attempt to avoid problems with NOT NULL columns: on SQL Server, an empty string is OK in a NOT NULL column, but not in Oracle.
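    You can see this behaviour directly; Oracle parses a zero-length string literal as NULL:
    SELECT CASE WHEN '' IS NULL THEN 'empty string is treated as NULL'
                ELSE 'empty string is not NULL'
           END AS demo
      FROM dual;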

  • Data mover problem

    Hi all,
    I have a table A (name(10), num(5)) in the DEV environment. I have to move all the data in table A to the TEST environment, so I want to use Data Mover.
    I exported in DEV and imported in TEST.
    I can see that all the data came across to TEST.
    The problem is:
    my name field in DEV has values like 'david   ' (please mind the trailing spaces),
    but if I look in TEST I see only 'david' (the spaces are missing).
    Do we need to set anything so that the trailing space characters are not skipped by the EXPORT/IMPORT scripts?
    Please help me.
    Thank you.

    "Your user writes some query where she would concatenate this field with another field in another table; that is why she wants the trailing spaces."
    Then the extra space "problem" should be solved by your concatenation program, not inside the database. Your field is 10 characters long; if you have a name of 10 characters, how would you fit an extra space in there?
    I don't know how or where you are concatenating the fields, but that is where you should solve it. The database should not contain extra spaces; you could break data consistency.
    Nicolas.
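    If the downstream query needs the fixed width, padding at query time keeps the spaces out of the stored data. A sketch (the table and column names here are made up):
    SELECT RPAD(a.name, 10) || b.descr AS combined
      FROM table_a a
      JOIN table_b b ON b.id = a.id;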

  • Load Data / Upload Script Problem

    Hello:
    I hope everyone is having a nice weekend.
    I have a very weird situation.
    I'm using Internet Explorer and I was loading data, no trouble there. I then proceeded to upload a script: I selected the script file, gave it a script name, clicked Upload, and it appeared on the screen. I then clicked on the script, and the script editor screen appeared in bright red. When I click Run, I get a message at the bottom of the screen (next to my Internet Explorer icon, above my Start button) saying Error On Page, yet no specific error appears.
    So I switched over to Firefox, where I was able to run the same script successfully. Now, here's what is weird. I proceeded to upload more data (still in Firefox) and was successful. I went to SQL Workshop, Object Browser, Browse, Tables. A list of tables appeared on the left side. I wanted to view the table definition for the table I had just loaded, but when I selected the table, instead of seeing the table definition displayed, the area was blank. I have no table definition.
    I'm a bit confused. I'm having problems loading, creating, and running scripts in Internet Explorer, which tells me I have an Error on Page but doesn't display any errors.
    While at the same time, I'm having problems working with tables in Firefox.
    Does anyone have any idea what could be causing this strange problem?
    I can understand, perhaps, one browser having trouble, but both browsers on two different tasks? This makes no sense to me.
    Any help you can give me would be great!
    Thank you.

    Hello:
    Thank you for replying.
    Yes, I do have those two lines beneath my Alias statement. Each line had a typo, though, so I fixed them and rebooted my machine.
    After fixing the typos, I went to Firefox to see if I could view the table definitions, and I can. So the typos did fix the Firefox problem.
    I then went to Internet Explorer and tried to click on the SQL script, and the same thing is still happening: I clicked the script that I uploaded and a red screen appeared. If I try to click anything, the Error on Page message still appears at the bottom.
    The script does work in Firefox. I'm not sure why Internet Explorer is not working.
    Should I have done something else after modifying my DAD file? Is there some sort of command, or do I have to restart a service, for the updated DAD file to be recognized?
    Thanks for your help. I hope we can troubleshoot the IE problem.

  • URGENT! JDev 10.1.2: Problem with data control generated from session bean

    I have a problem with a data control generated from a session bean that returns a collection of data transfer objects.
    The DTOs seem to be correct: the session bean loads the data correctly and the objects are full of data. Using the console to display the DTO content is OK.
    When generating a data control from this session bean and associating the DTO included in the collection, only the first object level and one-to-one DTO objects are correctly set in the data control. Objects that represent collections inside the DTO (one-to-many foreign keys) are set as collections with an iterator, but the structure of the object is not set. I don't know how to associate this second level of collection with the DTO bean class to obtain the attribute definitions.
    I created a test case with the hr schema, like the hrApp demo application in the tutorial, with the departments and employees tables. I got the same problem.
    Is it a bug?
    Is there a workaround to force the data control to understand the collection's data structure?
    Help is welcome! This is urgent!

    We found the problem: assign the child DTO bean class to the node representing the iterator in the XML file corresponding to the master DTO.

  • How to generate the insert script of the tables' data present in an entire schema

    How can I generate the insert scripts for the data of all tables present in an entire schema, in a SQL*Plus environment, without TOAD? Can you please help me?

    Hi,
    First create this function to get insert scripts.
    /* Formatted on 2012/01/16 10:41 (Formatter Plus v4.8.8) */
    CREATE OR REPLACE FUNCTION extractdata (v_table_name VARCHAR2)
       RETURN VARCHAR2
    AS
       b_found   BOOLEAN         := FALSE;
       v_tempa   VARCHAR2 (8000);
       v_tempb   VARCHAR2 (8000);
       v_tempc   VARCHAR2 (255);
    BEGIN
       FOR tab_rec IN (SELECT table_name
                         FROM user_tables
                        WHERE table_name = UPPER (v_table_name))
       LOOP
          b_found := TRUE;
          v_tempa := 'select ''insert into ' || tab_rec.table_name || ' (';
          FOR col_rec IN (SELECT   *
                              FROM user_tab_columns
                             WHERE table_name = tab_rec.table_name
                          ORDER BY column_id)
          LOOP
             IF col_rec.column_id = 1
             THEN
                v_tempa := v_tempa || '''||chr(10)||''';
             ELSE
                v_tempa := v_tempa || ',''||chr(10)||''';
                v_tempb := v_tempb || ',''||chr(10)||''';
             END IF;
             v_tempa := v_tempa || col_rec.column_name;
             IF INSTR (col_rec.data_type, 'CHAR') > 0
             THEN
                v_tempc := '''''''''||' || col_rec.column_name || '||''''''''';
             ELSIF INSTR (col_rec.data_type, 'DATE') > 0
             THEN
                v_tempc :=
                      '''to_date(''''''||to_char('
                   || col_rec.column_name
                   || ',''mm/dd/yyyy hh24:mi'')||'''''',''''mm/dd/yyyy hh24:mi'''')''';
             ELSE
                v_tempc := col_rec.column_name;
             END IF;
             v_tempb :=
                   v_tempb
                || '''||decode('
                || col_rec.column_name
                || ',Null,''Null'','
                || v_tempc
                || ')||''';
          END LOOP;
          v_tempa :=
                v_tempa
             || ') values ('
             || v_tempb
             || ');'' from '
             || tab_rec.table_name
             || ';';
       END LOOP;
       IF NOT b_found
       THEN
          v_tempa := '-- Table ' || v_table_name || ' not found';
       ELSE
          v_tempa := v_tempa || CHR (10) || 'select ''-- commit;'' from dual;';
       END IF;
       RETURN v_tempa;
    END;
    SET PAUSE OFF
    SET LINESIZE 1200
    SET PAGESIZE 100
    SET TERMOUT OFF
    SET HEAD OFF
    SET FEED OFF
    SET ECHO OFF
    SET VERIFY OFF
    SPOOL GET_INSERTS.SP REPLACE
    SELECT EXTRACTDATA('EMP') FROM DUAL;
    SPOOL OFF
    SET PAUSE  ON
    SET LINESIZE 120
    SET PAGESIZE 14
    SET TERMOUT ON
    SET HEAD ON
    SET FEED 5
    SET ECHO ON
    SET VERIFY ON
    SELECT    'insert into EMP ('
           || CHR (10)
           || 'EMPNO,'
           || CHR (10)
           || 'ENAME,'
           || CHR (10)
           || 'JOB,'
           || CHR (10)
           || 'MGR,'
           || CHR (10)
           || 'HIREDATE,'
           || CHR (10)
           || 'SAL,'
           || CHR (10)
           || 'COMM,'
           || CHR (10)
           || 'DEPTNO) values ('
           || DECODE (empno, NULL, 'Null', empno)
           || ','
           || CHR (10)
           || ''
           || DECODE (ename, NULL, 'Null', '''' || ename || '''')
           || ','
           || CHR (10)
           || ''
           || DECODE (job, NULL, 'Null', '''' || job || '''')
           || ','
           || CHR (10)
           || ''
           || DECODE (mgr, NULL, 'Null', mgr)
           || ','
           || CHR (10)
           || ''
           || DECODE (hiredate,
                      NULL, 'Null',
                         'to_date('''
                      || TO_CHAR (hiredate, 'mm/dd/yyyy hh24:mi')
                       || ''',''mm/dd/yyyy hh24:mi'')')
           || ','
           || CHR (10)
           || ''
           || DECODE (sal, NULL, 'Null', sal)
           || ','
           || CHR (10)
           || ''
           || DECODE (comm, NULL, 'Null', comm)
           || ','
           || CHR (10)
           || ''
           || DECODE (deptno, NULL, 'Null', deptno)
           || ');'
      FROM emp;
    SELECT '-- commit;'
      FROM DUAL;

    Now run the above select statement and you will get the following insert statements:
    /* Formatted on 2012/01/16 10:57 (Formatter Plus v4.8.8) */
    --'INSERT INTO EMP('||CHR(10)||'EMPNO,'||CHR(10)||'ENAME,'||CHR(10)||'JOB,'||CHR(10)||'MGR,'||CHR(10)||'HIREDATE,'||CHR(10)||'SAL,'||CHR(10)||'COMM,'||CHR(10)||'DEPTNO)VALUES('||DECODE(EMPNO,NULL,'NULL',EMPNO)||','||CHR(10)||''||DECODE(ENAME,NULL,'NULL',''''|
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7369, 'SMITH', 'CLERK', 7902, TO_DATE ('12/17/1980 00:00', 'mm/dd/yyyy hh24:mi'), 800, NULL, 20);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7499, 'ALLEN', 'SALESMAN', 7698, TO_DATE ('02/20/1981 00:00', 'mm/dd/yyyy hh24:mi'), 1600, 300, 30);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7521, 'WARD', 'SALESMAN', 7698, TO_DATE ('02/22/1981 00:00', 'mm/dd/yyyy hh24:mi'), 1250, 500, 30);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7566, 'JONES', 'MANAGER', 7839, TO_DATE ('04/02/1981 00:00', 'mm/dd/yyyy hh24:mi'), 2975, NULL, 20);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7654, 'MARTIN', 'SALESMAN', 7698, TO_DATE ('09/28/1981 00:00', 'mm/dd/yyyy hh24:mi'), 1250, 1400, 30);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7698, 'BLAKE', 'MANAGER', 7839, TO_DATE ('05/01/1981 00:00', 'mm/dd/yyyy hh24:mi'), 2850, NULL, 30);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7782, 'CLARK', 'MANAGER', 7839, TO_DATE ('06/09/1981 00:00', 'mm/dd/yyyy hh24:mi'), 2450, NULL, 10);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7788, 'SCOTT', 'ANALYST', 7566, TO_DATE ('04/19/1987 00:00', 'mm/dd/yyyy hh24:mi'), 3000, NULL, 20);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7839, 'KING', 'PRESIDENT', NULL, TO_DATE ('11/17/1981 00:00', 'mm/dd/yyyy hh24:mi'), 5000, NULL, 10);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7844, 'TURNER', 'SALESMAN', 7698, TO_DATE ('09/08/1981 00:00', 'mm/dd/yyyy hh24:mi'), 1500, 0, 30);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7876, 'ADAMS', 'CLERK', 7788, TO_DATE ('05/23/1987 00:00', 'mm/dd/yyyy hh24:mi'), 1100, NULL, 20);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7900, 'JAMES', 'CLERK', 7698, TO_DATE ('12/03/1981 00:00', 'mm/dd/yyyy hh24:mi'), 950, NULL, 30);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7902, 'FORD', 'ANALYST', 7566, TO_DATE ('12/03/1981 00:00', 'mm/dd/yyyy hh24:mi'), 3000, NULL, 20);
    INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
         VALUES (7934, 'MILLER', 'CLERK', 7782, TO_DATE ('01/23/1982 00:00', 'mm/dd/yyyy hh24:mi'), 1300, NULL, 10);
    I hope this helps.
    Thanks,
    P Prakash
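    To cover the entire schema the original question asked about, one approach (a sketch) is to spool a driver that calls the function once per table, then run the driver under a second spool to capture the actual INSERT statements. It assumes the same SET commands as above are in effect, and that no table's generated query overflows the function's 8000-character buffers:
    SPOOL gen_selects.sql
    SELECT extractdata(table_name) FROM user_tables ORDER BY table_name;
    SPOOL OFF
    SPOOL all_inserts.sql
    @gen_selects.sql
    SPOOL OFF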

  • Implementation opt_design [Opt 31-67] problem for AXI bus between DataMover IP and AXI Interconnect

    I'm using Vivado 2014.4 and Win7 64-bit for my Zynq design. Previously, the design was good; then I made some revisions and came across this problem.
    If the synthesis strategy option is Flow_RuntimeOptimized (Vivado Synthesis 2014), everything works.
    If the synthesis strategy option is the default (the synthesis settings are shown in the picture), synthesis is still fine, but the implementation fails. Some of the error messages are shown below.
    The error messages show there are unconnected pins on axi_interconnect_1. The connection between the AXI master from my own IP (it is the AXI port from a DataMover IP) and axi_interconnect_1 is shown in the attached picture.
    The synthesis schematic was also checked. The connections of the axi_interconnect have some pins unconnected, as shown in the picture (interconnect_schematic_synth.PNG). The connections of my IP are good, but it is missing some pins (like _arready, _rvalid, _ruser, _bid...). The master AXI port in my IP comes from the DataMover IP, and by default the DataMover IP doesn't have these missing pins. The AXI port on my IP declares them, but they are not actually connected to anything inside my IP.
    Btw, previously the project worked well in my design, but now it doesn't.
    I also checked the connections of the axi_interconnect when the synthesis strategy option is Flow_RuntimeOptimized; the schematic shows all pins connected.
    Please help. Thanks.
    Sam
    ImplementationOpt Design[Opt 31-67] Problem: A LUT1 cell in the design is missing a connection on input pin I0, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/size_mask_q[3]_i_1__0.
    [Opt 31-67] Problem: A LUT2 cell in the design is missing a connection on input pin I0, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/access_is_incr_q_i_1__0.
    [Opt 31-67] Problem: A LUT2 cell in the design is missing a connection on input pin I1, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/access_is_incr_q_i_1__0.
    [Opt 31-67] Problem: A LUT2 cell in the design is missing a connection on input pin I0, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/command_ongoing_i_2__0.
    [Opt 31-67] Problem: A LUT2 cell in the design is missing a connection on input pin I0, which is used by the LUT equation. This pin has either been left unconnected in the design or the connection was removed due to the trimming of unused logic. The LUT cell name is: design_top_level_i/Zynq_Processing_System/axi_interconnect_1/s00_couplers/auto_pc/inst/gen_axi4_axi3.axi3_conv_inst/USE_READ.USE_SPLIT_R.read_addr_inst/size_mask_q[1]_i_1__0.

     
    Hi Muzaffer,
    Thanks for your reply.
    I tried implementation with the opt_design option -directive Explore; it didn't work.
    I also disabled opt_design and enabled phys_opt_design; it still produced the same error in implementation.
    I will try deleting the _AR_ (like _arready, etc.) and _R_ (like _rdata, _rid, etc.) pins from the AXI4 port in my IP, since the DataMover IP doesn't contain these pins, and see whether it works that way.
    Hopefully, the new Vivado version will help.

  • PeopleSoft Installation: unable to log in to Data Mover using the access ID

    Hi All,
    I am trying to install PeopleSoft HRMS 9.0 on Oracle 10g.
    I have successfully installed Oracle 10g. I have also carried out all the steps given in the installation guide for running the database scripts present at PS_HOME\scripts\nt.
    I am also able to log in to the database using the access ID SYSADM, but when I try to log in to Data Mover in bootstrap mode using the ID SYSADM, Data Mover does not open; it just disappears from the window. Whereas I am successfully able to log in to Data Mover in bootstrap mode using the connect ID PEOPLE, and further on I was able to choose between the DEMO and SYS types and generate the DMS script. The script is still executing, and it is importing tables under the PEOPLE schema, even though, as already mentioned in the post, the PEOPLE schema should not hold any tables.
    Also, I downloaded PeopleSoft HRMS 9.0 from Oracle eDelivery for Windows 64-bit, but my server is 32-bit. Can this be the cause of the problem?
    Please share the solution if you are aware of one.
    Thanks in advance
    Alok

    "...when I try to log in to Data Mover in bootstrap mode using the ID SYSADM, Data Mover does not open; it just disappears from the window."
    Please set a trace level on your client, retry, and report the log over here.
    Nicolas.

  • Error: "Could not continue scan with NOLOCK due to data movement"; DBCC FREEPROCCACHE will clear the problem

    SQL Server: 2008 R2 SP2
    Before describing my problem: I have gone through the forum, and there is no view or function inside my stored procedure.
    When running a particular stored procedure from a Crystal Report, the error "Could not continue scan with NOLOCK due to data movement" comes up once every few weeks. After I clear the query cache plan, it works again for a few weeks, and then the problem comes back. During these few weeks, there is no restart or query plan clearing.
    If I run the stored procedure inside SSMS, with the SQL statement copied and pasted from SQL Profiler during the Crystal Report run, there is no error.
    I discovered that running in SSMS and running from Crystal Report generate two different query plans, even though I copied the SQL from SQL Profiler. I have saved the query plans; unfortunately, this forum does not accept attachments, or otherwise I would post them here.
    There is one thing I notice about the query plan: during a nested loop operation, there is a warning "no join predicate". I don't use any views or UDFs in the statement, nor do I use pre-1992 ANSI join syntax. However, I did use table variables.
    My guess is that this causes "Could not continue scan with NOLOCK due to data movement". After I clear the cache, run the Crystal Report again, and look at the plan again, the "nested loop no join predicate" warning is gone.
    Running this stored procedure takes 1 second maximum; even when this error pops up, it pops up within 1 second.
    DBCC CHECKDB has been run.
    The same stored procedure run by Crystal Report in a SQL 2008 (non-R2) live environment has no problems, so I am thinking this is an R2-specific problem.
    The "nested loop no join predicate" SQL statement is below; no views, no UDFs, but table variables:
    INSERT @ChequeAccount
    SELECT PS.PaySummaryID, PS.EmployeeID, PS.CostCentreID,
           (PS.GrossPay + PS.LumpSumA + PS.LumpSumB + PS.LumpSumD + PS.LumpSumE + PS.ETP
            + PS.PaymentsAfterTax - PS.DeductionsAfterTax - PS.Tax - PS.ETPTax + PS.TaxRebate) * -1 AS Amount,
           CGLM.GLAccountID
    FROM Pay_Summary PS
    JOIN Input_Sheet ISH ON PS.InputSheetID = ISH.InputSheetID AND ISH.PayrollID = @binPayrollID
         AND PS.PaySummaryID NOT IN (SELECT PaySummaryID FROM @ChequeAccount)
    JOIN Payroll P ON P.PayrollID = ISH.PayrollID AND P.EmployerID = @binEmployerID
    JOIN CustomGLFixMapping CGLM ON CGLM.EmployerID = P.EmployerID AND CustomGLFixMappingNameID = 1 AND CGLM.CostCentreID IS NULL
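    As an aside, the NOT IN filter against the table variable can also be written as NOT EXISTS, which gives the optimizer an explicit join predicate. A sketch of just that part (the DECLARE stands in for the real declaration earlier in the procedure):
    DECLARE @ChequeAccount TABLE (PaySummaryID int PRIMARY KEY);
    SELECT PS.PaySummaryID
    FROM Pay_Summary PS
    WHERE NOT EXISTS (SELECT 1
                      FROM @ChequeAccount CA
                      WHERE CA.PaySummaryID = PS.PaySummaryID);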

    The error "Could not continue scan with NOLOCK due to data movement" can occur when you use the NOLOCK table hint, or the command SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; that is, so-called dirty reads. The error is not related to the query plan per se. When scanning a table, the storage engine will use an IAM scan rather than following the clustered index; if there is simultaneous activity, the storage engine may detect this and abort the operation to avoid returning incorrect data. Or it may not detect it, and return uncommitted data or fail to return committed data.
    All of these effects are transitory, and they will not show up when you are alone on the system, only when there is concurrent activity in one or more of the tables in the query.
    Using dirty reads is a risky business for the reasons explained above, and it takes careful analysis to understand whether you can live with the errors you can get from a particular query. The error about data movement can be handled: trap the error and resubmit the query. But what about spurious incorrect results?
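    A retry wrapper could look something like this (a sketch; the procedure name and retry count are made up, and 601 is the error number for "could not continue scan"):
    DECLARE @tries int = 0, @done bit = 0, @msg nvarchar(2048);
    WHILE @tries < 3 AND @done = 0
    BEGIN
        BEGIN TRY
            EXEC dbo.MyReportProc;   -- hypothetical: the report's stored procedure
            SET @done = 1;
        END TRY
        BEGIN CATCH
            SET @tries += 1;
            IF ERROR_NUMBER() <> 601 OR @tries >= 3
            BEGIN
                SET @msg = ERROR_MESSAGE();
                RAISERROR(@msg, 16, 1);   -- give up and surface the error
                SET @done = 1;
            END
        END CATCH
    END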
    If you believe locking to be a problem, you should consider setting the database to READ_COMMITTED_SNAPSHOT and taking out all use of READ UNCOMMITTED/NOLOCK. When the database is in READ_COMMITTED_SNAPSHOT, readers read from the snapshot and only see committed data, without blocking writers. This has some other effects, like requiring a bigger tempdb, and there is a risk of other types of concurrency errors, but those tend to be smaller risks.
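    The switch itself is a one-liner (a sketch; YourDb is a placeholder, and ROLLBACK IMMEDIATE kicks out open transactions so the change can complete):
    ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;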
    "I discovered running in SSMS and crystal report generate 2 different query plans even I copied the SQL from SQL profiler"
    This is because SSMS by default runs with SET ARITHABORT ON. I discuss this in more detail in this article on my web site:
    http://www.sommarskog.se/query-plan-mysteries.html
    However, as I said, this problem is not related to the query plan as such, although some query plans are more susceptible to this error than others. (All plans are susceptible to producing incorrect results.)
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Problems migrating data via LSMW using the IDoc method

    Hello everybody,
    I have a very strange problem migrating data via LSMW. I tried to map legacy data using the IDoc mapping method. I'm using the message type CRMXIF_PARTNER_SAVE_M and the basic IDoc type CRMXIF_PARTNER_SAVE_M01.
    I can read the import data and convert it to the IDoc structure, as well as generate the IDoc. When I start processing the IDoc afterwards, I always get the same error messages, which I don't understand...
    The errors are:
    1) Error status 'A ' calling validation service (Status 51)
    2) Validation error occurred: Module CRM_BUPA_MAIN_VAL , BDoc type BUPA_MAIN (Status 51)
    3) Error_Time Dependency_Addresses CHECK_TABLE_MISSING (Status 51)
    4) Internal error when calling operation module BUA_CHECK_ADDRESS_VALIDITY_ALL : Check table missing (Status 51)
    5) Address moves ignored in the case of time-independent systems (Status 51)
    6) Partner data processed with key PartnerGUID 4BF67ADE9B9923BEE10000000A3500DB (Status 51)
    7) Partner (4BF67ADE9B9923BEE10000000A3500DB ): the following errors occurred (Status 51)
    I tried to import the following data:
    external ID
    Name1
    Name2
    Address data
    Telephone data
    Fax data
    Website data
    Can anybody tell me where those errors come from and how I can fix them? It would be great if somebody could help me!
    Thank you and best regards,
    Markus

    So apparently my issue with this was setting the time zone in the address to the time zone of the user doing the conversion. You can find the time zone in SU01.
    Also, you can create a BP in the WebUI and then check t-code SMW01; it will help you with the values being passed to CRMXIF_PARTNER_SAVE.

  • Data Mover login error (SQL 2008R2 as backend DB)

    Hi,
    This question was asked in this thread -> Continued Discussion - Steps to Upgrade PT8.49 to 8.50 Manually
    The difference between the problem in that thread and mine is that I have Win 2008 R2, SQL 2008 R2, PeopleTools 8.52, and PeopleSoft HRMS 9.0 installed. I was on the step of creating the databases manually with the SQL scripts, and that was also performed.
    I also have a different ConnectID (people) and AccessID (peoplesa) with their respective passwords. I try logging in with the AccessID and I get the error below.
    File: SQL Access Manager. SQL error. Stmt #: 2 Error Position: 0 Return: 8077 - Invalid SNAC client version (minimum is 10.00.1763) (SQLSTATE PS077)
    Any suggestions?
    -Vishal

    Hi there,
    I upgraded the SNAC client to 1033\x64\sqlncli.msi from Microsoft, and I still get the same login error in Data Mover.
    I checked the registry location HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server Native Client 10.0\Current Version = 10.51.2500.0.
    The Data Mover login error says that it requires a minimum of 10.00.1763, so I would guess the one I have should work fine?
    Any suggestions?
    -Vishal

  • Couldn't run data mover....

    Hello, I am currently at PeopleTools_8.52_Installation_Oracle.pdf (Task 6A-14: Creating Data Mover Import Scripts). I have verified the connect ID and password, which are people/peop1e by default, but I can't seem to log in to Data Mover.
    What does it mean to use the access ID as the user ID? Where do I find the access ID?

    Currently I have these; do I need to delete and re-create a new listener?
    listener.ora:
    # listener.ora Network Configuration File: C:\app\L31004\product\11.2.0\dbhome_3\network\admin\listener.ora
    # Generated by Oracle configuration tools.
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1523))
          (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1523))
        )
      )
    ADR_BASE_LISTENER = C:\app\L31004
    tnsnames.ora:
    # tnsnames.ora Network Configuration File: C:\app\L31004\product\11.2.0\dbhome_3\network\admin\tnsnames.ora
    # Generated by Oracle configuration tools.
    LISTENER_ORCL =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1523))
    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1523))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = orcl.citi.sit)
        )
      )
    sqlnet.ora:
    # sqlnet.ora Network Configuration File: C:\app\L31004\product\11.2.0\dbhome_3\network\admin\sqlnet.ora
    # Generated by Oracle configuration tools.
    # This file is actually generated by netca. But if customers choose to
    # install "Software Only", this file wont exist and without the native
    # authentication, they will not be able to connect to the database on NT.
    SQLNET.AUTHENTICATION_SERVICES= (NONE)
    NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
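    Before touching the listener, it may be worth confirming that the ORCL alias resolves and accepts a logon. A minimal SQL*Plus check (SYSADM stands in for whatever your access ID is; supply its password when prompted):
    CONNECT SYSADM@ORCL
    SELECT USER FROM dual;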

  • How to bind data from a script-created variable to an embedded element of an XML schema (xsd) in "Data View"

    Hi, I have another problem with LiveCycle Designer scripting.
    I have a script line defining a string variable:
    var aaa = "this is my string";
    and I have an embedded XML schema like this (it's only a short part of the whole file):
    ... <xs:element name="bbb" type="xs:string"/> ...
    After saving the data to XML, I would like to get "this is my string" as the value of my "bbb" XML element.
    To save this data I'm using a submit button which is connected to a submit.php file on my server.
    How do I connect the script-created variable to the embedded XML schema element which is present on my "Data View" tab?
    Please help me a bit, because I don't even know where to search for the answer...
    Of course I know it's possible to create a fake invisible text field object, bind it to 'bbb', and then use "this.rawValue = aaa" to connect those two variables, but I think that is not a good way to solve it. It's too primitive.

    I solved it; I should have written this:
    xfa.datasets.data.bbb.value = aaa;
