Data mart cube-to-cube copy: records are not matching in the target cube

Hi Experts,
I need help with the questions below on a data mart cube-to-cube copy (8M*).
It is a BW 3.5 system.
We have two financial cubes.
Cube A1 is sourced from the R/3 system (delta update), and Cube B1 is sourced from Cube A1 (full update). The two cubes are connected through update rules with one-to-one mapping and no routines. Basis copied the back-end R/3 system from the Production server to the Quality server approximately two months ago.
Cube A1, which extracts the delta load from R/3, is loading fine. But for the second cube (extraction from the previous cube A1) I am not getting the full volume of data; I get only a meagre amount, although the load shows successful status in the monitor.
We tried giving conditions in the InfoPackage (as was done in the previous year's loads), but it still fetches the same meagre volume of data.
To check whether this happens only for this particular cube, we tried another cube that is sourced through the Myself system, and it also receives meagre data rather than the full volume.
For example: for an employee, if 1,000 records are available, the system extracts only some 200 of them, seemingly at random.
Any quick reply would be very helpful. Thanks.

Hi Venkat,
Did you do any selective deletions in Cube A1?
First, reconcile the data between Cube A1 and Cube B1: match the totals of Cube A1 against Cube B1 (see the sketch below).
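A minimal reconciliation sketch, assuming hypothetical fact-table and column names (/BIC/FA1, /BIC/FB1, FISCPER, AMOUNT); in practice you would read the cubes via transaction LISTCUBE, but the idea is the same: compare a key-figure total per period across the two cubes.
-- Compare a key-figure total per fiscal period between the two cubes.
-- "/BIC/FA1", "/BIC/FB1", FISCPER and AMOUNT are hypothetical names;
-- substitute the real fact tables (E and F tables) and key figures.
SELECT a.fiscper, a.total_a, b.total_b, a.total_a - b.total_b AS diff
FROM  (SELECT fiscper, SUM(amount) AS total_a
       FROM "/BIC/FA1" GROUP BY fiscper) a
JOIN  (SELECT fiscper, SUM(amount) AS total_b
       FROM "/BIC/FB1" GROUP BY fiscper) b
  ON  a.fiscper = b.fiscper
WHERE a.total_a <> b.total_b;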
Thanks,
Vijay.

Similar Messages

  • DB Connect DataSource PSA records and DSO records are not matching...

    Dear All,
    I'm working with SAP NetWeaver BW 7.3, and for the first time I have loaded from a DB Connect source system. I created a DataSource, pulled all records, and found 8,136,559 records in the PSA. When I designed and created a DSO with key fields 0CALDAY, Item No and Company Code, it transferred about 8,136,559 records but added only about 12,534. Similarly, the following InfoCube has about 12,534 records in its fact table. When I tried to reconcile the records with the source DBMS for a month, they did not match.
    1. What could be the reason behind the issue? Why am I unable to load the records correctly?
    2. Have I defined the key fields of the DSO incorrectly?
    3. Is it possible to load the records into the DSO without specifying any field as a key field?
    4. How should I resolve this issue?
    5. Could this be related to the DSO overwrite and summation functions? If yes, how do I use them?
    Many thanks,
    Tariq Ashraf

    Dear Tariq,
    1. What could be the reason behind the issue? Why am I unable to load the records correctly?
    Ans: Check the transformation once. Is there any start routine, or are these direct assignments? What DTP settings have you made?
    Check the messages in the DTP monitor; you will surely find some clue. Check whether any duplicate records are being detected, and whether you are using semantic keys in your DTP.
    2. Have I defined the key fields of the DSO incorrectly?
    Ans: Are the transformation key and the DSO key the same in your case?
    What kind of DSO is it? For a sales order DSO, for instance, you take the order number as a key field; you have to define the key fields according to the business semantics, I suppose. Do you agree? (The sketch after these answers shows how the key determines the number of added records.)
    3. Is it possible to load the records into the DSO without specifying any field as a key field?
    Ans: I don't think so, as the keys you define are what make the data records unique, aren't they?
    4. How should I resolve this issue?
    Ans: Please check the points under Answer 1 and share your observations.
    5. Could this be related to the DSO overwrite and summation functions? If yes, how do I use them?
    Ans: DSO overwriting of key figures is useful when you have full loads in the picture. Are you always going to perform full loads?
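    A minimal sketch of the key-collapse effect, assuming hypothetical source table and column names: with the key (0CALDAY, Item No, Company Code), the DSO keeps at most one record per distinct key combination, so 8,136,559 transferred records can legitimately become only ~12,534 added records.
    -- Count source rows versus distinct DSO key combinations.
    -- source_table, calday, item_no and comp_code are hypothetical names.
    SELECT COUNT(*) AS source_rows FROM source_table;
    SELECT COUNT(*) AS distinct_keys
    FROM (SELECT DISTINCT calday, item_no, comp_code FROM source_table) k;
    -- If distinct_keys is ~12,534, the DSO behaves as designed: later records
    -- overwrite (or sum into) earlier ones that share the same key.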
    For reference, you might like to check this thread: Data fileds and key fields in DSO
    Let's see what inputs the experts give.
    Thank you.

  • Report records are not matching with database (Toad, Oracle)

    Hi experts,
    Software: SAP BO 3.1.
    When I execute one of my reports, it fetches around 4,000 rows;
    when I execute the same SQL in Toad against the Oracle database, I fetch 6,000 records.
    There is a 2,000-record difference, and we have been struggling with this issue for 4 days.
    I have cross-verified the following:
    1. I enabled and disabled the "Retrieve duplicate rows" option; there is still a difference in the record counts.
    2. Universe: I disabled the "Limit size of result set to" control.
    3. I even tried creating a derived table using the same SQL; the report row count is still off.
    Thanks in advance for your inputs.

    Hi Raghu,
    Can you check the things below?
    1) The "Max rows retrieved" setting in the Query/Data Provider properties of your report; it might be set to 4,000.
    2) Maybe 6,000 rows are retrieved, but the report is aggregating the KPIs. Check the row count at database level if you apply SUM(KPI) with GROUP BY (see the sketch below).
    3) What is the difference between the query generated by BO and the query you run at DB level, especially the WHERE clause?
    Assumption: no record limitation is set at universe level.
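    A minimal sketch of point 2, using hypothetical table and column names: compare the raw row count with the count after a report-style aggregation.
    -- sales_fact, region, product and kpi are hypothetical names.
    SELECT COUNT(*) AS raw_rows
    FROM   sales_fact;                      -- e.g. 6000, what Toad returns
    SELECT COUNT(*) AS aggregated_rows
    FROM  (SELECT region, product, SUM(kpi) AS kpi_total
           FROM   sales_fact
           GROUP  BY region, product) t;    -- e.g. 4000, what the report shows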
    Thanks,
    Sameer

  • KE42 records are not matching with report KE30

    Gurus,
    I have made some planning records for next year in KE42, say for Trade, and later I ran KE30 to see the report for that particular Trade, for that sales org and company code. But I see different numbers for Trade rather than the same numbers as in KE42. Am I missing anything? Do I need to do anything else after creating records in KE42?
    My main objective is to create records in KE42 and see the same values in the KE30 report.
    Please let me know whether I have to do anything else or am missing something.
    Regards,
    Paartha

    Hi
    Let me give you an example.
    In KE42:
    Freight condition type | Sales org | Price | Unit
    ZXXX                   | XYZ       | $1.5  | Ton
    In KE30:
    Sales order | Sales org | Sales qty | Freight costs (value field)
    12345       | ZYX       | 100 Ton   | $150
    You cannot directly compare KE42 and KE30 there.
    Further, in planning you can still plan based on sales orders and billing documents, as these are planning record types in COPA; there you are targeting the dollar value of the sales and billing documents.
    Regards,
    Suraj
    Edited by: Suraj Ramamurthy on Nov 10, 2008 7:12 PM

  • In Journalization updates are not reflecting in target

    Hi,
    I am using simple journalization for SQL Server tables. Only the newly added rows are reflected in the target, but not the updated ones. I inserted two rows and updated one row in the source; only the inserted records appear in the target, and the updated record does not.
    When I check the intermediate (I$_CDC_TARGET) and journal (J$CDC_SOURCE) tables, both the inserted and updated rows are there. Why are the updated records not reflected in the target?
    Both Source and Target tables have Primary keys.
    Thanks,
    Naveen Suram

    Hi,
    I am using IKM SQL Incremental Update (row by row). The updates are coming into the I$ table, and I can see these records in the journal data as well.
    One doubt here: in my source, no extra columns are coming. I referred to John Goodwin's blog; according to the blog, there should be some extra journal-related columns, right? Why are they not coming?
    First I added the JKM to my model, then added the data store to CDC, then created a subscriber, then started a journal.
    Is there anything I am missing?
    Thanks,
    Naveen Suram
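    For reference, a conceptual sketch of what a row-by-row incremental update IKM does with the I$ table, using hypothetical table and column names (the real I$ structure is generated by ODI); if updated rows sit in I$ but never reach the target, the update key used for this matching is worth checking.
    -- target_table, pk and col1 are hypothetical; IND_UPDATE is the flag
    -- ODI uses to mark I$ rows for insert ('I') or update ('U').
    UPDATE target_table
    SET    col1 = (SELECT i.col1 FROM i$_cdc_target i
                   WHERE i.pk = target_table.pk)
    WHERE  EXISTS (SELECT 1 FROM i$_cdc_target i
                   WHERE i.pk = target_table.pk AND i.ind_update = 'U');
    INSERT INTO target_table (pk, col1)
    SELECT i.pk, i.col1
    FROM   i$_cdc_target i
    WHERE  i.ind_update = 'I'
      AND NOT EXISTS (SELECT 1 FROM target_table t WHERE t.pk = i.pk);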

  • Delta records are not loading from DSO to info cube

    My query is about delta loading from a DSO to an InfoCube (a filter is used in the selection).
    Delta records are not loading from the DSO to the InfoCube. I have tried all the options available in the DTP, but no luck:
    Selected "Change log" and "Get one request only" and ran the DTP, but 0 records were updated in the InfoCube.
    Selected "Change log" and "Get all new data request by request", but again 0 records were updated.
    Selected "Change log" and "Only get the delta once"; in that case all delta records were loaded to the InfoCube as they were in the DSO, but it gave the error message "Lock Table Overflow".
    When I run a full load using the same filter, data loads from the DSO to the InfoCube.
    Can anyone please help me get the delta records from the DSO to the InfoCube?
    Thanks,
    Shamma

    Data loads in the case of a full load with the same filter, so I don't think the filter is the issue.
    When I follow the sequence below, I get the lock table overflow error:
    1. Full load from the active table, with or without archive.
    2. Then, with the same settings, if I run an init, the final status remains yellow, and when I change the status to green manually, it gives the lock table overflow error.
    When I change the DTP settings to an init run:
    1. Select change log and "Get one request only", and run the init: it completes successfully with green status.
    2. But when I run the same DTP for delta records, it does not load any data.
    Please help me resolve this issue.

  • Procedure required in DB: I want to insert records into a temp table (if those records do not exist in the temp table)

    I have one table, LOGTABLE, with 3,000 records; 3-4 records are inserted into this table daily. The table has 11 columns and no primary key columns.
    I created another monitoring table, LOGTABLEMONITOR, as a copy of LOGTABLE (with the same 3,000 records).
    I expect the 3-4 records inserted daily into LOGTABLE to also be inserted into LOGTABLEMONITOR, if those records do not already exist there.

    SELECT <columns>
    FROM LOGTABLE
    EXCEPT
    SELECT <columns>
    FROM LOGTABLEMONITOR;
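    To actually perform the daily insert, the EXCEPT query can be wrapped in an INSERT; a sketch with three hypothetical column names standing in for the 11 real ones:
    -- log_time, event_code and message are hypothetical column names;
    -- in practice, list all 11 columns of LOGTABLE.
    INSERT INTO LOGTABLEMONITOR (log_time, event_code, message)
    SELECT log_time, event_code, message FROM LOGTABLE
    EXCEPT
    SELECT log_time, event_code, message FROM LOGTABLEMONITOR;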
    Best regards,
    Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Records are not showing

    What is wrong with this code? It is not showing my 7 records.
    The output is this: [LINK|http://i338.photobucket.com/albums/n440/googleb0y/album1/output01.jpg]
    When I use the debugger, the internal table is empty: [LINK|http://i338.photobucket.com/albums/n440/googleb0y/album1/debug01.jpg]
    Where did I go wrong? Thanks for the help.
    data: begin of WA,
            MYUID       type ZADDRESSBOOK-USERID,
            MYFNAME     type ZADDRESSBOOK-FNAME,
            MYLNAME     type ZADDRESSBOOK-LNAME,
          end of WA.
    data: ITAB like table of WA.
    select USERID FNAME LNAME
          from ZADDRESSBOOK
          into corresponding fields of table ITAB.
    write: / SY-DBCNT, 'records found'.
    if sy-subrc eq 0.
        loop at ITAB into WA.
          write: / WA-MYUID, WA-MYFNAME, WA-MYLNAME.
        endloop.
    endif.

    Your problem is with the SELECT statement...
    data: begin of WA,
            MYUID       type ZADDRESSBOOK-USERID,
            MYFNAME     type ZADDRESSBOOK-FNAME,
            MYLNAME     type ZADDRESSBOOK-LNAME,
          end of WA.
    data: ITAB like table of WA.
    select USERID FNAME LNAME
          from ZADDRESSBOOK
          into corresponding fields of table ITAB.
    write: / SY-DBCNT, 'records found'.
    if sy-subrc eq 0.
        loop at ITAB into WA.
          write: / WA-MYUID, WA-MYFNAME, WA-MYLNAME.
        endloop.
    endif.
    When you use the CORRESPONDING FIELDS OF addition, SAP looks for identical field names in table ITAB, but you declared your table with different field names; that is why the records are not entered into the table, as no matching field names are found. To avoid this, please try this code:
    data: begin of WA,
            MYUID          type ZADDRESSBOOK-USERID,
            MYFNAME     type ZADDRESSBOOK-FNAME,
            MYLNAME     type ZADDRESSBOOK-LNAME,
          end of WA.
    data: ITAB like table of WA.
    select USERID FNAME LNAME
          from ZADDRESSBOOK
          into table ITAB.
    write: / SY-DBCNT, 'records found'.
    if sy-subrc eq 0.
        loop at ITAB into WA.
          write: / WA-MYUID, WA-MYFNAME, WA-MYLNAME.
        endloop.
    endif.
    Regards,
    Joy.

  • Records are not displaying as expected.

    Hi,
    Records are not displaying as expected.
    I designed a query as:
    SELECT '1. Emp Detail ' Details from dual
    UNION ALL
    SELECT empno||ename details from emp
    UNION ALL
    SELECT '1. Dept Detail ' Details from dual
    UNION ALL
    SELECT deptno||dname details from dept
    UNION ALL
    SELECT '1. Location Detail ' Details from dual
    UNION ALL
    SELECT locationcode||locationname details from location
    The above query displays the data as follows:
    EMP Detail
    Dept Detail
    Location Details
    John
    Kathy
    Prem
    IT
    Customer care
    Boston
    Texas
    but I need to display the data as follows:
    EmpDetail
    JOHN
    Kathy
    Prem
    Dept Detail
    IT
    Customer care
    Location Detail
    Boston
    Texas
    Please help me on this.
    Thanks,
    Naz

    "Records are not displaying as expected." Well, it certainly displays as I'd expect it to, i.e. in an undetermined order.
    If you want the output of a query to come out in a specific order, then you must use an ORDER BY clause in your query; otherwise you are just querying a set of data, and there is no order to it. You cannot assume that the order of the data will be the order in which you union your queries together; that's not how SQL works, as SQL is designed to work on sets of data, not on sequential rows.
    So, in your example you would want something along the lines of:
    select Details from (
      SELECT 1 as ord, '1. Emp Detail ' Details from dual UNION ALL
      SELECT 2, empno||ename details from emp UNION ALL
      SELECT 3, '1. Dept Detail ' Details from dual UNION ALL
      SELECT 4, deptno||dname details from dept UNION ALL
      SELECT 5, '1. Location Detail ' Details from dual UNION ALL
      SELECT 6, locationcode||locationname details from location
    )
    order by ord, details

  • 2LIS_02_S896 issue : Records are not updated into BW

    Hi
    -> We use the 2LIS_02_SCL extractor to pass POs to BW with delta.
    -> It worked fine until last week.
    -> But on a particular date (15.05.2007) some of the records were not updated in BW. They are not even found at PSA level.
    -> For example, for a material X there are two entries in R/3 (MB51) for the same posting date; one of them is picked up in BW (at PSA level) and the other is not.
    -> There was no load failure / delta failure in BW, and there is no exit written for this.
    -> The records do not appear in RSA3 either.
    Let me know how this issue can be fixed. It's urgent.
    Thanks,
    Jilani

    Hi Gagan,
    Kindly check whether you have any selection restrictions in your InfoPackage.
    Regards.

  • DNS records are not 100% correct

    For a while now we've been noticing that some DNS records are not correct: the records point to incorrect IP addresses. One by one, I open the record, update the IP, and then replicate across all domain controllers.
    What would cause the hostname of one machine to point to another machine's IP address?

    I believe what you're seeing is from DHCP-DNS registration. You may have duplicates, or incorrect data in records that can't be updated by the DHCP service or the DHCP client due to permissions on the record. You may also not have scavenging in place.
    In summary:
    Configure DHCP Credentials. The credentials only need to be a plain-Jane, non-administrator, user account. But give it a really strong password.
    Set DHCP to update everything, whether the clients can or cannot.
    Set the zone for Secure & Unsecure Updates. Do not leave it Unsecure Only.
    Add the DHCP server(s) to the Active Directory built-in DnsUpdateProxy security group. Make sure ALL other non-DHCP servers are NOT in the DnsUpdateProxy group; for example, some believe that DNS servers or other DCs not running DHCP should be in it.
    They must be removed or it won't work. Make sure that NO user accounts are in that group, either. (I hope that's crystal clear; you would be surprised how many respond asking whether the DHCP credentials account should be in this group.)
    On Windows 2008 R2 or newer, DISABLE Name Protection.
    If DHCP is co-located on a Windows 2008 R2 or Windows 2012 DC, you can and must secure the DnsUpdateProxy group by running the following:
    dnscmd /config /OpenAclOnProxyUpdates 0
    Configure Scavenging on ONLY one DNS server. What it scavenges will replicate to others anyway. Set the scavenging NOREFRESH and REFRESH values combined to be equal or greater than the DHCP Lease length.
    For specifics and step by steps, and good discussions on what's going on in the background and what to expect:
    DHCP Service Configuration, Dynamic DNS Updates, Scavenging, Static Entries, Timestamps, DnsUpdateProxy Group, DHCP Credentials, prevent duplicate DNS records, DHCP has a "pen" icon, and more...
    http://msmvps.com/blogs/acefekay/archive/2009/08/20/dhcp-dynamic-dns-updates-scavenging-static-entries-amp-timestamps-and-the-dnsproxyupdate-group.aspx  
    Good summary
    How Dynamic DNS behaves with multiple DHCP servers on the same Domain?
    http://social.technet.microsoft.com/Forums/en-US/winserverNIS/thread/e9d13327-ee75-4622-a3c7-459554319a27
    Another good Summary:
    Thread: "DNS problem" December 18, 2013
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/37b8b6b3-6cb1-496c-8492-09ded13bab18/dns-problem?forum=winserverNIS
    Ace Fekay
    MVP - Directory Services

  • I have one problem with Data Guard. My archive log files are not applied.

    I have one problem with Data Guard: my archive log files are not applied, even though I have received all archive log files on my physical standby database.
    I have created a physical standby database on Oracle 10gR2 (Windows XP Professional). The primary database is on another computer.
    In Enterprise Manager on the primary database it looks OK; I get the message "Data Guard status Normal".
    But, as I wrote above, the archive log files are not applied.
    After I created the Physical Standby database, I have also done:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    11) On the primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    12) On the primary database:
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    13) On the standby database:
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    14) My init.ora files:
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries my redo logs are applied now. I think in my case it had to do with the tnsnames.ora. At this moment I have both database in both tnsnames.ora files using the SID and not the SERVICE_NAME.
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and looks like it is hanging. The log, though, says that it succeeded.
    In another session, 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
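    For reference, the RFS messages above ("No standby redo logfiles created") together with the LGWR ASYNC transport in log_archive_dest_2 suggest that no standby redo logs were ever created on the standby. A minimal sketch to add them; the group numbers, file names and the 50M size are assumptions, and the size must match the online redo log size:
    -- On the standby: stop redo apply, add standby redo logs, restart apply.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
      ('C:\oracle\product\10.2.0\oradata\luda\srl04.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
      ('C:\oracle\product\10.2.0\oradata\luda\srl05.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
      ('C:\oracle\product\10.2.0\oradata\luda\srl06.log') SIZE 50M;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;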

  • Records are not inserted into MTL_TXN_REQUEST_HEADERS,MTL_TXN_REQUEST_LINES

    Hi all,
    I am doing a pick release through APIs.
    I have run the following APIs successfully, but my problem is that records are not inserted into the MTL_TXN_REQUEST_HEADERS and MTL_TXN_REQUEST_LINES
    tables, so I cannot generate the move order. Can anyone suggest valuable tips on this?
    1. WSH_DELIVERIES_PUB.create_update_delivery
    2. WSH_DELIVERY_DETAILS_PUB.Detail_to_Delivery
    3. WSH_PICKING_BATCHES_PUB.CREATE_BATCH
    4. WSH_PICKING_BATCHES_PUB.RELEASE_BATCH
    5. WSH_DELIVERIES_PUB.Delivery_Action ( for p_action_code = 'PICK-RELEASE' )
    Regards, with thanks in advance,
    sanjay

    Hi All,
    To create move order header and line records in the MTL_TXN_REQUEST_HEADERS and MTL_TXN_REQUEST_LINES tables, I ran the seeded
    procedures CreateMoveOrderHeader and CreateMoveOrderLines.
    The procedure CreateMoveOrderHeader ran successfully, but when I run the procedure CreateMoveOrderLines, it gives an error.
    The following error text is received when running the procedure CreateMoveOrderLines:
    Initialized applications context: 5707 50138 660
    *==========================================================*
    Calling INV_MOVE_ORDER_PUB.Create_Move_Order_Lines API
    *==========================================================*
    Return Status: E
    Error Message :INV
    *==========================================================*
    From the above error message, no specific error is given; only "Error Message: INV" is received, so it is very difficult to judge and resolve the error.
    Can anyone tell me what the error is, and why I am getting it?
    Regards,
    Sanjay
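    When an INV API returns only a terse message like this, the full message stack can usually be read through the standard FND_MSG_PUB package after the call; a minimal sketch (variable names are assumptions):
    -- Dump every message left on the stack by the last API call.
    DECLARE
      l_msg       VARCHAR2(2000);
      l_msg_index NUMBER;
    BEGIN
      FOR i IN 1 .. fnd_msg_pub.count_msg LOOP
        fnd_msg_pub.get(p_msg_index     => i,
                        p_encoded       => fnd_api.g_false,
                        p_data          => l_msg,
                        p_msg_index_out => l_msg_index);
        dbms_output.put_line(i || ': ' || l_msg);
      END LOOP;
    END;
    /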

  • Date and time mentioned in my mails are not correct. Similar date for all mails

    The date and time shown in my mails are not correct: all the mails show 31/5/12, but these mails were received on 1/11/14.

    Incorrect date or time displayed in various applications
