Data consistency

Hi,
I am new to ABAP. Could anyone help me with this question?
I have a requirement to modify two ABAP Z* programs and one Z* table in DV. However, the sources do not exist in DV; they exist only in PD.
Is it possible to modify them directly in PD without a transport request? I have no access to PD. What can I do?
Please advise!
Thanks
Larry

Hi,
Thank you for all the quick replies! I agree this is not normal practice.
These issues are not small, but it seems to be how they work: more of the customized programs and tables
exist only in PD, or the latest versions exist only in PD.
Because of an urgent project, I have been given temporary access
to PD. In the version history of the programs I can see
transport requests named D01Kxxxxxx and PRDKxxxxxx.
Assuming the ABAPer has SYSADMIN rights:
1. As far as I know, D01Kxxxxxx was transported from DV; what
   about PRDKxxxxxx?
   As Swastik says, one could ask the Basis person to create a
   transport package for that program in PD and transport the
   request to DV. Is PRDKxxxxxx a transport request created in PD
   that may have been transported from PD to DV?
2. If the source is modified directly in PD, the system may
    ask for an access key. Is there a log I can check for this?
3.  Also, it would be possible to
     - transport the first version of the source from DV to PD,
     - then delete the source in DV, and transport again
       from DV to PD.
     Now PD would have an up-to-date version, and DV
     would not.
     Is there a log I can check for this as well?
Thanks all again,
Larry

Similar Messages

  • Logical Standby Data Consistency issues

    Hi all,
    We have been running a logical standby instance for about three weeks now. Both our primary and logical are 11g (11.1.0.7) databases running on Sun Solaris.
    We have off-loaded our Discoverer reporting to the logical standby.
    About three days ago, we started getting the following error message (initially for three tables, but from this morning for a whole lot more):
    ORA-26787: The row with key (<column>) = (<value>) does not exist in table <schema>.<table>
    This error implies that we have data consistency issues between our primary and logical standby databases, but we find that hard to believe,
    because the "data guard" status is set to "standby", implying that the schemas being replicated by Data Guard are not available for user modification.
    Any assistance in this regard would be greatly appreciated.
    thanks
    Mel

    It is a bug: Bug 10302680. Apply the corresponding Patch 10302680 to your standby database.
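
    For reference, the guard setting and the SQL Apply errors mentioned above can be checked on the logical standby with queries along these lines (a minimal sketch using standard views; verify availability on your release):

    -- Is the database guard preventing user changes to replicated schemas?
    SELECT guard_status FROM v$database;   -- expected: STANDBY (or ALL / NONE)
    -- Recent SQL Apply events, including the ORA-26787 occurrences
    SELECT event_time, status, event
    FROM   dba_logstdby_events
    ORDER  BY event_time DESC;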

  • Data Consistency when Copying/ Refreshing ECC 6.0 and SRM-SUS 5.0 Systems

    Hello,
    We are planning a refresh / system copy of an ECC 6.0 and an SRM-SUS 5.0 system.
    The refreshes will be completed from backups taken of the production systems and restored onto the QA landscape.
    I have referenced the following SDN thread, which provides some guidelines on how to refresh R/3 and SRM systems and maintain data consistency between the systems using BDLS and by changing entries that correspond to backend RFC destinations:
    Is there a process/program to update tables/data after System Refresh?
    This thread is fairly old and relates to earlier versions of R/3 (4.7) and SRM (3.0). We have heard that at higher system versions there may be technical reasons why a refresh can't be performed.
    Does anyone have experience of completing successful refreshes of a landscape that contains ECC and SRM systems at higher SAP versions (ideally ECC 6.0 and SRM-SUS 5.0)? Does anyone know whether it is technically possible?
    Are there any additional steps that we need to be aware of at these higher SAP versions when completing the copy, to ensure that the data remains consistent between ECC and SRM?
    Thanks
    Frances

    I have seen this somewhere in the forum; see if it helps you:
    BDLS: conversion of the logical system names (SRM).
    Check the entry in table TWPURLSVR.
    Check the RFC connections (R/3 and SRM).
    In SPRO, check or set the following points:
    Set up the distribution model and distribute it
    Define backend system
    Define backend system per product category
    Settings for vendor synchronization
    Number ranges
    Define objects in backend system
    Define external services (catalogs)
    Check WF customizing: SWU3, settings for tasks
    SICF: maintain the services BBPSTART, SAPCONNECT
    SE38:
    Run SIAC_PUBLISH_ALL_INTERNAL
    Run BBP_LOCATIONS_GET_ALL
    Update vendors with BBPUPDVD
    Check the Middleware if used.
    Run BBP_GET_EXRATE.
    Schedule jobs (bbp_get_status2, clean_reqreq_up)
    Convert attributes with RHOM_ATTRIBUTE_REPLACE

  • Error in SP12 TemSe: Data Consistency Check

    Dear All,
    I am getting errors in the SP12 TemSe data consistency check. These are only a few of the entries; there are many more, in the thousands. One more thing: we have recently upgraded from ECC 5 to ECC 6.
    Please suggest how to solve it.
    Consistency check of table TST01 / TemSe objects
    System PRD 12.04.2010 12:22:02
    Clt TemSe object name Part Comments
    700 BDCLG001048620722636 1 Unable to access the related file
    700 BDCLG001336417070836 1 Unable to access the related file
    700 BDCLG366544424095583 2 Unable to access the related file
    700 BDCLG366544424095583 3 Unable to access the related file
    700 JOBLGX23303906X11768 1 Object reset
    0 JOBLGX_ZOMBIE_X00304 1 Object reset Length = 0
    0 JOBLGX_ZOMBIE_X00773 1 Object reset Length = 0
    0 JOBLGX_ZOMBIE_X01080 1 Object reset Length = 0
    0 JOBLGX_ZOMBIE_X02571 1 Object reset Length = 0
    Regards,
    Kumar

    Hi,
    What was the solution for this issue?
    Please tell me, I am also getting the same error.
    Shall I delete all these objects?
    Regards
    Krishna

  • How to enforce table data consistency at DB level?

    How would I enforce the following data consistency rule? Consider the following table:
    PK_COL     | SOME_ID | COL1 | COL2 | COL3  | some other columns ...
    1       |       1 |  X   |  B   | C
    2       |       1 |  X   |  B   | C
    3       |       2 |  A   |  G   | G
    4       |       2 |  A   |  G   | G
    5       |       2 |  X   |  G   | G
    For every value of the SOME_ID column I need to have the same values in columns COL1, COL2 and COL3 (the same values across all rows sharing a SOME_ID value).
    Row 5 is not consistent, so an application error should be raised.
    I am not able to achieve this using triggers: when I query the table after insert/update I get a mutating table error.

    I am not able to achieve this using triggers, when I query the table after insert/update i get mutating tables error.There is no straightforward way to implement constraints on table data that span across the rows (in oracle).
    Before discussing any possible solutions, a few questions for you:
    1) If the requirement is to have the same value in all 3 columns for a SOME_ID value, why are you even allowing the user to enter values for COL1, COL2 and COL3? Or is this table populated using data coming from external sources?
    2) How is this table being accessed? Is the data maintained using OLTP-style transactions/queries (i.e. INSERTs/UPDATEs/DELETEs and SELECTs that affect only a small number of rows) or DWH-style transactions (i.e. data loaded in bulk at defined intervals, with queries accessing large amounts of data)?
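
    For what it's worth, one commonly suggested workaround for cross-row rules like this is a fast-refresh-on-commit materialized view with a constraint on it, so the rule is checked at commit time rather than per row (which also sidesteps the mutating table problem). A minimal sketch, assuming the table is simply called T:

    -- Assumes a table T (PK_COL, SOME_ID, COL1, COL2, COL3, ...).
    -- The MV log with ROWID and INCLUDING NEW VALUES is required for fast refresh on commit.
    CREATE MATERIALIZED VIEW LOG ON t
      WITH ROWID (some_id, col1, col2, col3) INCLUDING NEW VALUES;
    -- One row per distinct (some_id, col1, col2, col3) combination present in T.
    CREATE MATERIALIZED VIEW t_combo_mv
      REFRESH FAST ON COMMIT
    AS
    SELECT some_id, col1, col2, col3, COUNT(*) AS cnt
    FROM   t
    GROUP  BY some_id, col1, col2, col3;
    -- If a second combination ever appears for the same SOME_ID, the MV gets a
    -- second row for that SOME_ID and the offending transaction fails at COMMIT.
    ALTER TABLE t_combo_mv
      ADD CONSTRAINT t_one_combo_per_id UNIQUE (some_id);

    Note that the error then surfaces at COMMIT time as a constraint violation, rather than as the per-row RAISE_APPLICATION_ERROR the original post asked for.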

  • How to guarantee data consistency in two tables

    Hi
    I have two tables, debits and credits:
    create table debits (
      client_id     number,
      amount_debit  number
    );
    create table credits (
      client_id      number,
      amount_credit  number
    );
    Now I need to guarantee that sum(amount_debit) - sum(amount_credit) > 0 for a specific client, so I created the following trigger:
    CREATE OR REPLACE TRIGGER debit_trigger
      BEFORE INSERT ON "PPC"."DEBIT"
      REFERENCING OLD AS old NEW AS new
      FOR EACH ROW
    DECLARE
      v_amount_debit  NUMBER;
      v_amount_credit NUMBER;
    BEGIN
      SELECT SUM(amount_debit)
        INTO v_amount_debit
        FROM debit
       WHERE client_id = :new.client_id;
      SELECT SUM(amount_credit)
        INTO v_amount_credit
        FROM credit
       WHERE client_id = :new.client_id;
      IF (v_amount_debit - v_amount_credit) < 0 THEN
        raise_application_error(-20005, 'Error');
      END IF;
    END;
    But this won't work with multiple sessions, because in two separate sessions an account can go below 0 before commit and the trigger won't detect the error.
    In this case, how can I guarantee data consistency?

    You could SELECT...FOR UPDATE on the related row (in the "clients" table?) referenced by client_id, but...
    1. Do you also need UPDATE and DELETE triggers on the "debits" table? How would you prevent an update of a debit (or a delete of a negative "debit") that causes the balance to drop below zero?
    2. Do you also need (INSERT, UPDATE, and DELETE triggers) on the "credits" table? How would you prevent a delete or update of a credit (or an insert of a negative "credit") that causes the balance to drop below zero?
    3. It might be easier to use a "refresh on commit" materialized view to enforce the constraint, something along the lines of the discussion at:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:21389386132607
    4. Another alternative might be to revoke insert, update, and delete privileges on the two tables and instead provide stored procedures (maybe named "debit" and "credit") that provide the necessary functionality and enforce the constraints (with locking).
    Hope this helps.
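
    As an illustration of points 1 and 4 above (not the original poster's code), here is a minimal sketch of a stored procedure that serializes per client with SELECT ... FOR UPDATE before allowing a new credit; the CLIENTS table and all names are assumptions:

    -- Assumes a CLIENTS table with exactly one row per CLIENT_ID.
    CREATE OR REPLACE PROCEDURE add_credit (
      p_client_id IN NUMBER,
      p_amount    IN NUMBER
    ) AS
      v_dummy   NUMBER;
      v_balance NUMBER;
    BEGIN
      -- Lock the client's row so concurrent sessions serialize here.
      SELECT 1
        INTO v_dummy
        FROM clients
       WHERE client_id = p_client_id
         FOR UPDATE;
      -- Balance as seen while holding the lock.
      SELECT NVL((SELECT SUM(amount_debit)  FROM debits  WHERE client_id = p_client_id), 0)
           - NVL((SELECT SUM(amount_credit) FROM credits WHERE client_id = p_client_id), 0)
        INTO v_balance
        FROM dual;
      IF v_balance - p_amount < 0 THEN
        raise_application_error(-20005, 'Credit would make the balance negative');
      END IF;
      INSERT INTO credits (client_id, amount_credit)
      VALUES (p_client_id, p_amount);
    END add_credit;
    /

    The same lock-then-check-then-modify pattern would have to be repeated in the debit, update and delete procedures mentioned in points 1 and 2.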

  • Are Hyper-V 2012R2 checkpoints application-data consistent? How does it now use VSS during checkpoint creation?

    Are Hyper-V 2012R2 checkpoints application-data consistent? How does it now use VSS during checkpoint creation?
    I read the article http://www.aidanfinn.com/?p=15759, which says that WS2012 R2 Hyper-V has changed how it performs backups.
    That is unofficial information. I can't find anything about it on MSDN, but this is critical; it is about data integrity.
    Is this true?
    Does this mean that we no longer need to call VSS in a Hyper-V server backup application? When I create a checkpoint of a running VM, will the Hyper-V 2012 R2 host
    call VSS only to flush the VSS writers inside the target child VM, so that the application data is in a consistent state? And the volume shadow copy itself won't be used, only the vhdx + avhdx, which will be merged later, right?

    There are a couple of levels to this.
    If VSS is there, VSS is used (just as before).  But there is a bit more happening around it than simply placing the VM in a saved state.
    However, if VSS is not there (in the VM) then the behavior is slightly different (this is really to support Linux VMs that don't support VSS) - and still achieve a good VM backup.
    In another blog post I see it described this way:
    Online backup for Linux VMs – even though there’s no VSS support in Linux, the Microsoft guys have managed to find a way to create consistent backups of running Linux VMs, by doing what’s essentially called a “filesystem freeze” by briefly stopping
    all disk I/O inside the VM until the host creates a VSS snapshot of the VM on the host (thus getting a file-consistent backup with zero downtime) – once the snapshot is done, disk I/O is resumed and then the actual file copy process starts
    Flemming shows the process here:
    http://flemmingriis.com/changes-in-hyper-v-backup-in-2012-r2/
    If you are calling VSS to perform a backup, nothing has changed. The only thing that has changed is the 'how it happens', not the 'what you get' or the 'how you call it'.
    This way if you call VSS to perform a backup - you get a good one, no matter the OS in the VM, nor its support for VSS.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

  • BAM HA and Data Consistency

    Hi,
    I need to preserve consistency of data in BAM, meaning BAM itself does not require HA, but I need to guarantee there is no loss of data (even if the BAM server is down). I have a single BAM server and a cluster of WLS where the JMS queues/topics are located (data is read from the JMS queues/topics using EMS).
    Using any kind of topic means loss of data once the BAM server is down.
    BAM does not reconnect after migration, so using a queue on a migratable JMS server also does not seem to work.
    Using a distributed queue requires a queue for each EMS instead of a single queue with a message selector. And anyway, there is no reconnection in this case either.
    What would be the preferred approach, then?
    Any suggestions are highly appreciated.
    Regards.

    Hi
    There are a couple of performance-related measures that you can take:
    - Create aggregates on the 3-4 objects that are most popular in terms of reporting. Make sure that you don't go overboard with this, as maintaining too many aggregates could create a memory issue.
    - If you are using BI 7.0, then go for repartitioning. It would help you in loading data as well as during reporting.
    - Compressing / indexing are always available and good options.
    - I don't think compressing could create an issue even if your InfoCube gets populated through multiple data sources.
    Cheers
    Umesh

  • What does "Documents and Data" consist of on the iPhone?

    I'm syncing my iPhone and I have a lot of "Documents and Data". Does anyone know what this consists of on my phone?

    You can probably get a good idea by going to Settings > General > Usage > Documents & Data.
    Also see the first section ("Other on your iDevice") here:
    https://discussions.apple.com/docs/DOC-5142

  • Cube data consistency

    Hi,
    I am working with an ODS and a cube. The data is loaded from the PSA to the ODS and then to the cube. When I look into the PSA, ODS and cube separately, for each record (as per the primary keys), the data in the PSA is consistent with the ODS, but the data in the ODS is not consistent with the cube. The value of one object (an amount) has been set to zero for some of the records and not for others. I checked the mappings of the objects; they are all one-to-one, meaning the data in the ODS should reach the cube as-is, at least for that object. So I believe the data is inconsistent. I ran the RSRV test and it says the data has been loaded into the cube without units, and that there is no fix for this; the only way is to reload the data. But reloading all the data in a production system is, as you know, painful and not really recommended.
    Can somebody please tell me how I can fix this issue?
    Thanks a lot.

    Hi Visu,
    Do you mean an inconsistency between the ODS and the cube?
    Do you have any start routine in the update rules?
    How did you determine the inconsistency of those few records? Please let me know; I am just trying to understand the problem.
    Ashish

  • What does documents and data consist of on the iPhone when iCloud is already shut off?

    My iCloud is turned off, and I don't use Documents and Data (if I do, I don't even know what it is) on my iPhone 5, but it takes up more than a third of my storage space. How do I turn it off?

    Welcome to the Apple Community, renee_sween.
    You can see exactly what is in Documents and Data at Settings > iCloud > Storage & Backup > Manage Storage > Documents & Data.

  • Bridge Thumbnail Metadata doesn't show dates consistently

    I tried changing my Thumbnail preferences in Bridge to add Additional Lines of Thumbnail Metadata.
    I checked show Date Modified.
    However, not all pictures show the date in the same format:
    some show Jan 13 and others 1-1-2013.
    How can I change this so it's just Jan 13 or 1-1-2013, and not both?

    When you set up the download you can specify what style of date you want. I am not sure how you change it once it is in the system.

  • Maintenance of category data in R/3 for SRM

    Hi gurus, I am getting the message below. Where in R/3 can I maintain the set types for product categories (material groups)?
    You can only change the category data in the original system RFD100
    Message no. COM_PRCAT010
    Diagnosis
    This category has been imported from another logical system. To ensure data consistency in the different systems, the category data may be changed only in the original system. However, further set types may be assigned to the category.
    Procedure
    Change the category data in the original system and reimport the data.

    Hi,
    If you post this issue in the SRM forum you may get a better answer, but I will give you some ideas.
    In SRM, a product category corresponds to a material group in the R/3 (backend) system. You can import the data from the backend system into SRM, or vice versa, depending on how your logical systems are defined. You may be missing some configuration setting.
    Subrahmanyam

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case.
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris
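
    For illustration only (this is not the project's actual code): on 9iR2, which has no UNPIVOT operator, the pivot to long-thin can also be expressed as a single set-based INSERT ... SELECT driven by a column-number generator. A sketch scaled down to three COLnnn columns, with hypothetical table names:

    -- Hypothetical names; only 3 of the 250 COLnnn columns shown to keep it short.
    INSERT /*+ APPEND */ INTO intermediate_tab
        (case_num_id, variable_id, variable_value, status)
    SELECT s.case_num_id,
           m.variable_id,
           DECODE(n.col_no, 1, s.col001,
                            2, s.col002,
                            3, s.col003) AS variable_value,
           'UNVALIDATED'
    FROM   short_fat_1 s,
           (SELECT ROWNUM AS col_no FROM all_objects WHERE ROWNUM <= 3) n,
           run_metadata m                 -- maps each COLnnn position to this run's variable_id
    WHERE  m.col_no = n.col_no;

    The per-variable validation could then be applied to the long-thin rows in a later set-based pass, or kept in PL/SQL functions called from the SQL as the post describes.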

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
    CASE_ID VARCHAR2(13)
    COL001  VARCHAR2(10)
    ...
    COL250  VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris
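
    Likewise, just as a sketch with hypothetical names (not the project's code): on 9iR2 the intermediate-to-target load, with a substitution rule applied inline, can be written as one MERGE, which may be worth re-testing against the explicit INSERT/UPDATE switching described above:

    MERGE INTO target_tab t
    USING (SELECT i.case_num_id,
                  i.variable_id,
                  -- example substitution rule (assumption): treat 'N/A' as NULL
                  DECODE(i.variable_value, 'N/A', NULL, i.variable_value) AS variable_value
           FROM   intermediate_tab i
           WHERE  i.status = 'VALID') s
    ON (t.case_num_id = s.case_num_id AND t.variable_id = s.variable_id)
    WHEN MATCHED THEN
      UPDATE SET t.variable_value = s.variable_value
    WHEN NOT MATCHED THEN
      INSERT (case_num_id, variable_id, variable_value)
      VALUES (s.case_num_id, s.variable_id, s.variable_value);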

  • Data mart status has been reset by deleting an in-between request in the source ODS

    Hi SDN,
    We have a situation in which there is a daily delta load going from an ODS to a cube, and we accidentally reset the data mart status by deleting a request that sits in between several other requests in the source ODS. When we deleted the request in the ODS we got a pop-up asking "do you want to delete the data in the cube?" and we responded to that pop-up; after that, in the manage screen of the source ODS, the data mart status is no longer shown for any of the requests that had been loaded to the target cube. We have since reconstructed the data for the deleted request from the PSA.
    When a new load comes into the ODS the next day, will the ODS send the correct delta update to the target cube?
    If the correct delta is not updated to the cube, is there any method we can follow to maintain data consistency without deleting the data in the target cube?
    Thank you,
    Prasaad

    Hi,
    You deleted data in the cube and reloaded, but the data mart status is not appearing in the ODS. If you still have all the data in the ODS, delete the data in the cube, then delete the data mart layer in the ODS, right-click and choose "Update data in data target". A fresh request will then update the cube; this is an init load, and deltas will flow from the next day onwards.
    Thanks
    Reddy
